Robotics Project: Snuggles, The Wolf

AIUI an LLM basically responds with its estimate of the most plausible answer. ChatGPT has extensions to handle programming and some other topics that need handling differently (so it says). It's said to be best at Python; on a niche language like OpenSCAD it helped me with an outline, but the details needed fixing. That's still quite good (Claude 3 Haiku gave me five out of five wrong answers on OpenSCAD) and certainly better than starting from scratch.
 
ChatGPT is a wonderful thing, not just for low-level coding tasks, but for overall project assessment and feasibility studies. Try this query in your ChatGPT:
"I need assistance in determining the skill levels and scope of doing a robotics type project. The project is a dog which looks semi-realistic and has servos operating its paws and basic body movements, including perhaps some body movements in its face. It needs to be able to have realistic and comforting motions. I would also like it to have some sound capabilities, possibly based on some modern AI techniques for its dog-like responses. The comfort dog does not need to walk,Although I've been exploring gyros and things of this nature, so let's include that as a possible part of its feature set. Give me a practical and realistic determination of what skill levels are required to attempt a project like this, and also approximately how much coding would be required in man hours as a very rough guide and estimate. What disciplines and skill levels are required for the team to work on such a project? for the various roles also indicate how many years of experience would be a minimum for them to even attempt that particular role."
 
@marka-ee : I appreciate the thought, but knowing skill levels / etc required doesn't help me, since there isn't a way to find and get others to help with the project, regardless of their skill level. I don't have money to pay them, and in all the years of trying I haven't been able to get anyone with the required skills to actually want to work on it directly. (just advice on various things, of wildly varying usefulness)

So I have to use whatever skill levels I myself have, regardless of what they are, and just keep working on it until I learn the things necessary to do the various parts of it. ;)

Knowing how far I myself might be from the necessary skill levels isn't useful to me, since I'm already going to just learn what I need to know as I go, quickly or slowly, until it's done or I run out of lifetime to do it in. :lol:


I already know the scope of the project, since I've been working on designing it for so many years. :oops:


There is almost certainly a few hundred times the amount of detail in my head about this than I have managed to type up here in the thread, including unsuccessful tests and technologies, bad ideas replaced by better ones, etc. I expect it would take me a few years to type up everything already explored and researched.

Any notes (physical or digital), sketches, bits and pieces, etc., that I had before the housefire were lost in that fire, since they were in my bedroom where I kept all my important stuff like that. But most of that material was irrelevant even at the time, and almost all of it would be now for sure, given the changes in available technology that caused me to start this thread in the first place: being able to "see the light at the end of the tunnel" now that things are so close to being "easy" to create this from, or at least the closest to possible that it's ever been. After the fire I didn't write down most things about it, just kept them in my head for the most part, other than occasional doodles on napkins and placemats etc. when out and about.
 
Ambitious <> infeasible and there's drive behind his project. When I subbed to ES I thought motors had a fixed voltage; now I develop fast chargers and control my ebike using micros. BTW I like the idea of flexible materials in the toes. On a bike, cable ties have several engineering advantages over bolts.
EDIT: for <> read !=, which is clearer; both mean 'not equal to'.
 
Ambitious <> infeasible and there's drive behind his project.
Honestly this isn't all that ambitious a project. I can imagine much more complex things (have done so, but they're all well beyond my present or future abilities and budget so they just stay in my head, no point in noting any of them down).

I'm sure that any half-decent robotics guru (with knowledge in that area equivalent to what I have for ebikes, or music/sound creation/editing) could whip this thing up pretty quick. I just don't know any.

The only thing that might take some specialist expertise would be the behavioral response learning system, which would probably work "best" with some form of "AI", but even that could probably be based on one of the various opensource AIs out there if one is appropriate to this type of system.

Even without the ability to learn new things on its own, the basic behavior set could be created once the wolfy is built, simply by manually moving the various parts by hand to record the movements from the IMU sensors and whatever motor-feedback system there is, assigning sets of responses to the various behaviors, and then assigning the input conditions (also recorded from the sensors) required to trigger them. Ideally a GUI would be created so that any end user of the system could do this themselves, to add behaviors not in the basic set.
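Just to show the shape of what I mean, not working code: recording a behavior is basically sampling the joint angles while someone moves the wolfy by hand, then playing them back later when a trigger condition matches. All the names here are made up.

Code:
# Rough sketch only: record joint angles while the user moves the wolfy by hand,
# then replay them when a trigger condition matches. All names here are made up.
import time, json

SAMPLE_PERIOD = 0.05  # 20 Hz, just a guess at a reasonable rate

def record_behavior(read_joint_angles, duration_s=5.0):
    """Sample joint angles (a dict like {'front_left_elbow': 32.5, ...}) for a while."""
    frames = []
    t0 = time.time()
    while time.time() - t0 < duration_s:
        frames.append(read_joint_angles())   # supplied by the RCS side
        time.sleep(SAMPLE_PERIOD)
    return frames

def play_behavior(frames, set_joint_angles):
    """Send the recorded frames back out to the servos at the same rate."""
    for frame in frames:
        set_joint_angles(frame)              # supplied by the RCS side
        time.sleep(SAMPLE_PERIOD)

def save_behavior(name, frames, trigger, path="behaviors.json"):
    """Store the frames plus the trigger that should call them up (e.g. the word 'sit')."""
    try:
        with open(path) as f:
            library = json.load(f)
    except (OSError, ValueError):
        library = {}
    library[name] = {"trigger": trigger, "frames": frames}
    with open(path, "w") as f:
        json.dump(library, f)

A GUI would just be a front end over record/play/save like that.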


For myself, the hardware is not that much of a challenge, the things I don't know are relatively easy to pick up as I go.

But software...while I understand the principles, learning the actual coding is very hard for me, working on my own. (If I were working *with* someone and learning as I go, I would learn faster, but it would still not be easy). My brain just doesn't work right for certain types of things, and apparently that's one of them.
 
Here's a Deepseek version of the behavioral AI: the description I gave it, and the code and other data it gave back, for future reference.
I can't tell if it's given me anything useful (for instance mxlemming said that the ChatGPT code is just garbage that isn't worth saving).

Hopefully (even though I asked a lot of it and probably provided it far too little info) this version will be more useful, once I understand enough more to be able to use it. I guess if it's just garbage too, then I'll have to find some other way of doing this, or wait until one of the AIs is good enough to teach me how, in detail, step by step.


Amberwolf said:
Can you use your own code to provide me with a complete behavioral-learning AI? This AI will need to send and receive data to/from another system that then reads sensors from and controls a realistic robotic wolf (henceforth wolfy) motion via motors and creates realistic sounds. The AI needs to take the data from the robot control system's (henceforth RCS) user inputs (henceforth UI) and learn how to do the things the user is teaching it, just like a real dog or wolf would.

The RCS has multi-axis IMU sensors on all parts of the wolfy that provide data to the AI on where the wolfy is being touched, and how hard the touch is. This includes data on the wolfy's own touches of the world around it, as well as UI of the user touching, petting, or holding any part of the wolfy. The RCS also senses the position and velocity and direction of movement of each part of the wolfy, both for the wolfy's own movements and for those where the user is moving the wolfy's limbs, body, head, or tail. The RCS also has image sensors in the eyes, and microphones in the ears, that feed their data in a stream to the AI.

The AI needs to read those inputs, and learn to correctly respond to different types of input. For one example, a touch that is above a certain high threshold, with a velocity change that is a sudden drop or spike, indicates an impact. If the wolfy was moving a limb and the limb stops moving and this kind of detection occurs, it means the wolfy's limb hit something and stopped, so it should learn not to do that, especially in situations where it was moving the paw toward the user and the impact is on the side facing the user; it means it hit the user too hard. It should also learn that this requires a bashful response, and react in a contrite way with its movements and sounds. If the wolfy was moving the paw toward the user and the impact was on the top side of the paw, or there is a sustained push detected over surface areas facing the user, it means the user is providing the wolfy negative feedback, slapping the paw away or pushing the whole wolfy away, so that the wolfy won't injure the user. If the microphones detect the user saying "no" then that tells the wolfy the same thing. If the microphones detect the user saying "sit" then the wolfy needs to perform all the actions required to position itself in a sitting position. There are many more behavioral examples, but they should be obvious from how a dog or a wolf actually behaves.

The behavioral AI will need to have the basic behaviors already embedded in it, and learn when and how to apply them from user interactions like the above. It will also need to learn new behaviors from user interactions. For instance, it needs to be able to visually follow user movements, so that it can see the user is pointing to a spot, and then listen to the user sounds so that it can hear the command the user is giving, and also respond to user touches to move toward or away from the touch depending on the user input and the situation.

There will also be a Programming Mode the user can tell the wolfy AI to enter, where the user can position the wolfy, or move it in whatever way is needed, while telling the wolfy that this is a new behavior. That behavior is then recorded as a new behavior that will occur whenever needed or called for by the user. There will be a second part of the Programming Mode where the user can modify existing behaviors. The user's command will call up a behavior, and the wolfy will play it back, and the user then manipulates the wolfy or tells it to do something differently or visually indicates some change, and the wolfy modifies the behavior until the user says it matches the desired change, and saves the behavior as a new version of that behavior. The same process can be used to create new behaviors from old ones.

Other parts of the wolfy behaviors include going to sleep like a dog whenever the user isn't interacting with it for some amount of time. This sleep mode will include typical dog sleep behaviors, like puppy dreaming, snoring, etc. When it wakes it will be sleepy and slow to respond to behaviors for a short but variable time. All of the behaviors and responses will be varied each time they are used, so that it appears more alive to the user.
Deepseek said:
Creating a complete behavioral-learning AI for a robotic wolf like "Wolfy" is a complex and multidisciplinary task that involves robotics, machine learning, sensor integration, and real-time control systems. Below, I’ll outline a high-level architecture and provide a Python-based framework to get you started. This framework will include basic behaviors, learning mechanisms, and integration with the RCS (Robot Control System).
High-Level Architecture

Sensors and Inputs:
- IMU sensors (touch, position, velocity, direction).
- Image sensors (vision).
- Microphones (audio input).
- User inputs (commands, gestures, etc.).

Outputs:
- Motor controls (movement of limbs, head, tail, etc.).
- Sound generation (barks, whines, growls, etc.).

Behavioral AI:
- Predefined Behaviors: Sitting, sleeping, following, etc.
- Learning Mechanism: Reinforcement learning, supervised learning, and imitation learning.
- Behavior Modification: Programming Mode for creating and editing behaviors.

Communication:
- Real-time data exchange between the AI and RCS.

Python Framework

Below is a simplified Python framework to implement the behavioral-learning AI. This framework assumes you have access to the RCS APIs for sensor data and motor control.

Code:
import time
import random
import numpy as np
from sklearn.linear_model import SGDClassifier  # For simple learning tasks

# Constants
TOUCH_THRESHOLD = 5.0  # Example threshold for touch sensitivity
SLEEP_TIMEOUT = 300  # Time in seconds before Wolfy goes to sleep

# Predefined Behaviors
class Behaviors:
    @staticmethod
    def sit():
        print("Wolfy is sitting.")
        # Send motor commands to RCS to position Wolfy in a sitting pose

    @staticmethod
    def sleep():
        print("Wolfy is sleeping.")
        # Send motor commands to RCS for sleeping pose
        while True:
            print("Zzz...")
            time.sleep(2)
            if random.random() < 0.1:  # Simulate dreaming
                print("Wolfy is dreaming...")
                # Twitch motors slightly

    @staticmethod
    def follow_user():
        print("Wolfy is following the user.")
        # Use vision and audio inputs to track and follow the user

# Behavioral AI Class
class WolfyAI:
    def __init__(self):
        self.behaviors = Behaviors()
        self.last_interaction_time = time.time()
        self.is_sleeping = False
        self.learning_model = SGDClassifier()  # Simple model for learning

    def process_sensor_data(self, touch_data, imu_data, audio_data, vision_data):
        if self.is_sleeping:
            return

        # Check for user interaction timeout
        if time.time() - self.last_interaction_time > SLEEP_TIMEOUT:
            self.behaviors.sleep()
            self.is_sleeping = True
            return

        # Example: Detect touch impact
        if touch_data['force'] > TOUCH_THRESHOLD and abs(touch_data['velocity']) > 1.0:
            print("Wolfy detected an impact!")
            self.respond_to_impact(touch_data)

        # Example: Process audio commands
        if "sit" in audio_data:
            self.behaviors.sit()
        elif "no" in audio_data:
            print("Wolfy understands 'no' and stops the current behavior.")

        # Example: Follow user if they move
        if vision_data['user_moving']:
            self.behaviors.follow_user()

    def respond_to_impact(self, touch_data):
        if touch_data['location'] == 'paw_top':
            print("Wolfy understands it was pushed away.")
            # Adjust behavior to avoid repeating the action
        elif touch_data['location'] == 'paw_side':
            print("Wolfy apologizes for hitting the user.")
            # Perform contrite movements and sounds

    def enter_programming_mode(self):
        print("Entering Programming Mode...")
        # Allow user to create or modify behaviors
        new_behavior = self.record_behavior()
        self.save_behavior(new_behavior)

    def record_behavior(self):
        print("Recording new behavior...")
        # Capture motor commands and sensor data during user manipulation
        behavior = {}  # Store the behavior data
        return behavior

    def save_behavior(self, behavior):
        print("Saving behavior...")
        # Save the behavior to a database or file

# Mock RCS Interface
class RCS:
    def get_sensor_data(self):
        # Simulate sensor data
        return {
            'touch': {'force': random.uniform(0, 10), 'velocity': random.uniform(-2, 2), 'location': 'paw_top'},
            'imu': {'position': [0, 0, 0], 'velocity': [0, 0, 0]},
            'audio': "sit",
            'vision': {'user_moving': True}
        }

    def send_motor_commands(self, commands):
        # Send commands to motors
        pass

# Main Loop
if __name__ == "__main__":
    wolfy_ai = WolfyAI()
    rcs = RCS()

    while True:
        sensor_data = rcs.get_sensor_data()
        wolfy_ai.process_sensor_data(**sensor_data)
        time.sleep(1)  # Simulate real-time processing


Deepseek said:
Key Features

- Predefined Behaviors: Basic behaviors like sitting, sleeping, and following are implemented.
- Sensor Data Processing: The AI processes touch, audio, and vision data to respond appropriately.
- Learning Mechanism: A simple learning model (SGDClassifier) is included for basic tasks. You can replace this with more advanced models like deep reinforcement learning (e.g., using TensorFlow or PyTorch).
- Programming Mode: Allows the user to create and modify behaviors.

Next Steps

- Integrate with RCS: Replace the mock RCS interface with actual APIs for sensor data and motor control.
- Expand Learning: Implement more advanced learning algorithms (e.g., deep Q-learning for reinforcement learning).
- Add More Behaviors: Implement additional behaviors like barking, wagging the tail, etc.
- Optimize Real-Time Performance: Ensure the AI can process data and control Wolfy in real-time.
This framework is a starting point and will need to be expanded and refined based on the specific requirements of your project. Let me know if you need further assistance!
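One difference I can spot even with my limited understanding: my description said an impact is a hard touch together with a sudden drop or spike in velocity, but the code above only checks abs(velocity) > 1.0 on a single sample. Something like this is closer to what I was picturing (a rough sketch only, with made-up field names and thresholds, probably wrong in the details):

Code:
# Sketch of the impact rule as described in the prompt: a hard touch combined with a
# sudden change in the limb's velocity between two consecutive samples.
# Thresholds and field names are made up for illustration.
TOUCH_THRESHOLD = 5.0          # "hard" touch reading
VELOCITY_JUMP_THRESHOLD = 1.5  # sudden drop or spike between samples

def is_impact(prev_sample, cur_sample):
    """prev_sample/cur_sample are dicts like {'force': 6.2, 'velocity': 0.1}."""
    hard_touch = cur_sample['force'] > TOUCH_THRESHOLD
    sudden_change = abs(cur_sample['velocity'] - prev_sample['velocity']) > VELOCITY_JUMP_THRESHOLD
    return hard_touch and sudden_change

def classify_impact(cur_sample, limb_was_moving_toward_user):
    """Decide which response the impact calls for, per the behavior rules above."""
    if limb_was_moving_toward_user and cur_sample.get('location') == 'paw_side_facing_user':
        return 'contrite'           # wolfy hit the user too hard
    if cur_sample.get('location') == 'paw_top':
        return 'stop_and_back_off'  # user slapped the paw away (negative feedback)
    return 'startled'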
 
I like to tackle unfamiliar problems from both high and low levels alternately. Here's some IMU code for Pi Pico:

import machine
from mpu6050 import MPU6050 # the MPU6050 helper class from the tutorial (save it on the Pico as mpu6050.py)
i2c = machine.I2C(0, sda=machine.Pin(12), scl=machine.Pin(13)) # creating the object that allows for I2C communication in MicroPython
imu = MPU6050(i2c) # passing the i2c object to the MPU6050 class. This class will handle all communications
imu.wake() # wakes up the MPU-6050 (it may have been in sleep mode)
gyro_data = imu.read_gyro_data()
print(gyro_data) # e.g. (0.346823, -0.198345, 0.023958)

Taken from Development Tutorial section 3 of:

GitHub - TimHanewich/scout: Python-based Quadcopter Flight Controller Software using a Raspberry Pi Pico, MPU-6050, and a FlySky radio transmitter & receiver

with a step by step build guide here:

How to use an MPU-6050 with a Raspberry Pi Pico using MicroPython | by Tim Hanewich | Medium
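If you end up with several of these on one bus: each MPU-6050 sits at I2C address 0x68 or 0x69 depending on its AD0 pin, and an I2C multiplexer (e.g. a TCA9548A) gets you more than two. Here's a rough sketch of polling a few of them raw, without the tutorial's MPU6050 class, using register addresses from the datasheet. Treat it as a starting point, not tested code.

Code:
# Rough sketch: poll several MPU-6050s on one I2C bus and collect their raw readings.
# Register addresses are from the MPU-6050 datasheet; scale factors assume the
# default ranges (+/-2 g accel, +/-250 deg/s gyro). Untested, for illustration only.
import struct
import machine

PWR_MGMT_1 = 0x6B      # power management register (write 0 to wake the chip)
ACCEL_XOUT_H = 0x3B    # start of accel(6) + temp(2) + gyro(6) = 14 bytes

i2c = machine.I2C(0, sda=machine.Pin(12), scl=machine.Pin(13))
IMU_ADDRESSES = [0x68, 0x69]   # one chip with AD0 low, one with AD0 high

def wake_all():
    for addr in IMU_ADDRESSES:
        i2c.writeto_mem(addr, PWR_MGMT_1, b'\x00')

def read_imu(addr):
    raw = i2c.readfrom_mem(addr, ACCEL_XOUT_H, 14)
    ax, ay, az, temp, gx, gy, gz = struct.unpack('>7h', raw)
    return {
        'accel_g': (ax / 16384, ay / 16384, az / 16384),
        'gyro_dps': (gx / 131, gy / 131, gz / 131),
    }

wake_all()
for addr in IMU_ADDRESSES:
    print(hex(addr), read_imu(addr))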
 
I appreciate the thought, but I already have code-stuff to individually read IMUs; I even built a little breadboarded test unit on a previous page that read the IMU 3-axis angles and moved servos to match them, following a gimbal camera-mount tutorial.

What I need is a program that will do *all* the functions I've previously described with the data gathering and processing of all the IMU outputs. ;) I have some concept of what's required, as a kind of block diagram sort of thing, but digging into coding to implement it keeps exploding my brain; there's just too many things I don't know or "understand" yet. (I've previously mentioned my brain problems in being unable to learn this stuff linearly and the gaps that leaves me with)

Ideally it would be something that uses the MPU6050's existing onboard motion processor (the DMP) to preprocess data to help this along, but I also don't understand the documentation they provide for it (such as it is -- it seems to be pretty minimal, even insufficient, to use its features). It's probably me and not the docs, though.

My usage for the data is so different from the stuff I find out there that I don't yet know how to adapt the code that is out there to what I'm doing. I'm sure eventually I'll figure it out, probably once the AI stuff can be used to teach me the process, the same way you can teach a really stupid dog to eventually stop peeing on its own head.


ATM I've got some heavy-duty cold (for days now) that I can't think through, so I'm probably not seeing / saying something right here, or misunderstanding what your post was about, etc. I could probably manage that last part anyway, without the cold, but it isn't helping at all.
 
Sorry I am all grumpy... mostly I'm exhausted and feel awful, been that way for days, when I took the week off so I could work on stuff around here and couldn't do much of anything but lie here most of the time. :(
 
No sweat, hope you feel better soon. It sounds as though you've enough h/w built to check how much noise and drift you get from your IMU and how rapidly it lets you re-read it. Those are or were IMU weaknesses so this might be worthwhile just to make sure it performs well enough for you to build up from there in the way you intend to.

If you dry run the Deepseek code with pen and paper, is it along the right lines? When you list what Deepseek left out (compared to your prompt), if there's anything fundamental (rather than enhancements), you could try feeding the code back into Deepseek to improve it stepwise.
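For the noise/drift check, something as simple as this would do. It assumes the same MPU6050 helper class as the earlier snippet; it just sits still, samples the gyro for a while, and prints the average (bias), spread (noise) and achieved sample rate. Rough numbers are all you need.

Code:
# Quick-and-dirty IMU noise/drift/sample-rate check. Leave the sensor perfectly still.
# Assumes the same MPU6050 helper class as the earlier snippet (saved as mpu6050.py).
import time
import machine
from mpu6050 import MPU6050

i2c = machine.I2C(0, sda=machine.Pin(12), scl=machine.Pin(13))
imu = MPU6050(i2c)
imu.wake()

N = 500
samples = []
t0 = time.ticks_ms()
for _ in range(N):
    samples.append(imu.read_gyro_data())   # (x, y, z) readings
elapsed_s = time.ticks_diff(time.ticks_ms(), t0) / 1000

for axis in range(3):
    vals = [s[axis] for s in samples]
    mean = sum(vals) / N              # bias: how far from zero it sits while still
    spread = max(vals) - min(vals)    # noise: how much it wobbles around that bias
    print("axis", axis, "bias", mean, "peak-to-peak noise", spread)
print("samples per second:", N / elapsed_s)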
 
Apologies for the poor quality of the responses below, especially the typing; I'm not presently able to go back and fix all the problems I made typing it up. :(

It sounds as though you've enough h/w built

Not sure I'd call it "built" ;) but it's on a breadboard.

to check how much noise and drift you get from your IMU and how rapidly it lets you re-read it. Those are or were IMU weaknesses so this might be worthwhile just to make sure it performs well enough for you to build up from there in the way you intend to.

I don't know. I expect there will be drift on each separate IMU, and it'll probably need a separate board that has the other 3 axes (a magnetometer), as these are only 6-axis, not 9, so they can't compensate for it on their own. I've read of various techniques that use the separate board to recalibrate for drift over time. That's one of the parts of the wake-from-sleep routine for the whole wolfy, where it will "stretch" and figure out where all its parts are relative to each other and magnetic north, etc.
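Conceptually the wake-up recalibration is something like this (just the shape of the idea, not working code; it assumes the wolfy holds still during the "stretch" and that there's a magnetometer somewhere on the body to read):

Code:
# Shape of the wake-from-sleep "stretch" recalibration idea, not working code.
# While the wolfy holds still, average each gyro's output to estimate its drift (bias),
# and take a magnetometer reading as the shared "magnetic north" reference.
import time

def estimate_gyro_bias(read_gyro, samples=200, period_s=0.01):
    """Average the gyro while stationary; whatever it reads then is pure drift."""
    totals = [0.0, 0.0, 0.0]
    for _ in range(samples):
        x, y, z = read_gyro()
        totals[0] += x; totals[1] += y; totals[2] += z
        time.sleep(period_s)
    return [t / samples for t in totals]

def wake_up_stretch(limb_gyros, read_magnetometer):
    """limb_gyros: dict of name -> read_gyro function for each IMU on the body."""
    biases = {name: estimate_gyro_bias(read) for name, read in limb_gyros.items()}
    heading_reference = read_magnetometer()   # shared north reference for all parts
    return biases, heading_reference

# Later, every gyro reading gets the stored bias subtracted before it's used,
# and headings get compared against the stored magnetometer reference.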


If you dry run the Deepseek code with pen and paper, is it along the right lines?

I don't understand what this means. Could you explain?

Even if I could read my writing :oops: :/ it would take forever to write it all out on paper with a pen*** and I wouldn't get anything out of it that I don't get from seeing it on screen, where it's actually readable.

Even if I wrote it out on paper, I wouldn't know what it is doing any better than I would any other way.


***If I used a pen I couldn't correct mistakes, and I make lots and lots of those, with wrong letters and shapes that aren't the right letters, or letters that end up on top of each other instead of in sequence, etc. I'd have to use a pencil so I could erase all those once I went through it enough times to see all the mistakes, looking back and forth from screen to paper over and over.


When you list what Deepseek left out (compared to your prompt), if there's anything fundamental (rather than enhancements), you could try feeding the code back into Deepseek to improve it stepwise.
I have no way of knowing whether the code is right or not yet; I haven't learned enough to find out. If I knew enough coding to know whether it was right or wrong, I'd be able to write it myself. :oops:

To test the code I have to build hardware that it can control, and then run it to see what happens. But that's still being worked out. I need to build the test skeleton, then install basic servos for one limb, along with IMUs on that limb and the main body frame at least, so it has something to relate to position-wise. Then I might be able to start testing the AI-generated code, and then maybe start learning something from the process of figuring out what does and doesn't work.
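Roughly the kind of loop the earlier breadboard gimbal test ran, and what the single-limb test would start from: read something from one IMU, move one servo to match. Pin numbers and the angle part are placeholders; a real version would filter/integrate the IMU data into an actual joint angle the way the gimbal tutorial did.

Code:
# Roughly the single-limb starting point: read one IMU, move one servo to match.
# Pin numbers are placeholders; a real version would filter/integrate the IMU data
# into an actual joint angle, as the gimbal tutorial does.
import time
import machine
from mpu6050 import MPU6050

i2c = machine.I2C(0, sda=machine.Pin(12), scl=machine.Pin(13))
imu = MPU6050(i2c)
imu.wake()

servo = machine.PWM(machine.Pin(16))
servo.freq(50)  # standard 50 Hz hobby-servo frame

def angle_to_duty(angle_deg):
    # Map 0..180 degrees to roughly a 1.0..2.0 ms pulse (duty_u16 is 0..65535 over 20 ms).
    pulse_us = 1000 + (angle_deg / 180) * 1000
    return int(pulse_us * 65535 / 20000)

while True:
    gx, gy, gz = imu.read_gyro_data()
    # Placeholder "angle": just clamps the raw x-axis reading around mid-travel so the
    # servo visibly follows movement; a real version computes the limb angle properly.
    angle = max(0, min(180, 90 + gx))
    servo.duty_u16(angle_to_duty(angle))
    time.sleep(0.02)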


*****************

It's not really right, but you could think of the way I have to do things like this, where I don't already know all the details of exactly how to do something, as a jigsaw puzzle that's been poured out onto the table and the floor and the next room or two, and I just put together whichever pieces I already have available whenever they fit. The puzzle is all there in my head, just that some of the pieces are still blank, because I don't know exactly what is supposed to go there, although I have the shape of it. I doubt that helps any, but... it's all I can think of ATM.


If I had a partner to work with consistently whenever I could work on it, who already knew how all these things worked, I could learn what I need from doing that. Right now that will probably have to be the AI, because it won't get upset by my weird brain that can't figure actual people out, or sometimes can't deal with how they act towards me, etc. (This is why I need the wolfy in the first place: there are no people that would ever want to be, or be able to be, what I need, and all of them "abandon" me sooner or later; dogs do accept me and want me around, but I won't always be able to have a real one; someday I just won't be able to handle losing another one.)

I'm sure all this stuff makes me out to be a flaked-out weirdo, but... I guess I probably am.
 
That's what I meant by alternating: you've done a lot on the concept and functioning, and the jigsaw is that top-down phase asking for a break. H/w is real-world and finicky, so prototyping can often be better tackled bottom-up, starting at the sensors, but if you can build a testable part of the skeleton then go for it. Alternating helps me to modularise a task that's taking shape and firm up the interfaces, and tends to expose 'minor' functionality that turns out to be daunting and needs working around.

https://google.com/?q=dry+run+a+program
Not only will you evolve Deepseek's code, but the more you skim or youtube code in an unfamiliar language the more you ingest how it works, making the language tutorials easier. There are things about Python I'm not keen on but it's the new English for what you're doing. Hang in there with the AI, it gained 2 IQ points while I wrote this ;)
 
Ok, that makes sense, I think. Not sure how much is the explanation and how much is my brain not being starved... my O2 sats on the little finger monitor are finally staying above the beeping-alarm range, and I don't have all the pretty sparkles coming and going in waves, so either it's better or I finally burned out all the weak braincells. :p

Still sometimes choking (well, gagging) on the crap the coughing is flinging out of my lungs...
 