Robotics Project: Snuggles, The Wolf

what does slow mean in your case?


i don't know how fast the imus can internally process data to output processed results, but they output the current raw data in realtime. there's probably some microseconds of delay, but most likely not enough to worry about. processing the data at the main system will almost certainly take much longer than generating it does.

some of the stuff i've read about the various imu boards/chips that have internal processing ability indicates up to 100ms to output processed data, which is too slow for realtime positioning feedback, but in those cases i would just feed the raw datastream to the local mpus that are doing the motor control.

that's part of the reason for distributed processing with tiny mpus; each one can take care of just one thing, so it doesn't have one big long program handling lots of tasks, just one small subset of tightly integrated tasks that depend on the same dataset / feedback / command process, like running just one motor or just one limb, etc.

if the main processor that has to make all the behavioral decisions also had to process all the individual data and motor control in realtime, it would probably not be able to do it reliably or at all, or the program's interrupt structure might be very complex and hard to code and troubleshoot. but if all the little stuff is handled locally by separate mpus, they don't have to be super powerful to deal with just one subset of sensor data and one or two motors. they just get told to do a particular thing by the main mpu and then do that. overall the code might end up more complex, as in more separate programs running, but each one should be simpler to create and troubleshoot as it's designed. more on this in a separate post
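to make that split concrete, here's a rough sketch of the main-mpu-to-limb-mpu division of labor. all the names and message fields here (LimbCommand, LimbController, etc.) are made up for illustration, not anything decided for the project:

```python
# Sketch: main MPU sends small high-level commands; each limb MPU only
# handles its own limb. Names and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LimbCommand:
    limb_id: int        # which limb this command targets
    action: str         # e.g. "extend", "retract", "hold"
    target: tuple       # desired paw position (x, y, z), arbitrary units
    mode: str = "calm"  # behavior mode chosen by the main MPU

class LimbController:
    """Runs on one small MPU; knows nothing beyond its own limb."""
    def __init__(self, limb_id):
        self.limb_id = limb_id
        self.current_action = "hold"

    def handle(self, cmd: LimbCommand):
        # Ignore commands meant for other limbs
        if cmd.limb_id != self.limb_id:
            return False
        self.current_action = cmd.action
        # ...the local loop would read its IMUs and drive motors here...
        return True

ctrl = LimbController(limb_id=2)
accepted = ctrl.handle(LimbCommand(limb_id=2, action="extend", target=(10, 0, 5)))
```

the point of the sketch is that the command is tiny and the heavy lifting (imu reads, motor loops) stays local, so the main mpu never blocks on it.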



either way, the imus are necessary--remember that in my proprioception network (post on the first page describing its basics) the imus do multiple things

their first and most important dataset is the accelerometer response to vibrations, impacts, etc; all the physical input is detected this way, because normal touch and force sensors can't work correctly (or at all) under the fur

their second dataset is position and movement sensing; some of the first dataset gets used to determine rate of change of movement as well.
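one way to pull those two datasets out of a single accelerometer stream is a simple low-pass split: the slow component tracks posture/gravity, and the residual carries vibrations and impacts. this is just an illustrative sketch with a made-up smoothing factor, not the project's actual filter:

```python
# Illustrative split of one accelerometer axis into the two datasets above:
# a slow (low-pass) component for posture, a fast residual for touch/impacts.
ALPHA = 0.05  # low-pass smoothing factor (0..1), made-up example value

def split_stream(samples):
    low = samples[0]
    slow, fast = [], []
    for s in samples:
        low = low + ALPHA * (s - low)   # exponential moving average
        slow.append(low)                # posture / gravity estimate
        fast.append(s - low)            # vibration / impact residual
    return slow, fast

# A steady 1 g reading with one sharp tap in the middle:
data = [1.0] * 10 + [3.0] + [1.0] * 10
slow, fast = split_stream(data)
```

the tap shows up almost entirely in the fast channel while the slow channel barely moves, which is exactly the separation the proprioception network needs.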
 
Offload some processing to a high powered PC via ethernet? You could build an ethernet cable into the robot dog's tail? I don't want to imagine where the logical place for the ethernet port would be.
 

no reason to use the tail for that; the tail has to be a tail and do all the things a tail does, or else it's not dog- or wolf-like and doesn't do what's required.

as previously described in the thread, all the motors, etc., will be external, and so will the processing, except for those things like the imus that have to be physically embedded in the wolfybot. so there will already be an umbilical (a rather large one), but it won't be the tail. it will probably come out of the "butt" under the tail.

if there was any way to actually put all this stuff inside it along with a big enough power source, and have it all be quiet, and not melt or catch fire, that'd be great, but i haven't figured out any way to do that. remember that it's a wolf (or dog), covered in fur, with foam or other padding under that to make it snuggly and squishable, so it will be very well thermally insulated, which is highly likely to cause failure of all the things that generate heat, like motors and cpus and stuff. the power source would pretty much have to plug into the wall to run all this stuff (guessing at least a thousand watts overall at full load, probably a lot less when laying still "asleep"); a battery that would fit inside along with all the stuff wouldn't be enough to run it for even an hour at a time.

even though the actual size of the wolfy is pretty huge (real life size), the only place to fit anything is the ribcage area, which is only a small volume, perhaps half that of a tidycats litter bucket, or perhaps 2 loaves of bread; the ribcage itself has to be flexible like a real one in order to move correctly, so there's not nearly as much space inside as it sounds. the skull space is already going to have the mechanics for moving the ears and eyes and jaw and any other face parts i can manage to make work and still be squishable without breaking.

the limbs have no space for anything, as the pull cables for the motors to move them run thru them. if the cables have to run outside the hollow tube bones then there will be space inside those if they're metal, but ideally i'd like to print the bones to shape them similarly to real ones for the more realistic movement ability and feel under the padding.

the belly needs to be squishable and flexible like a belly, and can't really have big solid pieces in it even if some stuff does go in there, because the body has to be able to curl up and flex around realistically, which it can't do if the belly has a big solid mass in it that gets in the way.

originally i was planning to run the main code on a spare old laptop or two or old desktop/towers i have hardware to build up, but now i've got some donated old hp rack servers that are much more powerful with way more ram that can be used for that. probably only take one to do everything, even if i have some form of ai for behavioral learning/response.


the umbilical will go to a box (probably rather large) that's soundproofed as well as possible given that it has to be cooled, that will hold all the motors and computing / processing. the cooling will be ducted past/thru anything that needs it, to minimize the number of fans that need to be used, and those fans will all be as large and slow as possible to give sufficient airflow with minimal noise. (done this before for recording-studio-type workstations, works great).
 
Reduce the robot dog's internal heat and external noise by offloading some processing to a high powered soundproofed PC via WiFi? Some WiFi spec or other might be fast enough perhaps? I've never hugged a dog, but I guess hugging a cold one could be unsatisfying, might even be concerning and unsettling. I don't want to imagine where the logical place for the hot exhaust gases to exit the robot dog might be, not to mention how it could sound.
 
Reduce the robot dog's internal heat and external noise by offloading some processing to a high powered soundproofed PC via WiFi? Some Wifi spec might be fast enough perhaps.


the noise is mostly the motors; as already stated, those and all the processing would be in an external box.



all the servomotors have a lot of gearing so they can be small, so the motors run really fast (in short bursts), and the gearing and motor make quite a lot of whining sounds. if you soundproof those, the motor and/or its electronics get hot and fail over time; if the gearing is plastic it gets soft and strips. so if soundproofed, some other form of cooling is needed (there are a number of possible ways, but they all increase complexity, and the risk of failure grows dramatically--some of the failure modes of some of the systems could cause catastrophic damage to the wolfy itself, requiring disassembly and replacement of padding/covering).

the big spine/hip/shoulder/neck motors are likely to be steppers instead of servos, as these can be easier to use to hold a position with less stress on the motor/driver, especially if using a gearbox (which i might have to do to keep the motor and driver size down). these also get hot, and are not all that quiet either (and any gearbox makes them louder).

i don't want to hear robotic sounds inside the wolfy, so....external soundproofed box for everything that gets hot or makes noise.



wifi might be fast enough, but there's not likely a need for it; the majority of stuff already interconnects via i2s or spi or other serial buses, so it has to be wired up via the umbilical--unless all that data is internally wired to a central mpu that then concatenates it into a stream that can go via some other connection/protocol to the external processing.

somewhere else in the thread there should be some mention of the possibility of using the wifi that's on many of the esp32 mpus i have to do the interconnect to the external processing.

the problem with having a lot of layers of comm protocols is that you can get comm failures or data loss at any of them, so the more layers there are, the harder it is to troubleshoot and fix.
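one cheap way to at least detect corruption at each layer boundary is to frame every payload with a length byte and a checksum before it crosses a link, so a bad hop fails loudly instead of silently. the frame layout below is invented for illustration, not a project spec:

```python
# Sketch: length-prefixed frame with a 1-byte additive checksum, so a
# receiver can tell a corrupted frame from a good one. Layout is made up.
def frame(payload: bytes) -> bytes:
    checksum = sum(payload) & 0xFF
    return bytes([len(payload)]) + payload + bytes([checksum])

def unframe(frame_bytes):
    """Return the payload, or None if the frame is corrupt."""
    if len(frame_bytes) < 2:
        return None
    length = frame_bytes[0]
    if len(frame_bytes) != length + 2:
        return None           # truncated or over-long frame
    payload, checksum = frame_bytes[1:-1], frame_bytes[-1]
    if (sum(payload) & 0xFF) != checksum:
        return None           # bits got flipped somewhere in transit
    return bytes(payload)

good = frame(b"imu:127,3,9")
corrupt = bytearray(good)
corrupt[3] ^= 0xFF  # simulate noise flipping bits in one payload byte
```

a real design would likely use a proper CRC instead of an additive checksum, but the idea is the same: each layer can verify its own frames, which makes the "which layer lost the data" troubleshooting much easier.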


I don't want to imagine where the logical place for the hot exhaust gases to exit the robot dog might be, or how it could sound.

except for breathing or panting, moving air in and out the mouth and nose, which there isn't a practical way to actually do with any really significant air movement, there's nowhere to vent from, or circulate air thru.

i worked out a bunch of possible ventilation systems about 30 years ago when i wanted to build a realistic-wolf-shaped pc case that would have been covered in fake fur and have a few wolf behaviors, though it would only have sat upright and not moved around, except for a head that would pivot around to "watch" things and a jaw that would open and close with a tongue that could move in/out to simulate panting when the computer was hot. ;) most of the systems would have had to pull air thru the fur on the sides of the case and push it out the open mouth. some of them would have had belly slits to do it; one had a raised base that the wolf sat on that would have the vent slits in it.

i never built the case, partly because cooling was never sufficient in the realistic-looking ways i preferred, and partly because it was too expensive for the robotics parts needed to do the head movements and stuff. (i hadn't yet learned of all the alternate parts sources i know about today, salvaging stuff for free out of old printers, etc).


unfortunately any notes i had locally stored about that project were lost in the housefire and/or the subsequent cleanup. i had posted about it along with pics and sketches on an old version of the themanime forums in some discussion about interesting computer cases, but that version was hacked and destroyed by spambots or whatever, and completely lost as they didn't have any backups, and the wayback machine doesn't have copies either. :( so all i have left of that is whatever memories haven't fallen out of my head.
 
i don't want to hear robotic sounds inside the wolfy, so....external soundproofed box for everything that gets hot or makes noise.
How precise does movement have to be for a prone huggable wolfy? Use hydraulics for movement via cogs and gears instead of whiny servos, powered by external soundproofed motors? Cog and gear noises should be easier to dampen. If the resulting wolfy is too cold to hug, use hot hydraulic oil? The pipes could go up through the bed and mattress.
 
you'll have to trust me that i have already thought out the stuff to do this with.

i don't need help redesigning it.

i need help implementing the design ideas i already have.


hydraulics (anything using liquids) leak.


if they leak into the padding and body i have to throw the whole thing away and start over, and it is a massive amount of work to do all that, and to make it look the way i want it to.

plus that would be traumatizing, as if it had died.


heat for realism is mostly not necessary--fur (fake or real) reflects body heat well enough that unlike plastics and whatnot it doesn't feel cold to the touch in normal room temperatures.

for any parts that might need heat, like cast silicone paw pads, noses, maybe ears/face, maybe belly, it's very simple to add flexible wire heaters into those in short paralleled segments with thermal regulation controls (so that if one segment fails the rest still work). already done that on the non-robotic versions experimentally over the years.
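the arithmetic behind the paralleled short segments is simple: each segment sees the full supply voltage, so one segment failing open only removes its own share of the heat. the voltage and resistance numbers below are made-up examples, not measured values from my heaters:

```python
# Rough arithmetic for paralleled heater segments: with N identical segments
# in parallel, one failing open only drops the total power by one share.
# V and R_SEGMENT are example values, not project measurements.
V = 12.0          # supply voltage (example)
R_SEGMENT = 24.0  # resistance of one wire segment in ohms (example)

def total_power(working_segments: int) -> float:
    # Each parallel segment sees the full supply voltage: P = V^2 / R
    return working_segments * (V ** 2) / R_SEGMENT

all_four = total_power(4)    # all segments working
one_failed = total_power(3)  # one segment open: only its share is lost
```

with these example numbers that's 24 W with all four segments and 18 W with one failed, so the paw stays warm (just a bit less so) until the segment gets repaired.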
 
fwiw, while i appreciate the alternative points of view, and the thought that went into them, if i have to explain every alternative that's already been explored and why it can't be used, this thread is going to be a few thousand pages long, and take several years to write (or longer, possibly as long as it has taken me to come up with, research, ponder, and design them all). all that time would be wasted instead of being spent on actually getting a working system.


so i am writing up the things i already "know" will (should) work, and the requirements for what the system and/or end-unit has to do or be able to do.


so for those that want to help, the best thing they can do is to help me implement the systems already devised.
 
thoughts on an example limb processor; first the limb has to be defined


define x / y / z axes overall as

x = side to side, or yaw

y = up and down or pitch

z = front to back, or forward / reverse


so any motor that moves things along the length of the wolfy is z axis, if it moves it to the side it's x axis, and if it moves it up or down it's y axis.


there are five motors per limb.

at the shoulder / hip there is both a z and x motor that rotate a heim joint (in place of a full ball joint, to give motion limits without the skeletal muscles and tendons that normally restrict it).

the elbow only has a z motor; it only pivots in one way.

the "wrist"/ankle also has a z and x motor, but the x motor doesn't exactly move those side to side; instead it twists the radius/ulna / tibia/fibula at the paw end to change the angle at which the z motor does its work, similar to the way that the skeletal muscles would.

this allows it to move the paws (especially the front ones) for all the things like gripping / grabbing / hugging they do, and the motions necessary to balance when sitting up, standing, getting up from a side-laying position, etc.

also for walking if i ever manage to do that (doubtful, not a requirement for this wolfybot since it's just meant to stay right there and be with me (or other users that choose to build one or have one built for them) and be cute and otherwise emotionally supportive and destressing).


in an ideal world each toe would also have a z motor, but i don't think i can fit that many cables thru the limb paths, and that would about double the umbilical size too. so most likely the toe mechanics would simply be built to move with the wrist/ankle z motor movements, but with a slacking/spring mechanism that allows the toes to spread under load when pressed against a surface even when the wrist/ankle motor is commanding contraction, or curl up if not against a surface. this is likely to take a number of design-and-build iterations to get working as desired, but the first simple implementation will probably just be looser joints in the toes (ovoid holes for the pins instead of round) and elastic bands along the ventral surfaces of the toe bones (or just the silicone of the toes molded in the contracted position).



there would be an imu for positioning / feedback attached to each of the skeletal segments of the limb: one on the shoulder-plate / pelvis, one on the humerus/femur at the elbow end, one on the tibia/fibula / radius/ulna pair (shouldn't need one on each bone) at the paw end, and one on the ventral surface of the paw "bones" under the main "pad".

there will be additional imus as needed for vibration/touch detection, if what's available from the ones above is insufficient for the required sensitivity.


a single mpu can probably handle the input from all the positioning imus and process it along with the motor command processing, but separating those functions can be done if needed. it probably isn't necessary to subdivide it any further per limb.

the limb mpu would get a command from the main behavioral mpu (mbm) that tells it to extend forward, retract backward, stay, move laterally, etc., to place the paw in a specific spatial location, and the limb mpu would use the imu data it has, plus the full-body-coordinates the main sensory mpu (msm) keeps track of (from all the local data vs a power-up calibration set), and do the math to move each motor the right way to get the paw where it should be.
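"do the math to move each motor the right way" for the z motors is basically 2-link inverse kinematics in the z-y (front-back / up-down) plane as defined above. here's a minimal sketch using the law of cosines; the link lengths are placeholders, and the real limb would add the x-motor twist and joint limits on top of this:

```python
# Minimal 2-link planar IK sketch for the sagittal (z-y) plane: given a paw
# target, solve shoulder and elbow z-motor angles. Link lengths are
# placeholder example values, not the wolfy's real dimensions.
import math

L1, L2 = 30.0, 25.0   # humerus/femur and forearm/shin lengths (example cm)

def solve_limb(z, y):
    """Return (shoulder_angle, elbow_angle) in radians, or None if unreachable."""
    d2 = z * z + y * y
    d = math.sqrt(d2)
    if d > L1 + L2 or d < abs(L1 - L2):
        return None  # paw target outside the limb's reach
    # Law of cosines gives the elbow bend, then the shoulder angle follows
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(y, z) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

angles = solve_limb(40.0, 10.0)  # paw 40 forward, 10 up from the shoulder
```

the limb mpu would run something like this against the target it's given, then drive each motor toward the solved angles using the imu feedback, with the unreachable case reported back to the mbm instead of stalling a motor against its stop.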

it would also have a movement model to base those on, so that all movements are realistic. there would be various modes, such as hyper, sleepy, playful, bashful, etc., that it would refer to once the mbm has told it which mode to use, so that the limb mpu moves the motors quickly, slowly, etc., to match the behavior at the time.
 
Hi, you pointed me to this thread... 5 pages I have skimmed but probably missed a lot.

If I was to embark on this, I would probably be following a similar architecture - ESP camera, control board and link to higher level processing board. However, I would probably be pushing for something better than an Arduino Uno... and would be avoiding the Arduino environment. It starts out good with the demo examples, but later gets irritating and restrictive and has no debug ability. "Lob code over the wall and see what happens, then guess the errors".

I would definitely go towards a realtime processor and proper IDE. I would choose something for which there is a huge support network and which is quite readily portable between devices... This basically means STM32.

I generally default to the stm32f405, which happens to be the VESC MCU... because it is the cheap, very well understood, yet fully featured model.

With that in mind, I asked Chat GPT...
"If I have an stm32f405rg with access to all the pins, and want to connect 4 RC servos (1000-2000us 50Hz ppm signal) two step direction steppers and 4 MPU6050 accellerometers over i2c, could you suggest appropriate pin connections?"
Then
"Can you redo this assuming that I would like to use USB on pins PA11 and 12?"
Then
"I think for 4 MPU6050, we will require an extra I2C line since the AD0 pin only allows for two addresses if I understand right... Is there one available?"
It spat out after the prompts:

Summary of Pin Connections (Final Updated):

| Component | Pin | STM32F405RG Pin | Function |
|---|---|---|---|
| Servo 1 PWM | PWM | PA0 | PWM (TIM2_CH1) |
| Servo 2 PWM | PWM | PA1 | PWM (TIM2_CH2) |
| Servo 3 PWM | PWM | PA2 | PWM (TIM2_CH3) |
| Servo 4 PWM | PWM | PA3 | PWM (TIM2_CH4) |
| Stepper 1 Step | Step | PB0 | GPIO |
| Stepper 1 Dir | Direction | PB1 | GPIO |
| Stepper 2 Step | Step | PB2 | GPIO |
| Stepper 2 Dir | Direction | PB3 | GPIO |
| MPU6050 1 SDA | Data | PB7 | I2C1 |
| MPU6050 1 SCL | Clock | PB6 | I2C1 |
| MPU6050 2 SDA | Data | PB7 | I2C1 |
| MPU6050 2 SCL | Clock | PB6 | I2C1 |
| MPU6050 3 SDA | Data | PB11 | I2C2 |
| MPU6050 3 SCL | Clock | PB10 | I2C2 |
| MPU6050 4 SDA | Data | PB11 | I2C2 |
| MPU6050 4 SCL | Clock | PB10 | I2C2 |
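A side note on why the table pairs the sensors two to a bus: the MPU6050's AD0 pin only selects between addresses 0x68 and 0x69, so four of them need two I2C buses. A quick sketch of the resulting bus/address plan (the helper name is made up for illustration):

```python
# Sketch: map N MPU6050s to (bus, address) pairs given that AD0 only
# selects between two I2C addresses. assign_buses is an illustrative helper.
MPU6050_BASE_ADDR = 0x68  # AD0 low; AD0 high gives 0x69

def assign_buses(num_devices):
    """Return a (bus_index, i2c_address) pair for each sensor."""
    plan = []
    for n in range(num_devices):
        bus = n // 2                        # two addresses available per bus
        addr = MPU6050_BASE_ADDR + (n % 2)  # AD0 low -> 0x68, high -> 0x69
        plan.append((bus, addr))
    return plan

plan = assign_buses(4)
# -> [(0, 0x68), (0, 0x69), (1, 0x68), (1, 0x69)]
```

So sensors 1 and 2 (AD0 low/high) live on I2C1, sensors 3 and 4 on I2C2, matching the PB7/PB6 and PB11/PB10 rows above.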

Great, we know where to connect to physically. Can we enter this into the STM32 Cube environment? Well, after some piffling about with the ST code repo finder in help stm32cube updates and setting it up with the ST account and (ugh this used to be easier)... we get to the IOC...

Set the clock to 8M (the usual crystal on a board, depending what you buy) and set the HCLK (main frequency) to 168. Press enter and it does it.

So let's set up timer2 for the RC servos.
Ask ChatGPT for some ARR and PSC values... and set pwm generation.

After telling CGPT we use 168M HCLK, and nudging it:
"For the tim2, what ARR and PSC should I use?"
"Can you redo that to have microsecond resolution?"
"Can we redo that assuming the HCLK is 168MHZ?"
we get:

Summary of Timer Settings:

| Parameter | Value |
|---|---|
| PSC | 167 |
| ARR | 19999 |
| Timer Frequency | 1 MHz (calculated from HCLK divided by PSC) |
| PWM Frequency | 50 Hz (calculated from ARR) |
| Resolution | 1 µs per tick (due to 1 MHz timer frequency) |
So we enter that...
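The PSC/ARR arithmetic is easy to sanity-check. One caveat worth flagging: on the F405, TIM2 is clocked from the APB1 timer clock, which is commonly 84 MHz when HCLK is 168 MHz (it depends on the APB1 prescaler set in Cube), in which case PSC would need to be 83 rather than 167 for a 1 µs tick. The formula is the same either way:

```python
# Sanity check of the timer settings above. The timer_clock_hz argument is
# whatever actually feeds TIM2 (check the APB1 prescaler in the Cube clock
# tree); the 168 MHz figure here just mirrors the table's assumption.
def pwm_settings(timer_clock_hz, psc, arr):
    tick_hz = timer_clock_hz / (psc + 1)  # counter tick rate
    pwm_hz = tick_hz / (arr + 1)          # one PWM period per ARR+1 ticks
    return tick_hz, pwm_hz

tick, pwm = pwm_settings(168_000_000, 167, 19_999)
```

With a 1 MHz tick and 50 Hz period, a compare value of 1000 to 2000 maps directly to the 1000-2000 µs RC servo pulse, which is why these particular values are convenient.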


We do similar for the I2C:

And we generate the project... which we see compiles.

So we now have a building stm32 project that has 4 available PWMs and 2 I2C channels. How do we use them? Well... we ask ChatGPT... it will happily make you simple drivers to read MPU and set the servos to twiddle.

I shall not go further with this. I do not have hardware. Good luck :p This is how I would approach it. Then similar for the ESP camera and the higher level stuff.

You can add USB/USART/...
 
As an aside, thanks mxlemming, you pretty much answered a prayer!

Mature STM32F405RG development board for trustworthiness and support
Similar cheaper STM32F405RGT6 board for subsequent needs
Intro to STM32 Cube environment
Opinions on STM32 Cube environment. Suits pros and keen amateurs AIUI.
MicroPython for STM32. Easy prototyping, also specific uses e.g. NASA. Runs on Pi Pico etc.

RPI uses Broadcom or their own microcontrollers, but like many micros they're supported by many languages, and many IDEs host multiple languages too.
 
RPi is some other higher end MCU that is hideously complex to use at this level, so we end up using them with an operating system on top. There is the Rpi pico, but it is really quite low power. It's nice that they made their own MCU, but as far as I can see, it is far worse than the f405RG I suggested.

The boards AnthonyC found look reasonable for the stm32. Of course you can follow the ChatGPT methods with any boards. Micropython store one has decided to hide the SWD pins (they are PA13 and PA14) for debug btw.

Wiring the accelerometers and servos to this board with twisted sets of wires is fine. Have to be careful with any MCU not to go shorting 5V/power to io and grounds etc. MCUs are fragile. Sure you are familiar with this, I recall you have decapitated a few controllers to lebowski them so you're probably fine on that front.
 
Thank you both very much.

I've done electronics stuff for a long time (mid 1980s?), so plenty of experience with blowing things up (and knowing what not to do, to prevent that :lol:). Learned a lot about ESD prevention when I worked at Honeywell CFSG in final test/QC on the A320 and MD80/88 lines. (we even had special floors). I actually have a partial roll of RG58 and another of RG6? shielded coax that can be used for critical signals if necessary.

I expect that physical actual noise and vibration, corrupting the IMU data as it's acquired, is going to be more of a problem than electrical noise. :(

My mistake about STM vs RPI; I hadn't fully read the results of my quick searches about MPUs after reading your post. :( Looking at them now, despite what some of google's search results show, the various pages are asking which one to go with--RPI or STM32. :oops:

I will take a look around for what cheap STM32 boards exist that already have as much of the other interconnectivity/etc on them as I can get, with the MCU version you've pointed out. Maybe a compromise can be made if the STM32 code can be run better / faster than the RPI or Arduino code would over their OS, so that fewer units will be needed to process the IMU data, and I can spend more on them. I can still use the handful of assorted ESP32 (one with a cam), RPI Pico, and Arduino nanos I already have for other things, or for parts of this system that are suited for them.


Oddly, more than a decade ago (I think), I attended a renaisys (sp?) demo where they gave out a couple of STM8(?) "usb key" sized units (that I still have), along with the STM32 Circle (which I think I still have, but isn't with those for some reason). At the time I had planned to build a complete ebike control system that would've been a bit like some of these highly-integrated OEM ebikes that have come out the last few years (but didn't really exist then, other than maybe BionX). It would have been open source so anyone could build one from it. There might even be a thread here for it, but it was probably before I came here. I couldn't do the coding and tried to find help doing that, but had no luck then, and eventually I had given up on it because I didn't really need it, was just a nifty idea, and I had way too much else to do (including some of the early research into MPUs and IMUs for this wolfy project, which was much more important and complicated). Somewhere in there the housefire happened and I mostly forgot about the bike MCU system and these STM units I had.....

But if the STMs that are on the keys and/or Circle (if I can find it) are good enough, maybe I can use them to test-run some of the code bits with the IMU and other test hardware I have now.
 
AIUI an LLM basically responds with its estimate of the most likely answer that's plausible. ChatGPT has extensions to handle programming and some other topics that need handling differently (so it says). It's said to be best at python; on a niche language like openscad it helped me with an outline but details needed fixing. That's quite good (Claude 3 Haiku gave me five out of five wrong answers on openscad) and certainly better than starting from scratch.
 
ChatGPT is a wonderful thing, not just for low-level coding tasks, but for overall project assessment and feasibility studies. Try this query in your ChatGPT:
"I need assistance in determining the skill levels and scope of doing a robotics type project. The project is a dog which looks semi-realistic and has servos operating its paws and basic body movements, including perhaps some body movements in its face. It needs to be able to have realistic and comforting motions. I would also like it to have some sound capabilities, possibly based on some modern AI techniques for its dog-like responses. The comfort dog does not need to walk,Although I've been exploring gyros and things of this nature, so let's include that as a possible part of its feature set. Give me a practical and realistic determination of what skill levels are required to attempt a project like this, and also approximately how much coding would be required in man hours as a very rough guide and estimate. What disciplines and skill levels are required for the team to work on such a project? for the various roles also indicate how many years of experience would be a minimum for them to even attempt that particular role."
 
@marka-ee : I appreciate the thought, but knowing skill levels / etc required doesn't help me, since there isn't a way to find and get others to help with the project, regardless of their skill level. I don't have money to pay them, and in all the years of trying I haven't been able to get anyone with the required skills to actually want to work on it directly. (just advice on various things, of wildly varying usefulness)

So I have to use whatever skill levels I myself have, regardless of what they are, and just keep working on it until I learn the things necessary to do the various parts of it. ;)

Knowing how far I myself might be from the necessary skill levels isn't useful to me, since I'm already going to just learn what I need to know as I go, quickly or slowly, until it's done or I run out of lifetime to do it in. :lol:


I already know the scope of the project, since I've been working on designing it for so many years. :oops:


There is almost certainly a few hundred times the amount of detail in my head about this than I have managed to type up here in the thread, including unsuccessful tests / technologies, bad ideas replaced by better ones, etc. I expect it would take me a few years to type up everything already explored and researched. Any notes (physical or digital), sketches, bits and pieces, etc., that I had before the housefire were lost in that fire, since it was in my bedroom where I had all my important stuff like that. But most of that stuff was irrelevant even at that time, and almost all of it would be now for sure, given the changes in available technology that caused me to start this thread in the first place, being able to "see the light at the end of the tunnel" now that things are so close to being "easy" to create this from, or at least the closest to possible it's ever been. And after the fire I didn't write down most things about it, just kept them in my head for the most part, other than occasional doodles on napkins and placemats etc. when out and about.
 
Ambitious <> infeasible and there's drive behind his project. When I subbed to ES I thought motors had a fixed voltage, now I develop fast chargers and control my ebike using micros. BTW I like the idea of flexible materials in the toes. On a bike cable ties have several engineering advantages over bolts.
EDIT: for <> read != which is clearer, both are: 'not equal to'
 
Ambitious <> infeasible and there's drive behind his project.
Honestly this isn't all that ambitious a project. I can imagine much more complex things (have done so, but they're all well beyond my present or future abilities and budget so they just stay in my head, no point in noting any of them down).

I'm sure that any half-decent robotics guru (with knowledge in that area equivalent to what I have for ebikes, or music/sound creation/editing) could whip this thing up pretty quick. I just don't know any.

The only thing that might take some specialist expertise would be the behavioral response learning system, which would probably work "best" with some form of "AI", but even that could probably be based on one of the various opensource AIs out there if one is appropriate to this type of system.

Even without the ability to learn new things on its own, the basic behavior set itself could be created once built, simply by manually moving the various parts by hand to record the movements from the IMU sensors and whatever motor feedback system, assigning various sets of responses to the various behaviors, and then assigning the required input conditions to trigger those (also recorded from the sensors). (Ideally a GUI would be created so that any end-user of the system could do this themselves, to add behaviors not in the basic set.)
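The record-by-hand idea could be sketched as a small record/playback store: sample joint poses while the limbs are moved manually, save them under a name, and replay them through the motor commander later. All the names here (BehaviorStore, pose_source, pose_sink) are illustrative, not part of any actual design:

```python
# Sketch: record a named behavior by sampling poses from some reader (e.g.
# the IMU network), then replay it into some sink (e.g. the motor commander).
# Class and callback names are made-up illustrations.
import time

class BehaviorStore:
    def __init__(self):
        self.behaviors = {}

    def record(self, name, pose_source, samples, interval_s=0.0):
        """Capture a sequence of poses from a callable."""
        frames = []
        for _ in range(samples):
            frames.append(pose_source())
            if interval_s:
                time.sleep(interval_s)  # pace the sampling in real use
        self.behaviors[name] = frames

    def play(self, name, pose_sink):
        """Feed each stored pose, in order, to a callable."""
        for frame in self.behaviors.get(name, []):
            pose_sink(frame)

# Simulated pose reader and motor commander, standing in for real hardware:
poses = iter([(0, 0), (5, 10), (10, 20)])
store = BehaviorStore()
store.record("paw_lift", lambda: next(poses), samples=3)
played = []
store.play("paw_lift", played.append)
```

A GUI for end users would then just be a front end over record/play, plus an editor for attaching trigger conditions to each stored behavior.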


For myself, the hardware is not that much of a challenge, the things I don't know are relatively easy to pick up as I go.

But software...while I understand the principles, learning the actual coding is very hard for me, working on my own. (If I were working *with* someone and learning as I go, I would learn faster, but it would still not be easy). My brain just doesn't work right for certain types of things, and apparently that's one of them.
 
Here's a Deepseek version of the behavioral AI description I gave it and the code and other data it gave back, for future reference.
I can't tell if it's given me anything useful (for instance mxlemming said that the ChatGPT code is just garbage that isn't worth saving).

Hopefully (even though I asked a lot of it and probably provided it far too little info) this version will be more useful, once I understand enough more to be able to use it. I guess if it's just garbage too, then I'll have to find some other way of doing this, or wait until one of the AIs is good enough to teach me how, in detail, step by step.


Amberwolf said:
Can you use your own code to provide me with a complete behavioral-learning AI? This AI will need to send and receive data to/from another system that then reads sensors from and controls a realistic robotic wolf (henceforth wolfy) motion via motors and creates realistic sounds. The AI needs to take the data from the robot control system's (henceforth RCS) user inputs (henceforth UI) and learn how to do the things the user is teaching it, just like a real dog or wolf would. The RCS has multi-axis IMU sensors on all parts of the wolfy that provide data to the AI on where the wolfy is being touched, and how hard the touch is. This includes data on the wolfy's own touches of the world around it, as well as UI of the user touching, petting, or holding any part of the wolfy. The RCS also senses the position and velocity and direction of movement of each part of the wolfy, both for the wolfy's own movements and for those where the user is moving the wolfy's limbs, body, head, or tail. The RCS also has image sensors in the eyes, and microphones in the ears, that feed their data in a stream to the AI. The AI needs to read those inputs, and learn to correctly respond to different types of input. For one example, a touch that is above a certain high threshold, with a velocity change that is a sudden drop or spike, indicates an impact. If the wolfy was moving a limb and the limb stops moving and this kind of detection occurs, it means the wolfy's limb hit something and stopped, so it should learn not to do that, especially in situations where it was moving the paw toward the user and the impact is on the side facing the user; it means it hit the user too hard. It should also learn that this requires a bashful response, and react in a contrite way with it's movements and sounds. 
If the wolfy was moving the paw toward the user and the impact was on the top side of the paw, or there is a sustained push detected over surface areas facing the user, it means the user is providing the wolfy negative feedback, slapping the paw away or pushing the whole wolfy away, so that the wolfy won't injure the user. If the microphones detect the user saying "no" then that tells the wolfy the same thing. If the microphones detect the user saying "sit" then the wolfy needs to perform all the actions required to position itself in a sitting position. There are many more behavioral examples, but they should be obvious from how a dog or a wolf actually behaves. The behavioral AI will need to have the basic behaviors already embedded in it, and learn when and how to apply them from user interactions like the above. It will also need to learn new behaviors from user interactions. For instance, it needs to be able to visually follow user movements, so that it can see the user is pointing to a spot, and then listen to the user sounds so that it can hear the command the user is giving, and also respond to user touches to move toward or away from the touch depending on the user input and the situation. There will also be a Programming Mode the user can tell the wolfy AI to enter, where the user can position the wolfy, or move it in whatever way is needed, while telling the wolfy that this is a new behavior. That behavior is then recorded as a new behavior that will occur whenever needed or called for by the user. There will be a second part of the Programming Mode where the user can modify existing behaviors. The user's command will call up a behavior, and the wolfy will play it back, and the user then manipulates the wolfy or tells it to do something differently or visually indicates some change, and the wolfy modifies the behavior until the user says it matches the desired change, and saves the behavior as a new version of that behavior.
The same process can be used to create new behaviors from old ones. Other parts of the wolfy behaviors include going to sleep like a dog whenever the user isn't interacting with it for some amount of time. This sleep mode will include typical dog sleep behaviors, like puppy dreaming, snoring, etc. When it wakes it will be sleepy and slow to respond to behaviors for a short but variable time. All of the behaviors and responses will be varied each time they are used, so that it appears more alive to the user.
Deepseek said:
Creating a complete behavioral-learning AI for a robotic wolf like "Wolfy" is a complex and multidisciplinary task that involves robotics, machine learning, sensor integration, and real-time control systems. Below, I’ll outline a high-level architecture and provide a Python-based framework to get you started. This framework will include basic behaviors, learning mechanisms, and integration with the RCS (Robot Control System).

High-Level Architecture
Sensors and Inputs:
- IMU sensors (touch, position, velocity, direction).
- Image sensors (vision).
- Microphones (audio input).
- User inputs (commands, gestures, etc.).

Outputs:
- Motor controls (movement of limbs, head, tail, etc.).
- Sound generation (barks, whines, growls, etc.).

Behavioral AI:
- Predefined Behaviors: Sitting, sleeping, following, etc.
- Learning Mechanism: Reinforcement learning, supervised learning, and imitation learning.
- Behavior Modification: Programming Mode for creating and editing behaviors.

Communication:
- Real-time data exchange between the AI and RCS.
Python Framework
Below is a simplified Python framework to implement the behavioral-learning AI. This framework assumes you have access to the RCS APIs for sensor data and motor control.

Code:
import time
import random
import numpy as np
from sklearn.linear_model import SGDClassifier  # For simple learning tasks

# Constants
TOUCH_THRESHOLD = 5.0  # Example threshold for touch sensitivity
SLEEP_TIMEOUT = 300  # Time in seconds before Wolfy goes to sleep

# Predefined Behaviors
class Behaviors:
    @staticmethod
    def sit():
        print("Wolfy is sitting.")
        # Send motor commands to RCS to position Wolfy in a sitting pose

    @staticmethod
    def sleep():
        print("Wolfy is sleeping.")
        # Send motor commands to RCS for sleeping pose
        for _ in range(5):  # bounded nap loop; the original `while True:` here blocked the whole program forever
            print("Zzz...")
            time.sleep(2)
            if random.random() < 0.1:  # Simulate dreaming
                print("Wolfy is dreaming...")
                # Twitch motors slightly

    @staticmethod
    def follow_user():
        print("Wolfy is following the user.")
        # Use vision and audio inputs to track and follow the user

# Behavioral AI Class
class WolfyAI:
    def __init__(self):
        self.behaviors = Behaviors()
        self.last_interaction_time = time.time()
        self.is_sleeping = False
        self.learning_model = SGDClassifier()  # Simple model for learning

    def process_sensor_data(self, touch_data, imu_data, audio_data, vision_data):
        if self.is_sleeping:
            # Wake on new sensor data; as written before, is_sleeping stayed True forever
            self.is_sleeping = False
            self.last_interaction_time = time.time()
            return

        # Check for user interaction timeout
        if time.time() - self.last_interaction_time > SLEEP_TIMEOUT:
            self.behaviors.sleep()
            self.is_sleeping = True
            return

        # Example: Detect touch impact
        if touch_data['force'] > TOUCH_THRESHOLD and abs(touch_data['velocity']) > 1.0:
            print("Wolfy detected an impact!")
            self.respond_to_impact(touch_data)

        # Example: Process audio commands
        if "sit" in audio_data:
            self.behaviors.sit()
        elif "no" in audio_data:
            print("Wolfy understands 'no' and stops the current behavior.")

        # Example: Follow user if they move
        if vision_data['user_moving']:
            self.behaviors.follow_user()

    def respond_to_impact(self, touch_data):
        if touch_data['location'] == 'paw_top':
            print("Wolfy understands it was pushed away.")
            # Adjust behavior to avoid repeating the action
        elif touch_data['location'] == 'paw_side':
            print("Wolfy apologizes for hitting the user.")
            # Perform contrite movements and sounds

    def enter_programming_mode(self):
        print("Entering Programming Mode...")
        # Allow user to create or modify behaviors
        new_behavior = self.record_behavior()
        self.save_behavior(new_behavior)

    def record_behavior(self):
        print("Recording new behavior...")
        # Capture motor commands and sensor data during user manipulation
        behavior = {}  # Store the behavior data
        return behavior

    def save_behavior(self, behavior):
        print("Saving behavior...")
        # Save the behavior to a database or file

# Mock RCS Interface
class RCS:
    def get_sensor_data(self):
        # Simulate sensor data
        # Keys must match the keyword arguments of WolfyAI.process_sensor_data()
        return {
            'touch_data': {'force': random.uniform(0, 10), 'velocity': random.uniform(-2, 2), 'location': 'paw_top'},
            'imu_data': {'position': [0, 0, 0], 'velocity': [0, 0, 0]},
            'audio_data': "sit",
            'vision_data': {'user_moving': True}
        }

    def send_motor_commands(self, commands):
        # Send commands to motors
        pass

# Main Loop
if __name__ == "__main__":
    wolfy_ai = WolfyAI()
    rcs = RCS()

    while True:
        sensor_data = rcs.get_sensor_data()
        wolfy_ai.process_sensor_data(**sensor_data)
        time.sleep(1)  # Simulate real-time processing


Deepseek said:
Key Features
- Predefined Behaviors: Basic behaviors like sitting, sleeping, and following are implemented.
- Sensor Data Processing: The AI processes touch, audio, and vision data to respond appropriately.
- Learning Mechanism: A simple learning model (SGDClassifier) is included for basic tasks. You can replace this with more advanced models like deep reinforcement learning (e.g., using TensorFlow or PyTorch).
- Programming Mode: Allows the user to create and modify behaviors.

Next Steps
- Integrate with RCS: Replace the mock RCS interface with actual APIs for sensor data and motor control.
- Expand Learning: Implement more advanced learning algorithms (e.g., deep Q-learning for reinforcement learning).
- Add More Behaviors: Implement additional behaviors like barking, wagging the tail, etc.
- Optimize Real-Time Performance: Ensure the AI can process data and control Wolfy in real-time.
This framework is a starting point and will need to be expanded and refined based on the specific requirements of your project. Let me know if you need further assistance!
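One fragment of the spec above that can be pulled out and tested on its own is the impact rule: "a touch above a certain high threshold, with a velocity change that is a sudden drop or spike, indicates an impact." A minimal sketch follows; the threshold values are placeholders that would have to be tuned against real sensor readings, not values from the project:

```python
FORCE_THRESHOLD = 5.0   # placeholder units; tune against real sensor data
VELOCITY_JUMP = 1.0     # minimum |dv| between consecutive samples to count as "sudden"

def is_impact(force: float, prev_velocity: float, velocity: float) -> bool:
    """Classify one sample as an impact: high force combined with a
    sudden velocity drop or spike between consecutive readings."""
    return force > FORCE_THRESHOLD and abs(velocity - prev_velocity) > VELOCITY_JUMP

print(is_impact(7.2, prev_velocity=1.5, velocity=0.0))  # hard touch + abrupt stop -> True
print(is_impact(7.2, prev_velocity=1.5, velocity=1.4))  # hard but smooth motion -> False
print(is_impact(2.0, prev_velocity=1.5, velocity=0.0))  # abrupt but gentle -> False
```

Keeping rules like this as small pure functions makes them easy to unit-test before any hardware exists.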
 
I like to tackle unfamiliar problems from both high and low levels alternately. Here's some IMU code for Pi Pico:

import machine
i2c = machine.I2C(0, sda=machine.Pin(12), scl=machine.Pin(13)) # creating the object that allows for I2C communication in MicroPython
imu = MPU6050(i2c) # passing the i2c object to the MPU6050 class (defined in the tutorial linked below). This class will handle all communications
imu.wake() # wakes up the MPU-6050 (it may have been in sleep mode)
gyro_data = imu.read_gyro_data()
print(gyro_data) # (0.346823, -0.198345, 0.023958)

Taken from Development Tutorial section 3 of:

GitHub - TimHanewich/scout: Python-based Quadcopter Flight Controller Software using a Raspberry Pi Pico, MPU-6050, and a FlySky radio transmitter & receiver

with a step by step build guide here:

How to use an MPU-6050 with a Raspberry Pi Pico using MicroPython | by Tim Hanewich | Medium
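Not part of the tutorial code, but as a rough illustration of what that MPU6050 class does under the hood: per the MPU-6050 register map, each gyro axis comes back from registers 0x43-0x48 as a signed 16-bit big-endian value, scaled by 131 LSB per deg/s at the default +/-250 deg/s range. A plain-Python decoder sketch (the function name is made up for illustration):

```python
import struct

GYRO_SENSITIVITY = 131.0  # LSB per deg/s at the default +/-250 deg/s full-scale range

def decode_gyro(raw6: bytes) -> tuple:
    """Convert the 6 raw bytes from the MPU-6050 gyro registers (0x43-0x48)
    into (x, y, z) rates in degrees per second. Each axis is a signed
    16-bit big-endian integer."""
    gx, gy, gz = struct.unpack(">hhh", raw6)
    return (gx / GYRO_SENSITIVITY,
            gy / GYRO_SENSITIVITY,
            gz / GYRO_SENSITIVITY)

# 0x0083 = +131 counts = 1 deg/s; 0xff7d = -131 counts = -1 deg/s
print(decode_gyro(b"\x00\x83\xff\x7d\x00\x00"))  # -> (1.0, -1.0, 0.0)
```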
 
I appreciate the thought, but I already have code-stuff to individually read IMUs; I even built a little breadboarded test unit on a previous page that read the IMU 3-axis angles and move servos to match them, from a gimbal-cameramount tutorial.
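For reference, the core of that gimbal-style test (mapping an IMU angle onto a hobby-servo pulse width) is only a few lines. A hypothetical sketch; the 500-2500 us pulse range and +/-90 degree travel are assumptions, so check the actual servo's datasheet:

```python
def angle_to_pulse_us(angle_deg: float,
                      min_us: int = 500, max_us: int = 2500,
                      min_deg: float = -90.0, max_deg: float = 90.0) -> int:
    """Linearly map an angle onto a servo pulse width in microseconds,
    clamping to the servo's assumed travel limits."""
    angle = max(min_deg, min(max_deg, angle_deg))      # clamp out-of-range angles
    span = (angle - min_deg) / (max_deg - min_deg)     # 0.0 .. 1.0 across the travel
    return int(min_us + span * (max_us - min_us))

print(angle_to_pulse_us(0.0))     # -> 1500 (centre)
print(angle_to_pulse_us(90.0))    # -> 2500 (full travel one way)
print(angle_to_pulse_us(-120.0))  # -> 500  (clamped to -90)
```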

What I need is a program that will do *all* the functions I've previously described with the data gathering and processing of all the IMU outputs. ;) I have some concept of what's required, as a kind of block-diagram sort of thing, but digging into coding to implement it keeps exploding my brain; there are just too many things I don't know or "understand" yet. (I've previously mentioned my brain problems in being unable to learn this stuff linearly and the gaps that leaves me with)

ideally it would be something that uses the mpu6050's existing onboard mcu to preprocess data to help this along, but i also don't understand the documentation they provide for them (such as it is--it seems to be pretty minimal, even insufficient, to use its features). it's probably me and not the docs, though.

my usage for the data is so different from the stuff i find out there that i don't yet know how to adapt the code that is out there to what i'm doing. i'm sure eventually i'll figure it out, probably once the ai stuff can be used to teach me the process the same way you can teach a really stupid dog to eventually stop peeing on its own head.


ATM I've got some heavy-duty cold (for days now) that i can't think thru, so i'm probably not seeing / saying something right here, or misunderstanding what your post was about, etc. i could probably manage that last part anyway, without the cold, but it isn't helping at all.
 
No sweat, hope you feel better soon. It sounds as though you've enough h/w built to check how much noise and drift you get from your IMU and how rapidly it lets you re-read it. Those are or were IMU weaknesses so this might be worthwhile just to make sure it performs well enough for you to build up from there in the way you intend to.

If you dry run the Deepseek code with pen and paper, is it along the right lines? When you list what Deepseek left out (compared to your prompt), if there's anything fundamental (rather than enhancements), you could try feeding the code back into Deepseek to improve it, stepwise.
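The noise/drift/re-read check suggested above is easy to script. A minimal sketch, with the actual I2C read swapped out for any zero-argument callable so the measurement logic can be tested on its own first (`characterize_imu` and the fake sensor are made-up names, not from the tutorial):

```python
import random
import statistics
import time

def characterize_imu(read_axis, n_samples=500):
    """Estimate bias (stationary mean), noise (std dev), and achievable
    sample rate for one IMU axis. `read_axis` is any zero-argument
    callable returning the current reading -- a stand-in for the real
    I2C read on the Pico."""
    t0 = time.perf_counter()
    samples = [read_axis() for _ in range(n_samples)]
    elapsed = time.perf_counter() - t0
    return {
        "mean": statistics.fmean(samples),   # stationary mean ~ bias offset
        "stdev": statistics.stdev(samples),  # noise floor
        "rate_hz": n_samples / elapsed,      # how rapidly it can be re-read
    }

# Demo with a fake noisy sensor: true bias 0.5, gaussian noise
fake = lambda: 0.5 + random.gauss(0.0, 0.05)
stats = characterize_imu(fake)
print(f"bias~{stats['mean']:.2f} noise~{stats['stdev']:.3f} at {stats['rate_hz']:.0f} Hz")
```

On real hardware the rate figure would be dominated by the I2C bus speed rather than the Python loop.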
 
Apologies for the poor quality of the responses below, especially the typing; i'm not presently able to go back and fix all the problems i made typing it up. :(

It sounds as though you've enough h/w built

not sure i'd call it "built" ;) but it's on a breadboard.

to check how much noise and drift you get from your IMU and how rapidly it lets you re-read it. Those are or were IMU weaknesses so this might be worthwhile just to make sure it performs well enough for you to build up from there in the way you intend to.

I don't know. I expect there will be drift on each separate imu, and it'll probably need a separate board that has the other 3 axes, as these are only 6-axis, not 9, so they can't compensate for it. i've read of various techniques to use the separate board to recalibrate for drift over time. that's one of the parts of the wake-from-sleep routine for the whole wolfy, where it will "stretch" and figure out where all its parts are relative to each other and magnetic north, etc.
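The simplest piece of that recalibration can be sketched now: while the wolfy holds still during the "stretch", a gyro should read zero, so averaging its readings gives the bias to subtract from everything afterward. A minimal sketch (function names are made up for illustration):

```python
def estimate_gyro_bias(samples):
    """Average (x, y, z) gyro readings taken while the sensor is known
    to be stationary; a stationary gyro should read zero, so the
    average is the bias to subtract from later readings."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def corrected(reading, bias):
    """Apply the stored bias to one raw (x, y, z) gyro reading."""
    return tuple(r - b for r, b in zip(reading, bias))

# Demo: a stationary gyro with a constant offset on each axis
stationary = [(0.25, -0.125, 0.5)] * 200  # pretend 200 samples during the stretch
bias = estimate_gyro_bias(stationary)
print(corrected((0.25, -0.125, 0.5), bias))  # -> (0.0, 0.0, 0.0)
```

This only handles constant offset; real drift also wanders with time and temperature, which is where the 9-axis board's magnetometer helps as an absolute heading reference.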


If you dry run the Deepseek code with pen and paper is it along the right lines?

I don't understand what this means. could you explain?

even if i could read my writing :oops: :/ it would take forever to write it all out on paper with a pen*** and i wouldn't get anything out of it that i don't get from seeing it on screen where it's actually readable.

Even if i wrote it out on paper i wouldn't know what it is doing any better than i would any other way?


***if i used a pen i couldn't correct mistakes, and i make lots and lots of those, with wrong letters and shapes that aren't the right letters, or end up with them on top of each other instead of in sequence, etc. I'd have to use a pencil so I could erase all those once i went thru it enough times to see all the mistakes, looking back and forth from screen to paper over and over.


When you list what Deepseek left out (compared to your prompt) if there's anything fundamental (rather than enhancements) you could try feeding the code back into Deepseek to improve upon, stepwise.
i have no way of knowing whether the code is right or not yet, i haven't learned enough to find out. if i knew enough coding to know if it was right or wrong, i'd be able to write it myself. :oops:

to test the code i have to build hardware that it can control and then run it to see what happens. but that's still being worked out. I need to build the test skeleton, then install basic servos for one limb, along with imus on that limb and the main body frame at least, so it has something to relate to position-wise. then i might be able to start testing the ai-generated code, and then maybe start learning something from the process of figuring out what does and doesn't work.


*****************

it's not really right, but you could think of the way i have to do things (where i don't know all the details of exactly how to do something already) as a jigsaw puzzle that's poured out onto the table and the floor and the next room or two, and i just put together whichever pieces i already have available whenever they fit. The puzzle is all there in my head, just that some of the pieces are still blank, cuz i don't know exactly what is supposed to go there, although i have the shape of it. i doubt that helps any, but...it's all i can think of atm.


if i had a partner to work with consistently whenever i could work on it, that already knew how all these things worked, i could learn what i need from doing that. right now that will probably have to be the ai, cuz they won't get upset by my weird brain that can't figure actual people out or sometimes can't deal with how they act towards me, etc. (this is why i need the wolfy in the first place: there's no people that would ever want to be or be able to be what i need, and all of them "abandon" me sooner or later; dogs do accept me and want me around, but i won't always be able to have a real one; someday i just won't be able to handle losing another one).

i'm sure all this stuff makes me out to be a flaked-out weirdo, but...i guess i probably am.
 
That's what I meant by alternating: you've done a lot on the concept and functioning, and the jigsaw is that top-down phase asking for a break. H/w is real-world and finicky, so prototyping can often be better tackled bottom-up, starting at the sensors, but if you can build a testable part of the skeleton then go for it. Alternating helps me to modularise a task that's taking shape and firm up the interfaces, and tends to expose 'minor' functionality that turns out to be daunting and needs working around.

https://google.com/?q=dry+run+a+program
Not only will you evolve Deepseek's code, but the more you skim or youtube code in an unfamiliar language the more you ingest how it works, making the language tutorials easier. There are things about Python I'm not keen on but it's the new English for what you're doing. Hang in there with the AI, it gained 2 IQ points while I wrote this ;)
 