Robotics Project: Snuggles, The Wolf

AIUI, an LLM basically responds with its estimate of the most plausible answer. ChatGPT has extensions to handle programming and some other topics that need different handling (so it says). It's said to be best at Python; on a niche language like OpenSCAD it helped me with an outline, but the details needed fixing. That's quite good (Claude 3 Haiku gave me five out of five wrong answers on OpenSCAD) and certainly better than starting from scratch.
 
ChatGPT is a wonderful thing, not just for low-level coding tasks, but for overall project assessment and feasibility studies. Try this query in your ChatGPT:
"I need assistance in determining the skill levels and scope of doing a robotics type project. The project is a dog which looks semi-realistic and has servos operating its paws and basic body movements, including perhaps some body movements in its face. It needs to be able to have realistic and comforting motions. I would also like it to have some sound capabilities, possibly based on some modern AI techniques for its dog-like responses. The comfort dog does not need to walk,Although I've been exploring gyros and things of this nature, so let's include that as a possible part of its feature set. Give me a practical and realistic determination of what skill levels are required to attempt a project like this, and also approximately how much coding would be required in man hours as a very rough guide and estimate. What disciplines and skill levels are required for the team to work on such a project? for the various roles also indicate how many years of experience would be a minimum for them to even attempt that particular role."
 
@marka-ee : I appreciate the thought, but knowing the skill levels etc. required doesn't help me, since there isn't a way to find and get others to help with the project, regardless of their skill level. I don't have money to pay them, and in all the years of trying I haven't been able to get anyone with the required skills to actually want to work on it directly (just advice on various things, of wildly varying usefulness).

So I have to use whatever skill levels I myself have, regardless of what they are, and just keep working on it until I learn the things necessary to do the various parts of it. ;)

Knowing how far I myself might be from the necessary skill levels isn't useful to me, since I'm already going to just learn what I need to know as I go, quickly or slowly, until it's done or I run out of lifetime to do it in. :lol:


I already know the scope of the project, since I've been working on designing it for so many years. :oops:


There is almost certainly a few hundred times more detail in my head about this than I have managed to type up here in the thread, including unsuccessful tests and technologies, bad ideas replaced by better ones, etc. I expect it would take me a few years to type up everything already explored and researched. Any notes (physical or digital), sketches, bits and pieces, etc., that I had before the housefire were lost in that fire, since they were in my bedroom where I kept all my important stuff like that. But most of that material was irrelevant even at the time, and almost all of it would be now for sure, given the changes in available technology that caused me to start this thread in the first place; I can finally "see the light at the end of the tunnel" now that things are so close to being "easy" to create this from, or at least the closest to possible that it's ever been. After the fire I didn't write down most things about it, just kept them in my head for the most part, other than occasional doodles on napkins and placemats when out and about.
 
Ambitious != infeasible ('not equal to'), and there's drive behind his project. When I subbed to ES I thought motors had a fixed voltage; now I develop fast chargers and control my ebike using micros. BTW I like the idea of flexible materials in the toes. On a bike, cable ties have several engineering advantages over bolts.
 
Ambitious != infeasible and there's drive behind his project.
Honestly this isn't all that ambitious a project. I can imagine much more complex things (have done so, but they're all well beyond my present or future abilities and budget so they just stay in my head, no point in noting any of them down).

I'm sure that any half-decent robotics guru (with knowledge in that area equivalent to what I have for ebikes, or music/sound creation/editing) could whip this thing up pretty quick. I just don't know any.

The only thing that might take some specialist expertise would be the behavioral response learning system, which would probably work "best" with some form of "AI", but even that could probably be based on one of the various opensource AIs out there if one is appropriate to this type of system.

Even without the ability to learn new things on its own, the basic behavior set itself could be created once the wolfy is built, simply by manually moving the various parts by hand to record the movements from the IMU sensors and whatever motor feedback system is present, assigning various sets of responses to the various behaviors, and then assigning the required input conditions (also recorded from the sensors) to trigger them. (Ideally a GUI would be created so that any end-user of the system could do this themselves, to add behaviors not in the basic set.)
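To make that record-and-playback idea concrete, here's a minimal hypothetical sketch in Python. The rcs object and its read_pose()/set_pose() methods are invented for illustration; a real version would also need interpolation, speed control, and the trigger-condition matching described above.

Code:
import time
import json

SAMPLE_HZ = 20  # pose samples per second while recording

def record_behavior(rcs, seconds):
    """Sample joint/IMU poses while the user moves the wolfy by hand."""
    frames = []
    for _ in range(int(seconds * SAMPLE_HZ)):
        frames.append(rcs.read_pose())  # hypothetical: returns a dict of joint angles
        time.sleep(1.0 / SAMPLE_HZ)
    return frames

def play_behavior(rcs, frames):
    """Replay a recorded behavior at the speed it was recorded."""
    for pose in frames:
        rcs.set_pose(pose)  # hypothetical: command servos toward this pose
        time.sleep(1.0 / SAMPLE_HZ)

def save_behavior(name, frames, path="behaviors.json"):
    """Persist named behaviors so a GUI could list, trigger, and edit them later."""
    try:
        with open(path) as f:
            library = json.load(f)
    except FileNotFoundError:
        library = {}
    library[name] = frames
    with open(path, "w") as f:
        json.dump(library, f)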


For myself, the hardware is not that much of a challenge, the things I don't know are relatively easy to pick up as I go.

But software...while I understand the principles, learning the actual coding is very hard for me, working on my own. (If I were working *with* someone and learning as I go, I would learn faster, but it would still not be easy). My brain just doesn't work right for certain types of things, and apparently that's one of them.
 
Here's a Deepseek version of the behavioral AI: the description I gave it, and the code and other data it gave back, for future reference.
I can't tell if it's given me anything useful (for instance mxlemming said that the ChatGPT code is just garbage that isn't worth saving).

Hopefully (even though I asked a lot of it and probably provided it far too little info) this version will be more useful, once I understand enough more to be able to use it. I guess if it's just garbage too, then I'll have to find some other way of doing this, or wait until one of the AIs is good enough to teach me how, in detail, step by step.


Amberwolf said:
Can you use your own code to provide me with a complete behavioral-learning AI? This AI will need to send and receive data to/from another system that reads sensors from a realistic robotic wolf (henceforth wolfy), controls its motion via motors, and creates realistic sounds. The AI needs to take the data from the robot control system's (henceforth RCS) user inputs (henceforth UI) and learn how to do the things the user is teaching it, just like a real dog or wolf would. The RCS has multi-axis IMU sensors on all parts of the wolfy that provide data to the AI on where the wolfy is being touched, and how hard the touch is. This includes data on the wolfy's own touches of the world around it, as well as UI from the user touching, petting, or holding any part of the wolfy. The RCS also senses the position, velocity, and direction of movement of each part of the wolfy, both for the wolfy's own movements and for those where the user is moving the wolfy's limbs, body, head, or tail. The RCS also has image sensors in the eyes, and microphones in the ears, that feed their data in a stream to the AI.

The AI needs to read those inputs, and learn to correctly respond to different types of input. For one example, a touch that is above a certain high threshold, with a velocity change that is a sudden drop or spike, indicates an impact. If the wolfy was moving a limb and the limb stops moving and this kind of detection occurs, it means the wolfy's limb hit something and stopped, so it should learn not to do that, especially in situations where it was moving the paw toward the user and the impact is on the side facing the user; it means it hit the user too hard. It should also learn that this requires a bashful response, and react in a contrite way with its movements and sounds. If the wolfy was moving the paw toward the user and the impact was on the top side of the paw, or there is a sustained push detected over surface areas facing the user, it means the user is providing the wolfy negative feedback, slapping the paw away or pushing the whole wolfy away, so that the wolfy won't injure the user. If the microphones detect the user saying "no" then that tells the wolfy the same thing. If the microphones detect the user saying "sit" then the wolfy needs to perform all the actions required to position itself in a sitting position. There are many more behavioral examples, but they should be obvious from how a dog or a wolf actually behaves.

The behavioral AI will need to have the basic behaviors already embedded in it, and learn when and how to apply them from user interactions like the above. It will also need to learn new behaviors from user interactions. For instance, it needs to be able to visually follow user movements, so that it can see the user is pointing to a spot, and then listen to the user's sounds so that it can hear the command the user is giving, and also respond to user touches to move toward or away from the touch depending on the user input and the situation.

There will also be a Programming Mode the user can tell the wolfy AI to enter, where the user can position the wolfy, or move it in whatever way is needed, while telling the wolfy that this is a new behavior. That behavior is then recorded as a new behavior that will occur whenever needed or called for by the user. There will be a second part of the Programming Mode where the user can modify existing behaviors. The user's command will call up a behavior, and the wolfy will play it back; the user then manipulates the wolfy, or tells it to do something differently, or visually indicates some change, and the wolfy modifies the behavior until the user says it matches the desired change, then saves it as a new version of that behavior. The same process can be used to create new behaviors from old ones.

Other parts of the wolfy behaviors include going to sleep like a dog whenever the user isn't interacting with it for some amount of time. This sleep mode will include typical dog sleep behaviors, like puppy dreaming, snoring, etc. When it wakes it will be sleepy and slow to respond to behaviors for a short but variable time. All of the behaviors and responses will be varied each time they are used, so that it appears more alive to the user.
Deepseek said:
Creating a complete behavioral-learning AI for a robotic wolf like "Wolfy" is a complex and multidisciplinary task that involves robotics, machine learning, sensor integration, and real-time control systems. Below, I’ll outline a high-level architecture and provide a Python-based framework to get you started. This framework will include basic behaviors, learning mechanisms, and integration with the RCS (Robot Control System).

Deepseek said:
High-Level Architecture

Sensors and Inputs:
- IMU sensors (touch, position, velocity, direction).
- Image sensors (vision).
- Microphones (audio input).
- User inputs (commands, gestures, etc.).

Outputs:
- Motor controls (movement of limbs, head, tail, etc.).
- Sound generation (barks, whines, growls, etc.).

Behavioral AI:
- Predefined Behaviors: Sitting, sleeping, following, etc.
- Learning Mechanism: Reinforcement learning, supervised learning, and imitation learning.
- Behavior Modification: Programming Mode for creating and editing behaviors.

Communication:
- Real-time data exchange between the AI and RCS.

Python Framework

Below is a simplified Python framework to implement the behavioral-learning AI. This framework assumes you have access to the RCS APIs for sensor data and motor control.

Code:
import time
import random
import numpy as np
from sklearn.linear_model import SGDClassifier  # For simple learning tasks

# Constants
TOUCH_THRESHOLD = 5.0  # Example threshold for touch sensitivity
SLEEP_TIMEOUT = 300  # Time in seconds before Wolfy goes to sleep

# Predefined Behaviors
class Behaviors:
    @staticmethod
    def sit():
        print("Wolfy is sitting.")
        # Send motor commands to RCS to position Wolfy in a sitting pose

    @staticmethod
    def sleep(duration=10):
        print("Wolfy is sleeping.")
        # Send motor commands to RCS for sleeping pose
        # Bounded loop: the original `while True` here would block the
        # main loop forever, so sleep for a limited number of cycles
        for _ in range(duration):
            print("Zzz...")
            time.sleep(2)
            if random.random() < 0.1:  # Simulate dreaming
                print("Wolfy is dreaming...")
                # Twitch motors slightly

    @staticmethod
    def follow_user():
        print("Wolfy is following the user.")
        # Use vision and audio inputs to track and follow the user

# Behavioral AI Class
class WolfyAI:
    def __init__(self):
        self.behaviors = Behaviors()
        self.last_interaction_time = time.time()
        self.is_sleeping = False
        self.learning_model = SGDClassifier()  # Simple model for learning

    def process_sensor_data(self, touch_data, imu_data, audio_data, vision_data):
        if self.is_sleeping:
            # Wake on a firm touch or any voice input; stay asleep otherwise
            if touch_data['force'] > TOUCH_THRESHOLD or audio_data:
                self.is_sleeping = False
                self.last_interaction_time = time.time()
                print("Wolfy wakes up, sleepy and slow to respond...")
            return

        # Go to sleep after a period with no user interaction
        if time.time() - self.last_interaction_time > SLEEP_TIMEOUT:
            self.behaviors.sleep()
            self.is_sleeping = True
            return

        # Example: Detect touch impact
        if touch_data['force'] > TOUCH_THRESHOLD and abs(touch_data['velocity']) > 1.0:
            print("Wolfy detected an impact!")
            self.last_interaction_time = time.time()  # touches count as interaction
            self.respond_to_impact(touch_data)

        # Example: Process audio commands
        if "sit" in audio_data:
            self.last_interaction_time = time.time()  # voice counts as interaction
            self.behaviors.sit()
        elif "no" in audio_data:
            self.last_interaction_time = time.time()
            print("Wolfy understands 'no' and stops the current behavior.")

        # Example: Follow user if they move
        if vision_data['user_moving']:
            self.behaviors.follow_user()

    def respond_to_impact(self, touch_data):
        if touch_data['location'] == 'paw_top':
            print("Wolfy understands it was pushed away.")
            # Adjust behavior to avoid repeating the action
        elif touch_data['location'] == 'paw_side':
            print("Wolfy apologizes for hitting the user.")
            # Perform contrite movements and sounds

    def enter_programming_mode(self):
        print("Entering Programming Mode...")
        # Allow user to create or modify behaviors
        new_behavior = self.record_behavior()
        self.save_behavior(new_behavior)

    def record_behavior(self):
        print("Recording new behavior...")
        # Capture motor commands and sensor data during user manipulation
        behavior = {}  # Store the behavior data
        return behavior

    def save_behavior(self, behavior):
        print("Saving behavior...")
        # Save the behavior to a database or file

# Mock RCS Interface
class RCS:
    def get_sensor_data(self):
        # Simulate sensor data
        return {
            'touch': {'force': random.uniform(0, 10), 'velocity': random.uniform(-2, 2), 'location': 'paw_top'},
            'imu': {'position': [0, 0, 0], 'velocity': [0, 0, 0]},
            'audio': "sit",
            'vision': {'user_moving': True}
        }

    def send_motor_commands(self, commands):
        # Send commands to motors
        pass

# Main Loop
if __name__ == "__main__":
    wolfy_ai = WolfyAI()
    rcs = RCS()

    while True:
        d = rcs.get_sensor_data()
        # The dict keys don't match the parameter names, so keyword-unpacking
        # with ** would raise a TypeError; pass each field explicitly instead
        wolfy_ai.process_sensor_data(d['touch'], d['imu'], d['audio'], d['vision'])
        time.sleep(1)  # Simulate real-time processing


Deepseek said:
Key Features
Predefined Behaviors: Basic behaviors like sitting, sleeping, and following are implemented.
Sensor Data Processing: The AI processes touch, audio, and vision data to respond appropriately.
Learning Mechanism: A simple learning model (SGDClassifier) is included for basic tasks. You can replace this with more advanced models like deep reinforcement learning (e.g., using TensorFlow or PyTorch).
Programming Mode: Allows the user to create and modify behaviors.
Next Steps
Integrate with RCS: Replace the mock RCS interface with actual APIs for sensor data and motor control.
Expand Learning: Implement more advanced learning algorithms (e.g., deep Q-learning for reinforcement learning).
Add More Behaviors: Implement additional behaviors like barking, wagging the tail, etc.
Optimize Real-Time Performance: Ensure the AI can process data and control Wolfy in real-time.
This framework is a starting point and will need to be expanded and refined based on the specific requirements of your project. Let me know if you need further assistance!
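As a concrete illustration of the "Learning Mechanism" item above: scikit-learn's SGDClassifier can be trained incrementally via partial_fit(), which suits the teach-as-you-go interaction style described earlier. The feature layout and behavior labels below are invented for illustration; only the sklearn calls themselves are standard.

Code:
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical feature vector: [touch force, touch velocity, user-facing flag]
BEHAVIOR_LABELS = ["contrite", "stop_action", "sit"]  # example labels only

model = SGDClassifier(loss="log_loss")

def learn_from_interaction(features, behavior):
    """Incrementally update the model from one user interaction."""
    X = np.array([features])
    y = np.array([BEHAVIOR_LABELS.index(behavior)])
    # the classes argument lists all possible labels; it is required on
    # the first call to partial_fit and harmless on later calls
    model.partial_fit(X, y, classes=np.arange(len(BEHAVIOR_LABELS)))

def choose_behavior(features):
    """Pick the behavior the model currently associates with this input."""
    return BEHAVIOR_LABELS[int(model.predict(np.array([features]))[0])]

# Example: a hard, fast touch on the user-facing side -> contrite response
learn_from_interaction([8.5, -1.8, 1.0], "contrite")
print(choose_behavior([8.0, -1.5, 1.0]))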
 
I like to tackle unfamiliar problems from both high and low levels alternately. Here's some IMU code for Pi Pico:

import machine

# Note: the MPU6050 class is defined earlier in the tutorial linked below;
# it handles all of the register-level I2C communication.
i2c = machine.I2C(0, sda=machine.Pin(12), scl=machine.Pin(13))  # create the I2C bus object
imu = MPU6050(i2c)  # pass the i2c object to the tutorial's MPU6050 class
imu.wake()  # wakes up the MPU-6050 (it may have been in sleep mode)
gyro_data = imu.read_gyro_data()
print(gyro_data)  # e.g. (0.346823, -0.198345, 0.023958)

Taken from Development Tutorial section 3 of:

GitHub - TimHanewich/scout: Python-based Quadcopter Flight Controller Software using a Raspberry Pi Pico, MPU-6050, and a FlySky radio transmitter & receiver

with a step by step build guide here:

How to use an MPU-6050 with a Raspberry Pi Pico using MicroPython | by Tim Hanewich | Medium
 
I appreciate the thought, but I already have code-stuff to individually read IMUs; I even built a little breadboarded test unit on a previous page that read the IMU 3-axis angles and moved servos to match them, based on a gimbal camera-mount tutorial.

What I need is a program that will do *all* the functions I've previously described with the data gathering and processing of all the IMU outputs. ;) I have some concept of what's required, as a kind of block-diagram sort of thing, but digging into coding to implement it keeps exploding my brain; there are just too many things I don't know or "understand" yet. (I've previously mentioned my brain problems in being unable to learn this stuff linearly, and the gaps that leaves me with.)

Ideally it would be something that uses the MPU-6050's existing onboard processor (the DMP) to preprocess data to help this along, but I also don't understand the documentation they provide for it (such as it is--it seems to be pretty minimal, even insufficient, to use its features). It's probably me and not the docs, though.

My usage for the data is so different from the stuff I find out there that I don't yet know how to adapt the code that is out there to what I'm doing. I'm sure eventually I'll figure it out, probably once the AI stuff can be used to teach me the process the same way you can teach a really stupid dog to eventually stop peeing on its own head.


ATM I've got some heavy-duty cold (for days now) that I can't think through, so I'm probably not seeing / saying something right here, or misunderstanding what your post was about, etc. I could probably manage that last part anyway, without the cold, but it isn't helping at all.
 
Sorry I am all grumpy... mostly I'm exhausted and feel awful, been that way for days, when I took the week off so I could work on stuff around here and couldn't do much of anything but lie here most of the time. :(
 
No sweat, hope you feel better soon. It sounds as though you've enough h/w built to check how much noise and drift you get from your IMU and how rapidly it lets you re-read it. Those are or were IMU weaknesses so this might be worthwhile just to make sure it performs well enough for you to build up from there in the way you intend to.

If you dry run the Deepseek code with pen and paper, is it along the right lines? When you list what Deepseek left out (compared to your prompt), if there's anything fundamental missing (rather than enhancements), you could try feeding the code back into Deepseek to improve it, stepwise.
 
Apologies for the poor quality of the responses below, especially the typing; I'm not presently able to go back and fix all the problems I made typing it up. :(

It sounds as though you've enough h/w built

Not sure I'd call it "built" ;) but it's on a breadboard.

to check how much noise and drift you get from your IMU and how rapidly it lets you re-read it. Those are or were IMU weaknesses so this might be worthwhile just to make sure it performs well enough for you to build up from there in the way you intend to.

I don't know. I expect there will be drift on each separate IMU, and it'll probably need a separate board that has the other 3 axes, as these are only 6-axis, not 9-axis, so they can't compensate for it on their own. I've read of various techniques to use the separate board to recalibrate for drift over time. That's one of the parts of the wake-from-sleep routine for the whole wolfy, where it will "stretch" and figure out where all its parts are relative to each other and magnetic north, etc.
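For reference, the usual first defense against gyro drift on a 6-axis part like the MPU-6050 is a complementary filter: the gyro provides smooth short-term angle changes, while the accelerometer's gravity vector slowly corrects long-term drift (it can't help with yaw, which is where a 9-axis board's magnetometer comes in). A minimal single-axis sketch, assuming you already have gyro-rate and accelerometer readings:

Code:
import math

ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt):
    """Fuse one axis: integrate the gyro rate, then nudge toward the accel angle."""
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))  # tilt from gravity
    gyro_angle = angle + gyro_rate * dt  # dead-reckoned angle (drifts over time)
    return ALPHA * gyro_angle + (1 - ALPHA) * accel_angle

# Example loop; the read_* functions are hypothetical placeholders:
# angle = 0.0
# while True:
#     angle = complementary_filter(angle, read_gyro_x(), *read_accel_xz(), dt=0.01)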


If you dry run the Deepseek code with pen and paper is it along the right lines?

I don't understand what this means. Could you explain?

Even if I could read my own writing :oops: :/ it would take forever to write it all out on paper with a pen*** and I wouldn't get anything out of it that I don't get from seeing it on screen, where it's actually readable.

Even if I wrote it out on paper, I wouldn't know what it is doing any better than I would any other way?


***If I used a pen I couldn't correct mistakes, and I make lots and lots of those: wrong letters, shapes that aren't the right letters, or letters that end up on top of each other instead of in sequence, etc. I'd have to use a pencil so I could erase all those once I went through it enough times to see all the mistakes, looking back and forth from screen to paper over and over.


When you list what Deepseek left out (compared to your prompt) if there's anything fundamental (rather than enhancements) you could try feeding the code back into Deepseek to improve upon, stepwise.
I have no way of knowing whether the code is right or not yet; I haven't learned enough to find out. If I knew enough coding to know whether it was right or wrong, I'd be able to write it myself. :oops:

To test the code I have to build hardware that it can control, and then run it to see what happens. But that's still being worked out. I need to build the test skeleton, then install basic servos for one limb, along with IMUs on that limb and the main body frame at least, so it has something to relate to position-wise. Then I might be able to start testing the AI-generated code, and then maybe start learning something from the process of figuring out what does and doesn't work.


*****************

It's not really right, but you could think of the way I have to do things like this, where I don't know all the details of exactly how to do something already, as a jigsaw puzzle that's poured out onto the table and floor and the next room or two, and I just put together whichever pieces I already have available whenever they fit. The puzzle is all there in my head, just that some of the pieces are still blank, because I don't know exactly what is supposed to go there, although I have the shape of it. I doubt that helps any, but... it's all I can think of ATM.


If I had a partner to work with consistently, whenever I could work on it, who already knew how all these things worked, I could learn what I need from doing that. Right now that will probably have to be the AI, because it won't get upset by my weird brain that can't figure actual people out, or sometimes can't deal with how they act towards me, etc. (This is why I need the wolfy in the first place: there are no people that would ever want to be, or be able to be, what I need, and all of them "abandon" me sooner or later. Dogs do accept me and want me around, but I won't always be able to have a real one; someday I just won't be able to handle losing another one.)

I'm sure all this stuff makes me out to be a flaked-out weirdo, but... I guess I probably am.
 
That's what I meant by alternating: you've done a lot on the concept and functioning, and the jigsaw is that top-down phase asking for a break. H/w is real-world and finicky, so prototyping can often be better tackled bottom-up starting at the sensors, but if you can build a testable part of the skeleton then go for it. Alternating helps me to modularise a task that's taking shape and firm up the interfaces, and tends to expose 'minor' functionality that turns out to be daunting and needs working around.

https://google.com/?q=dry+run+a+program
Not only will you evolve Deepseek's code, but the more you skim or YouTube code in an unfamiliar language, the more you ingest how it works, making the language tutorials easier. There are things about Python I'm not keen on, but it's the new English for what you're doing. Hang in there with the AI, it gained 2 IQ points while I wrote this ;)
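For illustration, here's roughly what a dry run of one pass of the Deepseek loop looks like on paper, using made-up sensor values:

Code:
inputs: touch = {'force': 7.2, 'velocity': -1.6, 'location': 'paw_top'}
        audio = "sit", vision = {'user_moving': True}

process_sensor_data:
  is_sleeping is False                   -> keep going
  no interaction timeout yet             -> keep going
  7.2 > TOUCH_THRESHOLD and |-1.6| > 1.0 -> "Wolfy detected an impact!"
  location == 'paw_top'                  -> "Wolfy understands it was pushed away."
  "sit" in audio                         -> Behaviors.sit() -> "Wolfy is sitting."
  vision['user_moving'] is True          -> follow_user() -> "Wolfy is following the user."

If the trace doesn't match what you expected the wolfy to do at each step, that mismatch is exactly what to feed back to Deepseek.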
 
Ok, that makes sense, I think. Not sure how much is the explanation and how much is my brain not being starved... my O2 sats on the little finger monitor are finally staying above the beeping-alarm range, and I don't have all the pretty sparkles coming and going in waves, so either it's better or I finally burned out all the weak braincells. :p

Still sometimes choking (well, gagging) on the crap the coughing is flinging out of my lungs...
 
So, new problem, should be less complicated, but:

In my nonrobotic prototype, I have a speaker setup for "breathing" sounds to help me stay asleep on those occasions I can get there.

It's worked fine for a long while, but in the last several months it has stopped making sound for no apparent reason (troubleshooting below). When this happens, I'll wake up, either immediately or after nightmares about the loss of various dogs, probably caused by the sound stopping. It does matter how loud the sound is--if I turn it up above some (still very quiet) point, it will eventually cut out once a sound exceeds some volume. Once it cuts out, the amp has to have power disconnected then reconnected to restart. However, sometimes it will cut out even at a lower volume than that.


All the images are off the internet, as everything is inside the wolfy ATM and I haven't taken any pics of the actual parts I have.

The sound is delivered from a cheap little sd-card-playback board DY-SV5W from aliexpress. That works fine. Has a three position switch connected to three of the terminal pads to switch between calm slow breathing, gentle snoring, and slow panting, depending on what calms me at the moment. (all recorded from JellyBeanThePerfectlyNormalSchmoo).
It's powered by a 5v switching regulated (5.01v actual) 2A wallwart external to the wolfy.
[image: DY-SV5W playback board (stock photo)]
manual (attached below)

The output of that is connected to the input of a cheap little amplifier board from aliexpress, using the Yamaha YDA138 chip, powered from a 12v switching regulated (12.16v actual) 4A wallwart external to the wolfy. It's capable of 10W/channel, much more than the power output I'm actually using (it's set up to be barely audible).
[image: YDA138 amplifier board (stock photo)]
manual (attached below)

The speakers being driven are two different ones, one for the left channel, the 4W that's built into the old Altec-Lansing ACS90 computer speaker casing (the left one in the image, with no electronics in it; the other one wasn't available for this project or it would have been simpler)
[image: Altec Lansing ACS90 speaker pair (stock photo)]

that the above two items are also built into; this is in the chest. The other, much smaller one on the right channel used to be part of a triangular iHome iBT88 bluetooth speaker pair (the electronics failed and were then removed); this is in the head (because it fits in the space easily), with a shielded cable from it to the amp inside the other speaker.
[image: iHome iBT88 speaker (stock photo)]

Both speakers, while different from each other, are within the impedance range the amp can support (see the manual; I don't remember what it was). I can't use identical speakers, partly because I don't have two the same, and partly because one big enough to give the right deeper sounds for the chest is too large for the head (which doesn't need bass in it). The wave files I created for it separate the frequency bands to each speaker to give a more realistic sound balance, though the sound sources are essentially mono recordings (done on a phone with "two mics", but the second channel hardly contains anything other than ambient sounds that I must filter out, so it's easier to leave it out).
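For reference, that band-splitting step can be done in a few lines with scipy; this is only a sketch, with an assumed crossover frequency and invented file names rather than the exact process used:

Code:
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

CROSSOVER_HZ = 300  # assumed crossover between chest (lows) and head (highs)

rate, mono = wavfile.read("breathing_mono.wav")  # hypothetical source file
mono = mono.astype(np.float32)

# 4th-order Butterworth low-pass / high-pass split at the crossover
low = sosfilt(butter(4, CROSSOVER_HZ, 'lowpass', fs=rate, output='sos'), mono)
high = sosfilt(butter(4, CROSSOVER_HZ, 'highpass', fs=rate, output='sos'), mono)

# Left channel -> chest speaker (lows), right channel -> head speaker (highs)
stereo = np.stack([low, high], axis=1).astype(np.int16)
wavfile.write("breathing_split.wav", rate, stereo)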


I first verified there weren't any wiring failures (breaks, shorts, etc.) in power or speaker or soundboard-amp interconnects, etc.

Then I verified that while the amp gets warm, it's no more than 90°F inside the speaker, and even at the loudest it's ever used the chip itself doesn't exceed that temperature. I added a very small heatsink to it anyway, since it doesn't have one built in (it uses the PCB as one); the heatsink was removed from an ancient PCI ATI Radeon video card and cut down to fit in the available space. Didn't expect any change, and there wasn't. It doesn't appear to make a difference whether the amp is inside the speaker or out in open air.

I verified the sound board continues to output sound even when the amp isn't driving the speakers.

I verified the voltages (see above) of each power supply, and also swapped out both of the supplies (12v and 5v) to no effect. Also tried a 9v supply in place of the 12v, no change. Neither has any voltage drop during operation, even during the cutout.

I haven't yet tested or replaced the caps on the amp; that's probably next.

It could even be a bad amp chip (it may be genuine or counterfeit, the PCB may not have all traces connected as required, there might not be pullups on all the right pins, etc.; more things to check).


I used to use an even crappier PAM8403 amp chip that ran on 5v and used the same PSU as the soundboard, but even with a separate 5v supply it would distort and crackle randomly (regardless of temperature, etc.); this is why I changed to the 12v board above.
[image: PAM8403 amplifier board (stock photo)]

I had the guts from a couple of 12v computer speakers previously, but one of them never worked right, and the other had a 60 Hz hum nearly as loud as the audio being played even when powered from a battery and shielded by foil, even after replacing capacitors.

So....if anyone has ideas on what might be wrong with the YDA138 board.....


If I can't get it to work right, I've got another cheap crappy amp board that's supposedly capable of 50w, but I think it requires 24v, and I would really like to go back to a 5v-only system; I don't need more than a couple of watts, realistically.

Later on, with the full robotic wolfy, it may need more power for some of the vocalizations it could make, but I'll deal with that when I get that far (which is a long way away).
 

Attachments

  • DY-SV5W Manual.pdf (129.4 KB)
  • YDA138.PDF (1.2 MB)
FWIW here are my hunches. A non-linear failure in a linear task points to the amp chip. The delay and volume sensitivity hint at a thermal issue or even static or bias build-up, and the issues with previous amps hint at grounding issues. The board has barely enough components for the YDA138 basics and the middle electrolytic is the odd one out so probably power supply. So I'd run everything off the same supply, prefer a wallwart with a ferrite bead if you have one, keep power and signal interconnects short (except for the wallwart) and twin-core (not separate singles), and ground any screening at one end only; check for dry joints on the middle cap, replace the cheap amp board.

Linear ICs seem to be flakier than digital (maybe ESD protection's harder) so I'd favour a discrete amp [1] or get disposables [2]. Like hunting a grounding fault on a car sometimes it's easier to dismantle and rebuild, which tells you how much I know about linear stuff.

[1] Class A Analog Amplifier Module $9 2 Channel Class A Analog Circuit Power Amplifier Board Module 15W Stereo | eBay
[2] 4pcs/lot LM386 amplifier module $2 free shipping https://vi.aliexpress.com/item/1005006691775986.html
 
FWIW here are my hunches. A non-linear failure in a linear task points to the amp chip. The delay and volume sensitivity hint at a thermal issue or even static or bias build-up, and the issues with previous amps hint at grounding issues.

After a series of shutoffs in which the position of the wolfy's head appeared to make a difference (which was not noticeable previously), I did some further digging (literally) in the wire from the amp to the head speaker, and found that just like in some of the assorted ebike wiring faults over the years (on my own and in posts here on ES) the insulation jacket on the outside of the cable is visually intact, but internally there is a break in the insulation on an individual wire.

The cable is 3-4mm with an outer jacket, an unbraided outer shield, and two individually insulated inner wires (a stereo pair). I'm only using one of the wires, for the single speaker's signal positive, and then using the shield as the ground.

The insulation break is in the signal wire, and when it is bent just so, it spreads enough to allow a relatively high-resistance (for a short) contact between the signal wire and the grounded shield strands.

The amp has protection against this, so it shuts down rather than exploding, but the previous 5v-based amp did not, and the wiring fault is probably what destroyed that one (one day it was so hot I could smell it, and the external 5v supply had shut down; no wiring faults were found, so I figured it was simply yet another faulty cheap board; it wasn't the first of the lot that failed, and those had done so on the test bench).

I first just cut the affected segment out and retested to be sure that was actually the cause, then replaced the entire cable from amp to speaker and so far it's been fault free. (knocks on head for luck).



The board has barely enough components for the YDA138 basics and the middle electrolytic is the odd one out so probably power supply.
That's not surprising. Almost all the cheap devices of any type use the minimum effective parts--most of them use the spec sheet's bare test circuit.


So I'd run everything off the same supply, prefer a wallwart with a ferrite bead if you have one, keep power and signal interconnects short (except for the wallwart) and twin-core (not separate singles), and ground any screening at one end only; check for dry joints on the middle cap, replace the cheap amp board.

Well, I can't run them off the same supply since they're different voltages (the sound generating board is 5v only, the amp is 12v (well, 9v-15v, IIRC)).

If I could, I'd prefer to use a 5v amp, but none of the ones I've found so far are useful, or they make things more complex than necessary (most are based on the same chip as the crappy ones; others are mono only (some claim stereo and have two inputs and outputs but internally discard one channel and route the other to both outputs); some are BT input only (no wired input)). Some of them were bought really cheap on aliexpress, some found in assorted devices I either already had or got from thrift stores for the multiple parts I could use from them, etc.




Linear ICs seem to be flakier than digital (maybe ESD protection's harder) so I'd favour a discrete amp [1] or get disposables [2]. Like hunting a grounding fault on a car sometimes it's easier to dismantle and rebuild, which tells you how much I know about linear stuff.

ESD issues are about the same for both, in my experience. "Digital" amplifier ICs offer protection circuits that the pure analog ones usually don't, so chips like the Yamaha tend to survive events the analog ones don't, like the shorted speaker wire on this one.

I am already using "disposables", in that some of them are a dollar or less; the 5v PAM8403 ones
[image: PAM8403 amplifier board (stock photo)]
that failed before were a little more than 50 cents each.

I think the Yamaha one under discussion was 3 or 4 bucks?

I have a TPA3110-based one that was less than a dollar, but it still takes more than 5v to run, and claims to be 30W x 30W, with a teensy tiny heatsink on it that I'm pretty sure couldn't do that without active forced airflow.

[image: TPA3110 amplifier board (stock photo)]
And this one
[image: small amplifier board (stock photo)]
that supposedly runs on 5v but barely powers up from it, with unreliable operation regardless of the current capability of the supply.


As far as discrete amps go, if you mean ones built from individual transistors and such, they're no more reliable than IC-based ones: their parts count is usually much higher, so the potential for problems from solder joints, PCB traces, and wiring increases, even as individual parts may be tougher due to their larger sizes.

In previous wolfy prototypes the speakers have usually been computer-speaker amps that were often discrete-transistor types. Some used pure analog amp chips (various TDA series, usually) in assorted packages, with many discrete resistors, capacitors, diodes, etc. to support them. The other half of the speaker set being used in this wolfy has a hybrid amp like that; I forget what chip it has, but it's 12v powered, and has a tone control as well as a sub output jack and a jack to plug this one's bare speaker into.





The latter type might work, but requires one for each channel, and it is not "10W" output :lol: as a fair number of the ads claim. At best it can do about a watt with a 9v supply (I've seen this chip in a number of toys etc. with sound). How well it works on 5v would have to be tested; I expect it'll be just as bad as the PAM chip was, though. If I were to try it out, I'd have to search for a version of it I can get without importing, though; I don't want to wait a hundred years to get it with the new tariffs likely to delay everything. :/

It'd probably be safer to just see if anything I have, or run across as free junk, has one of these in it to try out... but then that's time needed to disassemble, rebuild, etc., which is why I bought amp boards in the first place, to save me that time. :/

I'd rather use one that has a TDA chip in it; the speakers I've used with those in them have worked well enough. I'm more likely to find a usable computer speaker set at Goodwill for a buck or two than I am to find a trustworthy ad for a bare amp board these days... but they'll be 12v speakers rather than 5v, with near certainty.

Too many decisions, too many things to do, too many false starts, too little time for proper research and experimentation or money to throw at the problem instead. :/


For now at least the YDA board is working, and it has protection that actually works (which is surprising but helpful).
 
Glad you got it sorted... a grounding issue of sorts ;) That's useful info about parts reliability. I saw a YT video that showed how iron is being substituted for copper in some flexes; I guess that'll lead to more core fractures. But what's up with the insulation? I thought PVC was cheap enough to use thickly.

The suggestion of a single PSU was because I've come across 'isolated' supplies with leakage, not quite floating as they should be. There are dual power modules giving +12V & +5V but deriving the 5V with a converter might be cheaper. The outputs from my 5+12V ATX PSUs are rough though.
 
Nothing is isolated in this system; the audio signal passes from the source device to the amp, thus the grounds of both must be tied together. (Not going to try building or buying audio isolation devices just to create a requirement for power supply isolation devices.)

So there is no reason or point to using isolated supplies. It's possible (even likely) that the supplies I am using are isolated DC-side from AC-side, but I haven't checked, as it doesn't matter for my purposes.

The only reason to use isolated supplies is when grounds must be kept separate, so that all voltages between two systems have no connection to each other: either for direct personal safety, or to prevent shorting out one of the systems, or in certain types of audio or other signal-processing applications where ground loops can cause signal issues (primarily in live stage audio, where different pieces of a total system end up powered from different AC sources, and DI boxes or other audio signal isolation methods also have to be used to prevent or minimize hum from different AC grounding levels).

So... it doesn't matter if I use a single power supply with a single or dual voltage, or separate supplies, and so I go the affordable route and use the stuff I have (rather than trying to find and buy a useful dual-voltage supply; none of the ones I have, for things like external harddisk backups etc., have enough current on one or the other of the voltages, or they are too electrically noisy for audio use).



The insulation on the wires used is normal thickness, but at the point of failure it thinned out--either a factory defect, or something caused by repeated bending of the wire at that point, either in its present application or in whatever it came from originally (since nearly everything I have is used and/or ancient, as I can't afford to buy much new (or even old), so I accept what I can get when it's thrown away by others or given to me).
 
I think this might be of great interest to you: MSN
Robotic therapeutic puppies with many of the desired features you want.
 
Thanks--but even the Tombot dog isn't sufficient for what I want or need. It's closer than any other I've seen, but still quite far, and since it isn't open source I can't use what they've done to further my own; so I'm still completely on my own to finish designing and building this project.



I am glad that someone out there has actually done something to help people that can't have real animals, it's significant progress, lightyears beyond the toy robot "doglike" things seen over the years, none of which would be useful for this. (Mine probably won't be at a useful point for years, maybe decades, at the rate I am able to progress by myself.)

Would be better if it were open source, so those that wanted to could build their own customized versions, and would be even better if it didn't require a phone app for setup etc. (because that means it will become obsolete, less useful, or even useless as soon as that company stops supporting the app versions it requires, for whatever reason).
 
Just for inspiration. He intends to open source the head design. It uses an RP2040 to operate each feature and a Pi for overall control but later he had to sort of work around this design choice because the h/w didn't support it.

3D Printed Biomimetic Mechatronic Hand Explained Part 1 - YouTube

My New Robotic Head Design is Legally Confusing - YouTube

Will Cogley - YouTube

If it were me I'd try out a single processor and loop, and when comfortable with MicroPython convert to using its asyncio feature. I expect actuator delays would complicate the coding, but STM32 / pyboard overcomes ADC delays and inaccuracy, and MicroPython supports several acceleration methods, even inline assembler. Luckily it isn't me; those videos span 6 years.
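For illustration, a tiny hypothetical sketch of that single-loop-to-asyncio progression in MicroPython: two independent tasks (one polling a sensor, one easing a servo) cooperatively sharing one processor. All names and timings here are made up.

Code:
import uasyncio as asyncio

async def poll_imu():
    # Placeholder: read the IMU at ~50 Hz and stash the latest values
    while True:
        # angles = imu.read_gyro_data()  # hypothetical read
        await asyncio.sleep_ms(20)

async def ease_servo():
    # Placeholder: nudge a servo one step toward its target each pass
    while True:
        # servo.duty_u16(next_step())  # hypothetical servo update
        await asyncio.sleep_ms(50)

async def main():
    asyncio.create_task(poll_imu())
    asyncio.create_task(ease_servo())
    while True:
        await asyncio.sleep(1)  # behavior logic would go here

asyncio.run(main())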
 
Thanks!

With some googling on the name I found his regular site, which is much more informative, includes the files that can be used to create them, and has other projects besides just those two. I'll watch the videos when I get a chance; they'll probably tell me stuff that's not in the files.

This one is intended to use a python script to match mouth movements to spoken words, which could be adapted (probably not by me) into the wolfy's mouth movements related to the wolf / dog sounds it would make.

The original script could be used if anyone ever wanted to have the wolfy be a more human-interactive type like Teddy from AI, but I'm not looking for a people-replacement, so I don't know how useful that would be for me.

There's also an animatronic eyes design that's better than any of the others I've found out there, among the ones that have the code itself available as well as models/instructions/etc.

I haven't looked at the robot head yet, but if it combines these into one functional unit, then as long as I can figure out how to mechanically alter the design to be a wolf head instead of a human one and still function as designed, I could probably use it "as is" for the most part. I would probably need to add something for the nose movements and the ears.

Since I don't want the wolfy to have the motors inside it because of the noise and the heat buildup (remember, it's all covered in padding to make it soft and squishy and then that's all covered in fur), I do already have to adapt the designs to cable-operation (meaning, stronger motors to overcome the cable friction, etc)
 
In reply to my PM Amberwolf wrote: "Can we put this in the thread itself instead, [reasons]". (AW: PM thread is Snuggles project +)

I wondered if there are any suitable alternative actuators without the drawbacks of cable operation, and PM'd a lengthy ChatGPT session that I couldn't link to as a free user, mangled by copy-paste incompatibilities between ChatGPT's output and ES/XenForo's input, and cringey where ChatGPT gets smarmy. Here are my prompts and key terms in the output; if you want to re-create the output it'll only get better with time. After logging on I usually start with "Please always be concise and non-judgemental".

TLDR: Search for Proprioceptive PneuNets – SDU Soft Robotics. On YT I didn't find any multi-compartment pneunets like ChatGPT's sketch.

You said:

Are there actuators made of memory metal or something like that? I'm looking for a way to produce small movements in the order of pi/2 radians or maybe 20mm, but smaller and quieter than a servo or stepper motor. Efficiency and reproducibility are secondary in my application. Remote operation is acceptable but hydraulics and co-axial cables tend to be too bulky. Maybe pneumatics can be made smaller, or perhaps there's some actuator like a very long bimetallic strip.

ChatGPT said:

Yes, there are actuators that use materials with shape memory, including memory metals (such as Nitinol) and other materials with similar properties. Here's a breakdown of relevant options that might suit your needs:

1. Shape Memory Alloys (SMAs)

- Nitinol is the most well-known memory metal. When it is heated beyond a certain temperature, it "remembers" a previous shape and can produce movement. Nitinol wires or springs can be used to create small movements, and they are quieter and more compact than traditional actuators like servos.
- Movement: The movement is typically linear, but a bending configuration or mechanical linkage could convert this linear motion into a rotational or complex motion.
- Advantages: Compact, silent, and low power consumption when heated, with no need for motors or complex mechanisms.
- Challenges: The heating and cooling process can be relatively slow, and precise control may require a temperature controller, which could add complexity.

2. Pneumatic Actuators (Micro-pneumatics)

- Pneumatic actuators can be made very small, especially with micro-pneumatic systems. They often use small air chambers or diaphragms that expand or contract when air pressure is applied.
- Movement: They can provide linear or rotational motion, depending on the configuration.
- Advantages: Small, can produce quick movements, and generally quieter than traditional motors.
- Challenges: Requires a pressurized air supply, and the control systems can be more complex (but compact options do exist).

3. Bimetallic Strips

- Bimetallic strips are another actuator based on material expansion. They consist of two metals with different coefficients of expansion, which bend when heated or cooled. These strips can produce small deflections.
- Movement: Primarily linear, but can be used in a bending configuration to generate rotational movement.
- Advantages: Simple, inexpensive, and silent.
- Challenges: The displacement is relatively small, and precise control can be tricky without additional mechanisms.

4. Electroactive Polymers (EAPs)

- EAPs are materials that change shape when an electric field is applied. They can be used for actuators with soft, quiet, and compact movements.
- Movement: Can provide both linear and rotational movement, with some materials capable of large deflections.
- Advantages: Silent, flexible, and potentially very compact.
- Challenges: These materials can be less efficient and may require high voltages for activation.

5. Electrothermal Actuators

- These actuators use the expansion of materials in response to heating, similar to SMAs but typically designed to generate more controlled and precise motions. They can be designed for small linear or rotational movements.
- Advantages: Compact, and can be controlled via simple electrical heating.
- Challenges: Like SMAs, they can be slow in response time.

You said:

Using off the shelf parts are any of these capable of producing more than light force, as I guess micro-pneumatics might be, but less complex to implement? Are there any suitable piezo-electric actuators or are those included in EAPs?

ChatGPT said: (greatly shortened)

Force capabilities and practicality:
Micro-pneumatics:
Shape Memory Alloys (SMAs like Nitinol):
Bimetallic strips:
Piezoelectric Actuators:

Off-the-shelf piezo stacks are:
Excellent if you need very strong force and tiny movement.
Terrible if you need 20 mm or ~π/2 radians of motion — unless you mechanically amplify the movement, which adds bulk and complexity.

There are piezoelectric benders ("piezo bimorphs") that bend more, but still not easily 20mm stroke without stacking them, and they don't easily produce high torque.

You said:

I'm wondering how to avoid burdening the system with a heavy rigid framework for the actuation forces to react against. Are there good solutions to these issues?

ChatGPT said:

Use antagonistic SMA pairs:
Use lightweight flexures instead of rigid frames:
Use tension-only structures:

You said:

Coaxial cables are usually used with the inner in tension. Instead, imagine two rods in one close fitting tube: each can be used in tension or compression. Is that commonly implemented anywhere?

ChatGPT said:

Some marine throttle cables are called push-pull cables — the inner cable is stiff enough to transmit moderate compression too. But they use a single rod, not two independent ones.
Flexible Shafts with Inner Core: (Used for power transmission) can rotate in both directions, but that's torque, not linear motion.
Medical Endoscopy Instruments (Steerable Shafts): Some advanced flexible medical devices use two rods inside sheaths to push-pull for steering.

You said:

A compartmentalised pressurised membrane structure could be light, rigid and capable of movement in response to pressure modulation. Imagine an air-inflated heart acting in reverse. Difficult to construct in that form, but perhaps something could be engineered mainly flat for 3D printing. Do you know of any examples of this in practice?

ChatGPT said:

1. Soft Robotics (Pneumatic Artificial Muscles, "PAMs")

- Devices like the McKibben actuator (invented 1950s) use a pressurized membrane to create linear pulling force.
- It’s an inflatable bladder wrapped in a braided mesh.
- When inflated, the mesh forces it to contract in length and expand radially.
- Very similar in spirit to your "heart-like" action!

2. PneuNets (Pneumatic Networks)

- A field of soft robotics where flat sheets with embedded chambers are selectively inflated to bend, twist, or expand.
- Typically 3D printed or molded from silicone.
- Made of compartmentalized membranes exactly like you're imagining.
- Inflating different chambers causes bending or articulated motion.

I'm not allowed to directly send images or links in the free version you're using. I’ll sketch it out for you in text form:

Side view of a PneuNet:

[Chamber 1] [Chamber 2] [Chamber 3]

Inflate chamber 1 → bend left
Inflate chamber 3 → bend right
Inflate all → elongate or arch up

These can be printed in thin flexible materials (TPU, silicone) and operate at low pressures (50–150 kPa, ~0.5–1.5 bar).

3. Inflatable Space Structures

NASA and others have developed inflatable booms and habitats for space applications where lightweight rigid structures are needed after inflation.

So to directly answer your question:
- Soft robotics: PneuNets actuators
- Medical devices: Artificial muscles, stent structures
- Aerospace: Inflatable rigidizable booms, modules
- Industrial automation: McKibben muscles, Festo fluidic muscles

Practical Notes for 3D Printing:

- Flexible materials: TPU (thermoplastic polyurethane) is printable by many FDM printers.
- Flat structures with air channels are easier to print and seal.
- Channels must be designed to avoid weak points that could rupture under pressure.
- You might need multi-material printing (hard + soft materials) to optimize structure if you want certain areas rigid.

Summary

✅ Your "reverse heart" idea matches PneuNets and soft robotic actuators.
✅ 3D printing flat pressurized membranes is a practical, explored idea.
✅ Membrane structures can act as both actuator and lightweight "frame" at the same time — very elegant for weight-saving.
 
AW wrote: "the air hissing thru tubes, valves and pipes, etc. Some of the noise could be moved outside, like the air compressor, but the internal sounds would be annoying"

Maybe if the system were made from rubber, silicone or 3DP TPU the material would damp turbulence noise. The compressor and tank might consist of a rubber bladder and a peristaltic / roller air pump. There are soft versions of actuators and valves too: a film stretched over a hole in a rigid sheet forms an air diode.
 