efficiency and pwm

With a motor said to be most efficient running at roughly 85 percent of its no-load speed, is that determined from the full pack voltage? You can't assume a new no-load speed based on an effective voltage from PWM at a lower duty cycle, can you? I believe the full pack voltage is used in determining the no-load speed, and ultimately, for the motor alone, riding around slowly is less efficient than going close to full speed.
Anything you can add, fix, or relate?
 
Yes, true to a point. That's why we have throttles.

If the PWM is anything less than 100%, you add a bunch of switching losses in both the controller and motor. If you look at the efficiency curves for a motor/controller combination, you see a kink in the line where the duty cycle reaches 100%. For max efficiency, the motor controller should only be commutating the full pack voltage.

At lower duty cycles the motor/controller efficiency will be less, but you'll have less loss from wind resistance, motor windage, eddy currents, etc., and the batteries will be more efficient. You really need to consider the entire system.
 
Always. Total efficiency of the entire system depends greatly on the load. Overloaded, whether it's just weight or weight up a grade, it gets inefficient.

Doesn't a motor run fairly efficiently at low rpm if the relative load is low? I see this as why 3000 W hub motors got so popular: a 200 pound body weight is not overloading them, so they run fairly efficiently at any speed on the flat and can zip up hills without losing rpm.

This is why mid drives' popularity is soaring: you can shift down and lower the load when you need to. Or leave it in high gear too much and overload it. :roll:
 
The no-load speed isn't constant, it's variable depending on what voltage is currently being fed to the motor at the time. The idea is that for good efficiency you wouldn't then load it down to less than 85% (using your number) of its unloaded speed.

Controller efficiency (at part or full throttle) is normally only an issue if it makes the controller overheat. The average controller is, what, 98-99% efficient? Even if the losses at part throttle doubled it would have a barely noticeable effect on the overall system efficiency.
 
I'm seeking a motor's efficiency alone, as that's the part of my system that gets hot, it being a smaller hub motor.

So the classic DC motor graph showing efficiency, power, and rpm, in which efficiency is by far best at around 85% and max power at around 50% of the no-load speed, is out the window, and I can get just as good efficiency at 20% of my possible max speed? There are new losses popping up I've never considered, such as switching losses in the motor, and I'm frustrated... aren't there generalizations about the efficiency of a motor versus speed? What happened to these motors "wanting to run fast"?



And then, fundamentally, what is the loss in a motor running at a low speed at full duty cycle, when the no-load speed is so far away?
 
You've got some misconceptions here.

Anytime core loss is above copper loss, the motor gets more efficient if you reduce rpm and increase phase current (torque).

Anytime copper losses (phase current² × winding resistance) are greater than core losses (dominantly hysteresis for hubs; the eddy portion is typically insignificant unless there are manufacturing issues), the motor gets more efficient if you increase rpm and decrease phase current.

There is no value at all in saying 85% of no load. Likewise, if the motor inductance is adequate to not saturate the core during the PWM on rise period, there's no advantage in going to higher speed PWM, but potential for increased switching losses.
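The two rules above can be sketched numerically. All numbers here (the 40 A phase current, 0.1 ohm resistance, and the per-rpm hysteresis coefficient) are invented for illustration, not taken from any real motor:

```python
# Sketch of the copper-vs-core-loss rule, with made-up example values.

def copper_loss_w(phase_current_a, winding_resistance_ohm):
    """I^2 * R heating in the windings; independent of rpm."""
    return phase_current_a ** 2 * winding_resistance_ohm

def core_loss_w(rpm, hysteresis_w_per_rpm=0.01):
    """Hysteresis-dominated core loss, roughly linear in rpm for a hub motor."""
    return hysteresis_w_per_rpm * rpm

def better_direction(phase_current_a, winding_resistance_ohm, rpm):
    """Which way to move the operating point for better motor efficiency."""
    cu = copper_loss_w(phase_current_a, winding_resistance_ohm)
    fe = core_loss_w(rpm)
    if fe > cu:
        return "lower rpm, raise phase current"
    return "raise rpm, lower phase current"

# Example: 40 A through 0.1 ohm windings at 300 rpm
print(copper_loss_w(40, 0.1))   # 160.0 W of copper loss
print(core_loss_w(300))         # 3.0 W of core loss
print(better_direction(40, 0.1, 300))
```

In this made-up case copper loss dwarfs core loss, so spinning faster at lower phase current wins, which matches the second rule.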

*edited to fix formula typo
 
I haven't come up with a way to figure out what my losses are comprised of, but since it's a small, slow-turning hub motor I imagine it's largely copper losses, as I read that it's hard to reach a flux density that would saturate the silicon steel, and that hysteresis increases when saturation happens. But I'm just guessing.

Looking at the standard graph and simulations, such as the Grin one, they always seem to show best efficiency at roughly 80%. If core losses increase with rpm, then to have best efficiency at such high rpm the copper losses must be going down as rpm increases?
 
Copper loss will always be exactly phase current² × winding resistance.

This is true whether you're at 20,000 rpm or 0 rpm.

Core losses are mildly more complex to calculate, but for hub motors eddy loss can typically be ignored, making core loss be however many grams of core material is getting its magnetic orientation flip-flopped at whatever severity and whatever electrical frequency (this is commutation frequency, and entirely unrelated to PWM frequency, which is just used for current control).
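As a rough sketch of that split, hysteresis loss scales about linearly with electrical frequency while eddy loss scales with its square (a Steinmetz-style model; the coefficients below are placeholders, not measured values):

```python
# Toy Steinmetz-style core loss model. Coefficients are illustrative only.

def steinmetz_core_loss_w(freq_hz, k_hysteresis=0.05, k_eddy=0.0001):
    hysteresis = k_hysteresis * freq_hz     # scales linearly with frequency
    eddy = k_eddy * freq_hz ** 2            # scales with frequency squared
    return hysteresis + eddy

# Doubling the commutation frequency doubles hysteresis loss but
# quadruples the (usually small) eddy component:
print(steinmetz_core_loss_w(100))   # 5.0 + 1.0 = 6.0 W
print(steinmetz_core_loss_w(200))   # 10.0 + 4.0 = 14.0 W
```

This is why geared motors with high commutation frequencies can see a larger eddy share than slow direct drive hubs.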
 
if copper losses are linear based just on the current and iron losses are based on rpm...why would a motor be more efficient at a higher rpm? Or maybe the point is it's not, but that goes against that common graph.
 
Hummina Shadeeba said:
if copper losses are linear based just on the current and iron losses are based on rpm...why would a motor be more efficient at a higher rpm? Or maybe the point is it's not, but that goes against that common graph.


Looking at the simulator output plot can be very confusing and/or misleading without understanding the relationships between the component losses composing the final efficiency.

Something super slick Justin could do with the sim is to graphically show in the curve which portion is due to core losses and which portion is due to copper losses with a pair of added lines to the graph.

This would make it super easy to understand whether it makes sense to increase phase current or spin faster.

As one extreme example, for my deathbike (or my DSR), when you're cruising at a steady speed (basically any steady speed up to ~100 mph), core losses dominate over copper losses, so slowing the motor down with taller gearing would improve net system efficiency for cruise. It would just proportionately make acceleration force worse when you were not in steady-state cruise. Either way, though, efficiency is so high that adding the losses of a single gearing stage would exceed total current system losses and make cruise at all speeds less efficient.
 
With copper losses as I²R and constant (until the motor gets hot), and iron losses simply increasing with rpm, why am I always seeing best efficiency at around 80% of max speed? Efficiency should be best at the lowest speeds if these are the only factors; surely there are others at play. I can imagine that increasing speed increases power, and in that way more power could be produced efficiently at higher speeds, but why is torque said to be greatest at the lowest speed too?

I've been interested in EVs for a while now and still don't get this fundamental understanding. I have some tools to do real-world testing, but wind resistance and rolling resistance are big factors. I would just like to know what speed is most efficient for the motor alone, and looking at graphs tells me I'm simply most efficient running at around 80% of no-load speed.
 
Hummina Shadeeba said:
if copper losses are linear based just on the current and iron losses are based on rpm...why would a motor be more efficient at a higher rpm? Or maybe the point is it's not, but that goes against that common graph.

It's because in typical applications current (and copper loss) drops as RPM increases.
In most cases motor heat is directly related to gearing (wheel diameter for hub motors). I would look into that instead of chasing marginal core losses.

Avner.
 
Sorry, yes, copper losses are non-linear; that's the wrong term.



It's seeming that efficiency in a hub motor especially is about reducing copper losses, and that this is solved simply by having enough copper to reduce the resistance yet still enough turns to produce the inductance.
 
Yes, though I would say enough turns to produce the magnetic field (amp-turns), which allows the real work to occur. The inductance is useful to provide good filtering for the PWM but too much slows things and wastes energy at commutation time, and too little requires higher frequency PWM and makes it harder to control. So there is a range of inductance that is easier to work with.
 
I see you're in San Francisco. Me too. I just had these huge hub motors made and am curious what direction to go for best efficiency. I was doing a smaller version, but they got hot, so these were an obvious direction to go, as I'm assuming most of the heat was due to copper losses since it's a relatively slow turner. Lots of assuming, but a bigger motor seems the answer across the board.
 
Hummina Shadeeba said:
i see you're in san francisco. Me too. I just had these huge hub motors made and am curious about what direction to go for best efficiency. Was doing a smaller version but they got hot so these were an obvious direction to go as I'm assuming most of the heat was due to copper losses as it's a relatively slow turner. lots of assuming but a bigger motor seems the answer across the board


Bigger motor is typically the right answer for anything with windings overheating.
 
liveforphysics said:
Something super slick Justin could do with the sim is to graphically show in the curve which portion is due to core losses and which portion is due to copper losses with a pair of added lines to the graph.

It is there if you click the "advanced" tab from the trip simulator, rather than just the motor simulator!
http://www.ebikes.ca/tools/trip-simulator.html
TripSim Core Losses.jpg

Your statement that eddy losses are negligible and core loss is completely dominated by hysteresis seems at odds with most of my firsthand test experience, though. With direct drive hub motors in the ~300 rpm range I normally see eddy losses at about 25-35% of the total core losses. And with geared motors, with their much higher commutation frequencies, the motor drag roughly doubles from very low speeds to the typical cruising speed. Like this here (an Xofo equivalent to a Bafang CST), going from 1 Nm of drag at a standstill to 2 Nm at the ~250 rpm design speed:
CogTorque Example.jpg

Isn't that a sign that the core loss is split almost 50:50 between hysteresis and eddy currents at the operating RPM, or are there other reasons why the drag caused by hysteresis would increase with motor RPM like that?

Anyway, Hummina, in principle measuring the core losses to first order is quite easy to do with basic equipment. Run the motor unloaded at full throttle at a given voltage (say 24V) and record how many amps it draws at no load. Then do the same test at a different voltage (like 12V) and compare the no-load currents. Then make a straight-line interpolation from that and you have a pretty good idea of the core loss as a function of RPM.
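That two-voltage test can be turned into a quick calculation. The voltages, currents, and speeds below are made-up example readings, not measurements from any particular motor:

```python
# Straight-line fit of no-load (core) loss vs rpm from two test points,
# as described above. All input numbers are invented examples.

def core_loss_line(v1, i1, rpm1, v2, i2, rpm2):
    """Return a function giving core-loss watts (V*I at no load) vs rpm."""
    p1, p2 = v1 * i1, v2 * i2               # no-load input power at each test
    slope = (p2 - p1) / (rpm2 - rpm1)       # watts per rpm
    offset = p1 - slope * rpm1
    return lambda rpm: offset + slope * rpm

# Say the motor drew 1.0 A no-load at 12 V (150 rpm) and 1.5 A at 24 V (300 rpm):
core_loss = core_loss_line(12, 1.0, 150, 24, 1.5, 300)
print(core_loss(225))   # interpolated core loss at 225 rpm, in watts
```

To first order this ignores the (small) no-load copper loss, which is the same approximation the test itself makes.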

Normally, when a motor overheats then, as liveforphysics says, it's because you need a bigger motor. But I have taken a brand new Turnigy hobbycrap RC motor, run it completely unloaded for a while, and was surprised to see that the core reached nearly 100°C just on its own, while producing zero output torque or power.
 
justin_le said:
going from 1Nm of drag at a standstill to 2Nm at the ~250rpm design speed:


Isn't that a sign that the core loss is split almost 50:50 between hysteresis and eddie currents at the operating RPM, or are there other reasons why the drag caused from hysteresis would increase with motor RPM like that?

That's so awesome the simulator shows core vs copper! I never noticed!

I don't see the squared-curve signature of eddy loss vs RPM in those graphs. The linear component would match hysteresis losses quite well (unless I'm missing something in the chart?). Eddy losses scale with the square of RPM, but the magnet/rotor surface speeds of direct drive hub machines keep them in the insignificant portion of the loss curve.

justin_le said:
...I have taken a brand new turnigy hobbycrap RC motor, run it completely unloaded for a while, and was surprised to see that the core reached nearly 100oC just on it's own, while producing zero output torque or power.

Even some very high performance motors can't run unloaded at full RPM with temperature stability, because core loss is so significant. When running loaded they actually have a slightly rougher core-loss heat load to eat, even before copper loss is factored in. This is due to eddies across the face of the magnets, which are stronger in high-torque situations (the face of the magnet ideally sees a static field, but in practice it gets deformed dynamically by the field in the tooth interacting with the magnets at high tooth flux densities).
 
liveforphysics said:
That's so awesome the simulator shows core vs copper! I never noticed!

Well, it's not the motor simulator but the EV trip simulator, which will be worthy of its own thread here on ES explaining all its functions and features when I have the time. If you look in the left-hand pane you'll see it's also very easy to input your own custom motor parameters and motor constants, and I'm hoping to populate this with some representative skateboard motors soon, once the wind tunnel test chamber is active again (currently it just has ebike hub motors).

I don't see the squared curve signature of eddy loss VS RPM in those graphs? The linear component would match hysteresis losses quite well (unless I'm missing something in the chart?).

Yes, what you are missing is that the Y axis here is scaled in torque units rather than power units. I find that's a much easier plot to read and understand than power vs. rpm. The Y intercept is the constant hysteresis drag, and the slope lets you visually see how much eddy loss comes into play at higher speeds. To get core power vs. RPM from that plot, you'd just multiply by RPM, and then the offset and slope become a linear and a quadratic term respectively.
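That torque-to-power conversion can be sketched like this, using the 1 Nm static / 2 Nm at 250 rpm drag figures from earlier in the thread as the assumed drag line (the slope coefficient is back-calculated from those two points):

```python
# Drag torque line tau(rpm) = a + b*rpm, multiplied by angular velocity to
# get core power: P = a*w + b*w^2. The offset (hysteresis) becomes a linear
# power term and the slope (eddy) a quadratic one. Coefficients assume the
# 1 Nm @ 0 rpm and 2 Nm @ 250 rpm example from the thread.
import math

def drag_torque_nm(rpm, a=1.0, b=0.004):
    return a + b * rpm

def core_power_w(rpm, a=1.0, b=0.004):
    omega = rpm * 2 * math.pi / 60          # convert rpm to rad/s
    return drag_torque_nm(rpm, a, b) * omega

print(drag_torque_nm(250))   # 2.0 Nm, matching the measured example
print(core_power_w(250))     # ~52 W of core loss at the 250 rpm design speed
```
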

liveforphysics said:
justin_le said:
...I have taken a brand new turnigy hobbycrap RC motor, run it completely unloaded for a while, and was surprised to see that the core reached nearly 100oC just on it's own, while producing zero output torque or power.

Even some very high performance motors can't run unloaded at full RPM with temperature stability due to core loss being so significant. When running loaded they actually have a slightly rougher core-loss heat-load to eat even if copper loss isn't factored. This is due to eddys across the face of the magnets which are stronger in high torque situations (the face of the magnet ideally is a static field, but in practice gets deformed dynamically from the field in the tooth interacting with the magnets at high tooth flux densities).

Interesting, and thanks for confirming this. It was just such a shock to me, since I'm used to the large hub motors where you can run them unloaded and the last thing that would cross your mind is that this would heat them up. I was like Hummina, playing with these RC motors for skateboard usage. One thing I have noticed is that all of the commercial skateboard hub motors I've purchased from China lately seem by default to use 0.2 mm laminations with very low turning drag. I think this is motivated more by having the motors spin freely while coasting than by concerns about overheating from core loss, but it's good to see this taken to heart.
 
Fantastic simulator and I've been using it on the continuing quest to figure out how to run most efficiently. Simply trying to figure why it's more efficient to run at high speeds.

With PWM at a lower duty cycle the motor amps are higher than the battery amps; is this the cause of inefficiency at lower speed, since copper losses are higher with the higher current even though it's pulsed? 100 amps at 10% duty cycle and 0.1 Ω being greater losses than 10 amps at 100% duty cycle at the same resistance?
100² × 0.1 (duty) × 0.1 Ω = 100 W
vs
10² × 0.1 Ω = 10 W

With PWM the motor sees pulsed voltage but continuous amperage, so is it fair to assume it's creating the heat for a tenth of the second, as in the instance above, or for the whole second?

It's seeming that the pulsed power is more inefficient, and that it's better to run high battery amps for better efficiency?


Speaking of heating unpowered: I've got three motors on my board and only one is powered. The two unpowered ones just spin along and get warm. I've isolated the wires and there are no obvious shorts, but one of the unpowered motors is consistently 10 or 15 °F hotter than the other. I don't know why. Same 0.2 mm lams. Maybe bad bearings, or is it possible to have a hard-to-detect minor short in a winding? But would that even cause any losses when the wires aren't connected to anything and no circuit is made?

Lastly, all the ESCs I tried on the simulator show increased motor amps compared to battery amps all the way up to the most efficient speed, but the VESC goes to 100% duty cycle before it's even reached half speed... why?
 
Motor current is essentially continuous (there is some ripple), due to the filtering effect of the inductance.

A short in the motor will waste energy inside the motor with circulating current.

The losses due to pulsed power are small, and it provides needed control.

A disconnected but spinning motor will still have losses such as eddy current.

The motor simulator is only in equilibrium at the one speed (where power crosses the load line). To the left of that it is accelerating, to the right it is decelerating. So the efficiency at other than the equilibrium point is not a steady state efficiency, the energy used includes that for acceleration, lowering the efficiency.
 
Hummina Shadeeba said:
Fantastic simulator and I've been using it on the continuing quest to figure out how to run most efficiently. Simply trying to figure why it's more efficient to run at high speeds.

The answer to that is trivial. For a given torque output, at higher speeds the motor will be producing more output power (torque × rpm) but will have the same copper losses. Hence the copper losses as a percentage of the total power are lower and your efficiency is higher. I.e., 50 watts of I²R loss with 100 watts of output power is (ignoring core losses) 66% efficiency, while 50 watts of copper loss with 200 watts of output power from doubling the RPM would be 80% efficiency.
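The arithmetic in that paragraph, generalized slightly (core losses ignored, exactly as in the example):

```python
# Efficiency at fixed torque: copper loss stays constant while output power
# scales with rpm, so efficiency climbs with speed. Core losses ignored,
# as in the 66%/80% example above.

def efficiency(output_w, copper_loss_w):
    return output_w / (output_w + copper_loss_w)

print(round(efficiency(100, 50), 2))   # 0.67 -- the "66%" case
print(round(efficiency(200, 50), 2))   # 0.8  -- double the rpm, same copper loss
```
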

With pwm at a lower duty cycle the motor amps are higher than the battery amps; is this the cause of inefficiency at lower speed as the copper losses are higher with the higher current even though pulsed?

No, the extra copper losses that may be present from PWM ripple in the phase current are like a 3rd order effect that you can completely disregard.

100 amps at 10% duty cycle and .1ohm being greater losses than 10 amps at 100% duty cycle at the same resistance?

As AlanB mentioned, that's not how PWM works. With the 10% duty cycle, you won't have 100 amps for the ON portion of the cycle and 0 amps for the OFF portion. Instead you'll see something like 8 amps ramping up to 12 amps during the ON portion, and that 12 amps gradually ramps down to 8 amps during the OFF portion. On average it stays pretty close to 10 amps, and will have basically the same heat generation as a steady 10 amps at 100% duty.
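A crude time-step simulation shows the same thing. The inductance, resistance, voltage, and PWM frequency below are all assumed round numbers, not any particular motor's values; the point is just that the phase current ripples around its average instead of jumping between zero and full current:

```python
# Euler simulation of an R-L winding driven by PWM voltage (assumed values).
# di/dt = (V - i*R) / L, so the inductance filters the chopped voltage
# into a nearly-steady current with a small ripple.

R = 0.1          # winding resistance, ohms (assumed)
L = 100e-6       # phase inductance, henries (assumed)
V_on = 10.0      # applied voltage during the ON part of the cycle (assumed)
duty = 0.5
f_pwm = 20000.0  # 20 kHz PWM (assumed)
dt = 1e-7
period = 1.0 / f_pwm

i = 0.0
samples = []
t = 0.0
while t < 500 * period:                 # run well past the L/R time constant
    v = V_on if (t % period) < duty * period else 0.0
    i += (v - i * R) / L * dt
    samples.append(i)
    t += dt

tail = samples[len(samples) // 2:]      # discard the start-up transient
avg = sum(tail) / len(tail)
ripple = max(tail) - min(tail)
print(round(avg, 1))     # settles near V_on * duty / R = 50 A average
print(ripple)            # ripple is only a small fraction of the average
```

With these numbers the current settles at the duty-weighted average (50 A) with roughly an amp of ripple, so the I²R heating is essentially that of a steady current, just as described above.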

With pwm the motor sees pulsed voltage but continuous amperage so is it fair to assume it's creating the heat for a tenth of the second in the instance above as I did it or the whole second?

It's creating heat for the whole second.

It's seeming the pulsed power is more inefficient and better to run high battery amps for better efficiency. ?

There is some marginal truth to this, that running a motor at say 24V and 100% duty cycle could be a tiny bit more efficient than 48V at 50% PWM duty cycle, but when I say tiny I really mean tiny. You wouldn't be able to perceive the difference. The controller on the other hand will have noticeably higher losses in the latter case, but you are explicitly asking about motor efficiency in this thread, not controller efficiency.

Speaking of heating unpowered I've got 3 motors on my board and only one is powered. The two unpowered just spinning along and getting warm. I've isolated the wires and there's no obvious shorts but one of the unpowered motors will be consistently 10 or 15f hotter than the other Don't know why. Same .2mm lams.

My guess, if you are 100% sure there is no copper short, is that most likely one of the motors has better isolation between the laminations than the other. If the laminations are touching in multiple spots then you can get eddy currents across the lams rather than just inside each individual one. I once had an ebike hub where I chucked the motor stator on a lathe in order to shave off a tiny amount of metal for better mechanical clearance. In the process, though, the lathe bit effectively smeared and shorted the perimeters of the laminations together. It looked smooth and shiny, but the cogging drag increased by a huge amount.
Lastly all the esc I tried on the simulator show increased motor amps compared to battery all the way up till most efficient speed
but the vesc goes through to 100% duty cycle before it's even gotten to half speed..why?

You'll need to provide graphs and numbers of what you are doing and what you are asking. The point where the controller switches from 100% duty cycle to partial duty cycle is based simply on the current limit you have set for the motor controller. The higher the current limit, the lower the motor speed can be while still putting out 100% PWM.

-Justin
 
Thanks for helping me better understand this stuff. Huge revelations for me!

Any way to better isolate the smeared stator? I smeared one last night when the motor structure became loose and rubbed a bit on the magnets. I've had this before; the magnets are super durable, but the stator will polish in a spot. I wonder if there's anything that can be done to re-isolate it. I'm guessing no. It seems the lams aren't fully isolated from each other even at the start.
 