**Important** reality check on motor, voltage, current etc.

Well, it probably doesn't matter whether or not some are cooler than others, but rather that some are already hot.

I doubt there is much difference between having the other half of the phase leg cool or hot when the first half (whichever one that is) is already at or near its max. It'd fail either way if it kept having more heat added to it.
 
This is speculation, but I wonder if dv/dt turn-on is rearing its head here.

When the upper MOSFET is turning on, the drain voltage at the lower FET goes from ~0V to full battery voltage in a very short period of time. This dv/dt can cause current to flow through the parasitic gate-drain capacitance in the lower FET. That current tries to turn the FET on just like the gate driver. If there's enough current and it isn't drained away, the lower FET will turn on and shoot-through occurs with a very large spike of current.

The gate threshold voltage on a MOSFET has a negative temperature coefficient, so it decreases as the FET gets hotter. The spec Vth for a 4110 FET is about 3V, and the coefficient is about -7mV/C. So, cruising around at low throttle getting the FETs hot will bring the threshold down. Gunning it right after might get the FET hot enough to trigger shoot-through. Running at high voltages will make this worse since the dv/dt is larger. A hot controller at high voltage is pretty much the worst case for dv/dt-induced turn-on.
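To put rough numbers on that, here's a quick sketch using only the figures above (the 25 C reference temperature is an assumption, so treat the output as illustrative):

```python
# Rough estimate of the 4110 gate threshold vs junction temperature,
# using the ~3 V spec value and ~-7 mV/C coefficient quoted above.
VTH_25C = 3.0      # V, spec threshold at an assumed 25 C reference
TEMPCO  = -0.007   # V per degree C

def vth(junction_temp_c):
    return VTH_25C + TEMPCO * (junction_temp_c - 25.0)

for t in (25, 75, 125):
    print(f"Tj = {t:3d} C  ->  Vth ~ {vth(t):.2f} V")
# ~2.3 V at 125 C, so a hot FET needs noticeably less induced
# gate voltage before it starts to conduct.
```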

Edit: I forgot to add that the heat is an even bigger problem because, in the cases we're talking about, the low-side FETs are taking most of the abuse. High-side FETs don't suffer from this issue.

This would still be a temperature-induced problem, but if I'm right a shoot-through event might be killing FETs before the heat does. I think it would be very difficult to prove this, but you could probably make some measurements to see if it's plausible.

Since these controllers weren't originally designed for high-voltage use, a gate drive circuit that's perfectly fine for the original specs might not be quite good enough for higher voltages and higher dv/dt.
 
If the fast switching is the problem, why can't the controllers do wider pulses with more space between each pulse, for the same average power but greatly reduced switching losses? Isn't there something in the programmable settings of these Infineons to accomplish just that? What about going caveman style and wiring a momentary switch in parallel with my throttle, so I can manually reduce the number of switching events with pulse-and-glide throttle for the conditions I know are harder for the controller? The only problem is if I'm forced into partial throttle by traffic and can't pulse and glide because of a car behind me, though I guess I could pulse and glide with fast switching using my thumb.

I've got to find a low-cost answer that retains high performance without giving up everything I like about riding with a low-turn-count motor. There has to be an electronic answer to this problem other than going to much more expensive mosfets that are more difficult to implement, even if it means more of the motor sonics that I hear for only 2-3 seconds at certain partial throttle positions (about 30%) while accelerating from near 0 speed. I'd think there must be some way to whack off at least 75% of that switching heat loss just by slowing the switching down during conditions that are difficult for the controller, without affecting anything we notice as riders.

John
 
In this case, the frequency or duration of the PWM pulses aren't the issue. The turn-on dv/dt is entirely determined by the gate drive circuit in the controller. One way to reduce it is to simply slow the turn-on down, but that will also increase switching loss. A better way puts a diode in parallel with the drive resistor. The turn-on current is controlled by the resistor as usual, but the diode bypasses it to make turn-off happen faster. On the low-side, that diode also helps hold the gate down against the induced current. Sometimes a "snubber" circuit is used which puts a series R-C across the FET to limit dv/dt, but that also creates losses and more heat to deal with (not to mention large power resistors).
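As a very rough sketch of why the bypass diode helps, here's an average-current estimate. Every number below is an assumption for illustration (gate charge, drive rail, plateau voltage, and the diode-path resistance aren't from this thread or any particular datasheet):

```python
# Asymmetric gate drive: the series resistor sets turn-on speed,
# while the bypass diode lets turn-off current skip the resistor.
Q_GATE  = 150e-9   # C, assumed total gate charge
V_DRIVE = 12.0     # V, assumed gate drive rail
V_PLAT  = 5.0      # V, assumed Miller plateau voltage
R_ON    = 20.0     # ohms, series drive resistor (turn-on path, assumed)
R_OFF   = 2.0      # ohms, assumed effective resistance through the diode path

def switch_time(r_gate):
    i_gate = (V_DRIVE - V_PLAT) / r_gate   # crude average gate current
    return Q_GATE / i_gate

print(f"turn-on  ~ {switch_time(R_ON)*1e9:.0f} ns")   # ~430 ns
print(f"turn-off ~ {switch_time(R_OFF)*1e9:.0f} ns")  # ~40 ns; the low-impedance
# turn-off path also helps hold the gate down against dv/dt-induced current.
```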

It's possible some modifications could be made to the gate drive circuit. We'd want to be certain this is an issue before going to that trouble, though. Data needed!
 
I'm not talking about the shoot-through issue, unless that's the whole problem. I thought you guys were saying there's an issue with some loss for each on/off cycle that accumulates with more on/offs resulting from partial throttle. Isn't there a resolution for the high-current part too via a higher apparent voltage, or no?
 
I doubt shoot-through is the entire problem. It might not be a problem at all. Just speculating.

The bottom line is that torque and phase current are directly related. If you want a certain amount of torque from your setup, that will require a certain amount of current to get it regardless of voltage. There are ways the setup could be optimized, but current and torque will always go together.
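(A one-line illustration of that point, with a made-up torque constant just to show the relationship:)

```python
KT = 1.5  # Nm per phase amp - hypothetical motor torque constant
for amps in (20, 50, 100):
    print(f"{amps} A phase -> ~{KT * amps:.0f} Nm, regardless of pack voltage")
```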

If heat were the only or primary issue, you could always improve your cooling. Fans or even water cooling are possible.

I'm getting the impression that heat might not be the main issue for you, though. I need to get to bed before I get myself in trouble with more speculation. :)
 
The wider pulse means sharper current spikes on the FET. With a high-inductance motor (lots of turns) you can get away with wider pulses. The motors with low turns need bunches of short-duration pulses to keep FET junction current ramp rates in check.




John in CR said:
I'm not talking about the shoot-through issue, unless that's the whole problem. I thought you guys were saying there's an issue with some loss for each on/off cycle that accumulates with more on/offs resulting from partial throttle. Isn't there a resolution for the high-current part too via a higher apparent voltage, or no?
 
rhitee05 said:
In this case, the frequency or duration of the PWM pulses aren't the issue. The turn-on dv/dt is entirely determined by the gate drive circuit in the controller. One way to reduce it is to simply slow the turn-on down, but that will also increase switching loss. A better way puts a diode in parallel with the drive resistor. The turn-on current is controlled by the resistor as usual, but the diode bypasses it to make turn-off happen faster. On the low-side, that diode also helps hold the gate down against the induced current. Sometimes a "snubber" circuit is used which puts a series R-C across the FET to limit dv/dt, but that also creates losses and more heat to deal with (not to mention large power resistors).

It's possible some modifications could be made to the gate drive circuit. We'd want to be certain this is an issue before going to that trouble, though. Data needed!

If it's the gate drive circuit that's the culprit, that'd imply Infineons are pretty much useless, no? But, somehow, they seem to survive certain types of usage quite well... (Or so I believe. I have a hard time believing that ebikes.ca would sell an unreliable controller if a significant percentage were affected. It seems there's something quite rider/motor dependent, given that John has a blown controller (or many?) so far.)
 
Plus, wider pulses generally means lower frequency, which probably means more audible noise from the motor coils if it goes low enough.
 
swbluto said:
It seems there's something quite rider/motor dependent, given that John has a blown controller (or many?) so far.)

?????????? Did you miss the part that I've been riding an unchanged 60mph bike every day for over a year and a half, with the only maintenance being changing brake pads? Only after further shunt mods the other day, done for the purposes of this thread to find out whether my controllers have some different setting protecting them, did I suffer this controller failure, and that was while riding at 5mph and not up a hill, so nothing to do with current multiplication unless it's allowed to go 20 or 30 times battery current. As I stated before, the other identical-situation failures were on different bikes, one even brushed. Hyena also reports similar very-low-speed failures, so it's an issue.

The other controller failure with my big low-turn-count hub motors was the meltdown of the Methods controller, but that was with a larger wheel and partial throttle up a significant hill, something I found out after the fact not to do.
 
liveforphysics said:
The wider pulse means sharper current spikes on the FET. With a high-inductance motor (lots of turns) you can get away with wider pulses. The motors with low turns need bunches of short-duration pulses to keep FET junction current ramp rates in check.

John in CR said:
I'm not talking about the shoot-through issue, unless that's the whole problem. I thought you guys were saying there's an issue with some loss for each on/off cycle that accumulates with more on/offs resulting from partial throttle. Isn't there a resolution for the high-current part too via a higher apparent voltage, or no?

Are you saying my manual throttle-pulsing approach won't work to reduce the problems associated with partial throttle? If it does work, then there should be a simple way to do the same thing electronically. Or does the delay between WOT-type pulses need to be measured in seconds to dissipate the heat, so manual pulse-and-glide works because it's so slow, but doing it electronically won't?

John
 
rhitee05 said:
This is speculation, but I wonder if dv/dt turn-on is rearing its head here.
.....

Interesting thought, Eric.

Certainly the gate drive circuit can be critical. As the FET turns on, it goes through a region of linear operation. There is a huge capacitance from drain to gate. In fact, if you look at the circuit, it's almost designed to set up some parasitic oscillations at this point. So it's quite plausible that a dv/dt spike on the drain could turn the gate on.

I once managed to capture some scope traces of 4110 FET ringing at about 50 kHz as it went through the transition. As soon as I redesigned the gate driver with high frequencies in mind it behaved. The simple answer is that the gate driver must present a low impedance, not just at the switching frequency, but at all frequencies that the device has gain.

Nick
 
Ok, so I've added controller heat calculations that account for freewheeling losses, resistive losses and switching losses.

Using Matt's trike parameters, it seems that at cruising speed on flat land, the controller heat peaks at ~92% throttle at around 63.8 watts (~48 mph). At WOT (~50 mph), it puts out 26.1 watts of heat. At 40% (~22 mph), it puts out 28.3 watts. At 20% (~11 mph), it puts out 15.8 watts.

Now let's go up a 7% hill at cruising speed.

At 10% (4 mph): 88.5 watts
At 20% (10 mph): 86.4 watts
At 40% (20 mph): 89.7 watts
At 60% (30 mph): 97.6 watts
At 80% (40 mph): 104.8 watts
At 90% (44 mph): 106.7 watts
At 95% (46 mph): 107.0 watts
At 100% (47 mph): 50.7 watts

This seems to confirm that partial throttle on hills is worst for the controller, but on flat land it doesn't seem to matter much until the 50-90% throttle range, where it becomes significant (the upper end being worse).

(And, thinking about it, I believe this vindicates my understanding for the past year or two: 100% throttle minimizes heat, while lower throttle also reduces ESC heat, except at absurdly low throttle (<20%) on a hill. However, given a hill of sufficiently small height, the ESC will likely be warmer at 20% than at 60% throttle by the top, because it takes 3 times as long to get there, which implies 86.4 * 3 / 97.6 = about 2.7 times as much thermal energy by the time it reaches the top. Of course, if you're running enough cooling to cool a 120 watt Pentium 2 processor, you should be good. :mrgreen: )
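For reference, here's the rough shape of the loss model: a simplified sketch, not the actual simulator code, and every parameter value here is an assumption you'd want to adjust.

```python
# Lumped per-leg controller losses: conduction during the on-time,
# freewheeling drop during the off-time, and hard-switching loss on each edge.
def controller_heat(i_phase, duty, vbus,
                    rds_on=0.006,           # ohms, hot-FET assumption
                    v_fw=0.8,               # V, assumed freewheeling drop
                    t_on=1e-6, t_off=1e-6,  # s, assumed switching times
                    f_pwm=10e3):            # Hz, assumed PWM frequency
    p_cond   = i_phase**2 * rds_on * duty
    p_freewh = v_fw * i_phase * (1.0 - duty)
    p_switch = 0.5 * vbus * i_phase * (t_on + t_off) * f_pwm
    if duty >= 1.0:                  # at true 100% duty nothing is chopping,
        p_freewh = p_switch = 0.0    # which is why the WOT numbers drop sharply
    return p_cond + p_freewh + p_switch

print(f"{controller_heat(60, 0.4, 100):.0f} W")  # e.g. ~97 W at 60 A, 40% duty, 100 V
```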
 
swbluto said:
If it's the gate drive circuit that's the culprit, that'd imply Infineons are pretty much useless, no? But, somehow, they seem to survive certain types of usage quite well... (Or so I believe. I have a hard time believing that ebikes.ca would sell an unreliable controller if a significant percentage were affected. It seems there's something quite rider/motor dependent, given that John has a blown controller (or many?) so far.)

On the contrary, I'm sure the original Infineon design is just fine. It may not be outstanding but it seems to be quite adequate in most respects. The design simply puts cost ahead of performance.

My point is that the stresses on a controller operating at 100V/100A are significantly higher than the original design specs, which I believe are something like 48V/40A. The modded controllers may use better FETs, but higher voltage FETs still have higher Rdson than good-quality low-voltage ones. Freewheeling losses are higher no matter how good the FETs are. dv/dt is higher. dI/dt is higher, which can also put different stresses on a controller. A gate drive circuit originally designed for a lower voltage/current regime could be operating in a much more marginal fashion.

I did some quick numbers last night, based on the spec Vth=3V for the 4110 FET, assuming a junction temp of 125C (hot), Cgd=350pF (adjusting the data sheet value from 50V to 100V), and a gate resistor of 20 ohms (what LFP told me). I came up with a maximum dv/dt of about 142V/us. That sounds like a lot, but it means the fastest safe switching transition at 100V takes about 700ns, which is actually pretty slow. The switching losses at that speed would be about 70W (again, divided among the high-side FETs). If it switches any faster than that, there is serious risk of dv/dt-induced shoot-through.

Another potential issue is the increasing number of FETs. I was discussing this with LFP in another thread, but we weren't able to fully resolve the question. Based on what he told me, it sounds like they might be using the same drive circuit for the 6-, 12-, and 18-FET designs. If that's the case (not proven at this point), the larger controllers, which use more FETs for higher current, would also see a sharp increase in switching loss. The same gate drive circuit would take 2x as long to switch 2 FETs, or 3x as long for 3 FETs, with a corresponding 2x or 3x increase in switching loss (plus the added loss from the higher current). It would take 'scope data to confirm this, but unless the circuit has been changed, switching time must increase.
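Here's the back-of-envelope version of those numbers for anyone who wants to plug in their own values. The ~1 V of allowed induced gate voltage is an assumed margin that happens to land near the same figure; derive your own margin from the hot threshold.

```python
R_GATE  = 20.0       # ohms, per LFP
C_GD    = 350e-12    # F, datasheet value adjusted to 100 V
VGS_MAX = 1.0        # V, assumed allowable induced gate-source voltage
V_BUS   = 100.0      # V
I_PHASE = 100.0      # A
F_PWM   = 10e3       # Hz, assumed PWM frequency

dvdt_max = VGS_MAX / (R_GATE * C_GD)                     # V/s
t_switch = V_BUS / dvdt_max                              # slowest "safe" edge
p_switch = 0.5 * V_BUS * I_PHASE * t_switch * 2 * F_PWM  # both edges per cycle

print(f"max dv/dt      ~ {dvdt_max/1e6:.0f} V/us")        # ~143 V/us
print(f"min edge time  ~ {t_switch*1e9:.0f} ns at 100 V")  # ~700 ns
print(f"switching loss ~ {p_switch:.0f} W at 10 kHz")      # ~70 W total
# With N FETs hanging on one driver, edge time and switching loss
# both scale roughly with N, as described above.
```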

Tiberius said:
I once managed to capture some scope traces of 4110 FET ringing at about 50 kHz as it went through the transition. As soon as I redesigned the gate driver with high frequencies in mind it behaved. The simple answer is that the gate driver must present a low impedance, not just at the switching frequency, but at all frequencies that the device has gain.

This is an excellent point. From what LFP told me, the gate drive circuit uses a single series resistor with all FET gates tied directly in parallel. This risks oscillation between FETs. If it gets bad enough, the oscillation can go beyond the maximum gate voltage and blow the FET.
 
It seems like a good mod for these things would be throwing a separate gate drive resistor, maybe a 20ohm in series with each gate leg individually. Then put a glob of thermal epoxy and a little scrap of aluminum heat-sink to sit over the external SMD gate drive FETs to hopefully offset the increased current draw. Hopefully the FETs wouldn't bounce between each other that way, and the switching time should be both reduced and consistent.
 
That's how the bigger EV controllers (Curtis etc) do it. A resistor for each FET. I didn't think you could do it any other way!
 
Can someone please compare 4110's, 4310's, and 4115's in regard to these latest discussions, because I don't know what to look for? I have controllers with all 3 types of FETs, so it would be useful to classify them by their best use. The 4115's are especially important to know about, so if the numbers look bad I know not to use the high-voltage setup for regular riding, just for some WOT speed runs. The long-term plan is to use those HV controllers on builds with a hub motor out of the wheel and geared down, to take advantage of the broader high-efficiency band of the motor, enable high power at lower current leading to better overall efficiency, and of course get 2 or 3 speeds out of the deal so the bike can laugh at steep mountains.

John
 
liveforphysics said:
It seems like a good mod for these things would be throwing a separate gate drive resistor, maybe a 20ohm in series with each gate leg individually. Then put a glob of thermal epoxy and a little scrap of aluminum heat-sink to sit over the external SMD gate drive FETs to hopefully offset the increased current draw. Hopefully the FETs wouldn't bounce between each other that way, and the switching time should be both reduced and consistent.

The best method is a single series resistor common to all the gates, then a separate resistor in series with each gate individually. You would choose the resistors so that the total resistance seen by the driver (R1 + R2/N) is the same as the current resistance. So, for a 12-FET controller (two FETs in parallel per leg) you might choose 10 ohms for the common resistor and 20 ohms per gate: 10 + 20/2 = 20, the same as the current design. Except now you have 40 ohms of impedance between gates to resist the ringing.

Edit: You could try lowering the total resistance as part of this mod to improve switching speed. You'd have to be careful though and make measurements as you go. If dv/dt is marginal now increasing speed could push it over the edge. It wouldn't add very much heat to the driver IC, though, so heatsinking wouldn't be necessary.
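In numbers, straight from the example above:

```python
def effective_gate_resistance(r_common, r_per_gate, n_parallel_fets):
    """Resistance the driver sees with a common resistor plus one per gate."""
    return r_common + r_per_gate / n_parallel_fets

# 12-FET controller, i.e. two FETs in parallel per switch position:
print(effective_gate_resistance(10, 20, 2))   # 20 ohms, same as the stock value,
# but gate-to-gate any ringing now sees 20 + 20 = 40 ohms of damping
```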
 
John in CR said:
Can someone please compare 4110's, 4310's, and 4115's in regard to these latest discussions, because I don't know what to look for?

For anything over 100V, you would have to use the 4115, but you pay a penalty in much higher Rdson (more than 2x the 4110). For anything lower than 100V you would want to choose another part.

Between the 4110 and the 4310, the 4110 has the better specs. It has 25% lower Rdson and also lower thermal resistance to the case, so it will produce less heat and dissipate it faster. The 4310 does have slightly lower gate charge (Qg) which would reduce switching losses somewhat, but I don't think it would be enough to make up for the Rdson advantage. The 4310 is also about 25% cheaper than the 4110, so you're getting what you pay for in either case.

At even lower voltages, below 60V, I think the 3006 would be my device of choice. It has an Rdson almost 50% lower than the 4110 for a little less $$.

Rdson drops as the voltage rating drops, so there's a strong incentive not to get a higher-voltage part than is needed (with some headroom, of course).
 
Thanks,

Am I correct in understanding that as long as I get above that "knee" in the motor sim curves I'm free and clear, since peak power is also right around that point if not at it? That's also at half of no-load speed, so it's easy to predict for any voltage if you know your motor's Kv.

How much of a role would commutation rate play in these switching losses, and could that account for a big part of the difficulty driving my motors? e.g. the electrical rpm of my motors is much higher than an X5's, because mine have so many more poles, with 51 stator teeth (no overlapping windings) and 48 magnets. With such a high electrical rpm, could that be a significant factor, since I'm looking to get to 1200-1500 motor rpm with the higher-voltage controllers? Does the BEMF protect everything from the current spikes as long as I can get above the "knee"?

John
 
John in CR said:
How much of a role would commutation rate play in these switching losses, and could that account for a big part of the difficulty driving my motors? e.g. the electrical rpm of my motors is much higher than an X5's, because mine have so many more poles, with 51 stator teeth (no overlapping windings) and 48 magnets. With such a high electrical rpm, could that be a significant factor, since I'm looking to get to 1200-1500 motor rpm with the higher-voltage controllers? Does the BEMF protect everything from the current spikes as long as I can get above the "knee"?

It's probably not a very big factor. A 51-tooth motor implies 17 poles. 1500 rpm equates to 25 rev/sec, so at 17 poles that's 17*25 = 425 electrical rev/sec. Still pretty small compared to the 10-20 kHz PWM frequency: a few percent.
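Or, as a quick check (using the pole count assumed above, not a measured figure for John's motor):

```python
POLES    = 17       # as assumed above for a 51-tooth stator
RPM      = 1500
F_PWM_HZ = 10e3     # low end of the 10-20 kHz range

f_elec = POLES * RPM / 60.0
print(f"electrical frequency ~ {f_elec:.0f} Hz")                 # ~425 Hz
print(f"fraction of PWM rate ~ {100 * f_elec / F_PWM_HZ:.1f}%")  # a few percent
```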

Anywhere above the "knee" on those curves, yes: the maximum phase current will decrease with increasing speed, eventually reaching zero at no-load speed. Current limiting won't be in effect anymore either, so WOT = 100% duty.
 
Ok, so the commutation is just a very occasional switch compared to the pwm cycling. Don't those Infineons have a selection item for PWM rate? Could reducing that be a way to reduce heat from switching losses?

Regarding above-the-knee operation, I'm thinking of less than full duty. e.g. I'm going to take my bike on a 50-mile ride to the beach, so 55-60mph of WOT will force me to carry a much larger pack than if I slow down to 40-45mph. In my case the knee occurs at 35-38mph. Also, once I make some aero improvements before the Monday ride, the WOT speed will go well over 60mph, which is uncomfortably fast for this bike. The ride there will be a piece of cake, since I lose 1000m of altitude gradually over much of the distance. The way back will almost definitely require an opportunity charge around the midway point due to mostly WOT riding.
 
EDIT: the post below was the original. The equations I used had a wrong constant for the T_ON and T_OFF variables (10 uS instead of 1 uS), so the switching losses were overstated and the graph below is a bit exaggerated.

Just thought I'd post an update. I've introduced controller heat calculations in my simulator that take into account the major sources of heat, including freewheeling losses, switching losses and MOSFET resistance losses. Those details are provided in the text box after calculation, so if you're technically inclined, you can modify some of the parameters to make a more intelligible estimate. (Say you think the switching time is more like 500 nS instead of 1000 nS in each direction; then you'd halve your switching losses.)

The link is in my signature below. It also includes custom phase current limiting, so you can see how that may affect controller heat losses.

Anyways... it seems pretty illuminating now. At super-low throttle, the "steady speed" makes minimal controller heat, but slowing down by, say, 2 mph while only putting 150 watts through the motor may increase the controller heat by a factor of 6-10x. So, a small variation in speed over variable terrain may explain why controllers get hot at super-low throttle. I'll try to graph John's case to see what his actual numbers may be.

If anyone wants to correct me...

controller resistance = .006 (6 mOhm - pretend mosfets are hot.)
phase current limit = unlimited
battery current limit = 100 amps
a 5302 motor... ???
100 Volts battery
.1 Ohm battery resistance
20 inch wheel

Those are the big ones, I think...

Here's a sample picture.

Johnsbike.png

This is at 7% throttle. At the "final speed" of 6.3 mph at 7% throttle, the controller is putting out around 29 watts of heat. However, if he hits a rock, hole, or bump while the motor is only putting out 150 watts of power, it seems pretty likely that his speed may drop by 2-3 mph. Looking at the graph, you'll see the controller putting out ~110 watts of heat if the bike slows to 4 mph for a moment.

Plugging 25 volts into the simulator, it appears a lower-voltage battery would help, but not significantly (though if the controller has a thermal resistance of 2 C/W, it'll be significant). We're talking about a peak controller heat of 70 watts instead of 110 if slowed down by 2 mph from the steady-state speed of 6-7 mph.

Throttle pulsing, or suddenly increasing the throttle from 7% to 15% for a split second, would certainly compound the problem. And the thing is, the difference between 7% and 15% is probably not noticeable to someone going over bumpy terrain (it's probably not even controllable).

Btw, when John opens up WOT at 7 mph or so, it appears the controller is instantaneously putting out ~1100 watts of heat assuming no phase limit. :shock: If the controller is relatively cool, that may be ok, but it would obviously be a problem at high controller temps. If he were to accurately limit the phase current to 150 amps, it looks like the controller would be putting out about ~400 watts and would probably be more likely to survive. At 100 phase amps, it peaks at about 260 watts. A 5302 would have relatively sucky torque at those phase amps. It'd probably be better to move up to a higher winding count - that'd reduce the strain on the controller, too.
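To illustrate the mechanism with a toy model (the constants below are hypothetical, not actual 5302 or simulator figures):

```python
V_BUS        = 100.0   # V
DUTY         = 0.07    # ~7% throttle, as in the example above
BEMF_PER_MPH = 1.0     # V per mph - assumed, wheel-referenced back-EMF constant
R_PATH       = 0.05    # ohms - assumed total resistance in the phase current path
V_FREEWHEEL  = 0.8     # V - assumed drop during the freewheeling interval

def low_duty_heat(speed_mph):
    bemf    = BEMF_PER_MPH * speed_mph
    i_phase = max(0.0, (DUTY * V_BUS - bemf) / R_PATH)
    # at low duty, the freewheeling interval dominates the controller's heat
    return i_phase, V_FREEWHEEL * i_phase * (1.0 - DUTY)

for mph in (6.3, 5.0, 4.0):
    i, p = low_duty_heat(mph)
    print(f"{mph:>4} mph -> ~{i:.0f} A phase, ~{p:.0f} W of freewheeling heat")
# the heat climbs steeply as the bike sags below its steady-state speed
```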
 
I’m going to make another attempt to understand Luke’s “red line”. When Luke says that PWM doesn’t work to the left of the “red line”, I think he means that changing the PWM factor D in the area to the left of that line will not affect speed. That’s because the current is huge, since DV is so big compared to Vbemf for any “normal” value of D. I think we all get that.

Now I think Luke would agree that with the bike moving slowly we can use PWM to reduce phase current. Say we decided to use throttle position to generate linear values 0 to 1, but for the PWM value D we use 1/10 * the linear throttle value. Now for any speed and, as an example, half throttle, the phase current and torque will be down toward 1/20 of the “characteristic plot” value for a given motor, controller and battery voltage. The break in the curve, after which speed and torque are linearly related, will move left. The current and the torque will be lower at each speed. The motor will come out of saturation at a lower speed. This would really give us current limiting as well as making the relationship of speed to throttle position more understandable. That is, over a larger span of the speed range, the speed will be linearly related to throttle position. Maybe at some speed we gradually increase that 1/10 weighting to 1 as a function of speed.

If all this is true, we have the same final speed at full throttle determined by Vbemf = Vbatt and we have wasted less energy heating the motor, but we will have lower torque for acceleration.
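A minimal sketch of that remapping under the simple steady-state model above (the pack voltage here matches the thread's examples, but the phase resistance and BEMF values are hypothetical):

```python
V_BATT  = 100.0
R_PHASE = 0.05   # ohms, assumed

def phase_current(throttle, bemf, scale=0.1):
    d = scale * throttle                     # D = 1/10 * linear throttle value
    return max(0.0, (d * V_BATT - bemf) / R_PHASE)

# Low speed (BEMF ~ 2 V), half throttle:
print(phase_current(0.5, 2.0))              # remapped: D = 0.05 -> (5 - 2)/0.05 = 60 A
print(phase_current(0.5, 2.0, scale=1.0))   # ordinary:  D = 0.5  -> (50 - 2)/0.05 = 960 A
```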
 
donob08 said:
I’m going to make another attempt to understand Luke’s “red line”. When Luke says that PWM doesn’t work to the left of the “red line”, I think he means that changing the PWM factor D in the area to the left of that line will not affect speed. That’s because the current is huge, since DV is so big compared to Vbemf for any “normal” value of D. I think we all get that.


I will help you understand the meaning of the red line.

When you are giving full throttle signal, the following is true at the red line:

This is the soonest point (lowest speed) the controller can have 100% duty cycle (regardless of throttle signal input obviously.)

It is the highest speed at which the controller can maintain its constant power-in.

It is the soonest point (lowest speed) at which battery current and phase current can be equal; any lower speed or lower throttle setting will ALWAYS result in higher phase current than battery current.

After this red line, the BEMF takes over the role PWM was playing in regulating battery current draw.

After the red line, if you remain full throttle, phase current and battery current will be equal.

After the red line, if you go partial throttle, phase current will always be greater than battery current, but both will be lower than full throttle would give.

Before the red line, phase current is always greater than battery current.
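(For reference, the idealized averaged relation behind all of the above, ignoring losses:)

```python
def battery_current(i_phase, duty):
    """Idealized average battery current for a given phase current and duty cycle."""
    return i_phase * duty

print(battery_current(100, 0.5))   # 50 A from the pack while the windings carry 100 A
print(battery_current(100, 1.0))   # at and beyond the red line, the two are equal
```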
 