ZombieSS's power stage for Lebowski's controller video pg17

i can see where you are going wrong and getting confused. you are using my post from March 16 as a strict guide. but you need to be more flexible.

i've written my response 3 times but deleted each time because i am not writing very clearly. let me try math instead..hopefully it is helpful

longest OFF time..
let's zoom in to the region of the gate drive and consider just these delays (gate drive IC and mosfet, no boost stage delay because i think you don't have this in your design anymore). for the longest off time i just copy/paste your numbers, i do not check if they are correct as i am trying to show the method, not the values:

Gate driver delay 620ns off
Gate driver distortion 120ns max
Mosfet delay 41ns off
Mosfet rise/fall 39ns off

but look at your mosfet fall time. 39ns? really? that's what the datasheet says, so 39ns is correct. but that is not ALL the mosfet off time. you have 2-step turn OFF too which is 1000ns additional!

and what is more, you write that you have 800ns of measured delay. so obviously the mosfet delay time is not 41ns. well, i think we can see now that if the mosfet datasheet had a graph that showed gate resistor vs. switching time then we would see a major impact on the value 41ns, likely it would be 400ns.. perhaps the G/S cap is what is slowing this even more, to get a total of 800ns.

so for your mosfet delay time, you might as well use your measured value as it is more accurate (and more conservative)
and for your mosfet fall time, you must include the 2-step turn OFF because it occurs at every pulse and is a real part of your fall time

so.. longest OFF time is 3.9 + 100 + 620 + 120 + 41 + 800 + 39 + 1000 ~= 2700ns

Fastest on time, 466.8nS (this is theoretical, i'm sure your gate resistor & G/S cap would make this value much higher, but we keep this value as it is more conservative)

deadtime = (longest_OFF - fastest_ON)*1.2 = (2700 - 470)*1.2 ~= 2700ns

is this deadtime acceptable at your desired switching period (1/frequency)?

deadtime_limit <= 5% of 1/fsw
deadtime_limit <= 0.05 * 50us = 2500ns

not good.. your deadtime represents slightly more than 5% of the switching period. this means you will have higher low frequency distortion, i.e. instead of delivering torque, some of that energy just heats up your motor. the amount is small, don't get too alarmed.. it is just something to be aware of.
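
if it helps, here is that 5% check written out as a few lines of python.. i just plug in the numbers from above, your real values may differ:

# quick sanity check of the 5% deadtime rule (numbers from this post)
dead_time_ns = 2700.0          # estimated deadtime
f_sw_hz = 20e3                 # intended switching frequency
period_ns = 1e9 / f_sw_hz      # 50,000 ns at 20 kHz
limit_ns = 0.05 * period_ns    # 5% of the switching period = 2500 ns

print(f"deadtime = {dead_time_ns:.0f} ns, limit = {limit_ns:.0f} ns")
if dead_time_ns <= limit_ns:
    print("OK: deadtime is within 5% of the switching period")
else:
    print("over the limit: expect some extra low frequency distortion (motor heating)")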

it's OK to run like this, just know you will not be at optimal performance. you have some choices:
1. decrease the OFF time. you already experimented with this on your gate drive and you feel more comfortable switching slowly, so maybe this is not an option for you?
2. lower your switching frequency. yes you can do that, probably lowering it to 18kHz is OK. i don't recommend going lower as you start to hear it, and it gets annoying.
3. lower your deadtime from 2700ns to 2400ns. i say this knowing that your fastest ON time is probably WAY faster than reality.. surely the G/S cap slows the ON time just as much as it slows the OFF time.. so you probably have a couple hundred nanoseconds to play with. the downside is that we don't know for sure, so we would be taking a small risk. but it IS small.
4. just live with it and accept the limitations. if i were you, that's what i would choose. it might be that you would be totally happy with the performance. then one day you might experiment with lowering the switching frequency just to see the difference, probably you won't even notice it. or you could lower your deadtime a bit and see what happens.
 
PWM processing 0.8 on / 3.9 off
Isolation delay 25 on / 100 off
Gate driver delay 350 on / 620 off
Gate driver distortion 25 min / 120 max
Mosfet delay 18 on / 41 off
Mosfet rise/fall 73 on 39 off

Fastest on time, 466.8nS
Slowest off time, 803.9nS

Difference between fastest on and slowest off time = 337 nS.

I'm pretty sure you misunderstood one of my numbers, such as the measured 800nS propagation delay. I'll make a second pass at calculating dead time now that you've posted a better example for me.

I just measured my actual MOSFET D-S switch time off->on at 175nS and the on->off time at 80nS (2x the datasheet value due to my resistor and G-S cap); this is at a 60V bus.
The 800nS of measured off time is from the microcontroller PWM pin until the D-S turns off or on at the MOSFET, with the 2 level 1000nS delay and 1000nS of dead time subtracted. When I made this "800nS" measurement, my actual controller PWM pin to MOSFET on/off was 2800nS.

The measured 800nS off is pretty close to the sum of the above off propagation delays from the datasheets; the on is slower than the datasheets, as expected. I'll of course stick with the datasheet value for the on time since it's the quicker of the two, and the measured value for the off time since it's longer than the datasheet. I am switching D-S on/off pretty quickly, which does concern me a little, but I think I'll try it and see how it works.

Slowest possible off time with the 2 level turn off in use:
PWM processing 3.9nS
Isolation delay 100nS
Gate driver delay 620nS
Gate driver distortion 120nS
Mosfet delay 39nS
MOSFET fall 80nS <- measured
2 level turn off 1000nS

4+100+620+120+39+80+1000=1963
Slowest off time = 1963nS


Fastest possible on time with the 2 level turn off in use:
PWM processing 0.8ns
Isolation delay 25ns
Gate driver delay 350ns
Gate driver distortion 25ns
Mosfet delay 18ns
Mosfet rise/fall 73nS <- datasheet (vs 175nS measured)
2 level turn off 1000nS

0.8+25+350+25+18+73+1000 = 1492

Fastest on time = 1492nS

dead time = Slowest off - Fastest on = 1963 -1492 = 471nS
Add 20% margin for safety = 471nS * 1.20 = 566nS dead time.

Dead time that should be programmed into my controller = 566nS.

Longest possible propagation delay = 963nS propagation delay (datasheet + measured) + 1000nS 2 level turn off propagation delay + 566nS dead time = 2529nS
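
Since this is going into my spreadsheet anyway, here is roughly the same calculation sketched out in Python (the delay values are the ones listed above, nothing else is assumed):

# Rough sketch of the dead time calculation above (all values in ns).
off_path = {
    "PWM processing": 3.9,
    "Isolation delay": 100,
    "Gate driver delay": 620,
    "Gate driver distortion": 120,
    "MOSFET delay": 39,
    "MOSFET fall (measured)": 80,
    "2 level turn off": 1000,
}
on_path = {
    "PWM processing": 0.8,
    "Isolation delay": 25,
    "Gate driver delay": 350,
    "Gate driver distortion": 25,
    "MOSFET delay": 18,
    "MOSFET rise (datasheet)": 73,
    "2 level turn off": 1000,
}

slowest_off = sum(off_path.values())          # ~1963 ns
fastest_on = sum(on_path.values())            # ~1492 ns
dead_time = (slowest_off - fastest_on) * 1.2  # 20% margin, ~565 ns (I round up to 566)

print(f"slowest off = {slowest_off:.0f} ns")
print(f"fastest on  = {fastest_on:.0f} ns")
print(f"dead time to program = {dead_time:.0f} ns")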

To support that 566nS is the correct dead time number, consider the following points.
2 level turn off of 1000nS delay on/off can not be bypassed, if a pulse is shorter than the set 2 level turn off delay, it is ignored meaning the MOSFET never gets turned on (datasheet figure 6). Now this brings up a 2nd interesting question: what is the shortest pulse width I have to handle based on the set switching frequency? Section 5.6 of the datasheet has an interesting note about minimum on time. Maybe 1000nS is too long for center aligned sine wave based PWM.

If ~1100nS is our shortest possible pulse width, and I say 5% PWM of the switching period at the max switching frequency is the shortest pulse the controller will output, then I do not need to worry about pulses being too short until the switching period drops below 1100nS / 0.05 = 22,000nS. Freq = 1/22,000nS = 45.5kHz. Even at 2.5% PWM I'm still good to ~23kHz. This means I do not need to worry about my minimum pulses because I will most likely not have usable power < 5% PWM on this controller, but on a higher powered setup this could come into play if a lower PWM % is required for operation.

Continuing from above about the section 5.6 minimum on time: the datasheet says that if an input pulse is between the minimum pulse length determined by the 2 level turn off timing and Twinmin, the MOSFET could be partially turned on. What is Twinmin? The datasheet gives it as Twinmin = 2 level turn off time + 2*Rdel*Coff. Rdel is specified in the datasheet as 500 ohms. That means my Twinmin value is 1000nS + 2*500 ohms*100pF = 1000nS + 100nS = 1100nS, not big enough to worry about in my case, but I had to be sure after missing this during initial design.
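
Here is that minimum pulse check in the same sketch form (Rdel and Coff are the values I quoted above, so double check them against your own datasheet and design):

# Minimum pulse width check for the 2 level turn off (values from this post).
t_2lto_ns = 1000.0        # programmed 2 level turn off time
r_del_ohm = 500.0         # Rdel from the datasheet
c_off_f = 100e-12         # Coff used in my design

# Twinmin = 2 level turn off time + 2*Rdel*Coff (datasheet section 5.6)
t_win_min_ns = t_2lto_ns + 2 * r_del_ohm * c_off_f * 1e9   # ~1100 ns

def max_fsw_for_min_duty(min_duty):
    # shortest output pulse = min_duty * period, and it must be >= Twinmin
    min_period_ns = t_win_min_ns / min_duty
    return 1e9 / min_period_ns   # Hz

print(f"Twinmin = {t_win_min_ns:.0f} ns")
print(f"max fsw at 5% minimum PWM   = {max_fsw_for_min_duty(0.05)/1e3:.1f} kHz")
print(f"max fsw at 2.5% minimum PWM = {max_fsw_for_min_duty(0.025)/1e3:.1f} kHz")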

Maximum switching frequency possible with the current setup: 1963nS off time + 566nS dead time = 2529nS. 1/2529nS = ~395kHz, and 5% of that = ~20kHz. What do you know, HighHopes predicted that this TD350E design should be good for about 20kHz max and there it is, proven by the math (which I hope I got correct :lol: )

So, next step... done. I don't need to switch faster than 20khz with this setup right now, I'm quite happy at 15-18khz which is not audible to most people.

Alternatives: drop the 2 level turn off to 400nS if I wish to switch at a higher frequency. That will allow me a max switching frequency of 1/1900nS = ~526kHz, and 5% of that = ~26.3kHz.

If I want to go all out and max out my switching frequency, I need to disable the 2 level turn off completely, which would yield 1/1500nS = ~667kHz, and 5% of that = ~33.3kHz. That is the fastest I would ever intend to switch, and that would be if I am powering a hard to drive motor like the large Colossus.
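
For reference, here is the same 5%-of-period math for all three 2 level turn off settings, sketched in Python (using the on/off path sums from above):

# Max switching frequency under the "off time + dead time <= 5% of period" rule.
# Base off/on path sums (ns) without the 2 level turn off, from the lists above.
base_off_ns = 963.0   # 3.9 + 100 + 620 + 120 + 39 + 80
base_on_ns = 492.0    # 0.8 + 25 + 350 + 25 + 18 + 73

def max_fsw_khz(t_2lto_ns):
    slowest_off = base_off_ns + t_2lto_ns
    fastest_on = base_on_ns + t_2lto_ns
    dead_time = (slowest_off - fastest_on) * 1.2   # ~566 ns in every case
    return 0.05 / ((slowest_off + dead_time) * 1e-9) / 1e3

for t in (1000, 400, 0):
    print(f"2 level turn off = {t:4d} ns -> max fsw ~ {max_fsw_khz(t):.1f} kHz")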

Now I know what my design is capable of under different operating conditions. My latest design has a boost stage in it comprised of D44VH10 and D45VH10 transistors. The datasheet specs the following max stats:
Delay Time 50nS
Rise Time 250nS
Fall time 90nS

You can see that the overall max switching frequency would have to drop based on the delay/rise & fall times of the boost stage.

How much difference?

Off time
4+100+620+120+39+80+1000=1963 + 50ns delay + 90ns fall time = 2103nS

Minimum times are not spec'd in the datasheet, so I'll guess based on the datasheet max rise of 250nS and delay of 50nS, for 300nS total. I'll take 50% of that and call it 150nS.

On Time
0.8+25+350+25+18+73+1000 = 1492 + 150 = 1642nS

Dead Time = 2103 off - 1642 on = 461nS * 1.2 = 553nS

Longest off period 2103 + 553 = 2656nS, only slightly worse than the 2529nS without the boost stage, so roughly the same maximums still apply.
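
And the boost stage version in the same sketch form (remember the 150nS on-side number is my guess, not a datasheet value):

# Effect of the D44VH10/D45VH10 boost stage on the totals (values in ns).
off_no_boost = 1963.0
on_no_boost = 1492.0
boost_off = 50 + 90      # datasheet max delay + fall time
boost_on = 150           # guessed: ~50% of (max delay 50 + max rise 250)

off_t = off_no_boost + boost_off   # 2103 ns
on_t = on_no_boost + boost_on      # 1642 ns
dead = (off_t - on_t) * 1.2        # ~553 ns
print(f"off {off_t:.0f} ns, on {on_t:.0f} ns, dead time {dead:.0f} ns")
print(f"off + dead = {off_t + dead:.0f} ns -> max fsw ~ {0.05 / ((off_t + dead) * 1e-9) / 1e3:.1f} kHz")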

I now feel much better about writing some specs for my driver. Working through all the required math to design a good gate driver is really time consuming. I need to add all this info into my spreadsheet so I don't have to worry about doing it by hand again the next time I want to change driver chips.
 
2 level turn off of 1000nS delay on/off can not be bypassed, if a pulse is shorter than the set 2 level turn off delay, it is ignored meaning the MOSFET never gets turned on (datasheet figure 6)

ahhh... 2 level turn off affects also the ON pulse? then you are correct with your math.

and i think it is good to have minimum pulse protection built in, otherwise you'd have to ask the software to have this feature instead.

good explanation, i think you understand this chip & gate driver design better than i do now 8)
 
HighHopes said:
2 level turn off of 1000nS delay on/off can not be bypassed, if a pulse is shorter than the set 2 level turn off delay, it is ignored meaning the MOSFET never gets turned on (datasheet figure 6)

ahhh... 2 level turn off affects also the ON pulse? then you are correct with your math.

and i think it is good to have minimum pulse protection built in, otherwise you'd have to ask the software to have this feature instead.

good explanation, i think you understand this chip & gate driver design better than i do now 8)

It's kind of funny how much info is "hidden" in the data sheets if you know where to look. I am feeling pretty comfortable with the td350e driver operation. The math examples you provide me with either spell it out for me or point me in the right direction. When I see you do something that looks "incorrect" to me I feel I need to get into the details and justify why my result is different. Right or wrong, the exercise of doing this makes one a better designer. Critical thinking is important. I now see where I went wrong in my previous attempts at calculating dead time.
 
Futterama said:
HighHopes said:
and i think it is good to have minimum pulse protection built in, otherwise you'd have to ask the software to have this feature instead.
What kind of trouble can be caused by very short pulses? Lebowski's controller can go all the way down to 33ns pulses.

Partial turn on of the MOSFET. This means the MOSFET is placed in its transconductance region, which is where heat is generated. Limiting the shortest pulse length helps with this. I experimented with Lebowski's chip last night and saw pulses as short as 180nS (I think this is what I logged, it was a quick bench test), but I only tried frequencies up to 33kHz.
 
Futterama said:
Lebowski said:
Note that the controller ic has a setting called 'amplitude', with this you can limit the amplitude and thus set the highest/lowest dutycycle.
I think you need to elaborate a bit, maybe with an example too.
Yeah i thought this question would come :D but it's one of the things that will change going into v2.01, where i'll add the reverse...
But starting with v2.01, 100% amplitude will mean from 0-100% dutycycle,
90% will mean from 5 to 95 % , so with 5% of 50usec ( in 20kHz case) = 2.5usec minimum on time and 47.5usec max ontime.

But, only starting v2.01, till then divide the % by 1.11
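
So the arithmetic is basically (just a little sketch to show the mapping, not the actual firmware):

# Arithmetic for the 'amplitude' setting described above (v2.01 behaviour).
def duty_limits(amplitude_pct, f_sw_hz=20e3):
    period_us = 1e6 / f_sw_hz                    # 50 us at 20 kHz
    min_duty = (100 - amplitude_pct) / 2 / 100   # e.g. 90% amplitude -> 5% minimum duty
    max_duty = 1 - min_duty                      # e.g. 90% amplitude -> 95% maximum duty
    return round(min_duty * period_us, 3), round(max_duty * period_us, 3)

print(duty_limits(90))   # -> (2.5, 47.5) microseconds min/max on-time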
 
Lebowski said:
Note that the controller ic has a setting called 'amplitude', with this you can limit the amplitude and thus set the highest/lowest dutycycle.

Details please? How does the amplitude setting affect pulse width?
 
HighHopes said:
2 level turn off of 1000nS delay on/off can not be bypassed, if a pulse is shorter than the set 2 level turn off delay, it is ignored meaning the MOSFET never gets turned on (datasheet figure 6)

ahhh... 2 level turn off affects also the ON pulse? then you are correct with your math.

and i think it is good to have minimum pulse protection built in, otherwise you'd have to ask the software to have this feature instead.

good explanation, i think you understand this chip & gate driver design better than i do now 8)

For others reading, I want to add that the 2 level turn off adds propagation delay to both turn on and turn off, and it happens at every pulse. Some gate drivers do it only during a fault, which allows higher switching frequencies to be used. If it added the delay only to the off and not the on, there would be a large distortion of the waveform: you could input a 2000nS pulse and end up with a 3000nS pulse being output, which is of course not good.

So many details involved in gate driver design. No wonder it's taken me a year to get this far. Gate driver design touches on so many different subjects. It's analog design but part of it is digital design, then add in the power devices and you end up stepping into the world of RF and EMI management... yikes!
 
zombiess, I have a hard time understanding the use for a miller clamp when using MOSFETs, all the info I can find keeps mentioning IGBTs when talking about miller clamps.

I was trying to find information about the conditions where a miller clamp is necessary.
 
Futterama said:
zombiess, I have a hard time understanding the use for a miller clamp when using MOSFETs, all the info I can find keeps mentioning IGBTs when talking about miller clamps.

I was trying to find information about the conditions where a miller clamp is necessary.

All PCB traces have inductance and resistance. When you couple a magnetic field into the traces they will generate a voltage. If the resistance is high and there is a direct path to the gate, your Vgs can actually rise high enough to turn on the MOSFET and start conduction! This can and will cause a shoot through condition, and without proper protection the device will be destroyed.

MOSFETs and IGBTs have a lot of similar properties, and when talking about a single MOSFET and a single IGBT in a gate driver setup, most of the operating theory is the same. There are differences, such as IGBTs not paralleling well because they behave like a BJT transistor when it comes to current sharing. Functionally the miller clamp works the same on both devices from what I have read. It gives the gate driver a direct path to its respective ground. It shorts out the gate pin to the source pin, so even if voltage is picked up by the trace, the MOSFET has a very hard time turning on from the voltage induced in the trace because the gate is now shorted.

The clamp usually engages around 2V Vgs as the device is turning off; this varies from driver to driver. It is a very effective method to control the miller effect and I don't ever see myself developing a gate driver without this feature at a minimum. The best of both worlds is when you have a layout that generates very little noise plus the extra safety of a miller clamp. Good driver layout is the most critical part of a design according to HighHopes' experience, and I have to agree after going through multiple designs. Every correction I needed to make came back to something in the gate driver setup, even how it connects to the power devices, which is tricky in itself because it's attached to a very noisy environment. You do not want any part of the gate driver to cross over anything with bus voltage on it (the power pass sections, B+, B- and phase outs). This makes layout very tricky. My latest TO-247 design took me several attempts to get a decent layout and this is what finally made the overall theory click in my head. The layout I posted on the first page here is sub optimal, but works pretty well because I followed most of the rules.
 
I was trying to find information about the conditions where a miller clamp is necessary.

everything zombiess says is correct. this is just another way to say the same thing.

try looking at it this way.. we know that "miller effect" is real, it is an injected current (i = C*dv/dt) from the power mosfet drain, into the gate driver then to mosfet source. as this current flows through the gate resistor it generates a voltage, V = I * R, and it is possible that this voltage is high enough to turn the mosfet ON when we wish it to be OFF. this event occurs at every switch.

so what conditions would make it necessary to have a clamp? you can get a sense of this by realizing that the amount of injected current is related to the speed of the D/S switching and how much parasitic capacitance exists in the mosfet (gate to drain capacitance). here i am actually just putting words to the equation i = C*dv/dt.

next, this injected current, whatever value it is.. gets multiplied by the gate resistor to create a voltage. look at the gate resistor you have, 40 ohms? that will generate more unwanted gate voltage than if the value were 3 ohms. how much injected current do you have? multiply them together and compare this value to the power mosfet gate threshold voltage. then answer the question.. do i need a MUCH lower resistance path for this injected current? if yes, then a miller clamp can be the solution.

couple of extra points.. as the physical size of the mosfet gets bigger (compare a TO-220 to a TO-264 package) or as you add devices in parallel, the amount of G/D parasitic capacitance increases, and a clamp is advised. as you go even higher in power, eventually the G/D cap gets so high that the injected current is A LOT and even the "low impedance path" of the miller clamp is still too high: V = I_injected * R_millerClamp > mosfet_threshold. you have to be even more aggressive because the miller clamp by itself is not going to work, so we would add negative bias turn OFF to solve this problem.

IGBT is found most often in discussions of the miller effect because, by its construction, it has higher G/D capacitance than a mosfet. that is bad. it has other things that are good (better than mosfet), but it just happens that the G/D cap is high. so in discussions you see the miller effect as always a problem with IGBT, even in low power inverters, so it is often discussed. it is not a problem with a mosfet inverter until you are at the > 5kW level, and this is considering good design. if you build a really crappy inverter power bridge then you will have miller effect problems even at 100W. what can i say, gate driver design is critical to the success of your product so it is a subject worth learning in detail if you are serious about DIY in this field.
 
I will make an example of the calculation to see if I get it right. According to the datasheet of a random example MOSFET, the IRFB4110 has a "gate-to-drain "miller" charge" of 43nC. I will assume 42V is switched in 500ns. Are these numbers in the ballpark?
So if I understand the formula correctly (I = C*dv/dt, which with the charge number is just Q/dt = 43nC/500ns), this calculates to about 86mA.
With a 40 ohm gate resistor, the voltage on the gate would be about 3.4V. The gate threshold of the example MOSFET is 2V.

So in this example the induced gate voltage could actually exceed the threshold, and a miller clamp (or a smaller gate resistor) would be a good idea, right?
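
To double check myself, here is the same estimate as a few lines of Python (same numbers as above; it ignores the driver's own pull-down impedance, so it is on the pessimistic side):

# Miller injection check: does C*dv/dt through the gate resistor exceed Vth?
# Numbers from the example above (IRFB4110-ish, all approximate).
q_gd = 43e-9        # gate-to-drain "miller" charge, C
dv = 42.0           # drain voltage swing, V
dt = 500e-9         # switching time, s
r_gate = 40.0       # gate resistance, ohm
v_th = 2.0          # gate threshold voltage, V

c_gd = q_gd / dv                 # effective gate-drain capacitance, ~1 nF
i_inj = c_gd * (dv / dt)         # = q_gd / dt, ~86 mA
v_gate = i_inj * r_gate          # ~3.4 V

print(f"injected current ~ {i_inj * 1e3:.0f} mA, gate voltage bump ~ {v_gate:.1f} V")
print("clamp (or lower Rg) advisable" if v_gate >= v_th else "probably OK without a clamp")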
 
Futterama said:
I will make an example of the calculation to see if I get it right. According to the datasheet of a random example MOSFET, the IRFB4110 has a "gate-to-drain "miller" charge" of 43nC. I will assume 42V is switched in 500ns. Are these numbers in the ballpark?
So if I understand the formula correctly (I = C*dv/dt, which with the charge number is just Q/dt = 43nC/500ns), this calculates to about 86mA.
With a 40 ohm gate resistor, the voltage on the gate would be about 3.4V. The gate threshold of the example MOSFET is 2V.

So in this example the induced gate voltage could actually exceed the threshold, and a miller clamp (or a smaller gate resistor) would be a good idea, right?

I did not check the math, but the part of your quote I put in bold can be answered with the following.

It depends on how good your driver layout is, and that's what it really comes down to. Put part of your gate trace in parallel with the bus path in a non-laminated setup and you could be in serious trouble. Gate driver design goals should be: avoid crossing any high power sections (bus voltage, phase outputs), keep loop areas small, switch on/off at a reasonable rate, and use extra safety features such as a miller clamp to help prevent issues.

Check out the ACPL-H342 gate driver from Avago for a small opto coupled gate driver with a miller clamp. Personally I'm sticking with the TD350E for now since it's easy to disable features on and shrink its size, and I am VERY familiar with it; but if you really need to keep the size down the ACPL-H342 is a pretty nice driver with a miller clamp which only needs about half the board space of a TD350E setup using all features. I'd personally accept a larger controller in exchange for the safety features, but that's me. For a motor drive I don't know if I'd ever use a driver that did not have a miller clamp, or I'd at least add one myself using discrete components. Having everything integrated makes design MUCH easier and, on a small scale, much cheaper.
 
The ACPL-H342 looks interesting, but it is not a bootstrapping gate driver as such, though the datasheet says:
Active Miller clamp function eliminates the need of negative gate drive in most application and allows the use of simple bootstrap supply for high side driver.
But I haven't been able to find any information about how to add a bootstrap supply to it. Also, it does have a pretty big propagation delay, probably due to the opto isolation. I would like to find a bootstrapping gate driver with a miller clamp and low propagation delay, or maybe just an example circuit of how to add a miller clamp to any bootstrapping gate driver.
 
Bootstrapping is easy. It's a fast diode with a low Vf drop plus a capacitor. Schottky types should work.

The low side gate supply + goes into the diode, which then connects to the capacitor sitting between the high side driver's gate supply pins. Calculate the cap size; there are online calculators and lots of app notes on doing this. You'll also want a 100nF decoupling cap and a small supply cap on the high and low sides, probably something around 2-4.7uF for a low power drive. App notes have the details.
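
As a rough example of the cap sizing math from the app notes (all the numbers here are placeholders, plug in your own gate charge and allowed ripple):

# Rough bootstrap capacitor sizing (standard app note formula, example numbers only).
q_gate = 120e-9        # total gate charge of the high side MOSFET(s), C (placeholder)
i_leak = 0.5e-3        # leakage/quiescent current drawn from the cap, A (placeholder)
t_on_max = 45e-6       # longest high side on-time, s (e.g. ~90% duty at 20 kHz)
dv_max = 0.5           # allowed droop on the bootstrap rail, V

c_min = (q_gate + i_leak * t_on_max) / dv_max
print(f"minimum bootstrap cap ~ {c_min * 1e6:.2f} uF (then add plenty of margin, e.g. 5-10x)")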

As for the propagation delay, how fast do you intend to switch? My TD350E driver setup with all the bells and whistles will do 20kHz. If I turn off the two level turn off it will be OK to 30kHz. I show the math on how those were calculated several posts ago.
 
Futterama, you would be much happier with an isolated DC/DC supply. Bootstrap has lots of issues. You will find yourself troubleshooting around it a lot.
You really should use a FET driver with desat detection. Now that I know about it I do not think I will run without it. It should be easy to set up, as zombiess found. And it will save almost all FETs from death in the future. The only thing left to watch will be temperatures.
 
Arlo1 said:
Futterama, you would be much happier with an isolated DC/DC supply. Bootstrap has lots of issues. You will find yourself troubleshooting around it a lot.
You really should use a FET driver with desat detection. Now that I know about it I do not think I will run without it. It should be easy to set up, as zombiess found. And it will save almost all FETs from death in the future. The only thing left to watch will be temperatures.

Someone has seen the light :lol: Pretty amazing what an extra $2-$5 worth of components per driver can do in a system, isn't it?
 
Yup. I only found out about it thanks to highhopes.
 
Arlo1 said:
Yup. I only found out about it thanks to highhopes.

His reasoning makes a pretty powerful case for it. It made me abandon the idea of a bare bones low cost setup, because the difference between low cost and low cost with protection was literally about $15 per controller plus another $25 in isolated DC-DC supplies. It's more complex to design due to the math and layout, but totally worth it. I just can't bring myself to live without it now.
 
Arlo1 said:
Futterama, you would be much happier with an isolated DC/DC supply. Bootstrap has lots of issues. You will find yourself troubleshooting around it a lot.
You really should use a FET driver with desat detection. Now that I know about it I do not think I will run without it. It should be easy to set up, as zombiess found. And it will save almost all FETs from death in the future. The only thing left to watch will be temperatures.
Yeah, maybe I should go that route. I was just a bit intimidated by the cost and complexity of the isolated DC/DC converters, but I found some single IC isolated supplies at Analog Devices that might just do the trick. They even have gate drivers using their iCoupler/isoPower technology. I have to read a bit more about them but I think these could be something:
http://www.analog.com/en/interface-isolation/digital-isolators/products/index.html
 
i'm glad you're interested in this subject futterama. there is a LOT you can do with this knowledge, motor drives are just one example.

so it's great to get involved, but be very careful about going off on your own to reinvent the wheel.. this subject is not easy.

bootstrap is only advisable when the power levels are low, say less than 5kW. what i'm really saying is that the noise generated by dV/dt and dI/dt is low. but this DIY crowd wants higher power. from what i saw there were a lot of cheap controllers being pushed beyond their capability, leading to unhappy customers. so the reason i helped zombiess was to get into the DIY world a working (functional + reliable) gate driver that would be good for the power levels this crowd wanted, 10kW to 20kW. from this, the most important thing to learn was the value a reliable design philosophy can bring for a minimally acceptable cost. arlo's design is at similar power levels but has one new feature, high switching frequency. so you can see how the knowledge & practical experience is progressing in a linear fashion. establish the fundamentals, add one new feature at a time. it is slow, but it is necessary to be slow as this subject is not easy.

so my advice would be to see if your application can make use of zombiess's or arlo's (latest) gate driver. if it CAN, then wait for these two to finish some real road tests. wait for them to update the schematics with their findings, to work out the bugs (there are always bugs). then.. THEN ... make your gate driver the same as theirs but with a layout to fit your application, and at that time introduce ONE new feature if you want to experiment or need to meet a specific requirement.
 