Smart Battery Data Specification

The "Bad Cell" Problem

Often with a balancing-based system the weakest cell has to pull its full load while the stronger cells end up essentially "sandbagging" at less than full effort. Over time the weaker cells get weaker and you develop a "Bad Cell", the "runt" of the litter, and it becomes the weakest link that diminishes the strength of the overall pack. In this case you get a pack that starts off balanced, but the weak cell discharges very rapidly, so the low voltage cutoff triggers very early to protect it, because the balancing can only do so much during runtime. Running the numbers again as before I get a "waste" of about 25% of the overall pack capacity... and that may even understate it, because I can't know the upper limit of the good cells: balancing starts everything off at the weak cell's level, which might be a lower voltage than what the strong cells could hold.
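
To put a number on that kind of waste, here's a minimal sketch (the cell capacities are invented, and it assumes each cell in a series string delivers roughly what the weakest one does before the LVC trips):

```python
# Hypothetical per-cell usable capacities in Wh (NOT measured data).
cells_wh = [30.0, 30.0, 30.0, 20.8]  # one "runt" cell in the string

# With a plain LVC the whole pack stops when the weakest cell is empty,
# so (roughly) every cell only delivers what the runt can deliver.
lvc_pack_wh = len(cells_wh) * min(cells_wh)

# A per-cell ("Smart Battery") scheme could in principle use each cell fully.
full_pack_wh = sum(cells_wh)

waste_wh = full_pack_wh - lvc_pack_wh
print(f"LVC-limited: {lvc_pack_wh:.1f} Wh of {full_pack_wh:.1f} Wh "
      f"({100 * waste_wh / full_pack_wh:.1f}% wasted)")   # ~25% with these numbers
```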

The "Smart Battery" strategy is looking better and better... :p


The logical conclusion would be that a new pack would show the closest comparison between the Balancing Strategy and the Smart Battery Strategy, because we would assume the cells begin most nearly equal. Over time the advantage of the Smart Battery would increase, and eventually the Smart Battery would dwarf the abilities of the Balancing approach. Some cells will age faster than others, and that only benefits the Smart Battery...
 

Attachments

  • Cell Usage Problem.gif
Cost Benefit Analysis

This is more complicated to do in actual values because there is no data to analyze at present. But the idea would be to think in the long term output of the cells over their lifetime. If the Smart Battery Strategy is able to deliver 100% of the total abilities of the cells for the life of their usage then we compare that to some percentage less of "perfect" that the Balancing Strategy delivers. If we can calculate the cost of the cells as purchased we can then calculate backwards the long term investment cost of a slightly more expensive Smart Battery verses Balancing verses using nothing at all.

In the end it should be possible to establish accurate metrics to be able to give a ROI "return on investment" value for what you put into a Smart Battery solution. Given the current high cost of LiFePO4 it makes sense to get all you can from them.
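
As a back-of-the-envelope sketch of that comparison (every price and percentage below is an assumption, not measured data):

```python
# Hypothetical numbers for illustration only.
cell_pack_cost = 800.0   # cost of the bare LiFePO4 cells ($)
extra_cost = {"smart": 150.0, "balancing": 60.0, "nothing": 0.0}

# Fraction of the cells' lifetime energy each strategy actually delivers
# (the Smart Battery target is 100%; the other two are guesses).
delivered = {"smart": 1.00, "balancing": 0.90, "nothing": 0.75}

for strategy, frac in delivered.items():
    # Effective cost per "full pack's worth" of lifetime energy actually delivered.
    cost_per_energy = (cell_pack_cost + extra_cost[strategy]) / frac
    print(f"{strategy:>9}: ${cost_per_energy:.0f} per pack-lifetime of energy")
```

If the Smart Battery electronics are reused across several sets of cells, their cost would be amortized further, which only improves that comparison.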

Also, I don't see any reason that you couldn't reuse the Smart Battery electronics and just replace the cells when they wear out. So the Smart Battery electronics could be a one-time up-front cost rather than a recurring one.
 
safe said:
Everything looks to be correct.

:arrow: Any comments?

Aside from the problem of locating and purchasing the right MOSFETs, are there any other issues that might inhibit the success of this design?

From an on resistance standpoint, that configuration looks good. That's what you'd need to do.

The problem is driving the gates. Unless you put a charge pump on every cell, I don't see how you can bootstrap the gate drives. If you had gates that could turn fully on with 3.3v, you could sidestep that problem.
 
fechter said:
From an on resistance standpoint, that configuration looks good. That's what you'd need to do.

The problem is driving the gates. Unless you put a charge pump on every cell, I don't see how you can bootstrap the gate drives. If you had gates that could turn fully on with 3.3v, you could sidestep that problem.
I'm actually somewhat amazed that through 99% perspiration and 1% inspiration I managed to find a way through all the pitfalls to even get this far. :wink:

The spirit of Thomas Edison lives... :lol:



It does look like I'll need a charge pump for each cell, but since in this design I'm not attempting the full "Distributed Pulse Width Modulation" idea and am instead taking the lower road of just wanting the "Smart Battery", it will not require high-speed gate charging. It doesn't matter if a few microseconds transpire while the capacitance of the MOSFETs charges up to open the gate. The ByPass part of the circuit offers a perpetual path, but at higher resistance, so the current will favor the path of least resistance, and the slow opening of the MOSFET actually means the cells would get their current in less of a surge. The charge pumps are really cheap... so it's not a big worry as far as economics.
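
For a rough feel of those "few microseconds", the turn-on time is roughly gate charge divided by drive current (both numbers below are assumptions, not from a datasheet):

```python
# Rough gate turn-on time estimate: t ≈ Q_gate / I_drive.
q_gate_nC = 100.0   # assumed total gate charge of the MOSFET (nC)
i_pump_mA = 10.0    # assumed current the little charge pump can source (mA)

t_us = (q_gate_nC * 1e-9) / (i_pump_mA * 1e-3) * 1e6
print(f"Approximate turn-on time: {t_us:.0f} µs")  # ~10 µs with these numbers
```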

It "seems" to all be right... but this is all theory right now...


:arrow: The next issue would be the charger.

One of the unique characteristics of this design is that the ByPass diode INCREASES the resistance as the cells drop into their "Off State" condition. The charger might be designed so that it senses the change in resistance... as the resistance rises, it slowly turns itself off.

Does that idea make any sense?

:?: How might I do it?
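
One way that "sense the rising resistance and taper off" idea might look as a control loop; purely illustrative, with invented thresholds and scaling:

```python
def apparent_resistance(v_hi, i_hi, v_lo, i_lo):
    """Estimate pack resistance from two (voltage, current) operating points."""
    return (v_hi - v_lo) / (i_hi - i_lo)

def next_current(r_now, r_baseline, i_max, r_trip_ratio=3.0):
    """Taper the charge current as the pack's apparent resistance climbs.

    r_baseline is the resistance seen with all cells in their "On State";
    once cells start bypassing, r_now rises and the setpoint is scaled down,
    reaching zero at r_trip_ratio times the baseline. All thresholds here
    are guesses, not tested values.
    """
    scale = 1.0 - (r_now / r_baseline - 1.0) / (r_trip_ratio - 1.0)
    return max(0.0, min(i_max, i_max * scale))

# Example: baseline 0.05 ohm pack now reading 0.12 ohm -> current backed off.
print(f"{next_current(r_now=0.12, r_baseline=0.05, i_max=10.0):.1f} A")
```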
 
A Fool In April?

I just had a realization... while discharging the cell with this design, the Diode in the ByPass is properly positioned to keep the backwards current from causing a short circuit. However, when charging you reverse the current flow in order to add energy to the cell, and you would need the ByPass:

...to work in the opposite direction...

so maybe today is my day to be the "April Fool?"

This circuit needs a ByPass that can work both ways.

And what about the MOSFETs? How can they handle the current flowing backwards? A charger does reverse the current, doesn't it? How might the PIC interpret things being backwards?


 
Charger Simulation Fails Miserably :?

Add a voltage source in reverse so that you get a reverse-flowing charge current, and the MOSFETs become diodes themselves and start to leak. The Diode in the ByPass circuit that cheerfully regulates flow during discharge now gets stuck in a permanently closed (blocking) state.

:arrow: Charging seems to make this circuit design fall apart...

(the shaded areas below show what "should" be happening but isn't)
 

Attachments

  • MOSFET with Charger Diagram.gif
  • MOSFET with Charger Chart.gif
Still Some Leverage...

While the "desired" behavior doesn't take place, the behavior that actually takes place offers some hope of being exploited. The MOSFET's do vary their backwards current flow dependent upon whether the gate is opened. If the gate is closed the backwards current is limited by the diode like behavior of the MOSFET. If the gate is opened then the MOSFET behaves more normally and allows more energy to flow. So this does do a little bit of shunting behavior... in comparision it's not far from how most shunt based chargers would work. It's still a far cry from the ideal and it's the ideal that I'm really after.

Seems to me that if, during charging, the PIC could open up another "bypass to the bypass" to get around the Diode, you could get the behavior you want.

Maybe even send some of this "relief valve" back to the charger itself?

Maybe this could be used somehow as a signal?

Hmmmmmm.... :?

Maybe if you place a SINGLE high-powered MOSFET (one where a high resistance value doesn't matter) and orient it so that it faces BACKWARDS, you now have a forward-current diode and a backwards-current switch.

:arrow: That might solve it... (close to the original design, but reverse the direction of the ByPass MOSFET)
 
Fool Holds His Own Parade!

:arrow: Fixed the problem...

You have to use another MOSFET as described above. This seems to work... (time to stop with the computer stuff and get out and do some more welding)
 

Attachments

  • Dual MOSFET Diagram.gif
  • Dual MOSFET Chart.gif
You're getting warmer...

Depending on how fast you need the FETs to switch over, you might be able to cheat by placing a 10v zener from the gate to the source to keep the gate happy, then use resistors from the top of the stack (B+) to pull them up. This would work on all but the top two cells, which would still need a charge pump.
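
A rough sizing sketch of that pull-up-plus-zener trick (every value here is an assumption for illustration, not a vetted design):

```python
# Assumed numbers, for illustration only.
v_stack = 52.8       # top of a 16-cell LiFePO4 stack, B+ (V)
v_source = 26.4      # source potential of a FET halfway up the stack (V)
v_zener = 10.0       # gate-source clamp (V)
c_iss_nF = 5.0       # assumed FET input capacitance (nF)

# Pick the pull-up so the zener sees a couple of mA once the gate is clamped.
i_zener_target_mA = 2.0
r_pullup = (v_stack - (v_source + v_zener)) / (i_zener_target_mA * 1e-3)

# Very rough turn-on time constant before the clamp engages: tau ≈ R * Ciss.
tau_us = r_pullup * c_iss_nF * 1e-9 * 1e6

print(f"Pull-up ≈ {r_pullup / 1000:.1f} kΩ, gate time constant ≈ {tau_us:.0f} µs")
```

Cells near the top of the stack leave little voltage headroom for the pull-up, which is why the top positions would still want a charge pump.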
 
fechter said:
Depending on how fast you need the FETs to switch over, you might be able to cheat by placing a 10v zener from the gate to the source to keep the gate happy, then use resistors from the top of the stack (B+) to pull them up. This would work on all but the top two cells, which would still need a charge pump.
I don't mind the charge pump... I've sort of accepted that I'll need one and they don't cost much.

The charts make it look like I'm planning to open and close the MOSFETs all the time, but I really don't expect to do that. The only reason I'm presenting the data like this is so that you see the full spectrum of how the circuit behaves. I'm covering all the bases, so to speak.

Question:

As you can see from the chart, when the current runs in the backwards direction the MOSFET first acts like a diode while the gate is not charged, but even once the gate is charged up there still seems to be higher resistance. Notice how the ByPass MOSFET allows a lot more current to pass given the same power supply. What seems to be happening is that the MOSFET really doesn't like being used in reverse... and I wonder if the extra resistance from the reverse current will increase the heat. Maybe you can pull 155 amps in the forward direction "in theory" and 50 amps in practice, but I wonder how many amps you could charge at?

:arrow: It's not necessary to charge at the same peak amps as you discharge, so I'd probably be satisfied with 10-20 amps on the charger side. I wonder if there's a way to know how the MOSFETs behave when run in reverse?
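
For a feel of why reverse conduction runs hotter, here's a rough comparison of body-diode versus channel conduction (the 0.8 V drop and 4 mΩ on-resistance are assumed, not taken from any particular part):

```python
# Assumed device parameters for illustration.
v_body_diode = 0.8   # body-diode forward drop (V)
r_ds_on = 0.004      # channel on-resistance when the gate is driven (ohm)

for i_charge in (10.0, 20.0):
    p_diode = v_body_diode * i_charge      # gate off: current flows through the body diode
    p_channel = i_charge ** 2 * r_ds_on    # gate on: current flows through the channel
    print(f"{i_charge:4.0f} A reverse: diode ≈ {p_diode:.1f} W, channel ≈ {p_channel:.1f} W")
```

With numbers like these, driving the gate during reverse current keeps the dissipation to a fraction of a watt instead of many watts, which is why the gate-on case matters even at modest charging currents.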
 
Inverting MOSFET Design

After playing around with the parameters a little, I've come to realize that in the forward (power) current direction you really don't need to do anything but worry about opening the forward Power MOSFETs, because the diode effect on the ByPass MOSFET is not a whole lot different than when you open it. (It's not worth the bother.)

In the other direction, when you charge, the Power MOSFETs operate as diodes, and in their case as well the advantage of opening them is not very large.

Maybe with a PIC you might bother opening the MOSFETs when they are working in the backwards direction, but for this example I'm simplifying things and using them as diodes.

In order to simulate the charger I'm placing a reverse-biased battery in the main power loop. The diagram and charts are pretty much self-explanatory.


Everything seems to be working... (finally :roll: )
 

Attachments

  • Dual MOSFET - Power Mode Diagram.gif
  • Dual MOSFET - Power Mode Chart.gif
  • Dual MOSFET - Charging Mode Chart.gif
Eight Valve System

Much like an automobile motor, this system has a full eight valves... four intake and four exhaust per cell/cylinder. :lol:

The advantage of this is that you could buy a large batch of MOSFETs and get the price down. Ideally, if you could buy these for $0.50 each you could make up the circuit for each cell for only $4. However, the price looks to be more like $0.80 each, so it's more like $6.40 per cell. Across a sixteen-cell pack that's about $102 just for the MOSFETs. So the price is still a problem with this idea.
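
The arithmetic, spelled out (the sixteen-cell count is inferred from the $102 total):

```python
fets_per_cell = 8
cells = 16

for price in (0.50, 0.80):
    per_cell = fets_per_cell * price
    total = per_cell * cells
    print(f"${price:.2f}/FET -> ${per_cell:.2f} per cell, ${total:.2f} per {cells}-cell pack")
```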
 

Attachments

  • Eight MOSFET System Diagram.gif
Simple Design

This is for people who have trouble reading circuit diagrams. This is the same logical circuit as the 8-valve design, but with just two MOSFETs. Ideally you want more MOSFETs, because the more you have the lower the resistance. At some point the extra MOSFETs would start to use up more energy charging their gates than they save by reducing the overall resistance, so there's an ideal number that I'm sure you could calculate.
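
That "ideal number" trade-off can be sketched like this (on-resistance, gate charge, and switching rate are all assumed values):

```python
# Assumed values for illustration only.
r_ds_on = 0.008    # on-resistance of a single FET (ohm)
q_gate = 100e-9    # gate charge per FET (C)
v_gate = 10.0      # gate drive voltage (V)
i_load = 20.0      # load current (A)
f_switch = 1.0     # toggles per second; this design switches rarely

for n in (1, 2, 4, 8, 16):
    p_conduction = i_load ** 2 * (r_ds_on / n)   # paralleled FETs share the current
    p_gate = n * q_gate * v_gate * f_switch      # power spent recharging n gates
    print(f"n={n:2d}: conduction {p_conduction:5.2f} W, gate drive {p_gate * 1e6:5.1f} µW")

# With switching this infrequent the gate term stays negligible, so the practical
# limit on n is cost and board space rather than gate-charging losses.
```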
 
Verification Using Real Data



I want to thank DoctorBass for providing this data in its raw form. I have processed the data and arrived at the following conclusion, which was already posted on another thread, but I'll repeat it here.

Yes, in fact a "Smart Battery" could indeed prove more effective at getting the most from your pack than an LVC-only solution. The results were pretty easy to arrive at, and this was with only three cells that were fairly new and in good health. As cells decay the advantage will become even more pronounced.

:arrow: The cells if used to their full potential would deliver:

Cell 1 - 25.9054 Wh
Cell 2 - 26.9098 Wh
Cell 3 - 25.1866 Wh

:arrow: But when you subtract the unused portion due to the LVC:

Cell 1 - 25.4805 Wh
Cell 2 - 25.4460 Wh
Cell 3 - 25.1866 Wh

Which means a difference of 1.8887 Wh.

And that translates to a 2.4815% difference.

The "Smart Battery" concept really is "smart" after all... :p
 

Attachments

  • raw data.zip
Toorbough ULL-Zeveigh said:
The debate was whether a 1% loss due to the use of MOSFETs as a way to give "Off State" or "On State" behavior, in accordance with the Smart Battery Data Specification, could be made up for by recovering the capacity that an LVC system wastes. This proves beyond a shadow of a doubt that... yes... you can achieve equal or better performance very easily with the Smart Battery approach.

:arrow: People wanted to see hard physical evidence and not theory, and now this provides it... :p
 
Toorbough ULL-Zeveigh said:
DoctorBass used his own testing equipment to produce real data for three LiFePO4 cells (Headway). He then posted the data on another thread and I downloaded it. After spending about half an hour massaging the data (string-substitution routines, etc., to get the numbers out) I was able to translate his data into actual values. I then went about the analysis.

Here's the methodology...

:arrow: I started by multiplying each voltage value for each cell by the testing amp rate which was 8 amps. Then I would divide by either 60 (if I was analyzing the minute based data) or 3600 (if I was analyzing the second based data) and this gives the Watt Hours delivered by the cell over the interval.

:arrow: Next I summed all the Watt Hour values over the full discharge period and then added the totals together to arrive at a combined Watt Hour value for a "full capacity" scenario.

:arrow: In order to discover the LVC cutoff result I summed all the Watt Hour figures up until the first cell reached the end. (2.5 volts) This meant that two of the cells still had some capacity left over that was never used.

:arrow: Then I took the LVC result and subtracted it from the "full capacity" scenario and arrived at a Watt Hour difference.

:arrow: Finally I took the Watt Hour difference and used the LVC value as the base to arrive at a percentage. The two totals are close enough that either way it comes out to about 2.5%.

So does that explain it for you in excruciating detail? :lol:
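
For anyone who wants to repeat the arithmetic, here's a minimal sketch of that procedure in Python (the file name and column layout are assumptions; DoctorBass's actual log format may differ):

```python
import csv

I_TEST = 8.0        # test discharge current (A)
DT_HOURS = 1 / 60   # one sample per minute (use 1/3600 for per-second logs)
LVC = 2.5           # low-voltage cutoff per cell (V)

# Assumed layout: one row per sample, one column of volts per cell.
with open("raw_data.csv") as f:
    rows = [[float(v) for v in row] for row in csv.reader(f)]

# Watt-hours per sample per cell: V * I * dt.
wh = [[v * I_TEST * DT_HOURS for v in row] for row in rows]

# "Full capacity": every cell summed over its entire discharge.
full = [sum(col) for col in zip(*wh)]

# LVC scenario: everything stops at the first sample where any cell hits 2.5 V.
cutoff = next(i for i, row in enumerate(rows) if min(row) <= LVC)
lvc = [sum(col[:cutoff]) for col in zip(*wh)]

diff = sum(full) - sum(lvc)
print(f"full {sum(full):.4f} Wh, LVC {sum(lvc):.4f} Wh, "
      f"difference {diff:.4f} Wh ({100 * diff / sum(lvc):.2f}%)")
```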


Obviously the circuit is still in development, but the whole idea was to provide undeniable proof that if you take real data and compare the different approaches, the Smart Battery approach works well and does indeed look to be a very promising avenue to go down.

This essentially "proves the point" that it's a good idea.

Let me speak in "Oil Country" terms... it's like we drilled a testing hole and struck oil... it's a "good thing"... :wink:
 
Well Martha, let me put it in terms that everyone else seems to already understand regardless of where they're from.

All of this:
safe said:
I started by multiplying each voltage value for each cell by the testing amp rate which was 8 amps. Then I would divide by either 60 (if I was analyzing the minute based data) or 3600 (if I was analyzing the second based data) and this gives the Watt Hours delivered by the cell over the interval.

Next I summed all the Watt Hour values over the full discharge period and then added the totals together to arrive at a combined Watt Hour value for a "full capacity" scenario.

In order to discover the LVC cutoff result I summed all the Watt Hour figures up until the first cell reached the end. (2.5 volts) This meant that two of the cells still had some capacity left over that was never used.

Then I took the LVC result and subtracted it from the "full capacity" scenario and arrived at a Watt Hour difference.

Finally I took the Watt Hour difference and used the LVC value as the base and arrived at a percentage.

comes to a meager 2.5%

woOOT WOOot. :cry: (slide trombone accompaniment)

Regen braking gets you 5% to 10%, yet people prefer to pass on the added complexity.
At least with regen you get the braking thrown in as a bonus.

In "oil" terms, I know for certain that you don't swab the inside of the gas tank on your motorcycle with a sponge to wring out the few last remaining drops of the precious juice.
(Although I'm sure someone will point out it might come to that.)
It may be a good thing, but hardly worth the effort.




rf said:
You've suggested an interesting possibility.
-1

rf said:
You've proven nothing.
+1
 
Invalid Thesis

:arrow: Toorbough ULL-Zeveigh, you're presenting the wrong argument.

I'm NOT saying that you will magically invent new energy out of nothingness because of owning a Smart Battery.

But I AM saying that if you compare the LVC solution versus the Smart Battery solution, you pretty much can never lose going with the Smart Battery. As a pack ages, the weakest cell gets worse and worse, and that just amplifies the advantage of the Smart Battery. In a very badly imbalanced pack you could gain an advantage as high as 10%-20%, which is significant.

So if you use the correct reasoning and realize that it's a question of COMPARING techniques and assessing their relative strengths and weaknesses, then the Smart Battery does indeed come out the victor.

Perpetual motion machines need not apply here... :)
 
No, woot is a completely valid thesis.
There's a web site woot.com so it must be valid, it's almost a lifestyle.
safe said:
I'm NOT saying that you will magically invent new energy out of nothingness because of owning a Smart Battery.
I don't know what you're reading cuz I didn't say that either.

safe said:
the Smart Battery does indeed come out the victor.

At increased cost & complexity.
I never doubted that SBS provides some marginal benefit.
It has to or Intel couldn't sell the idea.
Just as my illustration of the added complexity of wringing gas out with a sponge & sucking the fuel lines dry would give you a little extra too; I notice you completely ignored that, so I take it the point stands.
The thesis is called diminishing returns.


Instead of being shackled by a proprietary system, paying double for a smart battery pack & charger just to get a potential few percentage points down the road, personally I would prefer to take that money to buy more battery & go with a shallower discharge leaving ample charge in the tank instead of sucking dry the last electron.
It accomplishes the same thing, preventing the weak link from getting weaker at lower cost.
Better ROI I think.

The straight goods here is that SBS is great if you're a battery pack mass producer for testing, warranty purposes etc. & that is who it's custom tailored for.
Its prime benefit, the one that gets manufacturers to buy into it, is that it's a proprietary system they can get off the shelf, saving them a major re-design of their own for every pack, yet it locks the end user into their camp to pay for all the testing & silent-witness support circuitry built into the pack.
The side benefit of SBS is perceived added value: a high-tech form of racing stripes to get the consumer to believe in a technical voodoo they can't quite understand so they will willingly pony up the extra cash.

From the end-user standpoint, however, any slight & dubious technical benefit is merely window dressing, so yeah, it can't be argued that it's completely worthless to them.
For the one-off builder SBS serves no useful purpose over simpler & cheaper monitoring solutions.
They don't really care about storing a manufacturer's ID number in flash, which seems to get you all hot-n-sweaty.
I'll concede that for LiCo & LiPo, SBS might earn its keep for someone like you who wants to dance on the razor's edge, but for LiFePOOOO it's wasted effort.
 
Toorbough ULL-Zeveigh said:
I never doubted that SBS provides some marginal benefit.
:arrow: Well, okay then, you at least acknowledge the theory.

These Makita battery packs are coming with some aspects of the Smart Battery included in them (very limited, and there's some debate about whether even calling it a Smart Battery is valid), and I think that sort of thing could be done for ebikes. If there were a mass-produced Smart Battery that used minimal functionality to make this whole issue a "non-issue", then we could just buy the thing and use it.

LVC is less than ideal and in the worst case situation it can get pretty bad if one cell is really in bad shape.

Economic arguments are one thing... at this point I'm just satisfied that you acknowledge the "theory" and that's enough... I'm not making any money for anyone here (I have no vested interest) but I just like the design style.

I think it would be cool to have a home made Smart Battery pack... if for no other reason than to have bragging rights. :lol:

:idea: You can imagine:

"My pack is perfectly controlled using the Smart Battery design and so I can get 100% out of my cells every ride and never have to fear anything."
 
Concrete vs Abstract

Concrete

On the concrete level, the Smart Battery Data Specification is a specific API used over a specific communication protocol (the SMBus).
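
For a concrete taste of that API, here's a minimal sketch of reading a few standard SBDS registers over SMBus, assuming a Linux I²C bus, a pack at the standard SBS address 0x0B, and the smbus2 Python package (the register numbers come from the published specification):

```python
from smbus2 import SMBus

SBS_ADDR = 0x0B     # standard Smart Battery address on the SMBus
VOLTAGE_REG = 0x09  # Voltage(): pack voltage in mV
CURRENT_REG = 0x0A  # Current(): signed mA (negative = discharging)
SOC_REG = 0x0D      # RelativeStateOfCharge(): percent

def to_signed(word):
    """SMBus words arrive unsigned; Current() is a signed 16-bit value."""
    return word - 0x10000 if word & 0x8000 else word

with SMBus(1) as bus:   # bus number depends on the host hardware
    mv = bus.read_word_data(SBS_ADDR, VOLTAGE_REG)
    ma = to_signed(bus.read_word_data(SBS_ADDR, CURRENT_REG))
    soc = bus.read_word_data(SBS_ADDR, SOC_REG)
    print(f"{mv} mV, {ma} mA, {soc}% charged")
```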

Abstract

On the abstract level, the Smart Battery introduces certain ideas, and adopting some or all of them might qualify an implementation as a Smart Battery. You might adopt one idea or another and be "sort of" a Smart Battery. The two key philosophical ideas are:

:arrow: "On State" or "Off State" Behavior allows a cell to either be included in the system or taken out of the system.

:arrow: The "Safety Signal" from the cell determines action which is the opposite of a traditional system where control is driven from some outside source. So the battery itself is "smart" not the other way around. The issue becomes about who initiates action... the cell or the externals.

...any system that uses these abstract principles "could" be called a "Smart Battery". But there will always be those that say:

"It's not really a Smart Battery unless it implements the Smart Battery Data Specification."

...and so the arguments begin. :lol:
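
For what the abstract version might look like in code, here's a tiny sketch of cell-initiated "On State"/"Off State" logic (the class name, thresholds, and the boolean "Safety Signal" are all invented for illustration):

```python
class SmartCell:
    """Illustrative cell-initiated control: the cell decides its own state."""

    LVC = 2.5   # volts: below this while discharging, take the cell out (assumed)
    HVC = 3.65  # volts: above this while charging, bypass the cell (assumed)

    def __init__(self):
        self.on_state = True

    def update(self, volts, charging):
        # The cell, not the outside world, initiates the change of state.
        if not charging and volts <= self.LVC:
            self.on_state = False   # "Off State": stop discharging this cell
        elif charging and volts >= self.HVC:
            self.on_state = False   # full: shunt charge current around this cell
        else:
            self.on_state = True
        return self.on_state        # doubles as the "Safety Signal" to the pack

cell = SmartCell()
print(cell.update(volts=2.4, charging=False))   # False -> cell drops out of the string
```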
 
There's no argument.
SBDS is a clearly defined set of parameters, as you've pointed out.
Whereas the term 'smart battery' is simply slang.
It's like the slang 'IDE' that gets applied to anything when people really mean the defined ATA standard, and even that term has been diluted through misuse.
Manufacturers & retailers simply slap on the tag even if the smarts consist of nothing more than a thermistor, as I've seen in one case.
It's easy to understand why they do it, to make their product look better as they try to get away with asking a higher price.
 
safe said:
[...]
LVC is less than ideal and in the worst case situation it can get pretty bad if one cell is really in bad shape.
[...]
This whole `smart' battery thing revolves around a few too many assumptions. One of the most troubling is the assumption that cells will age at different rates or with different discharge profiles. Sadly, I don't think we have much data about that. We also don't know how old they will get before that change starts to happen. If cells start to diverge after ten years of use then maybe we don't even care. Maybe a fancier BMS is a waste of time and effort.

The KillaCycle A123-powered drag bike apparently uses a form of `smart' BMS. But they're using a whole lot more cells than we are with ebikes. And they do seem to be abusing them somewhat.

Most of the cell failures we've seen have been due to abuse and/or failure of BMS/LVC circuitry. We're drawing conclusions from inappropriate data.

Richard
 