SamTexas said:
Agreed on your single cell, open circuit failure scenario. So what to do? Let's say I repeat my test at 2.5A and it passes successfully (no fire or explosion). Then you could come back with a dual single-cell open-circuit failure scenario. Where does it end? And what's the solution? Give up batteries altogether?
The simple answer is that there will never be a foolproof way to remove all the risk; all we can do is try to understand where the real risks are and mitigate them as best we can. For example, if the risk of a cell going open circuit, or a cell interconnection failing, can be reduced to such a low level as to fall below whatever degree of risk we are prepared to accept, then that's all we need to do. I strongly suspect that laptop cells are now being deliberately engineered to have a high internal resistance, as part of the manufacturer's safety strategy, for just this reason. With the decreasing power demand from modern laptops, the cells can get away with having higher losses, particularly if that significantly reduces the risk of fire or explosion from a damage-induced short or an over-charge event.
Our problem is that, unlike the primary use of these cells in laptops and the like, we want to connect cells in parallel to get the capacity we need. That creates a potential charge failure mode that wasn't anticipated by the cell manufacturer (it also introduces other potential failure modes, too). All we can do is mitigate these as best we can. I strongly suspect that ensuring the cells are mounted securely, with proper regard for possible shock- and vibration-induced damage, coupled with good cell-level control of charging, probably makes a pack made from these cells as safe as any other. Having seen a video of an 18650 exploding from massive over-charge, I don't think we can ever claim that they are safe, but there is every reason to suggest that, for some limited power requirements, they are a pretty reasonable choice as far as safety goes.
SamTexas said:
Completely disagree with your second point about loss. Why do I care about the loss that is happening inside the cell? I pay for xyz Wh at 0.5C. I get xyz Wh at 0.5C. Why am I complaining?
The loss during discharge is important for several reasons:
First of all, 34W of waste heat inside the battery pack from a 1C discharge (10A through 0.34 ohms gives I²R ≈ 34W) is a fair bit; it will heat the inside of the pack up, which may or may not be a risk.
Secondly, the effective battery capacity you get is being reduced by that loss. If you have two 10Ah, 44V (440 Wh) nominal battery packs, one with 0.34 ohms IR and one with 0.009 ohms IR, and you use these packs to run two identical ebikes at around 220W (say a constant 5A discharge), then the higher IR pack will waste around 17 Wh as heat in the battery and the lower IR pack will only waste 0.45 Wh as heat in the battery. For a typical ebike that uses around 15 to 16 Wh per mile you will get around an extra mile of range from the lower IR battery pack for the same nominal pack capacity, just because of reduced losses.
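The discharge-loss comparison above can be checked with a few lines of arithmetic. This is just my own sketch; the pack specs (10Ah / 44V nominal, 0.34 ohm vs 0.009 ohm IR, 5A constant draw) come from the example above, and the helper name is made up for illustration:

```python
def discharge_loss_wh(current_a, internal_resistance_ohm, hours):
    """Resistive heating (I^2 * R) integrated over the discharge time, in Wh."""
    return current_a ** 2 * internal_resistance_ohm * hours

CURRENT_A = 5        # ~220 W draw from a 44 V nominal pack
HOURS = 10 / 5       # 10 Ah delivered at 5 A -> 2 hours

high_ir_loss = discharge_loss_wh(CURRENT_A, 0.34, HOURS)    # 17.0 Wh
low_ir_loss = discharge_loss_wh(CURRENT_A, 0.009, HOURS)    # 0.45 Wh

# At roughly 15-16 Wh per mile, the saved energy is about a mile of range.
extra_miles = (high_ir_loss - low_ir_loss) / 15.5

print(f"High-IR pack loss: {high_ir_loss:.2f} Wh")
print(f"Low-IR pack loss:  {low_ir_loss:.2f} Wh")
print(f"Extra range from low-IR pack: ~{extra_miles:.1f} miles")
```

Note that because the loss goes as the square of the current, the gap between the two packs widens quickly at higher discharge rates.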
Finally, there is an efficiency point. Say you charge these two packs at 0.5C (5A). If we assume that both battery packs have a unity Peukert number, then the charging losses for a full charge of the higher IR pack will be around 17 Wh, meaning you need to put in 440 + 17 = 457 Wh, and the charging losses in the lower IR pack will be around 0.45 Wh, meaning you only need to put in 440 + 0.45 = 440.45 Wh. The higher IR pack is clearly less efficient, in that it needs more energy to charge and gives less energy during discharge.
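The charging figures work out the same way, and combining them with the discharge losses gives a rough round-trip efficiency for each pack. Again a sketch of my own, assuming I²R losses only and the unity Peukert number stated above:

```python
def charge_energy_wh(capacity_wh, charge_current_a, ir_ohm, hours):
    """Energy stored in the pack plus resistive loss during charging, in Wh."""
    return capacity_wh + charge_current_a ** 2 * ir_ohm * hours

PACK_WH = 440        # 10 Ah * 44 V nominal
CHARGE_A = 5         # 0.5C for a 10 Ah pack
HOURS = 10 / 5       # 10 Ah at 5 A -> 2 hours

wh_in_high = charge_energy_wh(PACK_WH, CHARGE_A, 0.34, HOURS)    # 457.0 Wh
wh_in_low = charge_energy_wh(PACK_WH, CHARGE_A, 0.009, HOURS)    # 440.45 Wh

# Round trip: the same I^2*R loss applies again on a 5 A discharge,
# so usable energy out is the nominal capacity minus the discharge loss.
wh_out_high = PACK_WH - CHARGE_A ** 2 * 0.34 * HOURS     # 423.0 Wh
wh_out_low = PACK_WH - CHARGE_A ** 2 * 0.009 * HOURS     # 439.55 Wh

print(f"High-IR round trip: {wh_out_high / wh_in_high:.1%}")
print(f"Low-IR round trip:  {wh_out_low / wh_in_low:.1%}")
```

Even at this modest 0.5C rate, the high-IR pack loses several percent of its energy per cycle to heat, while the low-IR pack is close to lossless.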