Lishui "Open Source Firmware" project / KingMeter 5S

And thank you. I didn't immediately realize that you had already pasted this into the NG_12FET branch on GitHub.
 
Hello.
I have a question that is slightly off topic. I read your thoughts on VESC, and you believe that Lishui controllers are much better designed and of higher quality than VESCs. I would like to hear your detailed arguments on this issue. It seems reasonable to me to combine a VESC with a separate controller that collects and processes the bicycle control signals, provides Bluetooth connectivity, drives the display, etc.
 
I would like to hear your detailed arguments on this issue.
"Better" for EBike use.
Lishui controllers are certified to meet EN 15194, so you can be sure that you won't run into electromagnetic compatibility problems, legal issues, etc.
The cheap Chinese VESCs like Flipsky in particular are known for poor quality.

Lishuis have all the features necessary for e-bike use implemented: commercial display support, torque sensor input, headlight and rear light support, waterproof housings, etc....

Of course there may be high-quality VESCs as well, but they will cost three times as much as a Lishui, and you will struggle to make them usable if you want e-bike-specific functionality like common displays, a torque sensor, lights....

I can't understand people who add extra hardware to the controller, like the Cycle Analyst, for things that the controller itself can do easily. In the "scooter world" it is normal for the dashboard to be the "brain" of the system, reading the peripheral sensor data and calculating the current setpoint for the motor controller. But not in the "e-bike world".
I guess that's exactly the reason why the VESC has very poor e-bike features: it comes from the hoverboard and scooter world....
 
I believe that this controller with a T15–T17 torque sensor is quite functional and easy to configure. However, I can't seem to get rid of the system's glitches in torque-override mode. On my latest commit, the controller occasionally freezes on startup. With the E30 display, the LED on the Lishui stays constantly lit. It doesn't freeze immediately, but rather when pedal torque appears.

I suspected transients when power is applied. Indeed, the sensor only stabilizes at around 1500 millivolts after about 450 milliseconds. I increased the delay before the self-tuning of the torque offset to 2000 milliseconds. That improved things, but it still freezes occasionally. Once it does start up, everything works perfectly, just as I expected. I suspect this may be a hardware issue: either I damaged something else when removing the silicone, or my STM32 itself is genuinely defective.

Unfortunately, despite my quick wits and extensive experience working with "black boxes", this is the first time I've dealt with an STM32, the C language, or GitHub. Debugging is slow, with ChatGPT's help, and I might miss something basic. If it's not too much trouble, take a look at my notes; maybe your experienced eye will spot the problem. Something goes wrong in the calculation of MP.torque_offset and uint16_mapped_throttle = map(adcData[6], ...); it's this combination that's causing the problem.
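In case it helps the diagnosis: a fixed power-on delay could be replaced by waiting until the torque signal has actually settled. A minimal sketch under stated assumptions: the function name capture_torque_offset, the band/sample constants, and the sample-array interface are all invented for illustration (the real firmware reads adcData[] filled by DMA).

```c
#include <stdint.h>

/* Sketch only: capture the torque-sensor offset once consecutive ADC
 * readings stay within a small band, instead of trusting a fixed delay.
 * Small constants for illustration; tune for the real sensor. */
#define SETTLE_BAND     8   /* max allowed ADC counts of ripple      */
#define SETTLE_SAMPLES  4   /* consecutive in-band steps required    */

/* Returns the settled offset, or -1 if the signal never stabilised
 * within n samples (so the caller can flag an error instead of
 * self-tuning on a still-rising transient). */
int32_t capture_torque_offset(const uint16_t *samples, uint32_t n)
{
    uint32_t stable = 0;
    for (uint32_t i = 1; i < n; i++) {
        int32_t diff = (int32_t)samples[i] - (int32_t)samples[i - 1];
        if (diff < 0) diff = -diff;
        if (diff <= SETTLE_BAND)
            stable++;            /* another step inside the band      */
        else
            stable = 0;          /* still drifting, start counting over */
        if (stable >= SETTLE_SAMPLES)
            return samples[i];   /* signal settled: use as offset     */
    }
    return -1;                   /* never settled within n samples    */
}
```

With a rising transient followed by a flat ~1500 mV plateau, the function returns a plateau value; with a signal that keeps climbing it returns -1 instead of latching a wrong offset.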

I also really miss a debugging tool like your abandoned EBiCS-Flutter. Despite the problems, I'm still tinkering and planning to convert a couple more bikes to this controller. That's the picture.
 
Delete this while structure:

while (adcData[1] > THROTTLE_OFFSET)
{
    // do nothing (for safety at switching on)
}

I understand that this needs to be removed. It's an endless loop if the throttle ADC reads above the offset immediately after power is applied to the controller.
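For what it's worth, a hedged alternative to deleting the loop outright is a bounded check that gives up after a fixed number of polls instead of spinning forever. wait_throttle_released and the example THROTTLE_OFFSET value are hypothetical; the real firmware polls adcData[1] in place.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: keep the power-on safety check, but bail out after n polls
 * instead of blocking forever. The sample array stands in for repeated
 * ADC reads of the throttle channel. */
#define THROTTLE_OFFSET 300   /* example value; the real one is in main.h */

/* Returns true once the throttle drops to a safe level, false if it is
 * still high after n polls (stuck throttle, wiring fault, or a torque
 * signal riding on this channel). */
bool wait_throttle_released(const uint16_t *adc_samples, uint32_t n)
{
    for (uint32_t i = 0; i < n; i++) {
        if (adc_samples[i] <= THROTTLE_OFFSET)
            return true;   /* safe to continue startup          */
    }
    return false;          /* never released: enter a fault state */
}
```

A timeout like this keeps the safety intent of the original loop while avoiding the "hangs forever" case when the channel legitimately reads high at power-on.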
 
#ifdef TORQUE_OVERRIDE
#else
while (adcData[1] > THROTTLE_OFFSET)
{
    // do nothing (for safety at switching on)
}
#endif

Is this possible?
 
#if (R_TEMP_PULLUP)
    MS.Temperature = T_NTC(adcData[6]); // Thank you Hendrik ;)
#else
    MS.Temperature = 25;
#endif
How does it work? And why is adcData[6] used here?
 
Nothing has changed.
@szupek had a similar issue. He solved it with some delays at startup
https://endless-sphere.com/sphere/t...ware-project-kingmeter-5s.106862/post-1844839

but I discovered that after adding some delays and LED blinks at the beginning of main.c in the freshly downloaded sensorless branch, the controller no longer hangs when starting at 48.6 V

I recommend to try different delay times at start up.
You should use the debug function of the Eclipse IDE to see where the code gets stuck, like @szupek did.

How does it work? And why is adcData[6] used here?
This is for controllers from EBS Heidelberg; they use customized (old generation) Lishui hardware that has a pullup for NTC temperature reading on this ADC channel. It is not used as long as you don't set a value different from zero for the pullup resistor in main.h.
 
I managed to replace the cloned STM32F103 with an original one and flash it with the open-source firmware. I had to reduce the SWD frequency to make flashing work over longer wires.

Some notes about the LSW12G-USAMK-F

The temperature sensor raw data is located on PB0 - adcData[5]

The output of the analog temperature sensor is pulled up to 3.3 V using a 3000 Ohm resistor.

The digital pin SS1 is pulled up to 3.3V with a 9300 ohm resistor (or close to it, according to my measurements).


Below is a picture of the traces underneath the microcontroller if anyone needs to follow the vias.
Mine is the same. As far as I understand, adcData[5] is not used for the temperature sensor anywhere in the firmware.
 
This is what the inhuman mind came up with (and why, according to it, these issues actually cause your symptoms):
UART via DMA is used without discipline
Example: HAL_UART_Transmit_DMA() is called without checking whether the channel is free or whether the previous transmission is complete. This is a classic case: it works almost always, but then sometimes freezes or crashes.
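The guard being alluded to can be sketched independently of the HAL. Here tx_busy stands in for checking the UART handle's state, and dma_start() for HAL_UART_Transmit_DMA(); all names are placeholders, not the firmware's.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Sketch of guarded DMA transmission: refuse a new transfer while the
 * previous one is still running. In real code, the TX-complete interrupt
 * clears tx_busy via uart_tx_complete_isr(). */
static volatile bool tx_busy = false;
static uint8_t tx_buf[64];

static void dma_start(const uint8_t *data, uint16_t len)
{
    /* Placeholder for HAL_UART_Transmit_DMA(): DMA must read from a
     * buffer that stays valid for the whole transfer, so copy first. */
    memcpy(tx_buf, data, len);
}

/* Returns false (message dropped or queued elsewhere) if a transfer is
 * already in flight, instead of corrupting it. */
bool uart_try_send(const uint8_t *data, uint16_t len)
{
    if (tx_busy || len > sizeof tx_buf)
        return false;
    tx_busy = true;              /* claimed until TX-complete fires */
    dma_start(data, len);
    return true;
}

void uart_tx_complete_isr(void)  /* called from the DMA/UART interrupt */
{
    tx_busy = false;
}
```

The point is only the discipline: every send either succeeds atomically or is visibly rejected, so nothing silently restarts a running DMA channel.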

Parsing of the 901U protocol is done as "read a chunk and treat it as a message"
This is an incorrect model of a UART stream. UART delivers a stream of bytes; frames must be assembled by a state machine: start code → length → wait for all bytes → checksum → CRLF.
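A byte-wise frame assembler of the kind described might look like this. The concrete layout (0x3A start byte, length, 8-bit additive checksum) is a made-up placeholder, not the actual KingMeter 901U framing.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of a per-byte frame state machine: placeholder framing, not
 * the real 901U format. Invalid input just resynchronises the parser. */
enum { WAIT_START, WAIT_LEN, WAIT_PAYLOAD, WAIT_CHECKSUM };

typedef struct {
    int     state;
    uint8_t len, pos, sum;
    uint8_t payload[32];
} frame_rx_t;

/* Feed one received byte; returns true when a complete, checksum-valid
 * frame has been assembled in rx->payload. */
bool frame_feed(frame_rx_t *rx, uint8_t b)
{
    switch (rx->state) {
    case WAIT_START:
        if (b == 0x3A) { rx->sum = b; rx->state = WAIT_LEN; }
        break;
    case WAIT_LEN:
        if (b == 0 || b > sizeof rx->payload) { rx->state = WAIT_START; break; }
        rx->len = b; rx->pos = 0; rx->sum += b;
        rx->state = WAIT_PAYLOAD;
        break;
    case WAIT_PAYLOAD:
        rx->payload[rx->pos++] = b; rx->sum += b;
        if (rx->pos == rx->len) rx->state = WAIT_CHECKSUM;
        break;
    case WAIT_CHECKSUM:
        rx->state = WAIT_START;          /* resync either way          */
        return b == rx->sum;             /* valid only if sums match   */
    }
    return false;
}
```

Fed from any RX source (byte interrupt, DMA buffer drained on the idle interrupt), the parser never mistakes a partial read for a whole message.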

Lots of "magic numbers" and direct register writes mixed in with HAL calls
For example, ADC1->JSQR = 0b..., DMA1_Channel5->CNDTR, etc. This is acceptable, but requires very careful design. It has a "tweaked until it works" feel.

Critical sections sized "by eye"
The ISR has __disable_irq() around a fairly large chunk of logic. If something goes wrong there, it's easy to lose events or break timing.

Blocking waits without timeouts
Something like while(!ui8_adc_regular_flag){} is a direct path to “sometimes it hangs forever”.

One UART line for everything (display protocol + debug) without arbitration
This is almost guaranteed to cause glitches unless a single “UART dispatcher” is created.


So yes: it looks like a project that grew out of “I want it to run” + a lot of experimentation, and not like code that was written with reliability and debugging in mind.


The electronic bastard is lying, isn't it? :)

Seriously, express your opinion on these points.
 
Seriously, express your opinion on these points.
Asking the AI without having deep knowledge of the topic yourself is always difficult. All the stuff you have posted about UART is strange. The processing of the incoming messages is triggered by the UART idle interrupt, for example, so I'm sure that the message was received completely....

Of course, the code is far from perfect. It is a hobby project and I am a mechanical engineer, not a computer scientist. But it has been running completely stably for years, without any error messages. Not only for me, but for a lot of satisfied users.
I always try to help users with problems, and so far we have always been successful. (y)

If the AI has a hint, why the startup ends sometimes in the error handler for some few users and has a suggestion how to solve it, just let us know!
 
Asking AI questions without deep knowledge on the topic is always difficult.

Oh yes, that's true!
I hope I didn't offend you. I'm also a mechanic, by the way. I ask questions like these to better understand what I'm up against and where the problems might lie. My current knowledge is completely insufficient to assess the overall structure and, so to speak, the "style" of this project. And such an understanding is very useful as I tinker with it, given my lack of background knowledge and experience.


I'll try flashing the firmware in Eclipse now and see what happens. And listen to what the inhuman mind has to say. Maybe I'll figure something out.

 
As far as I understand, adcData[5] is not used anywhere in the firmware for the temperature sensor?
There are three analogue inputs available on the Lishui hardware: pin PA3 for the throttle, and PB0 and PB1 (marked as AD1 and AD2 on the PCB header in the old generation) for general purposes.

[Image: pinout of the old generation controller]


In the new generation PB0 is for the motor temperature sensor and PB1 is for the torque sensor signal.

[Image: pinout of the new generation controller]


In my firmware, PB0 is assigned to array element adcData[5] and PB1 is assigned to adcData[6].

If you want to use the existing temperature function of the firmware with a new generation controller, just replace adcData[6] with adcData[5] in this line and set the right pullup resistor value in main.h.
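One hedged way to make that edit once instead of by hand: select the channel index at compile time. NEW_GENERATION is an invented switch (it does not exist in main.h); the indices follow the PB0/PB1 mapping described in this post.

```c
#include <stdint.h>

/* Sketch: pick the temperature ADC index per controller generation.
 * NEW_GENERATION is a hypothetical define, not part of the firmware. */
#define NEW_GENERATION 1

static inline int temp_adc_index(void)
{
#if NEW_GENERATION
    return 5;   /* PB0 = adcData[5] on new generation controllers      */
#else
    return 6;   /* PB1 = adcData[6] on old generation / EBS Heidelberg */
#endif
}

/* Usage in main.c would then read something like:
 *   MS.Temperature = T_NTC(adcData[temp_adc_index()]);
 */
```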
 
But in the NewGeneration12FET branch we always see MS.Temperature = T_NTC(adcData[6]);, while on the NewGeneration12FET controller adcData[6] is always the torque sensor. And in the Java configurator there's a switch named "PB1 = torque sensor", but it doesn't affect this line of code.
 
we always see: MS.Temperature = T_NTC(adcData[6]);
The reason is simple: so far no one has asked to use the motor temperature sensor on the new generation controllers. So I never changed that. ;)

And in the Java configurator, there's a switch named PB1 = torque sensor
This switches the torque signal input from the throttle input to AD1. Most generic controllers only have the throttle and the PAS available on the external connectors by default. If you want to attach a torque sensor and don't want to open the controller to solder in an extra wire, you can use the throttle input.
 
#if defined (ADC_BRAKE)
    while ((adcData[5] > THROTTLE_OFFSET) && (adcData[1] > (THROTTLE_MAX - THROTTLE_OFFSET)))
    {
        HAL_Delay(200);
        y++;
        if (y == 35) autodetect();
    }
#endif
#endif

    ui32_brake_adc_cumulated -= ui32_brake_adc_cumulated >> 4;
    ui32_brake_adc_cumulated += adcData[5]; // get value for analog brake from AD2 = PB0
    ui16_brake_adc = ui32_brake_adc_cumulated >> 4;
    ui16_torque = ui32_torque_raw_cumulated >> 3;
What to do with this if adcData[5] is a thermistor?
 
You should use the debug function of the Eclipse IDE to see where the code stucks, like @szupek did.
What did you use for this? I think I managed to run the debugger once, but it didn't work this time. Is the process described somewhere?
 