Not Getting Output in STM32 401RE 12-Bit ADC

Hello everyone, I'm Manan.
With the code below I am getting good results (on an STM32 Nucleo-F401RE with the default 10-bit ADC).

#include "EmonLib.h"                   // Include Emon Library
EnergyMonitor emon1;                   // Create an instance
void setup()
{  
  Serial.begin(9600);
  emon1.current(1, 111.1);             // Current: input pin, calibration.
}
void loop()
{
  double Irms = emon1.calcIrms(1480);  // Calculate Irms only
  Serial.print(Irms*230.0);	       // Apparent power
  Serial.print(" ");
  Serial.println(Irms);		       // Irms
}

But when I changed to the 12-bit ADC I am not able to get a result.
I have already changed the library and added the line analogReadResolution(ADC_BITS), but I am still not getting a result.
Can anyone help me get a result using a 32-bit microcontroller with 12-bit resolution?

That’s because support for analogReadResolution(ADC_BITS) was added for the Arduino Due, which is not an STM32 board.

From the emonlib readme.txt file:

Update: 5th January 2014: Support Added for Arduino Due (ARM Cortex-M3, 12-bit ADC) by icboredman.

To enable this feature on Arduino Due, add the following statement to setup() function in main sketch: analogReadResolution(ADC_BITS); This will set ADC_BITS to 12 (Arduino Due), EmonLib will otherwise default to 10.
See blog post on using Arduino Due as energy monitor:
Home Energy Monitoring System
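
For clarity, on the Arduino Due the change amounts to one extra line in setup(). Here is a minimal sketch based on the readme instruction above, reusing the pin and calibration values from the first post:

#include "EmonLib.h"                   // Include Emon Library; per the readme, ADC_BITS is 12 on the Due
EnergyMonitor emon1;                   // Create an instance
void setup()
{
  Serial.begin(9600);
  analogReadResolution(ADC_BITS);      // Due only: switch the ADC from its 10-bit default to 12 bits
  emon1.current(1, 111.1);             // Current: input pin, calibration.
}
void loop()
{
  double Irms = emon1.calcIrms(1480);  // Calculate Irms only
  Serial.println(Irms);                // Irms
}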

This thread may be of some help:


I created a branch of EmonLib modified to support STM32 that may be of interest here
GitHub - openenergymonitor/EmonLib at STM32

Here is the list of changes:
Comparing master...STM32 · openenergymonitor/EmonLib · GitHub

EmonTx Shield and NUCLEO-F303RE + EmonLib example:
STM32/EmonLib.md at master · openenergymonitor/STM32 · GitHub

Bear in mind I haven’t touched it in 1.5 years so I can’t give any guarantees that it will work or is suitable, but it may help, if only as a starting point…


In theory, though, it ought to work on any Arduino-supported platform that supports 12-bit conversions. Provided you’re just calling the standard Arduino runtime routines in your sketch and not doing anything fancy with h/w registers, you ought to be able to run it pretty much untouched. Although there could well be timing assumptions: analogRead() timing varies a lot from platform to platform, so if the code assumes it takes x µs (for phase adjustments, say) then that could be well out.

I’m afraid I’ve never used an STM32 in an Arduino runtime context, so can’t help much more than that.


Is it falling over trying to calibrate itself against the internal reference? If any of the #defines are (wrongly) satisfied, then it could be getting that wrong, otherwise it ought to fall through and assume a 3.3 V reference.

Trystan’s version does NOT use the internal reference voltage, so that is a better starting point.

@TrystanLea - Why did you change the filter time constant from 1/1024 to 1/4096, or did you think that number was the ADC span? I chose that so that the compiler could optimise it as a bit shift, should it so desire.
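
For readers following along, the filter in question is the digital low-pass that tracks the ADC’s d.c. offset so it can be subtracted from each sample. A minimal sketch of the idea (not the library source verbatim; the starting value and pin are assumptions):

const int ADC_BITS_USED = 12;
const int ADC_SPAN      = 1 << ADC_BITS_USED;       // 4096 counts for a 12-bit ADC

double offsetI = ADC_SPAN >> 1;                     // start at the nominal mid-rail (2048)

// Returns one current sample with the d.c. offset removed.
double readCentredSample(int pin)
{
  int sampleI = analogRead(pin);
  offsetI += (sampleI - offsetI) / 1024;            // low-pass update; 1/1024 is the time constant at issue
  return sampleI - offsetI;                         // signal is now centred on zero
}

With an integer accumulator, a power-of-two divisor such as 1024 can indeed be reduced to a shift; changing it to 4096 alters the filter’s settling time, not the ADC span.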


Dear TrystanLea,

Thanks for your response.

I have tried your suggestion, made the code changes, and I am getting better results than before, but I am still not getting the actual Irms value. I am sharing some readings below for your understanding.
Actual Current (A) ----- Measured Current (A)
0 ------------------------------------------- 0.4648
0.520 ------------------------------------- 0.9411
0.964 ------------------------------------- 1.5855
1.306 ------------------------------------- 2.0949
1.489 ------------------------------------- 2.3729
1.860 ------------------------------------- 2.9337
I understand that you changed the current calibration value, and that it is calculated like this: calibration value (60.606) = CT ratio (2000) / burden resistor (33 Ω).

As per our hardware configuration, we are using a different, custom-made CT with a turns ratio of 2500, and our circuit has a 3.9 Ω burden resistor. When I calculate the calibration constant from these values I get ~641. After putting this constant into the code, it shows wrong values.

I do not understand why the calibration constant depends on the ADC resolution. The calibration constant calculation says nothing about ADC resolution. Can you please explain why we need to change the calibration constant when we move from a 10-bit to a 12-bit ADC resolution?

Can you please help me resolve this problem? It would be highly appreciated if I could get support.

Thanks,
Manan

The calibration constant is the current that produces a voltage of 1 V at the ADC input.
When you put a voltage into the ADC, you get a number out. Somewhere in the calculations, you need to know the relationship between the number and the voltage. That is where you must know the ADC resolution.

In the original emonLib, it is here:
double I_RATIO = ICAL *((SupplyVoltage/1000.0) / (ADC_COUNTS));

SupplyVoltage = 3.3 V, ADC_COUNTS = 1024
For you, ADC_COUNTS = 4096.
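
As a quick worked sketch of that formula, using the CT values quoted above (2500 turns, 3.9 Ω burden, so ICAL ≈ 641):

// Same calibration constant, two ADC resolutions. SupplyVoltage is in millivolts, as in emonLib.
const double ICAL          = 2500.0 / 3.9;   // CT ratio / burden resistor ≈ 641
const double SupplyVoltage = 3300.0;         // mV

const double I_RATIO_10bit = ICAL * ((SupplyVoltage / 1000.0) / 1024.0);   // ≈ 2.07 A per ADC count
const double I_RATIO_12bit = ICAL * ((SupplyVoltage / 1000.0) / 4096.0);   // ≈ 0.52 A per ADC count

// The same ICAL gives a ratio four times smaller at 12 bits, because each
// count now represents a quarter of the input voltage it did at 10 bits.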


Dear all,

Thank you for your answers and suggestions.
I have tried it and am getting very good results. I share some readings below.

Actual Current (A) --------------- Measured Current (A)
0 ------------------------------------------- 0.3894
0.525 ------------------------------------- 0.5141
0.972 ------------------------------------- 1.0140
1.503 ------------------------------------- 1.4061
1.854 ------------------------------------- 1.859
2.48 --------------------------------------- 2.3738
2.984 ------------------------------------- 3.1066
3.568 ------------------------------------- 3.4518
3.716 ------------------------------------- 3.8284
4.002 ------------------------------------- 4.1422

But I still have a doubt: if I install this CT in a different location with a different input load, will I still get a good result?
How should I calibrate then?
Are there calibration methods for this?

I would be very grateful if I could get support.

Thank you very much,
Manan

If you have calibrated correctly, then a different load should make no difference to the calibration. When it is correct, it should remain correct until you change the c.t., or the voltage reference that the ADC uses, or the sketch. Look at it this way: does the speedometer on a car need calibrating when you use a different road?


Dear Robert,
Now I want to clarify further what we have done on our firmware side to get good results.

First we checked the ADC count at the no-load condition and took that value as a reference in the code. Then we take ADC samples at 1 ms intervals and subtract the reference value from each sample. After this we square the values, take the average, and then take the square root to get the RMS value. Now we have the RMS digital count.

Below is pseudo-code for better understanding.
while (1)
{
    raw_data[i] = ADC_Value();                         // store the ADC sample
    final_data[i] = raw_data[i] - 2029;                // subtract the no-load reference value
    sum_final_data += final_data[i] * final_data[i];   // accumulate the squares
    i++;
    if (i == 1400)
    {
        average_final_data = sum_final_data / 1400;    // average of the squared values
        rms_digital_value = SQRT(average_final_data);  // RMS value in digital (ADC count) form
        i = 0;
        sum_final_data = 0;
    }
}
Reference meter I (A) ------ RMS Digital Value
1.858 ------------------------------------- 65
2.480 ------------------------------------- 83
2.984 ------------------------------------- 99
3.568 ------------------------------------- 110
3.716 ------------------------------------- 122
4.002 ------------------------------------- 132
Then we tried to find the relation between the RMS digital count and the corresponding input current taken from the reference meter. We obtained this relation using a linear equation, y = mx + c. Now, using this equation (keeping the m and c values constant), we get the unknown input current by inserting the RMS ADC digital value into the equation.
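
If it helps, here is a minimal sketch of that fit: ordinary least squares over paired readings like the table above (the arrays simply copy those readings; the function name is only for illustration):

// Fit current = m * rmsCount + c by least squares from paired readings.
const int N = 6;
double rmsCount[N] = { 65, 83, 99, 110, 122, 132 };                  // RMS digital values
double refAmps[N]  = { 1.858, 2.480, 2.984, 3.568, 3.716, 4.002 };   // reference meter currents (A)

void fitLine(double &m, double &c)
{
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (int k = 0; k < N; k++) {
    sx  += rmsCount[k];
    sy  += refAmps[k];
    sxx += rmsCount[k] * rmsCount[k];
    sxy += rmsCount[k] * refAmps[k];
  }
  m = (N * sxy - sx * sy) / (N * sxx - sx * sx);   // slope: amps per RMS count
  c = (sy - m * sx) / N;                           // intercept: amps at zero counts
}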
Now our queries are:

  1. Is this procedure correct for obtaining the input current?
  2. Actually, I didn’t do any calibration previously; the values come from the procedure above, but there is still some offset between the actual value and the derived value, so it is important to calibrate to remove this offset. I need to understand whether there is a calibration method available to get more accurate results. As you know, different hardware has different offset values even when it uses the same CT, so we would need to calibrate each and every hardware board externally.
    I hope you understand my point regarding calibration.
    I would be very grateful if I could get support.

Thank you very much,
Manan

That looks to be correct.

But one place where I think you might have a problem is:

You realise that each set of hardware will have a different “m” and a different “c”. You can do nothing about the slope - it depends on component tolerances that you have no control over. All you can do is spend a lot of money on very tightly specified parts. Plus, there is no guarantee that these will not change over time.

There’s also no guarantee that your offset value will not change over time, but you can do something about that.

There are many methods to remove the offset (the “2029”):

  1. Measure it with another analog input, and subtract.
  2. Use a high-pass filter to allow the 50 Hz and all harmonics through but remove the d.c.
  3. Use a low-pass filter to obtain the d.c. only component, then subtract that number.
  4. As well as accumulating the squares to get the rms, accumulate the raw numbers also, then at the end of the averaging period, use the maths to remove the average (= d.c.) component.

We don’t use (1) because we don’t have a spare analog input. If you have one, you could use it.
We used to use (2) in emonLib, but it took a long time to settle and gave false readings while it was doing so, and it couldn’t be pre-set to start near to the final value.
We use (3) in emonLib, because it can be initialised to the nominal value, then it corrects itself quickly.
We use (4) in emonLibCM, but subtract the nominal d.c. first, just to keep the numbers smaller.
There’s no great difference between (3) and (4) in the accuracy of the result, but (4) runs a lot faster in software because less maths is done per sample, and then only involves integer multiplication and addition. The time-consuming floating-point division etc needed by the filter is not required, and what remains is only done once per reporting interval.
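
A minimal sketch of method (4), assuming a 12-bit ADC, the 1400-sample window used earlier in this thread, and analog pin A0 (all assumptions to adapt to your hardware):

#include <math.h>

const int N_SAMPLES      = 1400;   // samples per reporting interval, as in the pseudo-code above
const int NOMINAL_OFFSET = 2048;   // nominal mid-rail for a 12-bit ADC, subtracted to keep the numbers small

// Accumulate the raw samples and their squares; remove the residual d.c. only at the end.
double rmsCounts()
{
  long      sumRaw     = 0;
  long long sumSquares = 0;

  for (int n = 0; n < N_SAMPLES; n++) {
    long sample = analogRead(A0) - NOMINAL_OFFSET;   // integer maths only per sample
    sumRaw     += sample;
    sumSquares += sample * sample;
  }

  // Once per reporting interval: the mean is the residual d.c.; RMS² = mean of squares - mean².
  double mean = (double)sumRaw / N_SAMPLES;
  return sqrt((double)sumSquares / N_SAMPLES - mean * mean);
}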

Thank you for your reply.
It is working fine and all my doubts are cleared now.
Thank you Robert for your kind support.

Thanks.


Hi @Manan233, I’m also trying to do current measurements with an SCT013-030A, which is a non-invasive type with 1800 turns and a 33 Ω burden resistor, using an STM32F103C8T6 (Bluepill) controller. I am not able to use analogReadResolution(ADC_BITS) or the #include <driver/adc.h> library. Please help me solve this… Thanks in advance

I think you are confused. The transformer ratio is 100 A : 1 V. As the output is a voltage, you should not use a second burden resistor; there is already one inside the c.t. housing.

This will not matter until you get it working, but when you do, then it will affect your calibration.

Sorry, I’m using the SCT-013-030; it’s 30 A : 1 V. Yes, it has a built-in burden resistor of 62 Ω (sorry, above I wrote it as 33 Ω).
I’ve done the coding as below:

#include "EmonLib.h"
#define ADC_INPUT PA0
#define HOME_VOLTAGE 230.0
#define ADC_BITS 12
#define ADC_COUNTS (1<<ADC_BITS)
EnergyMonitor emon1;
short measureIndex = 0;
unsigned long lastMeasurement = 0;
unsigned long timeFinishedSetup = 0;

void setup() {
  // put your setup code here, to run once:
  //adc1_config_channel_atten(ADC1_CHANNEL_6, ADC_ATTEN_DB_11);
  //analogReadResolution(12);
  Serial.begin(9600);
  emon1.current(ADC_INPUT, 30);
  timeFinishedSetup = millis();
}

void loop() {
  // put your main code here, to run repeatedly:
  unsigned long currentMillis = millis();
  if (currentMillis - lastMeasurement > 1000){
    double amps = emon1.calcIrms(1480); // Calculate Irms only
    double watt = amps * HOME_VOLTAGE;
    Serial.print("Current(A):");
    Serial.println(amps);
    Serial.print("Power(W):");
    Serial.println(watt);               // println so the power reading ends its own line
    lastMeasurement = millis();
    if(millis() - timeFinishedSetup < 10000){
      Serial.println("Startup Mode..");
    }
  }
}

The corresponding results:

Original (multimeter) ---------- Measured (STM32 output)
0 A ---------------------------------- 0.29 A
0.73 A ------------------------------- 0.76 A
0.80 A ------------------------------- 0.81 A
1.0 A -------------------------------- 1.03 A

I’m not able to stabilize the current value.
Any help in stabilizing the reading is appreciated.
Thanks in advance

[Edited for format – Moderator (RW)]

I’m not using any external burden resistor