Spikes in data

The likelihood here is that your data source is updating less frequently than your Feed update rate (or the timing of the updates is not very accurate). Generally it is better for the Feed update interval to be longer than the interval at which new data arrives.

No idea

I’d run a tail command combined with a grep against the log file and just extract the log for one feed and see what happens. You could do this against both log files in separate SSH windows and then spot the events happening.

I knew this would happen, but I restarted the emonPi to get emoncms.log to update, and as I suspected, the restart appears to have cleared the problem again.

I’d like to leave emoncms.log set to INFO, but the file size gets out of control pretty fast, so I had to set it back to WARN again.

I suspect it may be a while before the spikes show up again so I can have another crack at tracking the cause down.

We learn something every time though :slight_smile:

Mike

Reviewing the thread:
1. The data is coming in by radio.
2. Several quantities (not all) have one data point, all at the same value.

What is the entry in your emonhub.conf for the two relevant sensor nodes? Are they specific, in that each quantity in the message is uniquely defined, or is there a “general” definition like: datacode = h
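
To illustrate the difference (the node id here is hypothetical): a singular datacode applies one decode to however many values happen to arrive, whereas a datacodes list pins the expected message length exactly.

[[99]]
    [[[rx]]]
       # general - one code for every value, any frame length accepted:
       datacode = h
       # specific - frame must be exactly four 16-bit signed ints plus one unsigned 32-bit int:
       # datacodes = h,h,h,h,L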

Is there another transmitter on the same frequency close by that could be generating a message that’s being correctly decoded? (OK, it’s a long shot, but something like a garage door opener or radio doorbell?)

 

Yes - in general it’s not possible to save the floating-point representation of a decimal number exactly.
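
For example, Python’s decimal module will show the exact double that actually gets stored for 0.1:

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')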

It appears in the code for the “wh_accumulator” - which I quoted in this post: How to accumulate watt Hours correctly from an input? - #12 by Robert.Wall
Also: Emoncms stable release v11.3.0
though neither source explains much. I’m sure I’ve seen it documented or mentioned somewhere else, but can’t find it. I’ve a sneaking suspicion it relates to what happens to missing values, i.e. if there’s a gap (according to the timebase) between the last value in the Feed and the value being submitted.

Hi,

This is the configuration for nodes with feeds that were showing spikes in the data.

[[26]]
    nodename = emonth8
    [[[rx]]]
       names = temperature, external temperature, humidity, battery, pulsecount
       datacodes = h,h,h,h,L
       scales = 0.1,0.1,0.1,0.1,1
       units = C,C,%,V,p

[[28]]
    nodename = emonth10
    [[[rx]]]
       names = temperature, external temperature1, external temperature2, external temperature3, external temperature4, humidity, battery, pulsecount
       datacodes = h,h,h,h,h,h,h,L
       scales = 0.1,0.1,0.1,0.1,0.1,0.1,0.1,1
       units = C,C,C,C,C,%,V,p

Having been an OSIsoft PI System Administrator for many years, and never having come up against this when scaling received integers back to floating-point or real numbers, your statement there had me wanting to educate myself on how the values are stored in Python.

I think this link covers exactly what you said - it’s interesting reading - and when I found a calculator with enough decimal places, it came back with exactly the results I see in emonhub.log when multiplying by 0.1.
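
For example (in Python 3), scaling one of the raw values from these nodes by 0.1:

>>> 189 * 0.1
18.900000000000002

which is exactly the value that shows up in emonhub.log.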

So there’s that theory shot down. I reckoned it might just have been possible for a rogue transmission to get decoded, but with the data specified exactly (particularly the message length), that possibility is even more remote.

It still leaves the possibility that it is coming from the source node, but I’ve never seen anything like it reported before, so I’m inclined to push that down the list.

I suspect you’ve never seen the value printed to full precision - that’s the “problem” - and you’re not the first to comment on this.

There is a notable set of exceptions though:

Python 2.7.18 (default, Jul  1 2022, 12:27:04) 
[GCC 9.4.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> x = 7.0/16
>>> print("%.20f" %  x)
0.43750000000000000000
>>> 

However, for me using your numbers (but probably a different Python, in Ubuntu):

>>> x = 257.0/10
>>> print("%.20f" %  x)
25.69999999999999928946
>>> print("%.14f" %  x)
25.70000000000000
>>>

What size do you see? It shouldn’t be an issue.

Roughly 3MB in 10 minutes

I have searched through a complete emonhub.log looking for the rogue values anywhere and don’t find them.

Since I’ve restarted the emonPi, they no longer show up.

Next time I need to find a way of diagnosing this without a restart. I was running MQTT Explorer, but it appears its history doesn’t stretch back very far, so I never had that data to compare with - I was always away from the PC for too long :-)
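
One option for next time might be a small subscriber script that appends every frame to a file with a timestamp - a minimal sketch using paho-mqtt (the library emonhub itself uses), assuming the default emonPi broker credentials and the paho-mqtt 1.x API:

import time
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # subscribe to everything emonhub publishes
    client.subscribe("emonhub/rx/#")

def on_message(client, userdata, msg):
    # append each frame with a receive timestamp, for later comparison with emonhub.log
    with open("/tmp/mqtt_capture.log", "a") as f:
        f.write("%s %s %s\n" % (time.time(), msg.topic, msg.payload.decode()))

client = mqtt.Client()
client.username_pw_set("emonpi", "emonpimqtt2016")   # assumed default emonPi credentials
client.on_connect = on_connect
client.on_message = on_message
client.connect("127.0.0.1", 1883)
client.loop_forever()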

It took 6 weeks after the SD card change before this appeared again, so it could be a while.

Mike

The rotated logs should be in /var/log/rotated_logs

You must have a lot of processes etc. I have quite a lot going on and my logs definitely don’t grow that quickly on INFO.

I’d change emonhub MQTT interfacer to use JSON rather than individual topics.

        node_JSON_enable = 1
        node_JSON_basetopic = emon/

I have seen the emoncms MQTT input process get overwhelmed when there are too many topics to read. Using JSON also means every value gets the same timestamp on receipt (or you can timestamp the data in emonhub).
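
For context, those two lines go in the [[MQTT]] interfacer section of emonhub.conf - a sketch based on the stock emonPi configuration (your host and credentials may differ):

[[MQTT]]
    Type = EmonHubMqttInterfacer
    [[[init_settings]]]
        mqtt_host = 127.0.0.1
        mqtt_port = 1883
        mqtt_user = emonpi
        mqtt_passwd = emonpimqtt2016
    [[[runtimesettings]]]
        pubchannels = ToRFM12,
        subchannels = ToEmonCMS,
        node_format_enable = 1
        node_format_basetopic = emonhub/
        node_JSON_enable = 1
        node_JSON_basetopic = emon/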

It’s on my list to do that, but cheers for the suggestion :slight_smile:

I wanted to read the docs and understand how that works first. But since I have the time this afternoon, what I think I’ll do is make the change and see how that works out. I always make a backup before changes anyway.

Mike

I can see the difference in emonhub.log.

2023-02-26 14:53:20,512 DEBUG    RFM2Pi     7022 NEW FRAME : OK 28 189 0 89 1 134 1 124 1 146 1 192 1 26 0 1 0 0 0 (-47)
2023-02-26 14:53:20,514 DEBUG    RFM2Pi     7022 Timestamp : 1677423200.512701
2023-02-26 14:53:20,514 DEBUG    RFM2Pi     7022 From Node : 28
2023-02-26 14:53:20,515 DEBUG    RFM2Pi     7022    Values : [18.900000000000002, 34.5, 39, 38, 40.2, 44.800000000000004, 2.6, 1]
2023-02-26 14:53:20,515 DEBUG    RFM2Pi     7022      RSSI : -47
2023-02-26 14:53:20,516 DEBUG    RFM2Pi     7022 Sent to channel(start)' : ToEmonCMS
2023-02-26 14:53:20,516 DEBUG    RFM2Pi     7022 Sent to channel(end)' : ToEmonCMS
2023-02-26 14:53:20,739 INFO     MQTT       Publishing 'node' formatted msg
2023-02-26 14:53:20,740 DEBUG    MQTT       Publishing: emonhub/rx/28/values 18.900000000000002,34.5,39,38,40.2,44.800000000000004,2.6,1,-47
2023-02-26 14:53:20,741 DEBUG    MQTT       Publishing: emon/emonth10 {"temperature": 18.900000000000002, "external temperature1": 34.5, "external temperature2": 39, "external temperature3": 38, "external temperature4": 40.2, "humidity": 44.800000000000004, "battery": 2.6, "pulsecount": 1, "time": 1677423200.5127013, "rssi": -47}
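
(Presumably both lines appear because node_format_enable is still set alongside node_JSON_enable, so emonhub publishes the old emonhub/rx/ topics and the new emon/ JSON topic in parallel.)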

Hello @Mike_Henderson, interesting issue!

Sounds like this is something downstream of emonhub, as the spurious values are not showing up in the emonhub log - either something in the MQTT data transfer, the emoncms_mqtt script, input processing, the feed buffer, or after storage during data requests.

Are you just using the ‘Log to feed’ input process for creating these feeds? How many feeds do you have on the system? Are these all PHPFina feeds?

Just a random thought: if you have a lot of spurious inputs, it might be worth cleaning those up with emoncms/device/clean.json

You could also try temporarily turning off the redis feed buffer (see emoncms/default-settings.ini at master · emoncms/emoncms · GitHub):

redisbuffer[enabled] = false

It’s not a good idea to leave that off for long, as it will increase SD card write wear, but it might be a good link in the chain to check, to see if that’s what’s causing the issue.
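
If it helps, a sketch of where that override would go - assuming the usual emonSD layout, where the stock value lives in default-settings.ini and local changes go in settings.ini:

; /var/www/emoncms/settings.ini (path assumed)
[feed]
    redisbuffer[enabled] = false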

Hi,

Yes, the feeds in which I’ve found spikes have all been PHPFina, using nothing other than ‘Log to feed’. There are 235 feeds configured.

The spikes appear at a frequency of around six a day - well, that’s the ones I’ve spotted; I do go looking for them once I notice one appear. I tend to correct the data manually when I find them, as leaving them in place makes the data untidy.

No issues now since the Pi had a restart the other day while changing emoncms.log levels. Is there a way to alter the emoncms.log logging level without a complete reboot?

I’m worried that when/if they appear again, any changes I make to diagnose the problem at that time will cause them to disappear again :frowning:

Mike