OpenEnergyMonitor Community

Double input entries, only one of 'em gets data


(Niklas) #1

Hi, does this look familiar to anyone? Something I did or forgot to do?


(Paul Reed) #2

Can you delete the ‘dead’ inputs, and add your feed processes to the ‘live’ inputs?
Do the ‘dead’ inputs re-appear?

Paul


(Niklas) #3

Hmm, it worked. It’s not very satisfying, though, not to know if or when it will happen again. :unamused:


(kevin) #4

Very familiar. The easiest way forward is to delete the new inputs, and the old ones will spring to life again.
In my experience it is more likely to happen after several reboots in succession. See the thread below for more details.


(Niklas) #5

That could well be; I had a few involuntary reboots due to power loss the last time… thanks!


(Lee) #6

Zombie thread alert!
Hello chaps, did anything become of this issue?
I’ve just had it again, which may well have been caused by a reboot; I’m not 100% sure, as it took me over two weeks to notice the lack of data.

Kevin’s solution of deleting the phantom inputs did work, but took a while as I have dozens of the things.

Is it now just part of the reboot procedure to check the inputs haven’t duplicated?


(Paul) #7

I do not think it has been specifically addressed, but there do seem to be fewer cases reported these days. I suspect there are fewer issues triggering this problem, rather than emoncms being any better at avoiding it.

There are some changes in the pipeline for “indexed inputs” which may (or may not) directly impact this issue; maybe @TrystanLea can confirm whether that might be the case?

When this issue occurs, a simple way to recover is to use the http(s)://server.com/emoncms/input/clean API call. This call deletes all inputs that do not have any processes attached, so I tend to add a redundant process to any valid inputs that I do not actually use, just to ensure they do not get deleted when I use the input/clean API.
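For illustration, here is a minimal Python sketch of that recovery call; the base URL and apikey are placeholders for your own emoncms instance and read-write key:

```python
import requests

# Placeholders: substitute your own emoncms base URL and read-write apikey.
BASE_URL = "https://server.com/emoncms"
APIKEY = "your-readwrite-apikey"

# input/clean deletes every input with no processes attached, which
# includes the duplicate "dead" inputs, so the originals resume updating.
resp = requests.get(f"{BASE_URL}/input/clean", params={"apikey": APIKEY})
resp.raise_for_status()
print(resp.text)  # the response text describes what was cleaned
```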

Since this API can be called with a read-write apikey, it is possible to set up a watchdog script or Node-RED flow to fire off an input/clean API call if an input stops updating. I personally do not like “fixes” like this, but if you are suffering frequently and a fix isn’t in the pipeline it might be worth considering.
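As a rough sketch of such a watchdog, assuming the input/list endpoint returns each input’s last update as a unix-timestamp `time` field (this may differ between emoncms versions, so check yours first):

```python
import time
import requests

BASE_URL = "https://server.com/emoncms"  # placeholder instance
APIKEY = "your-readwrite-apikey"         # read-write key, needed for input/clean
STALE_AFTER = 300                        # seconds without an update before we act

def stale_inputs():
    """Return inputs that have not updated within STALE_AFTER seconds."""
    resp = requests.get(f"{BASE_URL}/input/list", params={"apikey": APIKEY})
    resp.raise_for_status()
    now = time.time()
    return [i for i in resp.json() if now - (i.get("time") or 0) > STALE_AFTER]

while True:
    if stale_inputs():
        # Something has gone quiet: clear the processless duplicates so the
        # original inputs start updating again. Remember the caveat above
        # about adding a redundant process to inputs you want to keep.
        requests.get(f"{BASE_URL}/input/clean", params={"apikey": APIKEY})
    time.sleep(60)
```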

Once the newly created set of duplicate inputs is deleted, the original inputs automatically start updating again.

I believe this is because of the way the inputs table is queried, in natural order from low input ids through to the higher input ids, so the last “node 10, input 1” it encounters is the newest. I wonder if this could be fixed by simply reversing that search order (if that’s even possible) so that it always finds the original after any duplicates. It wouldn’t necessarily stop the inputs being duplicated, but the very next set of data would update the original inputs rather than any duplicates, so only a single update is lost. Just a thought! @TrystanLea?
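A toy Python model of that lookup, purely to illustrate the idea (this is not emoncms’s actual query code):

```python
# Two rows for the same node/name pair; the duplicate has a higher input id.
inputs = [
    {"id": 1, "nodeid": "10", "name": "1"},   # original input
    {"id": 42, "nodeid": "10", "name": "1"},  # duplicate created after a reboot
]

def find_input(rows, nodeid, name):
    match = None
    for row in rows:  # each later hit overwrites the earlier one,
        if row["nodeid"] == nodeid and row["name"] == name:
            match = row  # ...so the last matching row wins
    return match

# Natural (ascending id) order: the newest duplicate receives the data.
print(find_input(inputs, "10", "1")["id"])            # -> 42
# Reversed order: the original is found last and would receive the data.
print(find_input(reversed(inputs), "10", "1")["id"])  # -> 1
```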

[edit]

Issue raised, so it hopefully doesn’t drop off the radar and gets resolved down the line.