UDP Broadcast

I’ve discovered the errors I had were down to my setup. I was trying to send to another machine that had been switched off, so the UDP packets had nowhere to go. When I point at 127.0.0.1 there are no errors.

Interesting. I just looked up try/except. It’s the standard method of dealing with errors and I can see how it’d be easy to print out for debugging.
I notice now that your script compensates for drift; print(datetime.datetime.now()) shows me that it’s roughly OK, although I’d like to understand why it works.
EDIT: I moved my datetime.datetime.now() line to double check and your script actually drifts.

Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2
2018-03-15 09:37:14.025587
Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2
2018-03-15 09:37:24.048733
Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2
2018-03-15 09:37:34.164788
Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2
2018-03-15 09:37:44.180699
Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2
2018-03-15 09:37:54.343258
Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2
2018-03-15 09:38:04.279020
Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2
2018-03-15 09:38:14.337013
Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2
2018-03-15 09:38:24.406613
Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2
2018-03-15 09:38:34.460508
Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2
2018-03-15 09:38:44.589877
Data sent: 1117,756,258.67,361,960.698,2249.962
TEST2

Your script uses half the CPU mine does, and slightly less memory, so it must be a more efficient way of doing it.
I also notice that the ‘insulation’ in threaded methods only comes from creating a new thread before executing the other operations, like the requests and the UDP send; hence my CPU usage, perhaps. But with try/except error handling the insulation is unnecessary, I think, as long as the ‘except’ code is bomb-proof. Correct?
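As a sketch of that try/except idea (the function name and behaviour are my own illustration, not from either script): wrap the network call so any failure is logged and the loop simply carries on to the next interval, with no thread insulation needed.

```python
import requests

def fetch_feed_values(url):
    """Fetch feed values from emonCMS, returning None on any
    network or decoding error so the calling loop carries on."""
    try:
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        return response.json()
    except (requests.exceptions.RequestException, ValueError) as exc:
        # "Bomb-proof" except: report the problem and move on.
        print("Request failed: %s" % exc)
        return None
```

The caller just skips the send whenever `None` comes back, and the timing loop keeps ticking.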

Just a stab in the dark, but I guess the other option for sockets is to leave them open and have emonhub close them if they’re not used for a minute or two?
Unless they need closing for the next client to use them…
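For what it’s worth, UDP is connectionless, so an open sender socket doesn’t tie anything up for other clients; only two programs trying to bind (listen on) the same port would conflict. A minimal sketch of the leave-it-open option (the address, port, and function name are illustrative):

```python
import socket

UDP_IP = "192.168.43.185"   # illustrative destination
UDP_PORT = 6400

# Create the socket once at startup and reuse it for every send;
# being connectionless, it doesn't block any other UDP client.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_payload(payload, addr=(UDP_IP, UDP_PORT)):
    """Send one CSV payload over the long-lived socket."""
    sock.sendto(payload.encode('utf-8'), addr)
```

Closing would then only be needed at shutdown (`sock.close()`), or it can be left to the OS when the process exits.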

I’ll change the original post to something closer to your method once I’ve fiddled about a bit more.
The other timing option I’ve noticed is the python scheduler… Not sure what the advantage of that’d be.
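For reference, the standard-library `sched` module can do the same absolute-time scheduling: its `enterabs()` takes an absolute timestamp, so the floor-division trick slots straight in, and the scheduler sleeps until the event instead of polling every 0.1 s. A quick sketch (using a short 0.2 s interval so it finishes fast; a real sender would use 10 s):

```python
import sched
import time

# A short 0.2 s interval so the demo finishes quickly;
# a real emonhub-style sender would use 10 s.
INTERVAL = 0.2
scheduler = sched.scheduler(time.time, time.sleep)
fire_times = []

def send(remaining):
    fire_times.append(time.time())
    if remaining > 1:
        # Schedule the next run at an absolute, harmonised time;
        # sched sleeps until then instead of polling in a loop.
        next_slot = (time.time() // INTERVAL) * INTERVAL + INTERVAL
        scheduler.enterabs(next_slot, 1, send, argument=(remaining - 1,))

first_slot = (time.time() // INTERVAL) * INTERVAL + INTERVAL
scheduler.enterabs(first_slot, 1, send, argument=(3,))
scheduler.run()   # blocks until all three sends have fired
```

The advantage over a hand-rolled loop is mainly the sleeping: no busy polling, so CPU use stays low without adding a drift-prone fixed sleep.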

I’m no developer, or anything remotely close to that, but has anyone already tried the Node-RED UDP nodes to see if they’re useful for your needs?

It is for me, thanks. I think Paul wanted an option added to emonhub as well.

Drift in its rawest form is where a fixed sleep is added after the code to be run, so the total time of each loop depends on how long the code takes to run as well as on the fixed sleep element.

My very simple timer improves on that by always looking to a preset point in time for the next iteration. That point in time is defined in each new iteration, and it can be really quite accurate, with little or no drift; this is the common, basic way of timing loops. The cost of that accuracy is CPU power, so to combat the high CPU use I added a 0.1 s sleep so that it isn’t looping too fast; essentially, up to 0.1 s could get added to each iteration, causing a drift. If you removed that 0.1 s sleep the drift would (almost) be removed, but the CPU use would be as high as in your threaded example.
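To illustrate with made-up timestamps: anchoring each iteration to the next whole-interval boundary means variation in how long the work takes never accumulates into drift.

```python
def calc_next_time(time_now, interval):
    # Anchor to the next whole-interval boundary, not to
    # "now + interval", so late wake-ups never accumulate.
    return (time_now // interval) * interval + interval

# Simulate three iterations with made-up, variable "work" times:
interval = 10
now = 1521117490.07
targets = []
for work_time in (0.3, 2.1, 0.05):
    targets.append(calc_next_time(now, interval))
    now = targets[-1] + work_time   # the loop wakes slightly late
```

However late each iteration wakes (here by 0.3 s, 2.1 s, 0.05 s), the targets stay exactly 10 s apart.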

I do have my own way of eliminating drift. I use it a lot, and I think it should be used more widely in the OEM project, but I haven’t had time to document why it’s needed and the benefits of using it, so I hope it’s relatively self-explanatory in this use case.

For now I have been calling this method “harmonized intervals”; it basically ensures that all “10 s” intervals are aligned to unixtime timestamps falling on exact whole 10 s boundaries since 1970-01-01 00:00.

I’ve modded my script to use this “harmonized intervals” method and added a timestamp to the “data sent” print (nothing else is changed)

#!/usr/bin/env python

import requests
import socket
import time

apireadkey = "abc123abc123abc123"

feed_ids = "12300,12301,12302,12303"
#feed_ids = "12300"
# The fetch API works for a single feed id too, so it works in both scenarios.
# Uncomment either one of the two lines above to try.
emon_api_url = "https://emoncms.org/feed/fetch.json?ids=" + feed_ids + "&apikey=" + apireadkey

#socket code___
UDP_IP = "192.168.43.185"
UDP_PORT = 6400

interval = 10

def calc_next_time(time_now, interval):
	time_next = (time_now // interval) * interval + interval
	return time_next

time_next = calc_next_time(time.time(),interval)

while(True):
	
	# Get the current time
	time_now = time.time()
	
	# Check if it's time to do another transmission
	if time_now >= time_next:

		# Request the data from emonCMS
		response = requests.get(emon_api_url)

		# Parse the data if the response was good
		if response.status_code == 200:
			datalist = response.json()
			
			# Convert any single value returned to a list of one value 
			if not isinstance(datalist, list):
				datalist = [datalist]
			
			# Make a CSV string of floats rounded to 3 decimals max 
			stringydata = ','.join([str(round(i,3)) for i in datalist])
			
			# Send the data over UDP, closing the throw-away socket
			# explicitly rather than relying on garbage collection
			sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP
			sock.sendto(stringydata.encode('utf-8'), (UDP_IP, UDP_PORT))
			sock.close()
			print(str(time_now) + "\tData sent: " + stringydata)
			
			# set the time of next loop
			time_next = calc_next_time(time_now,interval)
	
	# Don't loop too fast
	time.sleep(0.1)

When this runs you should get output like so:

1521117490.07   Data sent: 237.72,-77.0,292.0,369.0
1521117500.02   Data sent: 237.77,-165.0,201.0,366.0
1521117510.09   Data sent: 237.71,267.0,631.0,364.0
1521117520.09   Data sent: 237.86,-109.0,254.0,363.0
1521117530.05   Data sent: 237.91,-158.0,201.0,359.0
1521117540.01   Data sent: 237.77,271.0,629.0,358.0
1521117550.1    Data sent: 237.92,-109.0,241.0,350.0
1521117560.09   Data sent: 237.92,-147.0,205.0,352.0
1521117570.05   Data sent: 237.86,269.0,622.0,353.0
1521117580.05   Data sent: 237.67,-117.0,237.0,354.0
1521117590.06   Data sent: 237.77,-157.0,204.0,361.0
1521117600.07   Data sent: 237.77,-157.0,204.0,361.0
1521117610.0    Data sent: 237.68,282.0,646.0,364.0
1521117620.01   Data sent: 238.03,-166.0,210.0,376.0
1521117630.06   Data sent: 238.16,300.0,680.0,380.0
1521117640.03   Data sent: 238.34,-121.0,263.0,384.0
1521117650.03   Data sent: 238.39,-177.0,209.0,386.0
1521117660.1    Data sent: 238.1,272.0,662.0,390.0
1521117670.07   Data sent: 238.14,-107.0,287.0,394.0
1521117680.04   Data sent: 238.24,-177.0,220.0,397.0
1521117690.03   Data sent: 237.95,284.0,693.0,409.0
1521117700.09   Data sent: 238.0,-45.0,354.0,399.0
1521117710.05   Data sent: 238.33,-84.0,326.0,410.0
1521117720.1    Data sent: 238.12,-26.0,371.0,397.0
1521117730.06   Data sent: 238.17,-179.0,224.0,403.0
1521117740.05   Data sent: 237.99,277.0,689.0,412.0
1521117750.05   Data sent: 238.08,-116.0,289.0,405.0
1521117760.01   Data sent: 237.93,-125.0,280.0,405.0
1521117770.09   Data sent: 238.08,-32.0,368.0,400.0
1521117780.09   Data sent: 238.03,210.0,621.0,411.0
1521117790.05   Data sent: 237.95,-44.0,369.0,413.0
1521117800.06   Data sent: 237.72,-45.0,374.0,419.0
1521117810.05   Data sent: 237.66,-36.0,393.0,429.0

That first column is the timestamp, and you can see it is almost on a whole 10 s interval. The few hundredths of a second are caused by the 0.1 s sleep, and although they delay each iteration by up to 0.1 s they do not add up to a “drift”: because the next timestamp is set using a resolution of 10 s, anything less than 10 s is ignored. If a delay at the emoncms server resulted in a loop taking 11 s, that payload would be late, but the one after would be timed independently and not be affected by the delay.
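A quick worked example of that recovery, with made-up numbers and the same `calc_next_time()` logic as the script:

```python
def calc_next_time(time_now, interval):
    return (time_now // interval) * interval + interval

interval = 10
# The t=110 send is held up for 1 s by a slow server response,
# so it actually goes out at t=111...
late_send = 111.0
time_next = calc_next_time(late_send, interval)
# ...but the next slot is still the whole-10s boundary at t=120,
# not t=121, so the lateness does not carry forward.
```

One packet arrives late; the schedule itself is unaffected.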

Hello Dan! What are you building there!? I just watched the video :slight_smile: What kind of input data and output visualisation are you planning?

Oh just fiddling around with numbers, you’ll see :wink:


That makes a lot of sense and looks neat.
It’s so obvious to use some low-level timer and divide the result down that, yes, it’s a wonder ’tis not done more often.

The CPU and memory looks fine and dandy, a little more spiky… caused by the division function I’m sure…
I see part of the trick is the // operator, which rounds time_now down to the nearest multiple of the interval, in this case 10.
This is perfect for a 10 second timer performing a simple function.
So for emonhub there’d be at least three levels?
I guess I’d go for something like:
Sampling going flat out in the background, or on a very short harmonised interval (would a thread be useful for sampling?); a sample buffer as an appended Python list, or multiple lists; then the network and other stuff floating on top, sending data on a harmonised interval?

Do the emonTx and emonPi harmonise the intervals?
…Just checked the Tx v3.4, it doesn’t, it goes to sleep for:

unsigned long runtime = millis() - start;
unsigned long sleeptime = (TIME_BETWEEN_READINGS*1000) - runtime - 100;

This must drift in the microseconds…
Arguably, there’s also a case to be made for a dynamic sleep time if the intervals are harmonised. The start and end times of each cycle could be used to deduce a new sleep time each round; at the very least this would be a good method for deducing a fairly accurate fixed sleep time giving the best power economy.

Perhaps this is how other modules such as sched and threading do it, using divisions like this. Another method comes to mind: log the original start-time unix timestamp and use that as a working number…
This is an extension to your concept, the time_now can be relative to any position in time…
In the case of a Tx it’d be the time since program start.
To reduce CPU load the relative start point for time_now can be refreshed to keep the division maths simpler…
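As a rough sketch of that floating-start variant (my own naming): the same floor-division maths works on small elapsed-seconds values instead of full unix timestamps, though the intervals are then aligned to program start rather than harmonised to the epoch.

```python
import time

start = time.time()   # anchor everything to program start

def calc_next_elapsed(elapsed, interval):
    # Same floor-division trick, but on small elapsed-seconds
    # values instead of ~1.5-billion-second unix timestamps.
    return (elapsed // interval) * interval + interval

# e.g. 23.4 s after start, the next 10 s tick is at start + 30 s
next_tick = start + calc_next_elapsed(23.4, 10)
```

On a device without an RTC, like a Tx, "elapsed since program start" is the only clock available anyway.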
Just prattling on now :slight_smile:
What’d also make the Tx nicer would be a way to send it a simple instruction to start throwing samples back to the hub in realtime, for local visualisation purposes… hand-held realtime data updated by the millisecond.
Then issue a command back to the Tx to say hey, chill, you can transmit every 10s now.

Note: update ‘tags’ of this thread titled “UDP Broadcast”.

Exciting stuff.

Essentially yes, but emonhub doesn’t sample data directly (e.g. power, voltage, etc.), so the faster sampling loop is done outside of emonhub; but yes, it collates those smaller packets and uploads them using a bulk API. Its own faster loop checks for incoming fresh data (e.g. monitoring serial connections) and monitors the settings file, so changes are picked up and implemented within 1 s.

No, unfortunately there’s no easy way to do it, as they do not have RTCs and the emonTx/TH go to sleep, so they cannot respond to a poll over RF. The RF implementation isn’t great for exact timing either, since the RF library checks for a gap in the airwaves and delays sending if there’s traffic. So if you have several devices, your emonTx payload could be delayed by up to 6 s before it gives up and moves on to the next 10 s of data. Since they have no RTC, the timestamp isn’t applied until the data lands at emonhub, so even without the collision avoidance it already contains an unmanaged element of time.

For serial-connected devices it is my intention to poll for data (and there is no good reason the emonPi doesn’t already do this). I have previously had this working on an I²C setup with 10 or 12 emonTx-type boards, where emonhub would broadcast a trigger for all the boards to start their calculations on the last 10 s of data whilst also sampling towards the next interval. emonhub could then poll each device in turn for its data and submit that data with the trigger timestamp at any point between triggers.

Indeed, but any sleep that is interrupted (e.g. for pulse counting) is an unknown quantity, as the device cannot count time whilst asleep; all it knows is that it went to sleep and didn’t reach the intended wake-up time. There are possibly ways of getting around that, but the ATmega is short on resources and there are many other variables that make it pretty pointless aiming too high.

I can see why you might want to do this, but there is a very good reason I don’t: eliminating drift is a by-product of the main aim. In emoncms there are fixed-interval feed engines, and any mismatched intervals posting data to those feeds will result in either apparently “missing” data (e.g. with 11 s data, every tenth 10 s data slot will be empty) or “discarded” data (e.g. with 9 s data, every tenth packet will be overwritten before it gets persisted).
Either my anchored start time or your suggested floating start time would deal with this issue, ensuring that all packets are sent at 10 s intervals, BUT…

Since feeds can be created on any whole second, a 10 s fixed-interval feed (for example) could align with any one of 10 possible patterns. This means that in theory you could set up 10 × 10 s feeds, save the data from a single 10 s input to all 10 feeds, and when you run reports and graphs get 10 different sets of results.

The main intention of the “harmonized intervals” is that all feeds and sampling devices are harmonized to the same intervals: for example, all 60 s intervals across all devices (with an RTC) are synchronised and exactly 60 s. Any two 60 s feeds will show exactly the same data if derived from the same input, and any six 10 s intervals will align with any single 60 s interval, etc.; it makes comparison much easier and more valid.
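To illustrate the nesting, here’s the alignment maths in isolation (the function name is my own): snapping a timestamp down with floor division guarantees every 10 s slot starts on a multiple of 10, so six of them tile each 60 s slot exactly.

```python
def harmonised_slot(t, interval):
    # Snap a unixtime down to the start of its harmonised interval.
    return (int(t) // interval) * interval

t = 1521117493
slot10 = harmonised_slot(t, 10)   # starts at 1521117490
slot60 = harmonised_slot(t, 60)   # starts at 1521117480
# The 10 s slot sits wholly inside the 60 s slot, so any six
# consecutive 10 s slots align exactly with one 60 s slot.
```

The same holds for any pair of intervals where one divides the other, which is what makes the cross-feed comparisons valid.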

Yes, there is some fudging of the timestamps if they do not align exactly with the source, but for the purposes of energy monitoring a fixed offset of 0–10 s presents far less of an error than a roughly-10 s reading arriving on time and being persisted to one of 10 possible time frames.


I have so many questions… deciding which route…

What’s the future direction for emonhub and the Tx? I heard there’s an emonhub project on the go…
Is the improvement of Tx sampling and radio currently on the cards? @TrystanLea

I’ll soon have a Tx and emonhub posted to me here, I was going to get it set up somewhere and have a play around with the code.

EDIT: I’ve just seen the meeting notes Trystan posted, I’ll have a read :slight_smile:
