I think my original enthusiasm for merging interfacers and reporters came from attempting to work out a bi-directional communication approach between RFM nodes and emoncms, with node data being read by the EmonHubJeeInterfacer and decoded with the [[[rx]]] datacodes, e.g.:
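Something along these lines in the [nodes] section (node id, names and codes are illustrative, not a real config):

    [[10]]
        nodename = emontx1
        [[[rx]]]
            names = power1, power2, vrms
            datacodes = h, h, h
            scales = 1, 1, 0.01
            units = W, W, V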
The decoded data is then posted to emoncms with the MQTT interfacer, which also subscribes to an emoncms topic issuing control signals. These are then put onto another emonhub internal channel, which the EmonHubJeeInterfacer subscribes to and sends the data out over the serial port, encoded using the [[[tx]]] datacodes, e.g.:
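A hypothetical [[[tx]]] sub-section would mirror [[[rx]]], describing how the outgoing values are encoded before going out over serial (again purely illustrative):

    [[10]]
        nodename = emontx1
        [[[tx]]]
            names = relay1, setpoint
            datacodes = b, h
            scales = 1, 0.1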
I saw both the EmonHubJeeInterfacer and the MQTT interfacer as having both input and output components and so the distinction between interfacers and reporters seemed unneeded to me.
Thinking about this a bit more, and reflecting on the fact that the input and output parts of an interfacer are blocking, I wonder if this combined input/output interfacer only applies to the EmonHubJeeInterfacer, which has to share a single serial port connection.
You're right (if I understand what you're saying): we could well have two MQTT interfacers, or one interfacer for incoming data to emonhub and one reporter for publishing data to emoncms, and perhaps a third for MQTT data in a different format. It would perhaps be important for the input/output routes to be non-blocking, on different threads.
I'm trying to think through the different scenarios: how many interfacers are there that need to share a resource (e.g. a serial port) for both input and output routes? And how many could have separate interfacers, reporters or reporter-like interfacers for input/output and different formats, as I think you're suggesting?
I'm thinking aloud here a bit…
Perhaps we do. I took the use of node and input names as an obvious given, and intended to alter this throughout the emonhub core, not just in individual interfacers. I do not see this as a huge problem, possibly a sizable job but not complicated. You have to remember emonhub was primarily handling JeeLib data, and even emoncms restricted users to node ids of just 0-32, so it appeared to make sense at the time to handle the nodes as integers for better error control; in hindsight I regret that was done and look forward to undoing those restrictions. Even the socket interfacer would benefit from using a name rather than a nodeid. This will impact how the [nodes] section is used, and I have thoughts on that for another discussion.
And that would be the correct way of doing it. If you want to buffer data whilst you change the node names, you need to do that in the input stage, not the output stages; emonhub should faithfully reflect the data as it was at that time. If the network is up 100% of the time, the data will be posted before you change the names, so I do not see why new changes should affect old data. The interfacers (and reporters) originally had 'pause' settings so you could pause either the input or output of any interfacer (or reporter).
No, there is no real direct comparison between an interfacer and a reporter; remember the reporters were threaded at an earlier date because OEMG was blocking serial data when the network was down. When I threaded the interfacers and made them bi-directional, I made no/minimal changes to the reporters whilst we reviewed the interfacers.
The run method doesn't exist in the interfacers. Basically we do not need to store the cargo objects, because whatever parsing the interfacer can do as data comes out of the buffer, it can do before the data goes into the buffer. The way the methods currently stand may need adjusting to suit the ideal solution, but I do not see an issue implementing the buffering; the bigger issue is deciding where, when and if buffering should be implemented.
Again I draw your attention to the distinction between a local/real-time/unbuffered/QoS1/key:value 2-way interfacer and a remote/confirmed-delivery/buffered/QoS2/indexed/timestamped reporter or reporter-like interfacer.
The use of the cargo object in the internal buffering allows all settings to be pulled in. As I have previously stated at some point, my long-term goal was to perhaps move the process_rx and process_tx methods to the Cargo object so that there was a distinction between 'interfacing' and 'processing'; the naming and decoding etc, as well as the routing, is all part of 'core processing'. This is just a design goal and doesn't need to be changed now; I only explain this to perhaps help you compartmentalize the emonhub structure in your mind.
Yep, I got that, you may have mentioned it previously (a few thousand times) but my opinion differs, so we need to assess it on other points too.
Great, we can at least then compare the options, and movement in either direction will be smoother as there will be existing examples to mimic.
I would need to look closer at your code to try and understand why that is happening. Yes, there is one thread and either a delayed send/publish or read/subscribe could block the other, but what are you going to send to if a website isn't available to read? Likewise, if you can't publish to a topic, what will you be listening to if the server isn't available?
That is not ideal as there should be default values to remove the necessity of defining everything explicitly.
In this instance that would be correct: they are both bi-directional interfacers, and the MQTT one should be unbuffered and 'broadcast' locally on a per-value QoS1 topic tree for local observation/consumption, BUT IMO there should also be an MQTT reporter(-like interfacer) AND/OR an HTTP reporter(-like interfacer) for buffering data and delivering it both swiftly and concisely whilst ensuring no data loss.
Almost! As above, I do not think they both need to be one-way; there are 2 clearly different types of outgoing traffic: instantaneous, real-time 'status' data, and 'historical' data to be processed and persisted by emoncms even if it is days old. The distinction is the type, not the direction.
There could be dozens of different types for specific purposes once we break the restraints of a single http or mqtt interfacer having to do everything. This is why we need to have the basic HTTP request and MQTT connect, subscribe and publish methods available to all interfacers, and there should be a generic MQTT (and HTTP) interfacer. As a very basic example, using a single MQTT interfacer instance you should be able to subscribe to /base/topicA, define the same channel in the pubchannels and subchannels, and republish that data to /base/topicB.
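In emonhub.conf terms that might look something like this (the interfacer Type and setting names here are illustrative, not an existing implementation):

    [[MQTTRelay]]
        Type = EmonHubMqttGenericInterfacer
        [[[init_settings]]]
            mqtt_host = 127.0.0.1
            mqtt_port = 1883
        [[[runtimesettings]]]
            pubchannels = ch1,
            subchannels = ch1,
            basetopic = base/
            subtopic = topicA
            pubtopic = topicB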
As an absolute minimum you would need to include all serial connections and most USB/serial connections, such as Bluetooth, Modbus and inverters etc. I also believe the socket interfacer would fit in this 'set', as you can open a single 2-way communication, listen and reply.
But I still don't see why the mqtt and http interfacers can't be 2-way despite input and output being separate actions; if we find it can't be done, or that we need more threads, we can explore that too, but for now I do not think it should be necessary.
As I've said above, I think emonhub should support the use of names throughout; the biggest hurdle there for me is that I must retain the ability to use indexed CSV, as I'm looking to make the payloads as small as possible for efficiency with either GSM or LoRaWAN. I too would like to see the use of naming, but not at the cost of doubling the memory used to buffer and the size of every request. In monetary terms it will cost twice as much to send key:values as indexed CSV over GSM. If you roll out the use of named inputs with the bulk upload as the only option, that would force users like me to name our inputs as '1', '2', '3' etc just so it will work; the requests would be double the size, and we would no longer be able to tell what the inputs are from emonhub.conf, as the (currently unused in CSV) input names would need replacing with numbers (very messy). Alternatively a setting or test would be needed, which could cause confusion or, if used incorrectly, create duplicate inputs.
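To put rough numbers on that, compare the same frame in the two forms (timestamps and values illustrative):

    indexed CSV bulk frame:  [[1547380000,10,100,250,245]]
    key:value equivalent:    [{"time":1547380000,"node":"emontx1","power1":100,"power2":250,"vrms":245}]

The second is roughly double the bytes on the wire, and over GSM that is double the cost.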
The root of this issue pre-dates any emonhub version/variant. It lies in emoncms and the way that indexes and input names share the same field, forcing a decision to use one or the other but never both. For these different formats to co-exist without splitting users into 2 separate camps, there needs to be an 'index' field added to the input table to allow any input to be referenced by either its name or its index; the default behaviour can continue as it is, so that new and unnamed inputs have their 'name' set to the same as their 'index' until such time as a user changes it.
Once we have that, we can streamline the interfacer to use indexed inputs, and whenever a change is detected in the emonhub.conf settings it could send a change of name rather than creating a new input (which is what your current implementation would do), e.g. changing something like:
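    names = power1, power2, vrms

to

    names = house_power, solar_power, vrms

(names purely illustrative)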
which would preserve the feeds and the processing. An example of this type of action can be seen in Stuart's emonNotify; that too sends input names to his device only on change.
Please consider looking at emoncms to allow key:value and indexed CSV to be used together at the same time. I am not trying to make indexed CSV the only choice, or even the default or preferred choice; I just see the benefits of both and I want both. If forced to choose, CSV wins hands down, which is why we will never agree on key:value only. I feel you are so invested in one specific approach that it doesn't even occur to you that others may have a different opinion, application or requirements, and that there might actually be good reason for that.
Awesome! That is great to hear; that could resolve many issues and give us many more options, not to mention resolve the current naming confusion surrounding keys, nodeids, nodenames, inputnames and descriptions etc.
I've sent you a pull request on the experimental branch with the names support as currently implemented on the emon-pi variant, but without the rx/tx sublevel, which I would propose in a second pull request. Would you be happy to accept it?
So as not to add too many things at once, I will wait for your reply on this before asking a question on the interfacers and reporters relating to your comments above.
I will pull it into a fresh branch of the experimental branch so I can compare in situ rather than as a patch, for better context.
I am still using the 'original' version that only has the reporters threaded; I'm not familiar enough with the experimental branch to just recognize code out of context, since I haven't really looked at it for over 2 years aside from checking bits when debugging the emon-pi variant.
The introduction of naming is a core requirement, and I need to evaluate what you propose and think about how I would have done it (ie if we weren't working backwards). The first thing I notice is that you are adding the names and nodename to the cargo, which is probably the way to go, but not even the datacodes and scales are held in the cargo yet, so before committing to include changes you have tagged on to emonhub to get a specific job done, I want to think about the overall structure a little with that in mind.
This sounds a little odd, as the names must be part of a sub-group, surely?
I will delve in deeper later or over the weekend.
It would be good to get all our ideas, requirements and concerns out before coding particular fixes; we should be deciding what we want it to do and then implementing with the overall picture in mind, rather than just tackling small parts of the overall picture and then trying to force the next bit to fit the last bit.
Reading your comments above, I think I may have missed one crucial intention behind the reporters, in at least the experimental branch. Looking at the MQTT interfacer in the experimental branch, which is an 'output' interfacer rather than a reporter, it uses send, which is called from run, and there is no buffering. Whereas the reporter has run, which then calls add, which then formats and adds the formatted output to the reporter's buffer. For the reporter the actual sending of data is triggered through run > action > flush > _process_post, while the interfacer has a much shorter chain: run > send.
The run method doesn't exist in the interfacers.
Ah OK, in the development branch it doesn't look like it's used, but in the experimental branch it is, I guess because the experimental branch implements interfacer threading.
Was it always your intention that interfacers were input/output constructs, and that the key difference with a reporter was the buffering? I had always thought of interfacers as being inputs and reporters as being outputs, and did not realise that it was actually the buffering that was the difference.
In the emon-pi variant I have now implemented the MQTT interfacer as a 'reporter-like interfacer' with buffering and the run > action > flush > _process_post chain.
Should I change that back to run > send and remove the buffer? I'm not really using it as it is. That said, it could start buffering if MQTT went down?
To double-check that I understand correctly: is it your intention that the MQTT comms to emoncms are always via the buffered QoS2 'reporter-like' route? i.e. the emoncms phpmqtt_input script would only subscribe to that topic and not the QoS1 topic? And that the format of the payload would be a 'bulk' format array?
So the key distinction here is:
Interfacer direct QoS1
Interfacer buffered QoS2 (previously known as a reporter)
Can an interfacer be configured to be a QoS1 direct output or QoS2 buffered output with a single settings change? Is there any reason why this would not work?
You could have:
    # in run(), when a cargo item is ready to go out:
    if buffer_enabled:
        self.add(cargo)    # buffered, confirmed delivery
    else:
        self.send(cargo)   # immediate, unbuffered publish
or you might make the call later, in add():
    if buffer_enabled:
        self.buffer.storeItem(f)   # f is the formatted frame; hold until delivery is confirmed
    else:
        self.send(f)               # send straight out
or you might pass it through the buffer in all cases but never return False from _process_post.
Take the case of the MQTT interfacer publishing on a topic of the format emon/emontx/power1 with payload 100. There is no time sent, so even if the broker went down and _process_post returned False, attempting to send many buffered emon/emontx/power1 updates sequentially would not make sense, as each would overwrite the last and be a waste of bandwidth…
The MQTT implementation in the development branch was only a proposed MQTT interfacer implementation; the basic framework is there as a PoC, but there is no real processing of the data in either direction, as there was never a proper discussion on the format we were aiming for. This is essentially an incomplete generic mqtt interfacer, but it is bi-directional. With MQTT, emonhub doesn't need to perform a 'read' function; it just needs to maintain the MQTT connection, and when a message comes in the 'on_message' function is called (in this instance all it does is print a log message). Only when publishing a topic does the emonhub code need to call send, hence why it looks like a 'send only' interfacer: the interfacer's 'read' function is redundant for MQTT.
But you are right, there is no buffering; that is because at the time I was undecided about whether to add the buffering to the interfacers (doing away with reporters) or to retain the buffered, output-only reporters.
Essentially yes. The interfacers have always been bi-directional; the JeeInterfacer has sent out the emonGLCD time since day one, and it has also configured the rfm2pi using serial output. The reporters have always had buffering and 'happened' to also be one-way only.
When putting together the routing code for the experimental version I retained reporters (and included them in the routing), as we did not have any equivalent interfacers yet, and my experimental version was compatible with existing emonhub installs with reporters (and will still need to be), so that we could introduce new interfacers that potentially replaced the reporters. However, whilst experimenting with reporter-like interfacers, I discovered that there may well be a benefit to keeping the distinction between non-buffered 2-way and buffered one-way interfacers/reporters.
Yes! Although the priority here is that the connection between emonhub and emoncms is as efficient and reliable as possible, using buffered data and confirmed bulk delivery over MQTT or HTTP(S).
I understand why buffered output may seem unnecessary when you are thinking about MQTT, because you only picture emonhub and emoncms on the same machine. My view on the connection between emonhub and emoncms is that it could well be on the same machine, but it could also be LAN or WAN connected; the physical location of the components doesn't need to dictate the connection's characteristics. If we make a connection that works regardless of the location of emonhub and emoncms relative to each other, it will suit all cases, including a local install.
In fact, if you really wanted a connection that assumed emoncms and emonhub were on the same machine and that nothing would ever interrupt the connection between them, you could use a non-buffered, QoS1, per-key topic (2-way local realtime 'status' only) type interfacer.
But I fail to see why we would need to force a situation where a frame of data is split up and published across several different topics individually, only so emoncms can then subscribe to the base topic and process all those individual messages, rather than just passing the complete frame to emoncms in one go. MQTT is supposed to provide a lightweight framework, which it fails to do if you are increasing the workload at both ends by forcing per-key topics unnecessarily. The per-key topic tree can be supplied separately (if required) by emonhub for local consumption.
Plus, making users understand why a local emoncms uses a different interfacer to a remote emoncms would be tricky, and supporting 2 methods depending on the location of emoncms is too complex.
Always using a reporter (or reporter-like interfacer) when connecting to any and all emoncms instance(s), regardless of whether it's MQTT or HTTP(S), is straightforward, very clear, and suits all scenarios with one solution.
This was a consideration, and possibly even the favorite solution during initial development, but I thought that may be overly complicated, and at the time I did not feel it was a decision that was needed, since I was retaining reporters for compatibility and the framework was more important than any individual interfacer implementations. Since then (over the last 2 years) it has become increasingly obvious they need to be distinctly different. Even if we did implement a 'switch' setting, I would consider making that a hardcoded switch that was the only difference between an 'mqtt-local' and an 'mqtt-buffered' interfacer, so it was clear what the differences are; this ongoing discussion is, I think, testament to the need for that.
I think the best way forward would be to retain reporters (initially at least) for backwards compatibility with the original version. Then, with the reporters there, whilst still supporting the existing emon-pi mqtt and http interfacers, we can transition emon-pi users to using the reporters (or all emonhub users to reporter-like interfacers) to provide a more streamlined buffered, bulk, confirmed-delivery connection to emoncms.
I will answer the questions about the 'run', 'add' and 'send' methods in another post, as I want to keep discussion about desired functionality separate from the coding; they are different discussions, and I want to avoid the structure of the Python functions defining the desired functionality of emonhub, it needs to be the other way around.
Yes. In fact I chose the name 'interfacers' because the previously named 'listeners' suggested they were only one-way, despite the rfm2pi 'listener' transmitting data out over RFM to other nodes.
Previously, in OEMG, this was further confused by the 'OemGatewayRFM2PiListenerRepeater', which was an alternative 'listener' (as the serial rfm2pi could only have a serial connection) that had an inbuilt 'socket' listener so it could receive data via a socket (like a socket interfacer) and transmit over RFM; so technically it was a 2-way RFM 'listener/transmitter' and a socket 'listener' rolled into one.
I chose 'interfacers' because the code it refers to interfaces with other stuff, potentially in both directions.
Likewise, the original OEMG had 'buffers' (rather than 'reporters'), and those 'buffers' were responsible for both buffering data and sending the buffered data on to emoncms. When the transition to emonHub was made, the 'buffers' were renamed 'dispatchers', and I later renamed them 'reporters'. IMO the name 'dispatchers' didn't fit the fact they buffered data; dispatching the data sounded (to me) like it was just casting it off into the WWW without a care, whereas 'reporter' sounded like it was compiling info and then ensuring that info reached its audience. The latter change was less important, but I rolled it out at the same time as renaming the 'listeners' as 'interfacers', which IMO was important.
So,
Back to emonHub. Originally each interfacer (listener) had a run() method which was called for each interfacer instance every time emonhub looped, in addition to a separate call to a read() function. Any data that was received from any interfacer was then added to each reporter (dispatcher) via its add() function (as in add to the buffer); then, still from within emonhub.py's run(), each reporter's flush() was called regardless of whether data was present or not. (See here)
When we made the reporters threaded so they didn't block the serial port comms, it was necessary to have a run() function in the reporter for the threading module to call. This run() was not implemented to be the same as the similarly named run() function in the interfacers; its name was a requirement of the threading implementation. The new run() function then called the previously used add() function, and also, because of the threading, the flush() method had to be called from here too. (See here).
The run() function in the interfacers was there just to action any tasks aside from reading/sending data in each loop (eg transmitting the time in the JeeInterfacer). When I implemented threaded interfacers in the experimental version, I renamed the interfacers' run() function action(), so as to free up the run() name for threading.
The run() function in the current interfacers basically loops through 3 functions: call read() and add any data found to the rxq (RX queue); call action() (eg send the emonGLCD time in the Jee interfacer); and, IF there's any data on the txq (TX queue), call the send() function for each Cargo in that queue.
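A rough sketch of that loop (not the actual source; attribute names are approximate):

    def run(self):
        # main loop of a threaded interfacer
        while not self.stop:
            # 1. poll the interface and queue anything received for the router
            rxc = self.read()
            if rxc:
                self.rxq.put(rxc)
            # 2. per-loop housekeeping, eg the Jee interfacer sending the emonGLCD time
            self.action()
            # 3. send anything the router has queued for this interfacer
            while not self.txq.empty():
                self.send(self.txq.get())
            time.sleep(0.1)  # brief pause (import time assumed)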
Whereas the run() function within the reporters is still the previous incarnation that came about while threading the reporters. If I were to make the reporter structure align with the interfacers now, it would use the same/similar interfacer run() method; read() would pass, as it wouldn't get subclassed; send(), which is essentially 'deal with outbound traffic', would add the data to the reporter's buffer after parsing; and the action() function would just keep trying to flush the buffer. There would possibly be no reason to have separate add() and flush() functions, as they may well effectively just be renamed as the send() and action() functions.
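Sketched out (method names borrowed from the existing buffer code and so only indicative):

    def send(self, cargo):
        # outbound traffic: parse/format, then store rather than post immediately
        frame = self._process_tx(cargo)
        self.buffer.storeItem(frame)

    def action(self):
        # keep trying to flush; discard items only once delivery is confirmed
        items = self.buffer.retrieveItems(30)  # batch size illustrative
        if items and self._process_post(items):
            self.buffer.discardLastRetrievedItems(len(items))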
Some code does need to be separated out, such as a 'send request' function, so that any HTTP-based reporter (or interfacer?) can just call send_request() and have a reply passed back.
So currently, yes, the reporters have a longer chain of methods because they have never been updated to the new way of doing things as per the interfacers; the long chain is a result of changes, not of specific design.
Further to all this, the current run() function in the interfacers (and probably the reporters) will probably undergo some further change in the near future, in relation to the discussion about crashed threads not reporting a traceback to the log and the recent use of @decorators to try and pass the error details out to the main thread.
Thanks again for taking the time to write such a detailed set of answers. I prepared the following reply in relation to your first post and have now read your second (thanks again!), which I will reply to in a second post.
I've tried to summarise in order to check my understanding and answer in a set of points:
The experimental branch MQTT interfacer is a PoC. The pub format is not decided and on_message is a stub, but the concept of a generic MQTT interfacer is an important point: certain aspects of the generic interfacer can be reused by specific-format interfacers (init, connection etc). What should the generic interfacer be? Does the proposed reporter-like MQTT interfacer inherit the generic interfacer, and just overwrite send with a blank method perhaps? Perhaps we should come back to this question in future.
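For illustration, the sort of relationship in question might look like this (class and method names hypothetical):

    class EmonHubMqttGenericInterfacer(EmonHubInterfacer):
        # owns client setup, connect/reconnect and on_message dispatch
        def send(self, cargo):
            # generic behaviour: publish each value on its own topic
            ...

    class EmonHubMqttBulkInterfacer(EmonHubMqttGenericInterfacer):
        # reporter-like: inherits the connection handling but overrides send
        def send(self, cargo):
            # buffer for confirmed bulk delivery instead of publishing per value
            self.buffer.storeItem(self._format_bulk(cargo))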
Implement a bulk-mode MQTT reporter-like interfacer (reasons understood: emoncms can be on the WAN, QoS2, bandwidth efficiency). I'm happy with the reasons for this and am working towards this as the main way to send data to emoncms.
Keep a per-topic MQTT interfacer at QoS1 as an option where users need it, but switch to the buffered QoS2 reporter-like interfacer for the emoncms comms.
A QoS1/QoS2 unbuffered/buffered switch, while possible, may not make a lot of sense; in the MQTT example the format emon/node/inputname:value cannot be switched into QoS2 buffered mode, as there is no time included. A significantly different send or _process_post method is required for the bulk format vs the per-topic tree format.
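To make that concrete (topics, timestamps and values illustrative):

    per-topic QoS1:  topic emon/emontx/power1, payload 100
                     (no timestamp, so resending a backlog is pointless)
    bulk QoS2:       topic emoncms/input/bulk, payload [[1547380000,10,100,250],[1547380010,10,102,251]]
                     (timestamped, so buffered frames survive an outage)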
We still have an ongoing question as to whether the long-term direction is to have reporters as a separate thing from interfacers, or to have reporter-like interfacers. A reporter-like interfacer can re-use the interfacer code in emonhub.py: loading, starting of the interfacer thread and calling of run. A reporter-like interfacer is just an interfacer that implements a buffer, achieving QoS2 with transmitted time and proof of receipt. I'm siding towards the reporter-like interfacer solution, with the option for any interfacer to implement the buffered QoS2 if required; there's not a huge amount of difference, the benefit is probably just the avoidance of about 20 lines of reporter-specific code in emonhub.py. I'm not sure that separate reporters are more understandable than reporter-like interfacers, considering I completely missed the intention of reporters up to this point as being QoS2 & buffered. It might be better to highlight to prospective developers that there are two 'output' modes, un-buffered QoS1 and buffered QoS2, and that they can re-use inherited buffering code implemented in the base class rather than writing their own.
We have a question about backwards compatibility: maintaining development branch reporter support. But then for emon-pi variant users, backwards compatibility would require the same EmonHubEmoncmsHTTPInterfacer, which is a reporter-like interfacer. One possibility here is that we maintain backwards compatibility by translating the reporters entry in emonhub.conf to point to the EmonHubEmoncmsHTTPInterfacer reporter-like interfacer.
We have an ongoing question about interfacer/reporter-like naming.
We could remove the EmonHub and Interfacer parts of the names, as they are repeated and are nested in the configuration file; the main challenge of changing the naming is backwards compatibility, but that is not insurmountable.
As an interim step for existing emon-pi variant users, as I think we've still got a lot of work to do to implement all of the above, I've merged my emonhub_buffer branch changes into the emon-pi branch. At the least it fixes the HTTP buffer issue where the interfacer would attempt to upload the entire buffer in a single request (something I'm quite worried about, as I've been seeing large spikes on emoncms.org for a number of months), and it reverts a whole load of code to be closer to the original experimental branch implementation.
We do need to give this considerably more thought; luckily we do not need to make a decision right now. A couple of further points to consider here. We can easily move code such as send_request, connect_serial and on_message etc to an emonhub_common.py file if we decide to retain both interfacers and reporters and want to share code; sharing elements of code needn't be the reason to lump them all in together. We also need to decide about potential blocking from trying to post a huge buffered backlog. And does a reporter-like interfacer even have a read function? If it doesn't, is it really a proper interfacer?
[edit] A third point is that the buffering may well be expanded to use databases or write to disk, so the differences between interfacers and reporter-like interfacers may become greater still. Perhaps the buffer code should be unique to reporters only.
Much of this depends on the outcome of the above. Something else to note here is that emon-pi users will undoubtedly update via the emonPi update script, so it is possible to make regex changes to the emonhub.conf file at the same time as pulling in different code; changes in interfacer/reporter names will therefore not require a manual change or user involvement/understanding. Any changes will need to be well documented though, as there are a lot of posts and guides that refer to older settings; this latter part is unavoidable regardless of how we approach this, as there will be changes whatever the level of backwards compatibility.
We do
Although we have a fairly clean sheet here, it is important to get right. My current thoughts are to match the functionality of the serial and socket interfacers, where a simple string of values is published or subscribed to on a common base topic with pub and sub topic extensions, all 3 being defined in the conf, with perhaps a switch to use the last topic level and the first value in a frame as a nodeid/name to allow some separation. We will need to try/test a couple of things here; the key is to keep it basic and generic. We can always add another mqtt interfacer to do something particular, but a complex generic interfacer could block some implementations, which is the current situation with the 'emonhubMQTTinterfacer' in the emon-pi variant.
Essentially yes, BUT! There remain the questions of what it is actually called in the end, and whether it is an interfacer, a reporter or a reporter-like interfacer, to be thrashed out.
As for the release of an interim step, I'm not sure I would go that route, but I am not going to debate or even think about it too much as I'm sure your mind is set.
Thanks Paul, great, I think we're making progress with understanding the scope of the work required and compiling a kind of todo list of things to work on here. We've outlined both significant changes to emoncms (indexed inputs, MQTT bulk format) and emonhub. What do you think is the next step? How do we break this down into manageable chunks?
Not entirely sure, but I do know I need to get into the code. Currently, between work, the forum and these really long discussions, I am not finding any time for emonhub coding, so for me the priority is to set up a test environment and start playing, re-familiarizing myself with the code based on what we have so far.
I'm primarily working from memory at the moment, so perhaps once I get rolling, some answers to the outstanding questions might start flowing.
You are right, there are the changes to emoncms to consider as well. It would be good to get an 'emonhub' branch set up so we can start on those changes, but NOT change anything in the master/stable branches until we get both emonhub and emoncms's emonhub branch to a place we are happy with. We do not want to make rash changes to emoncms that then dictate a less than agreeable implementation in emonhub; however, it would be very useful to have the changes there in the emoncms 'emonhub' branch when we get to test emonhub.
Regarding the emoncms 'mqtt' API, what are your thoughts on basically making 'mqtt' a parallel route to similar/same API calls?
eg the bulk upload over MQTT would be to /emoncms/input/bulk and the payload would be [[ts,id,v1,v2,v3],[ts,id,v1,v2,v3]]. Authentication and identification are something we need to think about too. I would like to see the apikey in there somewhere; I'm guessing it should be in the payload so it isn't exposed to unauthorized users (that do have access to the MQTT server), although an apikey, username or userid in the topic tree would then allow for an ACL with the apikey or password granting access to that one branch. I know this isn't needed for the emonpi install, but I bring it up now so that we might move closer to a multi-user MQTT implementation in the future; currently there can only be one MQTT account per emoncms server.
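For illustration, a minimal sketch of such a publish from the emonhub side, assuming paho-mqtt; the topic, the apikey placement and all values are illustrative, as none of this is decided:

    import json, time
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("127.0.0.1", 1883)
    client.loop_start()

    # bulk payload mirroring the HTTP bulk API, [[ts,id,v1,v2,...],...],
    # with the apikey carried in the payload rather than the topic
    payload = {"apikey": "abc123",
               "data": [[int(time.time()) - 10, 10, 100, 250],
                        [int(time.time()), 10, 102, 251]]}
    msg = client.publish("emoncms/input/bulk", json.dumps(payload), qos=2)
    msg.wait_for_publish()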
I would also like to add an emoncms HTTP interfacer (non-reporter-like) that uses the 'fetch' API, so we can publish emoncms feed data via emonhub rather than using the 'connect, send and die' publish-to-MQTT process. Could we also do this via MQTT? If we mimic the HTTP APIs in MQTT, in theory we can use the fetch API, but instead of 'replying' to the calling HTTP request it could trigger a publish to a topic (eg emoncms/outbound/username/reference), where the 'reference' is a unique reference for this query that emonhub supplied and can identify (thinking out loud a bit here, I know what I'm after, but not the best way to do it yet). This would allow users to use emonHub to
Regarding the 'bulk' reporters or reporter-like interfacers: are we agreed that they will use the indexed CSV format, to minimize the RAM used for buffering and the bandwidth used for passing the payload to emoncms? Do we therefore want to look at updating emoncms input names with an API call when changes are detected to the node/input names in emonhub.conf (and emonhub is restarted)?
I will have a think about the emoncms changes and the topic/payload formats you mention and the apikey question.
The fetch via MQTT should be possible, but I think quite complex to implement; not sure, I will have to think about that as well.
Yes, agreed.
Yes, I guess so… that's a complex action to add.
I'll try and summarise all of our points in a list so that we can refer back to it as we work. It might be good to know which one of us is going to work on each point, or whether we both need to explore a point and come back to it once we've both had a chance to test ideas. It looks like a big job; how much time do we need to give ourselves to do the job properly? It looks like it will take much longer than a couple of weeks!! Especially given other commitments.