It appears to be Windoze-only.
And it uses the Eclipse IDE.
I gave Eclipse a try about a year ago. The learning curve, while not difficult, was considerably steeper than the Arduino IDE's.
The other potential problem for OEMers is how they’ve implemented breakpoints. Once you have a breakpoint anywhere in your code (even somewhere you wouldn’t expect to be time sensitive), they enable a permanently asserted IRQ. From then on, the CPU spends pretty much all of its time in their ISR. After returning from an ISR, the AVR always executes one mainline instruction before it checks whether another IRQ is pending.
Effectively, after executing each instruction of your code they execute their entire ISR which checks the interrupted PC against a list of breakpoints to see if there’s a match. If not, it returns and executes one more of your instructions and repeats, so performance will be drastically reduced. For a lot of simple Arduino programs that probably doesn’t matter, and the utility of gdb outweighs the performance hit. But when you’re interfacing to something external, that timing difference can completely change the behaviour of the code. Anyone who’s put printfs in the main sampling loop quickly learns that’s not the way to go. Those 50/60Hz mains cycles are going to keep coming in at the same rate regardless of whether or not you’re debugging.
That’s pretty much true of any debugger, is it not?
Usually when you set a breakpoint, most debuggers replace the first instruction that corresponds to that line of C with a break/trap/int instruction, and make a note of what instruction was there so they can make things right again if/when the execution path gets there and you decide to continue. But until the execution path does get there, the CPU runs normally at full speed. That’s a bit harder on the AVR because it fetches opcodes from flash, not RAM, but there is a version of avr_stub.c that does exactly that: it rewrites the flash whenever you set or remove a breakpoint.
In that case, it’s ok to set a breakpoint outside of the critical timing code, safe in the knowledge that by the time you get to your breakpoint, the critical timing code will have executed normally and you can now take as long as you want at your breakpoint to poke around in variables.
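The reflash approach could be sketched roughly like this. Again, this is a simplified illustration, not the avr_stub.c source — the names are made up, and real flash programming on the AVR works page-at-a-time through the bootloader, which is glossed over here by simulating flash with a RAM array. The AVR BREAK opcode (0x9598) is real, though:

```c
/* Hypothetical sketch of trap-style breakpoints: overwrite the target
   opcode with BREAK, remembering the original so it can be restored
   when you continue. Flash is simulated with a RAM array; real code
   would rewrite a whole flash page via the bootloader. */
#include <stdint.h>

#define AVR_BREAK_OPCODE 0x9598u  /* AVR BREAK instruction encoding */

static uint16_t flash[256];       /* stand-in for program flash        */
static uint16_t saved_opcode;     /* opcode displaced by the breakpoint */
static int bp_addr = -1;          /* word address of the active bp      */

void set_trap_breakpoint(int addr) {
    saved_opcode = flash[addr];       /* note what instruction was there */
    flash[addr]  = AVR_BREAK_OPCODE;  /* "reflash" with BREAK            */
    bp_addr = addr;
}

void clear_trap_breakpoint(void) {
    if (bp_addr >= 0) {
        flash[bp_addr] = saved_opcode; /* make things right again        */
        bp_addr = -1;
    }
}
```

The key property is that between setting the breakpoint and hitting it, the CPU executes your code unmodified and at full speed, so code on either side of the breakpoint keeps its real-time behaviour.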
That’s not the case with the above approach. Even if you avoid putting breakpoints in the critical timing code, it’s going to make that critical timing code limp along because it’s effectively doing breakpoints by single-step/check/continue. All the way through your critical timing code it’s checking to see if execution has got to one of your breakpoints yet… it’s kinda’ breakpoint by polling.