Having worked with a lot of instrumentation and control stuff, I really, really like RS-232. It just works. I made the mistake on one project of going all USB, and it was a complete nightmare. Basically, anything you can think of went wrong: USB controllers maxing out on the number of devices (despite the device count being a mere fraction of what the USB spec allows), devices causing other devices not to work when on the same hub, devices disconnecting, etc. I couldn't even get manufacturers to tell me how many USB devices their USB controller supported. I often had to use tools like USB Device Tree Viewer (https://www.uwe-sieber.de/usbtreeview_e.html) to understand what was going on. There was another USB debug tool that I used that I've forgotten the name of (maybe USBDeview). And if USB devices disconnect, the only way to guarantee getting back to them is to restart the OS process, which makes your software very fragile. Same thing with USB cameras vs something like Camera Link. A camera's USB driver crashing would make you restart your entire program, making it very hard to build systems. Camera Link, another serial protocol, also just works.
RS-232 and RS-485 are just so reliable. The higher voltage of +/-12V makes it more resilient to noise, and the protocols are just simple. It isn't the fastest around, but it can still be pretty fast depending on how the protocols are implemented.
I like it too, but as a comms engineer working in electrical substations I have encountered many, many situations where it didn't just work, RS-485/422 too. Issues with inter-character delay, timing of control signals, different earth potentials, electrical interference, and mangling of signals when going through multiplexers or media converters are common. Now that everything is fibre Ethernet, my commissioning times are waaaay faster.
As I recall, there was a 1990s Macintosh where the serial port used a proprietary connector (of course) with fewer pins, so Apple decided to double up and use one pin for both Data Terminal Ready (DTR) and Clear to Send (CTS). (Or we had cables that connected two pins?)
Many modems would hang up if you dropped DTR. Enabling this behavior is good practice: it prevents the modem from accidentally staying connected after you're done.
Enabling hardware flow control is also a good practice. If the Mac can tell the sender to wait for a moment, that's better than dropping data.
Perhaps you can see where this is going. If you enable both of these, everything appears to work fine for a while. That is, until the Mac falls behind (scrolling a lot in a terminal window, for example) and needs to actually use hardware flow control. Then, rather than pausing the flow of data, it hangs up the modem.
And your first thought when a modem hangs up out of nowhere is that it's a modem issue: noise on the phone line, a bad modem implementation, an incompatibility between two different modems, etc. So you waste time looking at those as causes.
The solution was to either disable hardware flow control or to configure the modem to ignore DTR and use +++ ATH to hang up instead. Disabling hardware flow control makes PPP (etc.) perform horribly because packets get corrupted and re-sent. And this is another deceptive problem because the modem speed appears to have plummeted but actually the modem is working fine.
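For reference, the modem-side settings being juggled here, in standard Hayes command syntax (exact behavior varies between modems, so treat this as a sketch and check the manual):

```
AT&D2   hang up when the computer drops DTR (the troublesome setting above)
AT&D0   ignore DTR entirely
+++     escape from data mode back to command mode (needs guard time around it)
ATH     hang up
```

With `AT&D0` set, the Mac's shared DTR/CTS pin can toggle freely for flow control without the modem interpreting it as a hang-up request.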
Well... as long as everyone's configuration is correctly hand-set. As the video states, RS-232 doesn't have any way to transmit a clock. So if one end is talking 9600 baud and the other wants to talk at, say, 56000 baud, then... no, it doesn't just work.
My favorite was the time we received a new batch of controllers from a vendor, same revision number, and their RX and TX pins were swapped. When working with RS-232, it's best to have a handful of null modem adapters and gender changers in your pocket.
Laplink cables had DB-9 and DB-25 on both ends with the crossover built-in.
There were some male-to-male cables that had the crossover built in, while others didn't. Then there were null modem adapter dongles for straight-through cables; they were either male-female or male-male, depending on the cable.
Reminds me about older Ethernet: before Auto-MDI/MDIX on most NICs and switches, crossover RJ-45 cables were needed.
Different item. While gender benders could optionally be built with a null modem inside, as a standalone item their purpose was to link incompatible male and female ports.
Having both a null modem and a gender bender end-to-end was common.
Still is! I carry around gender benders, null modem adapters and 120-ohm CAN DB9 terminators every day, along with USB-to-serial, PEAK-CAN-to-USB, Kvaser-to-USB, and IXXAT-to-USB adapters.
These aren’t dead technologies by any stretch, not that you were implying that.
> Sorry if you don't see the pejorative connotations in the other term. We avoided it because we didn't wish to offend people.
You remind me of one of the tech leads I have the occasional displeasure of interacting with. He'll blow in with some new work for us to do, and a stack of reasons to do it. The topmost in the stack he'll provide, and the rest of the stack he keeps hidden. The problem with his stack of reasons is twofold:
1) It takes no less than five (and typically fifteen) minutes to get through the stack.
2) Only the reason at the base of the stack is non-bogus. All of the others are calibrated to sound _great_ to folks who don't work on the thing (and talk to customers who use the thing) day in and day out.
Next time, start with the reason at the bottom of your stack when you're talking to technical folks like us. ;)
It does, because that is solved trivially by documentation. By "just works" I didn't mean plug and play. (USB isn't really plug and play anyway, by virtue of being terrible.)
You can also design the protocol to auto-detect the baud rate. Some protocols even transmit 0x55 at the start of every packet; its alternating bit pattern lets the receiver synchronize its clock.
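The reason 0x55 works so well for this: sent LSB-first with a start and stop bit, it toggles the line every single bit period, so any two consecutive edges are exactly one bit time apart. A minimal sketch of estimating the baud rate from edge timestamps (e.g. captured with a logic analyzer; the function name and input format are my own, not from any particular library):

```python
# Sketch: estimate baud rate from the edge timestamps of a 0x55 sync byte.
# 0x55 on the wire (start bit + alternating data bits) produces an edge
# every bit period, so the gap between consecutive edges is one bit time.
from statistics import median

def estimate_baud(edge_times):
    """edge_times: rising/falling edge timestamps in seconds, in order."""
    gaps = [b - a for a, b in zip(edge_times, edge_times[1:])]
    bit_time = median(gaps)  # median rejects an occasional missed edge
    return 1.0 / bit_time

# Simulated edges one bit time apart at 9600 baud:
edges = [n / 9600 for n in range(10)]
print(round(estimate_baud(edges)))  # -> 9600
```

In practice you would then snap the estimate to the nearest standard rate (9600, 19200, 115200, ...) since real clocks are a little off.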
Sometimes you get a device from a client who got it from another company who got it from etc etc. And it's been configured via internal memory to talk at some baud rate with some parity, and if you're really unlucky, it won't transmit unless it's received a command. And, like somebody else commented above, sometimes the Tx and Rx pins are just the wrong way around.
Your experiences with USB sound incredibly frustrating. But RS232 can also be crap in its own unique ways.
But to be fair, I do still rather like working on well behaved RS232/422/485 equipment, where you plug it in, set it to 9600 baud 8N1, and you just start seeing a stream of easily parsed text scrolling down your terminal :)
Most of the issues with RS-232 can be front-loaded though, and they are understandable. Once you get it going and document what's what, it's fine. For USB though, reaching an understanding of the actual problem is basically impossible.
With RS232 you can hook up an old oscilloscope and measure what the right baud rate should be. Even in the "won't transmit unless it receives" case, a sweep on a waveform generator and well configured trigger will get you on your way. USB you'd need much nicer tools handy to get to the bottom of an issue.
You are right on about the joy of it being 9600 8N1 on the first try.
I've done a bunch of work with RS232 in the machining industry (CNC). Getting my mind around both the software and hardware control sequences was the most challenging part.
You can overcome the baud challenges with scripts that loop through common baud rates until alphanumeric characters are found.
It's also nice that the same few windows applications have been in use for 20 years or so (I specifically worked with RS232 to TCP/IP).
With a buffer, it's probably pretty easy to guess the baud rate, similar to encoding guessers for strings or csv.Sniffer in the Python standard library.
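The scoring half of such a sweep is simple to sketch. This is my own illustration, not any existing tool: `best_baud` takes a caller-supplied `read_at_baud` function (in real life, a wrapper that reopens the serial port at each rate and reads a buffer) and keeps the rate whose output looks most like text, since a wrong rate mostly yields framing garbage:

```python
# Sketch: score how "text-like" a chunk of received bytes is, then sweep
# candidate baud rates and keep the best-scoring one.
COMMON_BAUDS = [9600, 19200, 38400, 57600, 115200, 4800, 2400, 1200]

def text_score(buf: bytes) -> float:
    """Fraction of bytes that are printable ASCII or common whitespace."""
    if not buf:
        return 0.0
    ok = sum(1 for b in buf if 32 <= b < 127 or b in (9, 10, 13))
    return ok / len(buf)

def best_baud(read_at_baud) -> int:
    """read_at_baud(baud) -> bytes; e.g. a serial read at that rate."""
    return max(COMMON_BAUDS, key=lambda baud: text_score(read_at_baud(baud)))
```

The same idea extends to sweeping parity and stop-bit settings, at the cost of a longer loop.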
You do need decently stable clocks for 232 and friends. Anything clocked by an RC oscillator is probably going to cause some sadness eventually.
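A rough back-of-the-envelope for why, assuming the receiver samples mid-bit and only resynchronizes on the start-bit edge (the usual UART behavior): over a 10-bit 8N1 frame, the cumulative clock drift has to stay under half a bit period.

```python
# Rough sketch of UART clock tolerance for an 8N1 frame (10 bits total).
def max_mismatch(bits_per_frame: int = 10) -> float:
    # The last bit is sampled (bits_per_frame - 0.5) bit times after the
    # start edge; cumulative drift must stay under half a bit period or
    # the sample lands in the neighboring bit cell.
    last_sample = bits_per_frame - 0.5
    return 0.5 / last_sample

total = max_mismatch()   # ~5.3% total mismatch between both ends
per_side = total / 2     # ~2.6% per device, before noise margin
print(f"{total:.1%} total, {per_side:.1%} per side")
```

An RC oscillator spec'd at a few percent, plus temperature and voltage drift, can eat that whole budget, hence the eventual sadness.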
I2C is terrible in so many ways. (Don't send it off-board. Just don't. Ever. Trust me on this one. I don't care what you read about the bus capacitance spec, do. not. do. this.) Sending out a clock on a 400pF on-card bus isn't too bad, but when you have a kilometer-long cable... yeah, we'd rather not send a clock down that, thank you. Hence the use of self-clocked or pray-we-have-similar-clock-frequencies protocols.
3.3V or 5V devices aren't RS-232 and people need to be careful about that. They're just UARTs with regular old logic levels. RS-232 or 422 or 485 are Serious Business levels to go out and do battle with the mean Real World and the only place they should ever land is at a dedicated transceiver. Full stop.
No, I'm saying you can't send I2C one meter off-board. Even six inches in anything but a gentle EMC environment can be severe trouble without careful design. And yet I've seen people try to run it ~180cm next to horrifying noise sources. Surprise, surprise, that doesn't work out very well.
The root of the problem is that I2C has serious EMC immunity issues. It's well known and appreciated that its drive is weak and open-drain. The bus buffers and especially differential drivers can and do help there. (And will get you past a meter.) What's less well recognized is that a single glitch pulse on SCL knocks all the internal state machines out of whack and requires a bus reset to fix them. Hope you're doing that when your bus is idle or when you're getting anomalies! Most people don't. The Nexus 4 phones sure didn't; this is why their light sensors went dead or crazy or both after a while of uptime.
All of that gets easier to handle if there's a nice, big, low-impedance ground plane nearby, which is why you don't see so much trouble when it stays on the PCB.
> oops, just connected 12V RS-232 to a 3.3V device
Maybe or maybe not. I ran 12V RS-232 levels into a 5V Atmel 8515 for a year or two 24/7 without issues, and tens of thousands, maybe more, have too. And that's a CMOS part.
(This was for a paytv interface sitting between an iso7816 slot and an x86 based emulator. The Atmel 8515 did the inversion in software).
The RS-232 side on the PC would have no issues interpreting +5V as logic 0 and 0V as logic 1.
Some would set up a MAX232 or MAX233, but it wasn't necessary.
Others would use an MC1489 to at least convert the -12V and +12V into 5V and 0V by running the signal through the chip's receiver, but again, that was out of caution and not actually necessary.
You can sometimes get away with this if your cables are short and your EMC environment is gentle. (An EE wouldn't try it, but only because sticking the transceiver in is quite easy.) You can never get away with this if your cables are long and your EMC environment is hostile. That's why we have 232 and, especially, 422/485: for when the job is hard.
Did it over ~100' of dollar-store grade RJ-11. (Maybe that's long, maybe that's short, depends on perspective). Was a residential environment, so EMC situation was nothing like some industrial environments.
When I was doing embedded work all our terminals simply remembered the last settings and it was like we would set it once and not have to think about it again.
The voltage thing is real (and real stupid IME) but we never ran into clock issues that would skew baud rates one way or another.
Hmmh, I rather have a love/hate relationship with it. It depends on context, I suppose. In an earlier era, Unix servers (and non-Unix minis before that) and other equipment (some network routers and switches to this day) offered their system console via RS-232. 9600 baud 8N1 was common, but not universal. The developers in charge of our enterprise file server were impatient and hard-coded the console to 115200 baud (because that was the maximum speed PCs generally supported at the time), which not all "console servers" were able to cope with ...
Then there was the question: how is it wired? DTE or DCE, i.e. do I need a null modem cable? Flow control? And if it's not a DB9 or DB25 connector but an RJ11, all bets are off and you need to find the manufacturer's cable.
Well, that kind of thing makes sense. You can wire the console port into a patch panel for ease of access. In my current office we did this for the console ports of the access points on the ceiling; it saves you getting a ladder when you need to change the configuration (during which the access point doesn't have a valid IP) or when something goes wrong.
I ran a hundred feet or so of RS232 over RJ11 (dollar-store grade). Did it for a year or two without issues. It was -12v and +12v in one direction, but just 0V and +5V the other way.
> Camera Link, another serial protocol, also just works.
USB is also a serial protocol on the wire ;-) But there are so many layers of complexity on top that it is indeed a nightmare in many situations.
I worked with a control system using USB, where the connection to the controller had to last for weeks. Regularly, the device stopped working (usually disappearing entirely from the device list) and I had to add support in the software to transparently allow the device to return (and people had to unplug and replug the board when receiving the "device disappeared" alert). The same stuff over RS-232 just worked without a single issue...
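That "let the device come back" pattern can be sketched generically: wrap each operation in a retry loop that reopens the handle when the device vanishes. Everything here (`open_device`, `DeviceGone`) is a placeholder for whatever the real driver library provides, not any specific API:

```python
# Sketch: retry an operation, reopening the device when it disappears.
import time

class DeviceGone(Exception):
    """Placeholder for the library's 'device disconnected' error."""

def with_reconnect(open_device, op, retries=3, delay=0.0):
    dev = open_device()
    for attempt in range(retries + 1):
        try:
            return op(dev)
        except DeviceGone:
            if attempt == retries:
                raise
            time.sleep(delay)   # alert the user, wait for a replug...
            dev = open_device() # ...then reopen the handle and retry
```

The ugly part in real systems is that when the driver itself wedges, only moving the device I/O into a separate OS process (so it can be killed and restarted) actually works, as the camera story below illustrates.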
I understand USB is a serial protocol, but it's the worst I have ever used. (Was just clarifying for Camera Link.)
Your example is exactly the type of stuff I had in mind. We had the same issue with a camera. We also wanted to power cycle part of the system to turn off the water-cooled camera, but this was basically impossible with the USB communication without farming out the camera communication to an entirely separate OS process/program so that the communication could be restarted. That manufacturer, for whatever reason, only implemented streaming the images over Camera Link but didn't implement their settings over Camera Link. And I swear to god, another USB camera in the same system wouldn't work through a hub and only worked reliably when directly attached to a specific USB port on the computer. Mind-blowingly frustrating.
Interesting fact: the macOS kernel still has code to output debug messages to the serial port, and when enabled, does so by writing directly to I/O port 0x3F8; this port address has been used for the first serial port ever since the first IBM PC in 1981.
Also very inexpensive to implement: a MAX232 or MAX485 chip costs cents and will let you connect to any microcontroller. For sensors and similar stuff it's ideal, more 485 than 232, because with 485 you can have multiple devices on a bus (though unlike 232 it's half-duplex).
It is better to say that RS-232 connections are possible to troubleshoot, rather than to claim they are more reliable. There is a limited number of parameters to configure, most software exposed those parameters, and a lot of hardware documented the pinout. USB, by contrast, is more reliable, yet there is little you can do when things go sideways: you are either doing something trivial (e.g. trying a different cable or port) or you need to be a developer with a very specialized skill set.
I like RS-485 - RS-232 is not particularly electrically robust - I have a hard time calling it reliable except in ideal conditions: ±12V single-ended is still single-ended. Ethernet is a pretty good sweet spot of robustness, speed, and cost I think (PoE is nice as well), and I've ended up using it for a number of industrial asynchronous data communication systems throughout the years. Also: it is so ubiquitous that installation and test equipment is readily available (and usable by technicians) when things inevitably go wrong.