Pragmatic 65: Everything is Analogue

23 October, 2015


We tackle the subtle differences between analogue and digital data transmission and why frequency changes everything.

Welcome to Pragmatic. Pragmatic is a discussion show contemplating the practical application of technology. By exploring the real-world trade-offs, we look at how great ideas are transformed into products and services that can change our lives. Nothing is as simple as it seems. Pragmatic is brought to you by ManyTricks, makers of helpful apps for the Mac. Visit for more information about their amazingly useful apps. We'll talk more about them during the show. Pragmatic is part of the Engineered Network. For other great shows, visit today. I'm your host, John Chidgey, and I'm joined by my co-host, Carmen Parisi. How are you doing, Carmen? Pretty good, John. How about you? Doing well. Wonderful. Yeah, so I'm going to dive straight into this episode and I'd like to call this Everything is Analog because- I like the sound of that so far. Keeps me employed. Yeah, you would like the sound of that. One of the things that I've found fascinating about digital is the idea that things have a discrete state. It's either on or it's off, but the truth is that that's just not reality. It's kind of like how mathematics is an approximation of reality. Like, maths is not real. You know, there's no such thing as one of something, there's no such thing as two of something; it's always an approximation. And I don't want to go into that too much, but I may have opened a big can of worms just by making that statement. We're getting trippy already and we're only 30 seconds in. I'm sorry about that. Anyway, the point is I want to dig into the differences between analogue and digital, why it is the way it is, and why digital is really analogue. This is going to annoy some people, I just know it is, but that's okay, I don't care. Now, I did talk about some of the things I'm going to cover on a previous episode, but I really did skim over them.
Last time, that was on episode 12, about 6 minutes, 8 seconds in, and again at 8 minutes, 30 seconds in, for about a minute and a half each, roughly. But we're going to dive a lot deeper than that into the theory behind why things have evolved the way they are, why we are where we are, and why I would make such a silly statement as "everything is analog". So, would you like to describe to the listeners what an analog signal is? Sure. An analog signal in the time domain is continuously varying. So, you know, if you're trying to measure the signal at any point in time, you can shrink down time to some infinitesimal scale that even the physicists say doesn't exist, and there should still be a value there. It can exist at any amplitude, so if you want to measure the voltage, you can get down to microvolts, nanovolts, picovolts, femtovolts if you want to go crazy. There's not just one or two different levels it can exist at. Excellent, very good. Digital, of course, as you just suggested, is either one state or the other. It's either on or it's off, it's one or it's zero. But I guess the problem in the real world, if we just want to talk about voltage levels for example, which is one method of carrying digital information, is that digital logic in that case is about that voltage level. So how do we determine what's a logic 1 and what's a logic 0? Digitally we carry data, these days, in silicon via transistors, and real-world examples of those historically are things like CMOS, ECL and TTL. So ECL is emitter-coupled logic and TTL is transistor-transistor logic. - Yeah, and before that you had diode-transistor and resistor-transistor logic. - Yeah, and go back-- - There's all sorts of flavors. It's a whole alphabet soup. - Oh, absolutely, and you could go back to valves if you really, really wanted to, with valve diodes and so on. But I'm gonna be selfish and go with TTL 'cause I played with TTL predominantly when I was younger.
So you know what, hey, I'm going to talk about TTL just briefly. Logic zero in the case of TTL, for example, is when an analog voltage is between zero volts and 0.8 volts. The region from 0.8 volts to two volts is technically undefined. And then from two volts to VCC, VCC typically being 4.75 to 5.25 volts, that's a logic one. You'll often see VCC written down. You'll also see others like VDD and VSS. If you ever wonder what they are: VCC is the voltage from collector to collector, or collector-to-collector voltage, if you want to think about it that way. What that's all about is that bipolar junction transistors, or BJTs as they're sometimes called, have a base, a collector and an emitter. One of the TTL connection structures involves multiple collectors tied together to a source voltage, and that gives rise to the expression VCC. But the truth is that there are actually a lot of TTL configurations, like NAND gates for example, that are technically not VCC connections; it'd be more like a VBC, a base-collector, or something like that. But irrespective, it doesn't really matter, that's just where the name came from. Now, these days of course TTL is pretty well dead, dead as a doornail, but never mind, because we switched to things like FETs, which switch a heck of a lot faster. In the case of field effect transistors, FETs, the common usages are VDD and VSS, because in a FET we have a gate, a drain and a source, so VDD is drain to drain and VSS is source to source. So if you see VCC, VDD or VSS, now you know where they came from. So there you go. Cool. So anyway, moving right along. Why on earth would you want to do this? Why would you want to take a perfectly good analog signal, perfectly good, and say, you know what, I'm going to make it digital, then transmit it and do the reverse on the other end?
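The TTL thresholds described above can be sketched in a few lines of Python. This is purely an illustrative sketch; the function name and test voltages are made up, and only the 0.8 V and 2.0 V thresholds come from the discussion:

```python
# Sketch: classifying an analogue voltage against the standard TTL input
# thresholds (0.8 V and 2.0 V) described above.

def ttl_logic_level(voltage):
    """Return 'low', 'high', or 'undefined' for a TTL input voltage."""
    if voltage <= 0.8:          # at or below VIL(max), 0.8 V
        return "low"
    if voltage >= 2.0:          # at or above VIH(min), 2.0 V
        return "high"
    return "undefined"          # 0.8 V .. 2.0 V: no guaranteed logic state

print(ttl_logic_level(0.3))    # a solid logic 0
print(ttl_logic_level(3.7))    # a solid logic 1
print(ttl_logic_level(1.4))    # stuck in the undefined region
```

The undefined band in the middle is the whole point: an analogue voltage that lands there tells you nothing reliable.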
It seems like you've given up a lot of infinitesimally small value information, thrown it out the window, and just gone to one state or the other based on two state conditions. I guess the reason that it's good is because any noise that the analog signal picks up between transmission and reception can be cleaned up, provided the voltage stays within valid ranges, of course; if it doesn't, that leads to data corruption. That means that we can take the signal at the other end, basically repeat it in its original form, and get rid of all the noise. So we can overcome line loss and noise up to a point. So when you've got a digital connection of any kind, there are going to be limitations placed on that. We rate cables, for example, and say, right, well, the cables have got to have certain characteristics, otherwise you've got a maximum range of, like, 100 meters. In some cases cheap cables, like a cheap Cat5 cable or something like that, might struggle to take 10/100 ethernet 100 meters. So we'll just say, oh, it's generically 100 meters, but if you use a really high quality cable, you'll probably get 150 or 180 meters, or whatever that is in feet, you know, 300 feet, whatever. So be sure to get the gold plated vacuum sealed with built-in antivirus. You know, I did the gold plated thing on an episode before. So that's, yeah. But yes. No, dear listener, he's joking. Please do not buy gold plated anything. Thank you very much. Anyhow, moving on. I mean, feel free to, fine. If you want, just don't expect anything out of it. It'll look nice though. Okay. - Very pretty. - Very pretty. Yeah, that's pretty much it. Okay, so digital is good in the sense that we can recover from noise. However, the problem is that there are range limitations, because voltages can't be guaranteed to stay in valid ranges from end to end.
But there are also other problems that you get with digitizing data, and that's what I want to focus in on a bit. So cumulative noise, that's out the window, we can regenerate the data, fantastic. But beyond resistance, and resistance, in a nutshell, is this: as electrons pass through a conductive material, they bump along and lose energy as the next one is bumped and the next one is bumped and the next one is bumped. And that's just based on the number of free electrons there are and how attracted they are to the nucleus of whatever they're being bumped past, if that makes any sense. Insofar as, you know, you can't beat resistance unless it's a superconductor, and that's another story. But anyway, I'm not talking about resistance; I want to talk about the two other big bads, well, they're not big and bad, though in the sense of data transmission they are, and that's inductance and capacitance. - It's not just data transmission where they'll come back to haunt you. They also play a big part in the switching converters I work on every day. Oh, absolutely. Oh, sure. And yeah, we should probably put that down as a topic for the future someday. But for the moment, data transmission. So, capacitance. You want to talk about capacitance for a sec? Sure. So capacitance: Q equals CV, just throwing out the basic one, I don't know why. But yeah, so if you have two wires running side by side, one with current and the other with none, there will be a capacitance between them, sometimes called parasitic capacitance. It also doesn't even have to be a wire. If you have a wire going over a ground plane, you will make a capacitor, however small. And the capacitance C is epsilon S over d: epsilon is a constant based on the material, S is the surface area, and d is the distance between the two wires, or the wire and the plane, or the two planes.
So as you decrease the distance, you'll get more capacitance. If you increase the surface area, you'll get more capacitance as well. And you can also change the dielectric constant to adjust the capacitance. But usually on a circuit board it's FR4, and I believe the 4 stands for the dielectric constant, correct? I think so, but don't quote me. Yes, we can look that up and confirm, but I'm pretty sure that's what it is. So capacitance will ruin your day, because it'll slow down your edges. So if you're expecting a nice clean transition from 0 to 1 or 1 to 0, and there's a lot of parasitic capacitance, or your transistors have a lot of capacitance, it's not an infinite rise time like we'd like to expect and like to model. It will have some rise time, in the nanosecond range usually, give or take, faster if you're doing really high speed logic. So the capacitance will slow it down: instead of rising in one nanosecond, maybe it's five, six, seven, eight, or more if it's really bad and you have a crappy board layout. And that can affect your transitions. If you have a clock that's going to sample your signal five nanoseconds after the signal goes high and you're still slowly rising, you could be in that undefined state that John talked about, between your input high and your input low, and you get some corruption of your data stream. Yes, exactly. Jumped into the punchline, and that's okay.
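The slow-edge scenario just described can be sketched with a simple first-order RC charging model. All the component values here are made up for illustration; the point is only that a bigger parasitic capacitance can leave the sampled voltage in the undefined TTL band:

```python
# A parasitic RC on the line slows the edge, so a clock sampling 5 ns
# after the driver switches may land in the undefined region.
import math

def v_at(t, vcc, r, c):
    """Voltage on an RC-loaded line t seconds after a 0 -> Vcc step."""
    return vcc * (1.0 - math.exp(-t / (r * c)))

VCC, R, T_SAMPLE = 5.0, 100.0, 5e-9   # 5 V logic, 100 ohm drive, sample at 5 ns

v_small = v_at(T_SAMPLE, VCC, R, 50e-12)    # 50 pF parasitic -> RC = 5 ns
v_large = v_at(T_SAMPLE, VCC, R, 200e-12)   # 200 pF parasitic -> RC = 20 ns

print(round(v_small, 2))   # ~3.16 V: comfortably above VIH = 2.0 V
print(round(v_large, 2))   # ~1.11 V: stuck in the undefined 0.8-2.0 V band
```

Same driver, same clock: only the parasitic capacitance changed, and one of the two samples is garbage.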
But I guess the reason it takes time, the reason it has that effect, is, think of it like this: if you're filling a bucket with water and your target voltage level is the top of that bucket, then if you start out with a track, or a series of tracks on a circuit board, or wires in a cable, and you put current through one of them, it's going to take a little bit of time, not long, nanoseconds perhaps, for that charge to actually build up to the level at the top of the bucket. And that's what you'll see: you'll see that voltage rising, right? You'll see that level rising, instead of it going straight there like a sharp edge, like all the drawings in textbooks would have you believe. No, it actually has a rise time, and that's because of the capacitance: you have to overcome that and fill it up before you actually reach your target. The thing about capacitance also is that it's measured in farads, and that's actually named after Michael Faraday. A one farad capacitor essentially holds one coulomb of electrical charge with a potential of one volt between the plates. Now, I just realized that I used coulombs to describe farads, which is not helpful. A coulomb is, and I don't know this off the top of my head, I did write this down, 6.241 × 10^18 electrons worth of charge. And if that sounds like- That's a lot of electrons. Yeah, I mean, if that sounds like a lot, that's because it is. So one farad capacitors? No, you don't have many of them, right? No, that's in the super cap range. Yeah, that's crazy big. So most of the ones that we use on circuit boards and stuff, we're talking about microfarads or less. And if we're talking about parasitic capacitance, you're in the picofarad range. Exactly. Yeah, very low numbers of picofarads. Yeah, a picofarad is 10 to the minus 12 farads. Yeah, exactly.
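A quick worked check of the numbers above, using Q = CV and the elementary charge. The 5 pF / 5 V parasitic example at the end is invented for illustration:

```python
# Sanity check: Q = C * V, and one coulomb is ~6.241e18 elementary charges.
E_CHARGE = 1.602e-19          # charge of one electron, in coulombs

q = 1.0 * 1.0                 # a 1 F capacitor at 1 V holds 1 coulomb
electrons = q / E_CHARGE
print(f"{electrons:.3e}")     # on the order of 6.24e18 electrons

q_parasitic = 5e-12 * 5.0     # a hypothetical 5 pF parasitic at 5 V
print(f"{q_parasitic / E_CHARGE:.2e}")  # still over a hundred million electrons
```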
And that's why your rise times are what they are: if the rise time is, for example, 100 nanoseconds or whatever it is, it's because that capacitance is so low. And you may say, okay, shrug, what's the big deal then if it's so low? Well, we'll get to that in a minute; I don't want to jump ahead too far. So, okay, inductance, which is kind of related to capacitance, but in a different way. With inductance, when you flow current through a wire, it generates a magnetic field. And if you then take that wire and wind it into a coil, that will actually create a much more intense electromagnetic field. And that presents an inductance, which arises as you're building that field, because matter, that is to say material, whether that's air, wood, water, steel, doesn't matter, whatever it is, everything has what's called permeability. And that permeability is going to resist the creation of that magnetic field, and different materials will have different permeabilities. So when you generate a magnetic field, it's not instantaneous; just like capacitance, same problem.
And once you've built up that magnetic field, if you turn the current off, the field will then collapse, and the funny thing about that is that as the field collapses, it continues to drive the current for a short period of time. So you sort of get this charge-up period and then a field-collapse period, and that effect, how much energy it takes to overcome it and so on, is measured in henries, named after Joseph Henry. The funny thing about that was that although he discovered inductance at about the same time as Michael Faraday, it was more widely credited to Henry for some reason. Anyway, if you really want to derive the equations for all this, and I know you mentioned some of them before, there's a link in the show notes to Maxwell's equations; they're all derived from those. So if you want to go and derive some equations, you can in your own time. It's left as an exercise for the listener. Yeah, exactly, see how insane you're feeling, or, you know, whatever, have fun. Okay, so capacitance and inductance are essentially the dynamic components of a changing voltage when you're carrying a current. Together with the static component, which is resistance, they are collectively assessed and referred to as circuit impedance. Now, I don't want to go any deeper than that. Suffice it to say, when you're trying to transmit digital information (and in power supplies too, but that's another story, another episode), that is your enemy. Yes. In an ideal world, when you can, up to a point, say for low-speed digital, maybe TTL level, or probably not, I'm sure John can list some issues with it, you tend to neglect the parasitic capacitance and inductance in your circuit and just pretend it's not there. Well, in the early days it didn't matter. I mean, you could lay your tracks any way you liked. It didn't matter, because the data was going past in the kilohertz or the low megahertz.
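The standard reactance formulas make the "frequency changes everything" point concrete: the same parasitic L and C that are invisible at kilohertz speeds dominate at 100 MHz. The 10 pF / 10 nH values here are illustrative, not from the episode:

```python
# Reactance of the same parasitic L and C at two signal frequencies.
import math

def x_c(f, c):
    """Capacitive reactance in ohms: Xc = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * f * c)

def x_l(f, l):
    """Inductive reactance in ohms: Xl = 2*pi*f*L."""
    return 2 * math.pi * f * l

C_PAR, L_PAR = 10e-12, 10e-9       # hypothetical 10 pF and 10 nH of parasitics

for f in (10e3, 100e6):            # 10 kHz vs 100 MHz
    print(f"{f:.0e} Hz: Xc = {x_c(f, C_PAR):.3g} ohm, "
          f"Xl = {x_l(f, L_PAR):.3g} ohm")
```

At 10 kHz the parasitic capacitance looks like a megohm-scale open circuit and you can ignore it; at 100 MHz its reactance drops to around 159 ohms, which is comparable to the line impedances you are trying to control.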
They didn't start to notice this until about 20, 25 years ago. So in the early days of electronic circuits and switching, it was all just, oh, we can just make it fast. Yeah, we just make it faster. Let's just crank up the clock speed, you know. But then it started- - Yeah, that rise time in the nanoseconds, even if it became 10 or 100 nanoseconds, it didn't matter if you're working on a kilohertz timescale; that's still effectively infinitely fast for your purposes. - Exactly right. So you could model it and get away with this simple, beautiful, straightforward picture in your mind of a perfectly square function with no rise time, or essentially negligible rise time, and it wouldn't matter. So, okay, where are we at? Just to step back for a second. As the BJT or the FET switches on, the voltage rise is slowed down by your inductance and your capacitance. As the charge builds, the electric field forms, and that gives you that curved edge to the voltage as it rises and reaches the maximum, and the same effect as it slopes off and curls away as the voltage is turned off and the fields collapse and the charge dissipates. And the faster you switch, the greater the chance that you're actually going to switch before you reach that valid target voltage, as you mentioned about five minutes ago. And that means you're going to start corrupting your data. If you're transmitting data from point A to point B, you absolutely have to know if you're in a valid voltage range. Otherwise, you've got nothing. Yeah. Sorry, I was going to say, we mentioned curved edges; you can also have the reverse problem, where you get a very sharp edge and you start ringing.
So instead of just coming up and stopping at one volt or five volts, whatever you're working at for VCC, you could swing up to six, seven volts, then come back down to four and a half, kind of ring out, and then settle eventually at five volts. And that's just as much of a problem as too slow an edge. That's true. Very true. So that causes overshoot and so on. Yes. Okay. If you're transmitting data, just getting back to transmitting data from point A to point B, it's necessary to have a way to differentiate between a series of consecutive values. So let's say you've got one, one, one, one, one, and then a zero. You need to be able to tell how many ones that is, and the same thing with successive zeros. I mean, when you go from one to zero, it's obvious; the transition between them is clear. So how do you do that? And the answer is you have a clock, and the clock is guaranteed to turn on and off once every cycle, wherever there is a bit. So we sample the voltage at the transition points to extract the digital state at that point in time. So I'm going to steal some I²C terminology and talk about SDA and SCL: serial data and serial clock. So, two wires, one carries data, one carries a clock, and it sends data like that. And eventually you reach a point where it's not possible to switch any faster, because the impedance issues are preventing you from actually getting valid data from point A to point B. Now, admittedly, this sort of problem at the time was just as much about BJT and CMOS switching speeds. So I admit that the transition to what we're going to talk about, with parallel, serial and so on, and modeling-
It wasn't entirely driven by impedance issues, because obviously BJTs were on the way out, CMOS and FETs were coming in, and obviously switching rates improved and that changed things as well. So they were all contributing factors, I admit. But anyway, the initial thought, and I talked about this on a previous episode, was to go from a single data stream to multiple parallel paths; in other words, going from serial to parallel. So instead of just clocking through one bit on one data line, you would clock in, like, eight bits and have an eight bit bus, or 16 bits, or 32 or 64 or 128, whatever. And that just created a whole new problem, because now you have to align each of those individual data lines with the same clock. So that created the problem of bus skew, whereby bit zero could arrive before bit 15, and when you clocked it through, you would start to get skewed data. So you could have clock skew relative to the bus lines, and of course you can have skew on individual lines of the bus relative to the other lines on the bus. Anyway, it was around about this point that people started to realize that serial was actually the better way to go if you wanted to get high-speed data from point A to point B. And they realized that we had to stop modeling, or rather, we had to stop ignoring digital data lines when we were doing modeling. Because ordinarily, if you're designing an analog circuit, you would model and control the impedance of that set of tracks, because you want to make sure your impedance is matched along the whole route and at each end. Yes. Sorry, I was going to say, we didn't mention this in our previous episode on radios, but that whole radio chain is usually an impedance matched circuit. 50 ohms is pretty typical, or 75 as well. That's true, absolutely right. But in the bad old days, they never bothered, because they didn't have to. You know, at 10 kilohertz it didn't matter.
But once you start going higher and higher in frequency, you cannot avoid this. You can say it's a digital signal as much as you want, but the truth is that the capacitance and the inductance will kill you. Well, not kill you literally, but it'll kill your signal. And when you're an analog or digital designer, that's the same thing as being killed. That's what it feels like. You look at it and it's a very sad waveform. ManyTricks is a great software development company whose apps do, you guessed it, many tricks. Their apps include Butler, Keymo, Leech, Desktop Curtain, Time Sink, Usher, Moom, Name Mangler, Resolutionator, and Witch. There's so much to talk about for each app that they make, so we'll just touch on some of the highlights for five of them. Witch: you should think about Witch as a supercharger for your Command-Tab app switcher. If you've got three or four documents open at once in any one app, then Witch's beautifully simple pop-up quickly lets you pick exactly the one that you're looking for. Name Mangler: let's say you've got a whole bunch of files you need to rename quickly, efficiently and in huge numbers. Well, Name Mangler is great for creating staged renaming sequences with powerful pattern matching, showing you the result as you go. And if you mess it up, just revert back to where you started and try again. Moom makes it easy to move any of your windows to whatever screen positions you want: halves, corners, edges, fractions of the screen, and you can even save and recall your favorite window arrangements with a special auto-arrange feature when you connect or disconnect an external display. It's really awesome. I use it every day. Usher can access any video stored in iTunes, Aperture, iPhoto, and on any connected hard drives on your Mac, allowing you to easily group, sort, tag, and organize them all in one place. Install some plugins, and there's no need to convert anything to an iTunes format to watch it.
So if you've got a great video collection scattered across different programs, drives, and formats, then Usher can help you neatly sort it out. Resolutionator is their latest app, and it's gloriously simple: a drop-down menu from the menu bar that lets you change the resolution of whatever display you like that's currently connected to your Mac. The best part, though: you can even set the resolution to fit more pixels than are actually there on the screen physically, and it's still usable, handy when you're stuck on your laptop screen, you need more screen real estate, and you're missing your desktop. That's just five of their great apps; there's still another five to check out. All of these apps have free trials, and you can download them from the ManyTricks site and try them out before you buy them. They're also available from their respective pages on that site or through the Mac App Store. However, if you visit that URL, you can take advantage of a special discount off their very helpful apps, exclusively for Engineered Network listeners. Simply use ENGINEERED25, that's "engineered" the word and "25" the numbers, in the discount code box in the shopping cart to receive 25% off. This offer is only available to Engineered Network listeners for a limited time, so take advantage of it while you can. Thank you once again to ManyTricks for supporting the Engineered Network. Anyway, okay, so the problem is that in order to overcome this problem with digital data transmission, we, the industry, had to start modeling it as an analog transmission line. And that's where I come back to everything is analog. We're trying so desperately to convert information into digital to overcome the problems with noise, which we then achieve, but the more data we push through and the faster we push it, the more we come back to analog modeling techniques to handle our digital data. And the transition seems to be around about 100 MHz.
And I say around about, because there are a lot of variables. There are all sorts of things you can do to try and extend it a little bit further, tweak this and tweak that and so on. But inevitably, around about that frequency, we start giving up on parallel. We start giving up on just laying tracks randomly, or having tracks that are precisely the same length so that your propagation delays are all the same, laid out perfectly in parallel. We give up on all of that and say, you know what, I'm just going to go with a transmission line: two strips, always with the same gap, sandwiched between exactly the same kinds of planes, with everything completely impedance controlled from end to end. And that is essentially when we start talking about what's come to be coined "high-speed serial". Where I talked about this previously was when I talked about parallel ATA and serial ATA on episode 12, so I refer you back to that if you want to know more about hard drives and solid-state drives and their switch to high-speed serial for interconnecting. That's what you'll see on your motherboard as those squiggly lines; typically you see them around the memory as well, although I don't know if that's parallel or serial off the top of my head. Not talking about addressing lines. But the point is that there are other issues as well with high speed that I just want to quickly touch on, and my favourite one, the one that's annoyed me, is jitter. The first time I saw jitter it kind of blew my mind, because you have this idea that a clock is pure.
I don't know why I ever thought that, but for whatever reason my brain thought, oh yeah, that's a clock, so clocks always turn on and off once a cycle, right, they're always the same period, and they only drift with time, forward or back. Because if you say, I've got a clock and it's exactly 100 megahertz, it's never exactly 100 megahertz; maybe it's going to be like 100.001 megahertz, so it's always going to creep forward. So if we've got an RTC, a real-time clock, based on that frequency, then it's going to drift ever so slightly with time. But the first time I saw, on a digital sampling oscilloscope, the jitter on a high-speed clock, it kind of blew my mind. Have you ever seen jitter? Oh, yeah, that plays a big part in switching regulators on the phase node, which for the purposes of this episode is a digital signal between zero and whatever your input voltage is. It's got very sharp edges like the digital we're talking about, and it suffers from all the same problems from capacitance, inductance, jitter. So, the way that I've seen jitter on a DSO is that you'll set the trigger level to, let's say, where the clock edge rises up to maximum. You'll set that and lock it to the center of the screen. Then you set your span such that you can actually see the width of that clock pulse. And if you can see that on the screen, like half a waveform, slightly more than half on each side of center, then you'll see the width of those clock periods changing; they sort of wobble left and right. Turn on the persistence feature to see how much, and, you know, check every edge. Yeah, that's it. So the clock, I mean, you may think, hey, I've got a period of, like, 50 nanoseconds or something like that, or whatever it is, and it's not: it'll be 49.1, and then it'll be 50.2, and then 49.7, then 50.1, and it's all over the place. It's not exactly the same every single cycle.
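A toy simulation can separate the two effects being described: jitter is the per-cycle wobble around the nominal period, while drift comes from any systematic bias in that wobble accumulating over time. The 50 ns period, 10 ps bias, and 100 ps RMS jitter figures here are invented for illustration:

```python
# Jitter vs drift: per-cycle periods wobble around the nominal 50 ns,
# and a systematic bias in the wobble accumulates as drift.
import random

random.seed(42)                      # fixed seed so the sketch is repeatable
NOMINAL = 50e-9                      # 50 ns nominal period (20 MHz)
BIAS = 0.01e-9                       # hypothetical +10 ps systematic error

periods = [NOMINAL + BIAS + random.gauss(0, 0.1e-9) for _ in range(100_000)]

mean_period = sum(periods) / len(periods)
frac_drift = (mean_period - NOMINAL) / NOMINAL   # fractional frequency error

print(f"mean period: {mean_period * 1e9:.3f} ns")
print(f"drift: {frac_drift * 1e6:.1f} ppm")      # roughly 200 parts per million
```

Individual periods are all over the place, exactly as seen on the scope, but it is the small positive bias, not the random wobble, that makes the derived real-time clock creep forward.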
And that's a big problem, because if you're using that clock, even with high-speed serial, to try and extract and recover your data, you're in big trouble, because you can't be guaranteed you're actually sampling at the right point. You could be slightly behind, slightly ahead. And clock drift is simply the fact that your jitter will be predominantly positive or predominantly negative; over a long period of time, those slight additions or slight subtractions change the overall frequency, and that causes the drift. So I just find jitter fascinating. Like I said, the first time I saw it, it just melted my brain. I thought it was the craziest thing in the world, because I had this picture in my head that a clock's always going to be precise. And it's like, no, it's not. - For a lot of money, you can get a very low jitter clock, and that can be true. - Yeah, but if you zoom in far enough, it's still going to have jitter. You can never- - Everything is analog. - There you go, right there. Everything is analog. - Roll credits, we're done. No, we're not. We're not done yet. Okay. We've only just begun. Man, we could go on for hours, but we're not going to, for the sake of our listeners' mental well-being. So, okay, now we've come full circle. One of the things you can do to overcome this jitter problem, this high-speed serial problem, is to actually start to encode the clock with the data. And at that point, would you like to talk about that? Oh, encoding techniques. Yes. Taking a nap over here. Yes, so there are various ways you can encode your digital logic in your data. The first and the most basic one is called non-return to zero, NRZ. And like I said, this is the extremely basic one; it's what you think of when you consider digital logic.
Transitions only occur when the logic bits change. And it requires a very accurate clock and error detection to make sure your data's being sent properly. So if you have a string of four zeros, you will just sit at zero volts for four clock cycles and then transition to a one whenever that comes along. Very error prone, like we said, and it needs that accurate clock. Simple in implementation and thought process, but it has its drawbacks. So you can move to what's called a biphase method of encoding. A state transition in a biphase system happens at the end of every bit frame, and logic highs have an additional transition mid-bit. That allows some clock information to be passed along with the data stream. So if you wanted to send a zero, zero on your circuit board, you would send low plus high: instead of just being zero for one clock cycle, you would transition high. Or it could be high plus low; if you started at a one, you would transition to a zero mid-clock, instead of sending low, low. Cool. Same with one, one: you would send low-high or high-low, depending on where you started. Did I get that right? Yep, that sounds right. Yeah, trying to make sure it was clear and I'm not just rambling for our listeners. Yeah, no, that's okay.
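One common variant of what's described above is biphase mark coding: a transition at every bit boundary, plus an extra mid-bit transition for a one. This sketch is an illustration of that variant, not a transcription of any particular standard, and compares it against plain NRZ on a run of zeros:

```python
def biphase_mark(bits, start=0):
    """Biphase mark (FM1): two half-bit levels per bit. The level toggles
    at every bit boundary, and a 1 also toggles mid-bit, so the receiver
    always sees at least one transition per bit frame to lock on to."""
    level = start
    out = []
    for b in bits:
        level ^= 1            # transition at the start of every bit cell
        out.append(level)
        if b:
            level ^= 1        # extra mid-bit transition encodes a 1
        out.append(level)
    return out

def transitions(levels):
    """Count level changes within a stream of half-bit levels."""
    return sum(a != b for a, b in zip(levels, levels[1:]))

nrz = [0, 0, 0, 0]                       # NRZ: a run of zeros is just flat
print(transitions(nrz))                  # no transitions, nothing to lock to
print(transitions(biphase_mark(nrz)))    # biphase still transitions every bit
```

The run of zeros that left NRZ flat, and the receiver guessing, now carries a guaranteed edge in every bit cell.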
The idea is that when we encode the clock with the data, we're guaranteed that it's going to be essentially synchronized. It's funny, though, because you would think, oh, that's wasteful, and I guess it kind of is a little bit, but at the same time it overcomes that problem, and then you can crank the speed up even more, and that's the ultimate goal: to go faster. Sometimes you have to take a step back to take a few steps forward, and I guess that's the way to think about it. And yeah, so by allowing more transitions, you know, for zero-zero, instead of being low-low you go low-high or high-low, you're taking away some of the error from clock skew and jitter, and you're saying there's a definite transition here, there is another bit; we're not just hanging out not knowing what the line's doing. Yes, exactly. So yeah, that's exactly right. So Manchester encoding, I think, is similar to biphase. Correct. Yes. You want to talk a little bit about that? Sure. So, it's similar to biphase in that you always get a transition at the end of every bit frame, but you structure your data in such a way that it always yields a DC value of 50 percent, or halfway between the supplies. So, if you're at five volts, the line always sees an average voltage of 2.5 volts. And similarly, if you're at 1 volt, you always see half a volt. What this means is that the average power over a long period of time is constant, regardless of the data stream. And again, the state transitions occur halfway through the bit time frame. So why would you want the same average power? Well, it simplifies the circuit design in your receiver, as I understand it, or your transmitter. - Yeah. - You can bias it more easily, I think. - Yeah, I know that it is popular.
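That DC-balance property is easy to demonstrate. A quick Python sketch, using the IEEE 802.3 convention (0 sent as high-then-low, 1 as low-then-high): whatever the data pattern, the waveform spends exactly half its time high, so the long-run average sits at 50% of the supply.

```python
def manchester(bits):
    # IEEE 802.3 convention: a 0 is sent as high-then-low, a 1 as
    # low-then-high. Every bit contains a mid-bit transition, and every
    # bit spends exactly half its time high -- hence the 50% DC value.
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

# All zeros, all ones, and a mixed pattern all average to 0.5.
for pattern in ([0] * 8, [1] * 8, [1, 0, 1, 1, 0, 0, 1, 0]):
    wave = manchester(pattern)
    print(pattern, "-> average level:", sum(wave) / len(wave))
```

Scale those 0/1 levels by a 5 V supply and the line always averages 2.5 V, which is the constant-average-power property that makes receiver biasing simpler.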
The bottom line, though, is that they're all subtly different ways of implementing the same kind of idea, which is to integrate the clock with the data. And we're just talking about bit-level data transfer here. We're not even talking about when you start to structure things, like packets and CRCs and parity and all that other stuff. But obviously you also build in more redundancy on top of that, so you can then identify when data's gone astray, and so on and so forth. Not wanting to get into that; I just wanted to talk about the lowest-level nitty-gritty, and that's it. And honestly, I don't have too much else I wanted to talk about. I guess I just find this whole idea that in digital, a one is always a one and a zero is always a zero... that is so completely false. And once you get your head around that, you can sort of understand that everything truly is analog, and all the digital stuff that we play with, all the software that's- It's all defined. Is it true? Is it false? All that sort of stuff. It's all built on layers of abstraction on top of the real world. And in the real world, everything is analog. Yes, and there are so many different techniques to get over the fact that digital is not a "real" scenario. One of my favorites is channel equalization. Say you're transmitting along a coax cable and, like John said, at about 100 megahertz you start running into issues where your nice pretty ones and zeros are not so pretty anymore. If you look at the Bode plot of the cable to get its frequency response, you'll see it starts to roll off like a low-pass filter at 100 MHz in this example. So with channel equalization, at the other end you have your receiver, and you send out a test pulse, see how it changes, and you configure a filter so that you add gain at 100 MHz and above to keep the channel "flat". And it's really cool stuff.
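Numerically, that equalization trick might look like the following sketch. The first-order low-pass cable model, the 100 MHz corner, and the exact inverse filter are all idealised assumptions for illustration; a real equalizer would be configured from the measured test-pulse response.

```python
import math

def cable_gain(f_hz, f_corner=100e6):
    # First-order low-pass model of the cable: unity gain at DC,
    # rolling off above the corner frequency.
    return 1.0 / math.sqrt(1.0 + (f_hz / f_corner) ** 2)

def equalizer_gain(f_hz, f_corner=100e6):
    # Idealised equalizer: boost by exactly what the cable loses,
    # so cable and equalizer together give a "flat" channel.
    return math.sqrt(1.0 + (f_hz / f_corner) ** 2)

for f in (10e6, 100e6, 500e6):
    flat = cable_gain(f) * equalizer_gain(f)
    print(f"{f / 1e6:5.0f} MHz: cable {cable_gain(f):.3f}, "
          f"eq {equalizer_gain(f):.3f}, combined {flat:.3f}")
```

The cable attenuates more as frequency rises, the equalizer adds exactly that much gain back, and the product stays at unity across the band, which is what keeps fast edges looking like edges at the far end.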
Yeah, actually, beyond just channel equalization, it makes me realize that one of the things we've focused on a lot so far in this topic is circuit boards and circuit tracks. I did sort of talk about cables, but honestly cabling has the same sort of problem. It's one of those things: so long as we use cables and wires and all that sort of stuff, and I say "all that sort of stuff" for a specific reason, everything we're talking about has to do with the problems associated with inductance and capacitance. I haven't talked about optics, and I don't know if I should; maybe that's a topic for another episode. Honestly, a lot of the problems with cabling go away with optics, but a whole different set of problems comes onto the table with optics, right? Things like bend radius, splices, joints, patches, all of that stuff, which is more analogous to resistance. And of course, optics has also got other problems with different interference and wavelength division multiplexing and all that other rubbish. So, you know what, another topic for another day. But honestly, it comes back to the cables, and it's not about the plating on the end, it's how accurate and how well built the cable itself is. And that's why you'll have tight tolerances for Cat6, and they'll say, if you get this Cat5 cable, it's not guaranteed to work over a 10 meter length at 10 gig. A Cat5 cable isn't rated to run at one gig or 10 gig over a certain distance, and that's simply because the tolerances for capacitance and inductance on those twisted pairs inside the four-pair structure, if it's a four-pair cable, are not as tightly controlled on a Cat5 cable. You go to Cat6e, for example, and that's a much tighter spec, and that is guaranteed to carry gigabit or 10 gigabit over a certain distance.
So that's why people say, oh, well, I'm gonna lay Cat6e cables now. They're only plugged into an ethernet switch that's doing gigabit, but I wanna go to 10 gig at some point. So by laying these cables now, I'm sort of future-proofing it to a point, despite the fact that the cables are more expensive for Cat6e than they are for Cat5. That's just another example, beyond the circuit board, that people will deal with from day to day. So if it was me and I was putting twisted-pair copper through my house, I'd get the more expensive, higher quality cable designed to run at 10 gig, on the presumption that at some point in the next two or three or five years, 10 gig switches are gonna become more commonplace. And then, because they are better quality, they can handle the high-speed serial data better. There you go. Did you have anything else you wanted to add on this, or should we wrap it up? Pfff, well, I could add quite a bit, however I'd just be riffing, and my digital modelling knowledge is not good enough for that. Oh, okay, fair enough. Well then, in that case, if you want to talk more about this, you can reach me on Twitter @johnchiji, or you can follow Pragmatic Show to specifically see announcements about the show and other related stuff. Remembering now that Pragmatic is part of the Engineered Network, which also has an account at engineered_net that has announcements about the show and the network, all the shows on the network actually, and you can check them all out at If you'd like to get in touch with Carmen, what's the best way for them to get in touch with you, mate? Best way to get in touch with me is via Twitter, @FakeEEQuips, and I'm usually on every day, checking during meetings. Oh, you're not supposed to admit that, but I also may or may not do the same thing. If you'd like to send any feedback about the show or the network, please use the feedback form on the site. That's where you'll also find show notes for this episode.
I'd like to thank ManyTricks for sponsoring the Engineered Network. If you're looking for some Mac software that can do many tricks, remember to specifically visit this URL for more information about their amazingly useful apps. Thanks also to ManyTricks for being a launch sponsor for the Engineered Network. Finally, the network also has a Patreon account that should be up now. So if you like what we're doing here at the Engineered Network and you'd like to contribute something, anything at all, it's all very much appreciated. It helps not only to keep the existing shows going, but also to bring new shows to you. There are a few perks in there as well, so if you'd like to go and check it out, it all helps. Thanks again everyone for listening, and thank you, Carmen. All right. Thank you, John. And thanks again for listening. Thanks everybody.
Duration 43 minutes and 41 seconds Direct Download
Episode Sponsor:

Show Notes

Miscellaneous Links:

Premium supporters have access to ad-free, early-released episodes with a full back-catalogue of previous episodes


Carmen Parisi


Carmen is an Electrical Engineer working as an Application Engineer in analogue electronics and has a blog Fake EE Quips that he occasionally posts to. Carmen is also a co-host on The Engineering Commons podcast.

John Chidgey


John is an Electrical, Instrumentation and Control Systems Engineer, software developer, podcaster, vocal actor and runs TechDistortion and the Engineered Network. John is a Chartered Professional Engineer in both Electrical Engineering and Information, Telecommunications and Electronics Engineering (ITEE) and a semi-regular conference speaker.

John has produced and appeared on many podcasts including Pragmatic and Causality and is available for hire for Vocal Acting or advertising. He has experience and interest in HMI Design, Alarm Management, Cyber-security and Root Cause Analysis.

You can find him on the Fediverse and on Twitter.