John and Ben discuss SSDs, data transfer bottlenecks and the evolution of PC data storage architectures.
[MUSIC] >> This is Pragmatic, a weekly discussion show contemplating the practical application of technology. Exploring the real-world trade-offs, we look at how great ideas are transformed into products and services that can change our lives. Nothing is as simple as it seems. I'm Ben Alexander and my co-host is John Chidjie. How are you doing, John? I'm doing very good, Ben. How are you doing? I'm doing well. Excellent. Thank you. A special thank you to our live listeners in the chat room. Let's dive straight in. First of all, again, thank yous to everyone from Twitter and app.net, still getting lots of great feedback. Thanks, guys. Thank you, everybody. Really appreciate it. I also have a minor correction for something I said in episode 11. I said at one point it was a 1.44 megawatt power station, which is really tiny. No, it's actually 1.44 gigawatts. Sorry about that. Anyway, special thanks again to the listeners who have emailed me directly. I got a few more this week and I have got a bit of a backlog. I apologize, but I'll be hoping to clear that out next week. I had another four iTunes reviews, three from Canada and one from Portugal, which is kind of cool as well. Thank you to Berkovits, JPC_PT and Cajunonrails, which is a pretty cool username. Again, much appreciated and glad you're all enjoying the show so much. Thank you for those. Just an update on Tech Distortion: the migration across to Statamic is going well. I'm getting close but not quite there yet. Although this week, I finally transferred my domain from GoDaddy to Hover, which is something that I've been wanting to do for quite some time. So hopefully the updated site will be up next week and then, yeah, feel free to let me know what you think at that point. So Ben, this has been on the list since the very beginning and I've been pushing this topic back for a while, but I know that you were very keen for me to talk about this one.
So would you like to tell everyone what we're going to talk about today? We're talking about SSDs today. Yes, we are. And one of the reasons I wanted to talk about SSDs is because I've always been absolutely fascinated by the idea of completely no moving parts, because when I was growing up, seeing hard drives and floppy disks and everything, it was one of those things that I thought, "Well, why don't they just build it all out of RAM?" And of course, then I learned that obviously RAM needs a battery on it and blah, blah, blah. Actually, when I was a kid, one of my friend's fathers was a bit of a computer geek and he built his own RAM disk, like literally a circuit board with a bunch of sockets on it and RAM in there. And he actually made himself a RAM drive. It was very cool. I have no idea what the specs were or how much memory it was. I just know that it looked really cool. But the other way you could look at it is, well, it's not something Apple ever would have made, but to the geek in me, it was just, "Oh, that is awesome." So the idea of a completely solid-state drive has always fascinated me. But it's not just that; SSDs have become the quickest way to transform your computer. And a lot of people are using computers now with SSDs, but there are a lot of little things about SSDs, and not all SSDs are created equal. And I guess that's why I wanted to talk about it: there's actually quite a bit to cover in order to get your head around which SSD is the right one for me, if I'm going to get one. So I just want to start by setting some boundaries on this. I'm not going to talk about hard drives, not really, because that's not really the point of it at all. I have to give a bit of history so that we basically know where we're going and understand why we are where we are. So it's going to be a bit of a history lesson there.
So in any case, the argument for SSDs over hard drives: there's a whole bunch of really good reasons. So let's start with this: there's a whole bunch of analysis and reports that's been done by all sorts of different companies. One of the ones I found quotes some very interesting numbers, and I was able to corroborate most of them. In a given three-year period, there'll be 90% fewer failures of solid-state drives versus hard drives. That's mainly because there's no moving parts. A 44% increase in speed for tasks such as waking from sleep and rebooting, operating an average of 12 degrees Celsius cooler, power savings up to 70% for some models. They're much stronger physically and can handle much larger shock, vibration and physical impact than a hard drive. SSDs generate no vibration and hence they create no noise compared to a hard drive. No whirring, no sound, nothing. And SSDs are also lighter than hard drives. So irrespective of all of the advantages of speed, there's a heck of a lot of other really good reasons to go to SSDs. And that's why solid-state drives are in practically every performance laptop you'll find. So first of all, we've got to talk about bottlenecks. The problem is that you've got data in one place and you want to get it to another place. You've got some data on your CPU, call that point A if you want, and you want to get it to point B, that's the hard drive. It's got to get between point A and point B through a pipe. And if that pipe's not fast enough, then that is essentially a bottleneck, which means that your data will go no quicker than the minimum speed in that system, which is a pain in the neck. So a lot of the story with SSDs is about the gradual, iterative removal of all of these bottlenecks that have existed in computing design. So let's start by talking about serial and parallel. So initially, parallel won all the early battles with transferring data between point A and point B.
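That bottleneck idea can be sketched in a few lines. This is purely an illustration; the link names and speeds below are made-up figures, not anything quoted in the episode:

```python
# The end-to-end rate of a chain of links is limited by its slowest link --
# the bottleneck described above. All figures here are illustrative.

def effective_throughput(link_speeds_mb_s):
    """Return the end-to-end rate of a chain: the minimum link rate."""
    return min(link_speeds_mb_s)

# Hypothetical path from CPU to drive (speeds in MB/s):
chain = {"memory_bus": 6400, "sata_2_link": 300, "drive_media": 120}
print(effective_throughput(chain.values()))  # the drive itself is the bottleneck: 120
```

Speeding up any link other than the slowest one changes nothing, which is why each generation of bus and drive improvements tends to just move the bottleneck somewhere else.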
The idea was simple: you could transmit more bits at once. So you'd have one wire for each bit and you could group 8, 16, 32, 64 of them together, these big honking wide ribbon cables, and you could clock through 64 bits at a time. Well, early buses, eight and 16, but you know what I mean. You could literally clock all that through at the same sort of speed. If you did it in serial, you'd have to send one after the other, one after the other, and it would take quite a lot longer for a given clock speed. So that's why everything originally was all parallel, or at least all of the high data rate buses were parallel. Originally, the ATA standard, which stands for Advanced Technology Attachment, was just called ATA on its own, but it was actually Parallel ATA, which is what they came to call it once Serial ATA, or SATA, came about. So they sort of retrospectively renamed it to Parallel ATA, which is a bit weird. So Western Digital came up with that in 1986 when they released the Integrated Drive Electronics interface, otherwise known as IDE. And it was first seen in Compaq PCs that year. The spec included not just a bus spec; the drive controller was physically on the hard drive itself. And that was a first at the time. So essentially, the ISA bus, which stands for Industry Standard Architecture, was created in 1981 by IBM. But all ATA was initially, really, was an extension of the ISA bus. It did very little in the way of modifying the signal or anything. It was just like the signal on the motherboard, but now connected through a card to the hard drive, because they were all hard drive controller cards back then. Anyway, these IDE drives, of course, had the controller on the drive, so then you could go from the motherboard essentially through a ribbon cable straight to the drive.
Anyway, so originally the ATA bus started out at about 16 megabytes per second, and then it progressed to 33, 66, 100, then it maxed out at 133. It was a 16-bit parallel architecture. However, one of the things that they began to discover in computing was that parallel wasn't everything it was cracked up to be. Because when you've got multiple cables running next to each other, and we've talked about this previously with capacitive and inductive coupling, as a signal goes up and down, like positive to negative or from zero to five volts and back again, as you're sending voltage pulses along there to represent the bits, what happens is that actually creates a very small magnetic field, and that magnetic field couples to the next wire, which couples to the next wire, and all the wires then start to basically mess each other up. You can think of each of them as a transmission line, and a transmission line with a variable impedance. So the outside ones have a different impedance to the inside ones, because the inside ones are surrounded by more wires. So what it meant is you get this skewing effect whereby the bits in the middle would arrive later than the bits at the outside. So if you imagine all of the bits being sent on the 16 wires at one point, by the time they get to the end of the cable, the ones in the middle are lagging behind the ones on the outside. And this clock skew was a big problem, because what it meant was you would reach a point where the skew was so bad that every time you clocked and said, "Read the data on those 16 bits," you would not know which data word a bit actually belonged to. So parallel turned out to be not the best way to go, because once switching technology for transistors and MOSFETs and so on got to the point where you could go faster and faster, parallel became a losing proposition because of all of the issues with coupling between the cables.
It sort of seems a little bit counterintuitive. It's funny: it started out as serial, which was slow, then they went parallel to go faster, and then they went back to serial to go faster again, which is kind of strange, but that's just the way it worked out. So a little bit more about ATA: they developed the drive interface standard called ATAPI, which is the Advanced Technology Attachment Packet Interface, and that then led to SATA, which was their attempt to overcome the parallel bus problem, with all of its clock skew and all that stuff. So SATA was created in 2003 and it started at 1.5 gigabits per second, and then went to 3 and then to 6. You can represent those in megabytes, which is more commonly the way it's referred to: 192, 384 and 768 megabytes per second raw. The thing is, it's a serial interface, and in a serial interface you want to make sure that you don't need a separate clock. So what they do is what they call 8b/10b encoding. And what that does is represent eight bits, a byte, by 10 bits. And the set of symbols is chosen such that after so many successive bytes, you're guaranteed to have a recoverable clock signal. So in effect, the clock is encoded with its own data. So you encode the data when you send it, you decode it at the other end, and you recover the clock at the same time that you're decoding the data, which means you have no skew issue, you have no synchronizing problem. It's beautiful. It works really well. As a result, though, the raw data rate is not the actual data rate, because those extra two bits for every single byte take up space and you lose that. So that ends up being 150, 300 and 600 megabytes per second for SATA 1, 2 and 3 respectively. So that's how we end up with SATA and the speeds that you'll often hear quoted for the different interfaces.
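The 8b/10b arithmetic above is simple enough to check yourself. A minimal sketch, using the nominal 1.5/3/6 gigabit line rates from the episode:

```python
# SATA puts 10 bits on the wire for every 8 bits of data (8b/10b encoding),
# so the usable rate in bytes/s is simply the line rate in bits/s divided by 10.

LINE_RATES_GBIT = {"SATA 1": 1.5, "SATA 2": 3.0, "SATA 3": 6.0}

def usable_mb_per_s(line_rate_gbit):
    # Divide by 10 (not 8) because each byte costs 10 line bits,
    # then convert bytes/s to megabytes/s.
    return line_rate_gbit * 1e9 / 10 / 1e6

for name, rate in LINE_RATES_GBIT.items():
    print(name, usable_mb_per_s(rate))  # 150.0, 300.0, 600.0 MB/s
```

That 20% encoding overhead is exactly the gap between the "raw" 192/384/768 figures and the 150/300/600 MB/s numbers you see quoted.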
I'm going to also talk a little bit now about PCI Express, and it may not make sense yet, but it'll become clear later. So I did start talking about ISA. As I said, in 1981 IBM came out with that as a bus interconnect for devices beyond the CPU and memory. Initially it was 8-bit, then moved on to being 16-bit parallel, but it could only handle a maximum of six devices. And in its final evolution, the fastest it ever got was 8 megabytes a second, which, you know, by modern standards is tragically slow. If you remember PCs from that era in the 80s and early 90s, they were the long, wider black plastic sockets. Did you ever see those in your travels? Yeah, yeah, I remember those. My grandpa had one. Yeah, I fiddled a lot with computers back at that point. And yeah, I was highly skeptical when PCI came out. So that was the next one. And they were the white ones, a little bit shorter, not quite as wide, higher density white sockets. So PCI stands for Peripheral Component Interconnect. And it's a parallel architecture like ISA; however, it was created in 1993 and it progressed from 32-bit to 64-bit, starting at 133 megabytes per second and finishing at 533 megabytes per second, which is pretty zippy. There are a couple of other little ones that happened along the way in PC land. One of them was something called AGP, which was sort of brief. That was the Accelerated Graphics Port, and that was released in '96. It was designed to be a point-to-point link, not technically a bus, and it was designed specifically for a graphics card, hence the name. But it was 32 bits, and inevitably when PCI Express came out, it kind of died, which is probably a good thing. Is it a, I think, like a brownish or orange colored connector? I'm not sure. I don't remember actually, it was a while ago.
And they put them on the motherboard slightly offset to the PCI slots so that you couldn't accidentally put the wrong card in them. That's still annoying. I remember I had one. It must have been on, I think it was on the last PC before I got my Mac. Well, I was actually going through my old box of old stuff because, well, you know, being a geek, I keep stuff because you never know. I mean, I might need an AGP graphics card one day. And I found an AGP graphics card. I'm like, oh, cool. And I'm looking around the room and there's no computers that this will fit in anymore. So I thought about keeping it and then I thought, no. But yeah, it was all the rage at one point. Another one that I'd actually forgotten about, but when I was doing the research for this episode I came across it and I'm like, "Oh, I remember that briefly," and that was PCI-X. Did you ever come across that one? Hmm, I don't know. It's a weird one. It was server-side. So what they did is they basically took the PCI bus and just doubled the width of it. And it was essentially only implemented in servers that I ever saw, which is why I only came across it once. And they had these honking great big long cards, and inevitably in the end it was a very niche thing and ended up dying, which is again a good thing, I think. And eventually PCI Express came about, and hallelujah. So PCI Express is the motherboard equivalent of SATA. In other words, it was the transition from the parallel architecture to the serial architecture, because serial is just better for higher speeds. So PCI Express is actually not technically a bus. And this is where people get a little bit confused. It's not actually a bus, because on a bus you've got a whole bunch of address lines and each of the cards is given an address and all the data travels on the same bus. If I'm talking on the bus, you can't talk on the bus, blah, blah, blah.
And it's a shared access medium; that's what a bus is. However, PCI Express isn't. It's like AGP. It's a point-to-point connection. So imagine that you've got a bunch of channels, or lanes as they call them, and you can then connect a card in and it'll say, "Look, I need two lanes," and the controller says you can have, you know, five and six or whatever. And so then that point-to-point link is established and that's what it talks on. No one else talks on those lanes. There's no addressing beyond that. It's simply negotiation and then that's it. So there have been, technically, three versions, but the fourth one is imminent, so call it four versions since it was created in 2004. Each of the versions is essentially defined, well, for all I care about talking about today anyway, by the data rate for a given lane, with up to 16 lanes in PCI Express. In version 1, each lane is 250 megabytes a second, version 2 is 500 megabytes a second, version 3 is 985 megabytes a second. A couple of years ago, they started to release motherboards that support that. And version 4 is 1,969 megabytes a second. That's a bit of a screamer, but that's coming up soon. So the idea is a card could say, "I need one lane, I need two lanes, I need four lanes," whatever. And then of course, you simply accumulate those data rates to get all the bandwidth that you need for whatever your application might be. And that's the end of talking about buses. It'll make sense in a minute, so please bear with me. Okay, so now we get to talk about memory, obviously, because we're talking about SSDs. Okay, so we're breaking down memory into two sets of buckets. The first bucket, or category, whatever you want to call it, is the difference between volatile memory and non-volatile memory. It simply relates to whether or not data can be retained without power, aka electricity, applied to it. I don't really want to go into too much depth about the history of RAM and ROM, like DRAM and all that sort of stuff.
Not really going to talk about that. Not this time anyway. In general, essentially, ROM is read-only memory, meaning that you cannot modify it. Well, actually, in fairness, it can't be modified without difficulty. Perhaps that's a better way of putting it. And I know it sounds a bit fluffy, but I'll talk about it a bit more in a minute. Whereas RAM is random access memory, but random access memory is not the opposite of ROM. Random access simply means that you can access different parts of the memory randomly whenever you feel like it. It's like, oh, I'll have this bit, I'll have that bit, I'll have that other bit. And they're all completely all over the place. It's totally random, and you can access them whenever you like. So that's RAM. So as I said, it sounds a bit funny saying that read-only memory is difficult to write to, but there is of course the traditional standard ROM, and it was burnt in a factory, sealed up in a box and inserted into a socket, and that was the end of it. You never got an update. And I remember changing out some updated firmware on an old printer a long time ago. I think I actually talked about that previously. You literally had to pull the thing out and put the new firmware in and make sure you didn't bend the pins. God, if you bend a pin, oh. 'Cause of course these things are shipped from the States and we're in Australia, and when you bend a pin on those things, if you try and unbend it, it almost snaps off the first time you try and straighten it, 'cause the material it's made out of just can't handle dislocation. So you work it back and forth and it snaps first time. Terrible. Anyway, so I'm glad that's gone. But the next kind of ROM was an erasable, programmable read-only memory, which was called an EPROM. And what they had is they had usually a small quartz window and they usually put a sticker over the top.
And usually on the sticker, you'd have some kind of warning: if you remove this, then you'll lose everything and the world will end, that kind of thing. Sometimes you'd also put the version number of the firmware that you wrote, that sort of thing, whatever. And in order to actually erase these things, you would take the sticker off and expose it to ultraviolet light. Now, the pros, they had their little ROM cookers, little ultraviolet thingies, like you put your ROM in there, take the lid off, slide it shut, turn the light on. And it was like a little black box with a light. Did you ever come across any of those, Ben? No. No. Well, the pros had those. When I was mucking around with EPROMs originally, before I started moving on to the EEPROMs, which I'll talk about next, I actually didn't have an ultraviolet light source, except for the sun. Because I was into electronics when I was a kid and used to build the kits from a local electronics store, kind of like Radio Shack, but called Dick Smith Electronics. Funny thing was I actually ended up working there later on. So anyway, when you downloaded the firmware, you had to start off with a clean ROM. So, yeah, I actually took the little sticker bit off and put it out on the windowsill and waited for the sun to erase it. Of course, it took a hell of a lot longer. And one time I did it and I didn't leave it long enough. And I got back, and before you wrote to it, you had a program routine you'd run to check to make sure the thing was actually blank, like all FFs. And nope, it wasn't. So then you have to go back and start all over again, just to be sure. So you're always wondering, have I left it in the sun long enough? Have I cooked it long enough? Oh, it's painful. Very painful. Anyway, the particular physics effect, and I won't talk about it too much, but just to mention it, is called hot carrier injection.
And that's important later on; we'll get back to that. And that's the way you would reprogram them. There's a wiki article I've linked to if you're interested. It's very interesting, but a bit too physic-y for what I want to go into today. So, EEPROMs, which is what most people start to think of when they think of programmable ROMs: they are electrically erasable, programmable read-only memories. So, EEPROMs, much better than trying to say that mouthful. So there's no mucking around with ultraviolet light, which is great, a hell of a lot better. Developed by Intel in 1978, but I did not actually come across one until, oh, jeez, it would have been 92, 93, I think. And it was this big cool thing, because I built an EEPROM programmer, which was a kit from Dick Smith. And you'd load up the DOS software, and it was a black box with a socket on top. And you would simply put your EEPROM in there. And all it does is apply a high voltage to the chip, and essentially byte by byte it would wipe out those segments, and then you could write to it and turn it back into a ROM. The problem was they were expensive and, honestly, they're quite slow compared to RAM. So, they never really took off that much. Anyway, they erased and wrote data using field electron emission. Feel free to look that one up as well. It's also called Fowler-Nordheim tunneling, which is much cooler sounding. There's a wiki article, read up if you like. Okay, now we're almost ready to talk about flash. So through all this evolution, there were essentially two paths that silicon went down. There was the bipolar junction transistor, the BJT, or there was the field effect transistor, the FET. And FETs are the ones I want to talk about. I mean, I have talked about IGBTs before, insulated-gate bipolar transistors, and they are used predominantly for power switching. So, bipolar transistors eventually evolved into IGBTs, which are sort of a hybrid between FETs and BJTs.
And that's great for power electronics, but we're only really interested in the FETs. So the FETs have a common channel, and then there's a, think of it like a groove around the outside. You put some N-doped silicon around the P channel, and by applying a voltage to that, that generates an electric field, and that field will essentially choke off the flow. Think of it like a pipe with water going through it, and you squeeze your hand on the pipe and that stops the water from flowing through. After a long period of time it would probably hurt your hand as well, but still, that's essentially the principle. The problem with FETs is that they leak. When I say they leak, I mean the voltage you apply will eventually inject or extract electrons that are trying to go through the channel. What I need to do is insulate between the gate and the actual channel. So what they do is they deposit an oxide, silicon dioxide, which kind of sounds a bit weird because the whole thing's actually made out of silicon. But anyway, silicon dioxide acts as an insulator and insulates that gate from the channel quite effectively. And they call that a MOSFET, a metal-oxide-semiconductor FET. So flash memory is technically an EEPROM, but the mechanism isn't exactly the same as ordinary electrical rewriting. It's kind of like a modified MOSFET. So if you can imagine a MOSFET with a single gate, flash memory has an additional gate that they call the floating gate. So what we would normally call the gate on a MOSFET is, in flash, referred to as the control gate. So all this actually was invented a while ago. It was actually invented back in 1980, and the name was coined, now I know I'm going to mangle his name, by a gentleman from Toshiba, excuse me, called Shōji Ariizumi. He noted that the process of erasing flash was just like using a flash on a camera.
The funny thing is, though, despite the fact that Toshiba invented flash originally, it wasn't them, it was Intel that actually mass-produced the first commercial flash memory. And that was in 1988. So it was actually quite a big gap before anyone went and took the idea and industrially produced it. All this time, of course, EEPROMs were coming along and they still had their issues, but that was the first time it was built. And they called it NOR flash, as in a not-OR gate, a NOR gate. So NOR flash, again, uses the same Fowler-Nordheim tunneling for erasure, and it uses hot carrier injection for programming. So essentially, like the effect we were talking about with the UV-erasable parts, it's hot carrier injection that sets the state of your floating gate, which is what stores the data in the flash. Now, NOR flash is more expensive, so they came up with a different kind of flash called NAND flash. I don't want to go down into too much detail as to the exact differences. Again, I've got some really good links. There's actually an EE Herald article that's really interesting. It's a good read and I highly recommend checking it out. And of course, there's a couple of Wikipedia entries as well. So if you really want to look into the physics of the difference between NOR and NAND, please do, but I don't want to go on or I'll be going for hours. So NAND flash uses a different phenomenon called tunnel injection and tunnel release for writing and erasing respectively. And it is essentially a much cheaper way to build, although it is not as fast. There's also another article on Tom's Hardware entitled "Should you upgrade to an SSD?" that describes a lot of what I'm talking about in pretty easy to understand language. So I also recommend checking that out. So flash memory, therefore, falls into two categories, NAND or NOR. And the reason NOR is faster is that it gives full byte-by-byte access.
It's what you would call a non-volatile RAM; that's how it would be categorized. It's great as a ROM replacement for firmware, but because it's so expensive, and the memory that's retained in it needs to be refreshed more regularly than NAND would, it's not really useful as a hard drive replacement. NAND, although it only gives you block-by-block access, can be packed at a much higher density, has about 10 times the lifespan, and is a lot cheaper. Therefore NAND is far more analogous to a hard drive: a hard drive has sectors, and with NAND, you've got blocks. So it's more analogous to hard drives than it is to RAM. And therefore, it's not really a good replacement for RAM, but it's a great replacement for hard drives. So there's two kinds, single-level cells and multi-level cells, SLCs and MLCs, which we did briefly touch on in episode nine, I think it was, towards the end, when we were talking about flash as a backup method. So I said I was going to talk about it again; here we go. The difference with the multi-level cell is that you get two bits instead of one, hence the multi. So you've got two bits per gate, and two squared is four, so you've got four different states for that particular cell, which doubles the data density. In terms of longevity, it's about 10 years. And again, this is something I talked about previously. Sorry, it was episode 7a. My apologies. I thought it was episode 7. And that particular kind of flash is hereafter, when I say flash, what I'm talking about. I'm talking about NAND flash. Okay. And mostly MLC, because MLC is the one that everyone likes to use, because it's cheaper and still pretty fast. Okay. So, SLC is faster than MLC at writing, but MLC reads faster, not quite two times as fast.
But this is counteracted by the difficulty of reading the multiple bits at once, because you've essentially got to read two bits of information out of one cell. So that leads to more requirement for error correction; they're more error-prone. And so you can add the error correction in, that's fine, but obviously you take a performance hit for doing that. So the bottom line, when you boil all that away, is that SLC is faster but it costs more, and that's why you generally don't see it so much. Okay, so wear-out mechanisms. Inevitably, the more you read and write to a block of flash, the more it degrades its ability to contain that charge in the floating gate, and after a certain period of time, that charge will simply dissipate out of the gate and the data will be lost. At which point the controller on the SSD will flag that block as bad and it'll then not use it anymore. So the controller tries to manage the read/write cycles evenly across the entire SSD. They call that process wear leveling: trying to make sure that each little cell gets the same amount of usage as every other cell. Harder than it sounds. Just quickly, another case for SLCs is that they also average about 10 times the lifetime of MLCs. So SLCs will have about, let's just pick a number, say 100,000 cycles. Well, obviously being 10 times worse off, the MLCs will only come in at about 10,000. But having said that, don't get too alarmed. If you had an eight gigabyte NAND MLC SSD, if you're keeping up with all the acronyms, let's say it's got standard 4,096-byte blocks in it, then it's going to take 75 years of average usage to actually kill it. So it's not like it's that big a deal. When I say kill it, I mean kill every single cell in it. So, you know, it's not as big a deal as you might think. But still. All right. That covers that bit. So SSDs, just like IDE hard drives, have two components.
They have the controller on the drive and they have the memory itself, which is just flash memory chips, integrated circuits. And people often refer to the controller as just the chipset. So they'll say, what chipset's your SSD running? And those chipsets need to deal with address translation, because computers are expecting to see the logical block addressing structure that you'd get on a hard drive that's got sectors. Yeah, usually 512 bytes, more recently 4 kilobytes. But the point is, it needs to mimic that, to some extent translate between the structure of the SSD's memory and what the operating system is expecting to see. So anyway, the predominant popular chipsets tend to be the SandForce ones, Micron, Intel and Samsung. There are others of course, it's not an exhaustive list, but those are the big ones. SandForce is probably the most well-known. I guess the Intel ones are as well. So two principles behind speed on SSD controllers are data striping, which is essentially like RAID 0, and data compression, which SandForce also push. Now, using both of those techniques, and based on the level of striping and compression that's possible, obviously that varies based on the kind of data you're compressing. So text will generally compress a heck of a lot better than images and video. So that's going to give you quite a wide variance in performance between different solid-state drives. That leads to some of the confusion surrounding, well, which is better, which is faster, and under what circumstances is it faster? An interesting aside with RAID 0 is that RAID 0 does not actually give twice the performance.
The idea of data striping, and I encourage you to look it up if you're not familiar with it, is that you have two drives and you put a strip of the data on each drive. Therefore, when you want to access it, you simply grab one half from one drive and the other half from the other drive, combine them together, and then you're done. So if you've got an access time of, let's say, 10 milliseconds, which is absolutely atrocious, just picking a number, and you have two 10 millisecond drives, well, if you're going to grab half from each, then in 10 milliseconds you'll grab the whole lot. Whereas if you were to try and grab the whole lot from one drive, it would take you 20 milliseconds. And that's ludicrous. I mean, milliseconds is terrible, but still. The point is that, in theory, you would think, oh yeah, twice the drives, therefore I'm going to get twice the performance, but that's not true at all, because you've got to combine the data between them and there are other issues you've got to deal with. It's not that simple. It's more like 20 to 50%, and that's hotly debated, because it varies based on the strip size, the usage pattern of the drive, and what sort of data you're transferring. Some data lends itself to it; with larger files, for example, you'll see bigger gains. So RAID 0 is not the perfect solution, and some people really don't like it. Anyway, circling back to SandForce. They were founded in 2006 by two very clever gentlemen, Alex Naqvi and Rado Danilak, I think that's how they're pronounced, apologies if I've mangled those names. They released their first drive in 2009, and then after a few years they sold themselves to LSI; that was in 2012. Their drive controllers are very, very popular, and I mean, they've got some very smart people working for them.
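To make the striping mechanics concrete, here's a minimal toy of RAID 0 distribution and reassembly. The strip size and data are arbitrary illustrative values, and real controllers do this in hardware with far more going on.

```python
# Toy model of RAID 0 striping: data is split into strips laid round-robin
# across two drives, and a read pulls strips from both drives concurrently.
# Idealized: real RAID 0 gains are closer to 20-50% because of combining
# overhead, strip-size effects, and access patterns.
from itertools import zip_longest

STRIP_SIZE = 4  # bytes per strip, tiny for illustration

def stripe(data, n_drives=2):
    """Distribute consecutive strips round-robin across the drives."""
    drives = [[] for _ in range(n_drives)]
    for i in range(0, len(data), STRIP_SIZE):
        drives[(i // STRIP_SIZE) % n_drives].append(data[i:i + STRIP_SIZE])
    return drives

def read_back(drives):
    """Interleave strips back into the original byte order."""
    out = []
    for strips in zip_longest(*drives, fillvalue=b""):
        out.extend(strips)
    return b"".join(out)

drives = stripe(b"ABCDEFGHIJKLMNOP")
assert read_back(drives) == b"ABCDEFGHIJKLMNOP"
# Ideal read time is max(drive access times), not their sum: two 10 ms
# drives deliver the whole file in ~10 ms instead of ~20 ms from one.
```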
And SandForce controllers are used by all sorts of different companies like OCZ, OWC, Transcend, Kingston, and that's just a few, there's a lot more. So SandForce is very, very popular, and with good reason; it's a very good chipset. Their controllers are designed for MLC NAND flash. I guess their business model was that they wanted to be a bit like Intel. Like Intel says, here's the CPU, you put it in your computer and put an Intel Inside badge on it. Unless of course you're Apple, in which case you don't put stickers on your damn laptops because it looks terrible. At least I think it does. Anyway, that was SandForce's business model, and still is as far as I'm aware. There were some problems, though. Not sure if you came across this one at all, Ben, but I actually had a friend whose drive was affected by this. They had issues with their firmware for a couple of versions, where drives were getting some corruption. Do you remember that, a few years ago? - Yeah, I remember reading about it, but no, I think I just got lucky, or didn't get unlucky, I suppose. - Well, I didn't have any SSDs at the time. The first SSD I owned was actually when I bought my MacBook Air, and it's not a SandForce controller anyway; I'll get to that later on. But yeah, they had issues and they suffered a little bit of brand perception damage, I guess you'd say. Yeah. Which I guess was very unfortunate for them. And I'm not sure if that played a part in their moving to LSI or if that was a result of the move to LSI; I don't know the exact timing of that, but I do know that they had some issues. So it is a kind of important thing to get right. Anyway, the thing about the SandForce controller, part of their algorithms to get the best possible wear and the best performance out of your drives, is that they do something called over-provisioning.
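As a quick numeric sketch of what that over-provisioning does to the capacity you see at the checkout: the controller keeps a slice of the raw flash for itself. The exact 1/16 reserve here is an assumption chosen so the familiar marketed sizes fall out; it's the "about 7%" figure in round numbers.

```python
# Over-provisioning sketch: the controller reserves a fraction of the raw
# flash for housekeeping (garbage collection, block remapping) and only
# exposes the remainder to the user. A 1/16 reserve, roughly 7%,
# reproduces the odd-looking marketed SandForce capacities.

OVER_PROVISION = 1 / 16

def usable_gb(raw_gb, reserve=OVER_PROVISION):
    return raw_gb * (1 - reserve)

for raw in (64, 128, 256, 512):
    print(f"{raw} GB raw flash -> {usable_gb(raw):.0f} GB usable")
# 64 -> 60, 128 -> 120, 256 -> 240, 512 -> 480: the SandForce sizes.
```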
When I first started looking at SSDs, I was tearing my hair out because I'm like, "Why are these drives 480 gig? Why are they 240 gig, 60 gig?" Because you know that everything's going to be in base two, so it'll be like 64, 128, 256, 512. Where has this other stuff gone? And it comes back to the SandForce controller: the SandForce controller over-provisions. In other words, it takes away some of the user's space. So you may be buying 64 gigabytes, but you're not allowed to use 64 gigabytes, because they're going to keep about 7%, and they call that over-provisioning. That space handles background tasks like garbage collection, for example. There's a whole bunch of little things they do with it. I say little things; they're complicated. Again, there are a few good Wikipedia articles in the show notes, and I encourage you to have a read of them if you're interested in exactly what they do. Some of it is very cool. What that means for us as users, though, is we only get 60 gig of that space. So four gig is gone; we just never get to see it at the usability level, which sucks. And a lot of the other controllers don't do that, so it's predominantly a SandForce thing. Okay, maybe you've heard about something called TRIM. What TRIM tries to do is essentially follow a set of rules at the operating system level to reduce the amount of wear and tear on an SSD specifically. Because of the wear-out mechanism on an SSD, you don't want the OS writing to and from the drive all the time. So the idea with TRIM was that it was a way to reduce that specifically for SSDs, and for quite a while OS X did not support it. I think it was 10.6.8 that brought in the initial TRIM support; I don't have that in front of me. So what was that, four or five years ago, something like that.
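What TRIM actually buys the controller can be sketched with a toy model. Without TRIM, pages the operating system has deleted look identical to live data, so garbage collection dutifully copies them before erasing a block; with TRIM they're marked stale and skipped. The page counts below are made up purely for illustration.

```python
# Toy model of garbage collection with and without TRIM. Fewer pages
# rewritten per reclaimed block means less write amplification and
# therefore less wear on the flash.

def pages_copied_during_gc(live, os_deleted, trim_enabled):
    """Pages the controller must rewrite when reclaiming one block."""
    if trim_enabled:
        return live                # stale pages are known, so skipped
    return live + os_deleted       # deleted-but-unmarked pages copied too

# A 128-page block: 40 live pages, 60 deleted by the OS, 28 never written.
print("without TRIM:", pages_copied_during_gc(40, 60, trim_enabled=False))  # 100
print("with TRIM:   ", pages_copied_during_gc(40, 60, trim_enabled=True))   # 40
```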
Truth is that there were SSDs around before 10.6.8, so TRIM support is a big deal. Again, not going to dive too much into TRIM, but suffice to say, the idea is that it's the operating system plus the SSD working together to a common standard in order to reduce the wear and tear on the SSD. So it's very important. And at this point pretty much every operating system has it; Windows has had it since Windows 7. Other ways you can improve performance on SSDs, just like on hard drives, include caching. So they put some memory in there to buffer things. You send a whole burst of data and it buffers it, writes it in a nice orderly fashion, packs it together, and buys itself some time. So that buffering, that caching, also helps. It doesn't help as much as on a hard drive, though, because the read/write times are just so much faster on flash than they are on a hard drive. Still, every little bit helps. Again, there are lots of really good articles on this subject, so I've linked to quite a few of them, and if you're really into this and you want to learn more about it, there's definitely some good stuff. There was a conference a few years ago that was really good; there are a couple of those in the list. There was a presentation by Ryan Fisher, senior applications engineer at Micron Technology, which I mentioned earlier is one of the controller manufacturers and designers, and it was all about flash performance. So if you want to go learn all about multi-plane, interleaving and multi-channel techniques, feel free to read up on it; I don't really want to go to that level of detail. There's another slide deck in that list as well, by a guy called Robert Sykes from OCZ Technology. Good information in it, although I will admit it's probably not the best formatted slide deck.
It was one of those slide decks where every page is full of text. That was one of them, but some of the information is good, so if you just overlook the slide formatting, it's still good stuff. Anyway, the structure of an SSD. Let's talk about the structure of the memory now, 'cause we've talked about the controllers ad nauseam, so we're done with that. So, the flash memory itself. When I quote these figures, I'm going to quote the user data area as the first figure, and the second figure is what they call the spare data, for want of a better name, I guess. Seems like they could have come up with a better name than that, but anyway. The spare data is used for parity and for page health purposes. It never counts towards the actual storage total, and when you actually look at these flash chips and drives and so on, you will not see it quoted most of the time. On the majority of drives that I've seen, you don't see the split; you just see the first number, the user data. That's all the user cares about, so you never see the spare data, but it's there. So we're going to start at the smallest possible unit and work our way up to the biggest. There are six levels that they talk about: pages, blocks, planes, dies, banks, devices. Did you get all that? Start with pages. A page is eight kilobytes plus 512 bytes. That's the smallest readable unit: when you're reading from flash, it's the smallest unit you can read. A block is essentially a page multiplied by somewhere between 128 and 256 pages, and that brings you up to about one megabyte plus 64 kilobytes. That's the smallest writable unit. Of course, we're talking about MLC here. So when you're writing a block, the pages are written sequentially, but essentially they're written as a block. That page count per block will vary from 128 to 256.
It's usually one or the other, depending on the overall size of the design of the chip. Planes are essentially blocks times 4,096, so that takes you up to four gigabytes plus 256 megabytes of spare data; that's a plane. And you'll have all of this laid out in a plane in the silicon, so think of it like a physical layer. Now, when you actually put that silicon in a die, you're going to have more than one layer, stacked like pancakes. You could have a single plane, of course, but typically it's two. Some of the more recent ones have gone to four, but it's always that same base-two progression: one, two, four, eight, 16, 32, blah, blah, blah. So we've got one, we've got two, some have got four. And now we're at the die level, so effectively now you're talking about a chip. Physically, you're now holding in your hand a chip in a die, and that's up to, let's say in this example anyway, eight gigabytes plus 512 megabytes of spare data. From that point we start assembling them into banks. So we say, okay, I'm now going to have, let's say, two of these. So I go two times eight, and I'm now up to 16 gigabytes in that bank, oh, sorry, plus one gigabyte of spare data; it keeps creeping up. And then you're at the top level, the device level: now you're buying a physical box that you're going to put in your computer, or a physical card. So, stock-standard SSDs. It's basically an SSD in hard drive's clothing, same dimensions as a two and a half inch hard drive. Okay, I love OWC, and they make a whole bunch of different drives. No, we are not sponsored by OWC, it's just that they're great for an example. So, the Mercury Electra 6G and 6G Pro, or Pro 6G, whichever way you're supposed to say it. Look at the 480 gigabyte models: $380 US and $448 US respectively. So that's the difference between the standard and the Pro.
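Before digging into the Electra itself, the page/block/plane/die/bank hierarchy just described stacks up neatly, using the 8 KB + 512 B page and the 128-page block from the example:

```python
# The page -> block -> plane -> die -> bank hierarchy from the discussion,
# tracked as (user bytes, spare bytes) pairs. Figures follow the example
# in the text: 8 KB + 512 B pages, 128-page blocks, 4,096-block planes,
# two planes per die, two dies per bank.

KB, MB, GB = 1024, 1024**2, 1024**3

def scale(unit, factor):
    user, spare = unit
    return (user * factor, spare * factor)

page  = (8 * KB, 512)
block = scale(page, 128)    # smallest writable unit: 1 MB + 64 KB
plane = scale(block, 4096)  # 4 GB + 256 MB
die   = scale(plane, 2)     # 8 GB + 512 MB, one physical chip
bank  = scale(die, 2)       # 16 GB + 1 GB

for name, (user, spare) in [("page", page), ("block", block),
                            ("plane", plane), ("die", die), ("bank", bank)]:
    print(f"{name:5s}: {user / GB:10.6f} GB user + {spare / MB:8.3f} MB spare")
```

Multiplying each level by its factor reproduces exactly the figures quoted: the spare area grows in lockstep with the user area, which is why it "keeps creeping up".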
Now, here's what they say in the blurb: "While any OWC SSD will provide significantly faster performance than a traditional hard drive, the OWC Mercury Extreme Pro is optimized to handle incompressible file types, like those used by audio, video and photography professionals, for the fastest performance available." All right, so that's the marketing guff. What the hell does it actually mean? How did they do it? Why did they do it? The differences between the two, in practical terms, are that they give you an extra two years of warranty on the Pro, plus the higher incompressible data rate. And here's where it gets a little bit disingenuous: they quote the maximum speeds for these drives as 556 megabytes a second read and 523 megabytes a second write. And the Pro and the standard model actually have very, very similar numbers there, plus or minus about 5 or 10 megabytes a second; essentially the same read and write rates. But then they have another figure, for incompressible data, and there it's a massive difference: the Pro model is twice as fast as the standard model. So this whole incompressible data thing, this is where I say it gets a little bit disingenuous. This is using a SandForce SF2251 controller, and just for the sake of completeness, there are sixteen 32-gigabyte Intel/Micron 25-nanometer asynchronous NAND ICs, all combined on a SATA 3 interface. But the SandForce controller is the point. The problem is that it relies on compression in order to reach the peak data rates that you see, the 556 megabytes a second, for example. If you give it something that's incompressible, or is highly compressed already, what's it going to do? It can't get blood out of a stone. You can't compress data any more than it's already compressed; there's a limit. So essentially, it's never going to reach that top speed on the standard model.
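The compressibility gap is easy to demonstrate. Here zlib stands in for the controller's compression, which is proprietary and surely different, but the principle is the same: data that shrinks before it hits the flash effectively transfers faster, and already-compressed data doesn't shrink at all.

```python
# Why a compression-based controller flies on text but stalls on media:
# compressible data is stored smaller, so effective throughput rises;
# random bytes (a stand-in for JPEG/H.264 payloads) don't compress.
import os
import zlib

text  = b"the quick brown fox jumps over the lazy dog " * 1000
photo = os.urandom(len(text))  # mimics already-compressed media

for label, data in (("text", text), ("incompressible", photo)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{label:15s}: stored at {ratio:.1%} of original size")
# Text stores at a tiny fraction of its size; random data at ~100%,
# so only the former ever approaches the quoted peak transfer rates.
```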
The only way you get higher than that is by having a higher-spec flash chip on it, which is what the Pro version has. So what you end up with is essentially a different kind of NAND flash memory: they use Toggle NAND instead of the asynchronous NAND. And again, there's a Wikipedia article going into those subtle differences, but the point is that Toggle NAND is much faster. We'll come back to it when we talk about the Excelsior in a minute, but it's also more expensive. So that's where the extra, what is that, extra $68 comes from: the Toggle NAND. So I do find it quite frustrating, because on the surface the two drives look pretty similar, but once you read a little bit further down and realize how the SandForce controllers actually get some of their performance claims and performance boosts, to me, it's a little bit off. I don't think it's fair. Under certain conditions, if you're transferring a bunch of text files, yeah, it'll blitz it, sure. But give it anything else and it will not perform anywhere near as well. And that's what sucks. So that's something to be aware of. Okay. The next one I want to talk about is the one in my own laptop that I'm recording on right now, and that is my 2012 13-inch MacBook Air. I love this laptop, best laptop I've ever owned. Anyhow, it has a 256 gig solid-state drive in it. Now, you noticed I said 256 gig, so now you know what you know, because you've been listening and paying attention: it doesn't have a SandForce controller in it, because if it did, it'd be 240 gig. No, it has a Samsung 830 series controller. The actual part number, if you really care, and you probably don't, but here it is anyway, is S4LJ204X01. Aren't you glad I told you that? A 20-nanometer integrated circuit. So this particular one has a whole bunch of different mechanisms that don't include eating up that 7% the way the SandForce does.
And it still performs relatively well, though I suspect it may have other issues, with longevity perhaps, but that's more difficult to quantify. Design choices, I guess. Now, the particular model I've got has some 21-nanometer MLC 32-gigabyte ICs on it, eight of them in total. And all of that is on a very narrow circuit board with an mSATA connector on it: mini-SATA. And that operates at SATA 3 speeds. Just for the sake of completeness, the actual flash chip is a K9PFGY8U7B, and that particular chip with the 830 series controller actually gets 447 megabytes a second read, 401 megabytes a second write. Now, if you really, really, really, really need every last little bit of speed, and the way I say that perhaps suggests you probably don't, but if you did, then OWC also sell a model called the Aura Pro 6G that will fit in a MacBook Air, and it'll give you an extra 50 megabytes a second read and an extra 100 megabytes a second write. However, it is SandForce, so you're not going to get quite as much space; you're going to lose that 7%. And most people are not going to see a benefit from it. Unless you're upgrading from a 128 gig SSD to a 480 gig SSD, I don't know why you'd bother; I wouldn't. So, the SSDs that Apple supplies with their MacBook Airs and MacBook Pros are actually pretty good. They're not crummy ones, they're quite decent. It's the sort of thing that, once again, reaffirms why I like Apple's stuff: they tend to put in better quality, better performing components. All right. The last one I want to talk about, and then we'll be done, has kind of gone past me now. What I mean is that it was once an option, back when I had the 2009 Nehalem quad-core Mac Pro, which chewed through an enormous amount of power; for the sake of my electricity bill, I decided it was time to sell it.
It hurt a little bit to sell it, but it actually paid off; our electricity bill went down significantly. Anyway, it had a PCI Express bus. Remember we talked about PCI Express? This is why. All of the drives up until this point have been dealing with SATA 3, which is a maximum of 600 megabytes a second. So of all the previous drives, the top speed you were going to get was from the OWC Mercury Electra Pro, at around about 560 megabytes a second. That's getting dangerously close to the SATA 3 limit. So what's the point of having an SSD that can transfer data faster than that if you're stuck on SATA 3? You'll never realize anything faster, because you've got that bottleneck, like we talked about at the beginning. It's a bottleneck problem. So how do you get rid of that bottleneck and get more speed out of an SSD? The answer is you stop using SATA and you start using PCI Express directly instead. And that's exactly what OWC did. And not just them; there's another drive called the RevoDrive you may have heard of. These are PCI Express solid-state drives. And the funny thing is, I talked about my friend's father who put together that RAM drive, and plugging cards full of memory chips into a PC kind of resembles what he built years ago. Mind you, this stuff's a lot faster and a lot bigger in terms of storage. So, the OWC Mercury Excelsior E2; it's up to Rev 2, let's call it that. The 480 gigabyte model, the same size as the Electra. The Electra Pro model was $448; 480 gig of this is $630 for the same amount of space. So why the extra money? Well, it's a PCI Express version 2.0, two-lane card. Given that, we know it gives us a maximum bandwidth in and out of the card of 1,000 megabytes a second, which means we've broken past that 600 megabytes a second limit of SATA 3. So we're now able to flex our muscles a bit more with speed.
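That 1,000 megabytes a second figure falls straight out of the PCIe 2.0 numbers: 5 gigatransfers a second per lane, with 8b/10b line encoding eating a fifth of the raw bits.

```python
# Where the "1,000 megabytes a second" ceiling comes from: PCIe 2.0 runs
# at 5 GT/s per lane with 8b/10b encoding (8 data bits per 10 bits on
# the wire), giving 500 MB/s per lane; two lanes double that.

def pcie2_bandwidth_mb_s(lanes):
    gigatransfers = 5e9            # PCIe 2.0: 5 GT/s per lane
    payload_fraction = 8 / 10      # 8b/10b line-encoding overhead
    bytes_per_second = gigatransfers * payload_fraction / 8
    return lanes * bytes_per_second / 1e6

SATA3_MB_S = 600
print(f"PCIe 2.0 x2: {pcie2_bandwidth_mb_s(2):.0f} MB/s "
      f"(SATA 3 tops out at {SATA3_MB_S} MB/s)")
# PCIe 2.0 x2 delivers 1000 MB/s, so the bus is no longer the bottleneck.
```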
The card uses a SandForce SF2282 controller on each daughterboard, and there are two daughterboards operating in RAID 0. Actually, sorry, they don't call them daughterboards, though technically that's what they used to be called; they call them blades. I guess that sounds kind of cooler, but anyway. Check out my blades. So each of these blades, I almost called them daughterboards again, has its own SandForce controller, and they're operating in RAID 0. As for what they quote: there's a difference between the Sandy Bridge and Ivy Bridge figures, so let's just stick with Sandy Bridge for the read and write speeds. 780 megabytes a second, that's your read speed, and 763 megabytes a second write speed. So that is well above the previous drives, the Electras, and definitely the MacBook Air's solid-state drive. It's huge. And the only reason it can do that is because it's on PCI Express. So, essentially... sorry, I just lost my place. The next point. All right. - I'd just like to point out that according to OWC, at 820 megabytes per second, JPEGs ignite. Yeah, everything on this page is on fire. - Yeah, you're right, it does say 820. That, if you dig, is the Ivy Bridge figure. The reason I didn't say it is that when you put the card into a Mac Pro, you would only get the 780 meg figure. It's all well and good to quote 820 for PCs. And again, this comes back to, look, I love OWC, but come on, there's a little bit of BS going on, right? The maximum speed depends on the architecture of the Intel CPU it's attached to. So in PC-land, that's something to be aware of, and you could realize the 820; but in Mac-land, well, honestly, the Mac Pro is now a trash can, right? So you can't fit a standard-form-factor PCI Express card in there anymore. You're out of luck; you're constricted to the smaller format.
So this specific card that I'm referring to here is no longer an option if you're buying a brand new Mac Pro. The only reason I'm mentioning it is because I pored over the detail on this and was obsessing about it. I'm like, I am so going to get one of these things. I'm going to make my Nehalem Mac Pro absolutely burn some JPEGs, just like on their website. But no, I didn't do that. Why? Because of the power bill, and that was the end of that dream. - Burning JPEGs is expensive. - Damn right. And think of the JPEGs. It can't be a pleasant feeling. So anyhow, that's the unfortunate thing at this point: the Mac Pro no longer supports it, which is a shame. But if you've got a PC or an older Mac Pro, and plenty of people still do, these things would absolutely blow anything else out of the water. The flash on them is 24-nanometer Toshiba MLC, toggle-mode, just like the Pro version of the Electra. Now, I tried very hard to find a high-resolution photo of these blades, but I simply couldn't find one that showed the exact chip identifier. I wanted to go down to that level and check the most likely options based on the research I did, and I just reached the end of my tether and thought, you know what, maybe it doesn't matter that much. But hey: either there are four 128-gigabyte Toshiba flash ICs on the top layer, or there are eight 64-gigabyte ICs, four on the top and four on the bottom. Not sure which it is; it's more likely to be the eight by 64 gigabytes. Either way, essentially it has 512 gigabytes of storage, which, once you take out the over-provisioning, is down to the 480 gigabytes that's quoted. In terms of speed, if you do the math, it's actually a 40% improvement due to the RAID 0 configuration, slightly more if you're on Ivy Bridge. There may be some specific performance tweaking they've done in order to achieve that, because generally with RAID 0, like I said, it's between 20 and 50%.
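Checking that 40% figure against the quoted numbers is simple arithmetic. These are the marketing figures from the discussion, not measurements:

```python
# Comparing the Excelsior's quoted Sandy Bridge read speed against the
# single-controller Electra 6G Pro's to see the RAID 0 gain.

electra_read, excelsior_read = 556, 780    # MB/s, quoted figures
speedup = excelsior_read / electra_read - 1
print(f"RAID 0 read gain: {speedup:.0%}")  # ~40%, inside the typical
                                           # 20-50% range for RAID 0
```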
The problem is it varies, and it depends on the data being written and all those provisos I mentioned before. But to pick a number, any number: their numbers work out at 40%. I think you would realise that in some circumstances, yes, but perhaps not all the time. And that's it. I considered going into USB flash drives, but to be honest, I've been going for more than an hour and I thought that might be a good point to draw a line under it. Happy to come back and talk about USB flash drives, but seriously, it's a variation on a theme, a different package. And obviously USB 1.1, 2.0 and USB 3 all have different restrictions in terms of their bandwidths, but how the SSD is attached within them is very similar. So, are you going to go and buy yourself an SSD now, Ben? - I already have an SSD, why would I need another one? - You can never have too many; you can never have enough. - I have a Fusion Drive in my Mac, in the MacBook. - I didn't want to talk too much about Fusion Drives, but that is a fascinating development, isn't it? I mean, how do you find the Fusion Drive? Do you find that it's significantly faster than just the hard drive? - Um, well, so I went from... let me look it up. What is it here? It is a... whatever dog-slow, I think 4,200 RPM, drive that came with the Mac. A 750 gig thing. You know, just one of those drives that should never have been shipped with something called Pro. So yeah, jumping to the SSD from that, it was like, oh my God, this is a completely different machine. This is what it should have been from the factory. Then I went back and kind of slowed it down by going to the Fusion Drive. But honestly, other than occasional weird slowdowns trying to find a file on the disk, or just occasionally with certain directories...
Finder will just sit there and spin for 20 seconds or so before it finally figures out what's going on. And I don't even know if that's directly because of the Fusion Drive. But other than that, I haven't really seen a big performance hit. It is kind of the same story as what you're talking about, though: with certain things, if you're dealing with a lot of text files, I mean, a lot of programming projects, it's like, pow, all of a sudden everything's night and day, you know. - Yeah. - And that really made a big difference. But beyond that, it can be hard to tell exactly what's going on. - Yeah, I'm sort of in two minds. I am going to get a Mac mini once, I think I may have mentioned this before, once they upgrade it to the 5000 series graphics and, of course, the next CPUs, the Haswell, I think it is. And yeah, I'm still waiting. Apple, come on, you can do it. I know that Clinton and I have talked about this, and one of Clinton's suggestions is that they're going to release a half-height Mac Pro and that's the Mac mini, which I really wish they'd do, 'cause that would be supremely cool. But I can't see it happening really, not in the short term, but maybe one day. I mean, if they're going to release an AirPort Extreme that looks like, geez, I don't know, like some square drink coasters with 200 of them stacked up on your end table. That's what they look like. It just looks so strange. But anyhow. So yeah, when I get the Mac mini, I'll just get it with, as you say, the stock-standard cheap-ass hard drive. And I was sort of thinking, well, 'cause it comes standard with one terabyte now, I think: do I leave it as one terabyte, or do I also put in one of these Electra or equivalent SSDs? And then do I go Fusion, or do I keep them separate?
And it just seems to me, based on what you've said and what a lot of other people have said, that maybe Fusion's the right way to go. - Yeah, it's been, I mean, I haven't had any... it's one of those knock-on-wood kind of things, right? Nothing's gone wrong yet, but it is a little bit black magic. Although it's easy enough to set up; it's not like it's really tricky if you're comfortable with the command line. - Which everyone listening to this podcast absolutely is. Of course you are. The command line is your friend. Oh, geez, you remember when that all happened, right? It's like, oh my goodness, with OS X there's a command line. What's going on? The Mac is all over. Right. And then you look at all the things we can do, all the really cool hacks we can do, through the command line. How did you ever live without it? It's very, very handy. So anyway. - So does this Mac run DOS? [laughs] - Well, there's DOSBox. Yeah. But anyway, it was funny, I was thinking about what you were saying, and I'll find the article. We were talking about the big old RAM disks, and kind of what it looks like things are becoming now: 37signals, the Basecamp guys, did a sort of walkthrough of their hardware architecture. They're making heavy use of caching at the database and page layer, and they've got, I think it's a third-party service that puts these things together, but they're just these gigantic, essentially gigantic, RAM disks plus huge SSD storage. And all that's there for is making incredibly optimized SQL calls. I'll find it. - Yeah, it might be interesting if you could add that to the show notes. That sounds really interesting; I'd like to look at it. - They've been in the news recently, I think, regarding 37signals still happily scaling on more RAM and SSDs. I remember they put a picture on their blog.
It was just, I don't know, maybe 50 or 60 memory sticks sitting on a table, saying, this is our back end. - Cool. Yeah, I wouldn't mind one of those. - Yeah. I know I rant about my MacBook Air's SSD, but it really was night and day for me. And I have a computer at work, because, you know, corporations have got to give you a laptop. You must have a laptop. And so they give you this Dell or HP piece of crap, and it's just a new level of shocking. This thing is so tragically slow that I fire up my laptop and it's running, and two or three minutes later the work laptop finally comes up with a login prompt. Wow, I just saw the image you sent through to me. That's a lot of RAM. 864 gigabytes of RAM that they bought for $12,000. Wow. Okay. - You know, that's what the article is about. And this was from 2012. Just essentially that, hey, these kinds of architectures are now affordable and feasible, and you can start changing the way you design your applications. - Yeah, and I've heard some people talking about having unified memory. So one big blob, and you just address things wherever you like, and it all just works; there's no separation between hard drive and memory. And that's a whole other discussion, but it is a fascinating idea. In any case, I didn't have too much else to add. I think that's it. - Okay. I'll find those couple of articles for the show notes, because I thought it was interesting and they were really excited about it. Yeah. So, if you guys want to talk more about this, you can find John on Twitter at John Chidgey, the same on app.net. Check out John's site, techdistortion.com, and if you want to send an email, you can send it to john@techdistortion.com. I'm Ben Alexander.
You can reach me on Twitter at FiatLuxFM, and you should follow @PragmaticShow on Twitter to see show announcements and other related material. Thanks for listening, everyone. Thanks, John. - Thank you, Ben, and thanks everyone, especially the guys in the live chat room. Thanks for coming along. - Hey guys, take care. - Bye. (upbeat music)