Pragmatic 93: Optimal Interface

17 May, 2019


With a growing number of device categories and devices within them, it can be hard to pick which device from which category will meet your needs. Jason Snell joins John to try to determine the optimal interface for each use case and where technology is heading.

Transcript available
Welcome to Pragmatic. Pragmatic is a discussion show contemplating the practical application of technology. By exploring the real-world trade-offs, we look at how great ideas are transformed into products and services that can change our lives. Nothing is as simple as it seems. This episode is brought to you by Backblaze, gimmick-free, truly unlimited cloud backup for your Mac or PC for just $6 a month. Visit this URL, backblaze, all one word, dot com slash pragmatic for more information. We'll talk more about them during the show. Pragmatic is part of the Engineered Network. To support our shows, including this one, head over to our Patreon page, and for other great shows visit engineer.network today. I'm your host, John Chidgey, and today I'm joined by Jason Snell. How's it going, Jason? - It's going, I'm doing great. How are you? - Very good, very good. Thank you for coming on the show. It's been a while since we spoke and I really wanted to tap into your experience. You've reviewed so many different products over the years, and I wanted to have a conversation about optimal interfaces, which I know is broad, but. - Ooh, all right. - I know, so we're gonna try and keep this tight. So yeah, I'm gonna limit my depth, I'm not gonna go down to a transistor level or anything crazy, so we're just gonna stay relatively high level. And I figured we'd kind of broadly group this and just go by inputs and then outputs, and then talk about some devices towards the end. And what I'm trying to get out of this is understanding what the optimal interface is for what you're trying to achieve, 'cause there's all these different product categories out there now. It's like, what's the best thing for what application? And it's just something that I've wanted to do a show about for a while. So what do you think? - I think it sounds great. I think that there's a lot to discuss about how we use technology and what is good about that and what is disastrous about that. - Oh yeah, exactly. So without further ado, let's start with inputs. And I tried to break this down because, okay, being human, we've got five senses. So having five senses, obviously, there's a certain number of inputs that you've got. So let's start with audible ones. Like we'll start with sound being the obvious one. And I thought about it: what was the first sound input device that there was? This is input, this is not output or speakers. And I thought about, in the '80s, did you ever come across that device called the Clapper at all? - Ha ha, yeah, I actually just did a not-yet-released episode of the podcast I do with John Siracusa called Robot or Not. And we did an episode about smart devices, and one of the questions was, was the Clapper a smart device? Because you could clap two claps at a particular pace, and if it heard two claps at that particular pace it would basically toggle power to a switch. And so you could turn on your lights by clapping with a Clapper. But I was gonna say, I had a MacRecorder in college, which was a peripheral for my Mac that was basically a microphone, and it also had a line input. And that was in an era where audio input wasn't built into Macs originally. You could get audio into a Mac, you could record your own voice, you could do editing, you could get in music. And I spent so much time with that, making system beeps, but also I did a college project.
We were supposed to do like an audio project of some kind, and I did it all digitally when everybody else was sort of using tapes. And that was my, anyway, so the MacRecorder, that was my big first audio input device on a tech product, I think. - Yeah, just circling back, Robot or Not, that's an awesome podcast. If you're not listening to it, you should. And so far as the Clapper goes, yeah, it's a bit borderline in terms of that definition in my opinion. But sure, look, the Clapper, I was just trying to think back to anything before the Clapper where you could use sound to actually do something, and that was as far back as I could find. And I never owned one, but I remember watching the ads and thinking, oh, that looks really cool. But yeah, they never actually released them at that point, and I was too young anyway, you know, 'cause that was like the early 80s. And- - Yeah, and we had a, I bought at one point a tag for your keychain that was supposed to do the same thing, where you were supposed to be able to make a particular clapping noise and it would chirp. The idea there was you could find your keys, of course, and it did not work. It just didn't work. - Yeah, that's right, the key whistles. That's right, because you could whistle and the thing would beep back to you. Hmm, that's one I hadn't thought of. Yeah, interesting. Didn't work. Okay. So, I mean, in terms of that being useful as an input device to a computer, that's not necessarily that great. But obviously, dictation is really where it sort of started. And I played early on with Dragon Dictate in, I think it was the early 90s, and Dragon is still around today. But one of the things I found with dictation is a lot of those early products were very algorithm-centric, and you had to train it on the way you speak. - Yeah, yeah, I remember. So when I worked at Macworld, one of my feature writers back in that period, who was also our columnist on the back page, was David Pogue. And David had horrible RSI issues, and so he did all of his writing using Dragon dictation. And actually, one of my co-workers ended up having pretty terrible RSI as well, and so she ended up with a computer that had Dragon on it that she did all of her writing and editing on using voice control. And that was really my first good view of that. There had been, like, the Mac had some voice control stuff built into it, but it was all fairly rudimentary and more kind of a, you know, something you could look at and say, "Oh, well, that's kind of a clever trick," but it wasn't something that was entirely practical. But the Dragon stuff, you know, I definitely ended up meeting a couple of people who could not have done their jobs without it. - Yeah, I recall speaking with John Siracusa in the past, when we did our RSI episode, episode 50, and he was using Dragon for a lot of the articles he wrote for the OS X reviews. - Right, that's true. So, that's three people then. I'll add him to my list of people I know who basically needed to use dictation because they just couldn't do all that typing. Dave Pogue, you know, is an incredibly prolific writer and has always been as long as I've known him. And he was doing it with dictation, which requires, I will say, a real shift in how you think. Because I've tried to write things using dictation, and it is very hard because you have to completely change sort of like how you form the words.
It's a fascinating mental exercise to go from writing with a keyboard to dictation. But David, you know, written dozens of books using dictation. So, you know, it can be done. Yeah, absolutely. And the technology early on was heavily reliant on you training it, like reading passages multiple times and would say, "Yes, okay, that's, you know, read it again, read it again, read it again." And that was great, I think, up to a point, but it sort of, I think it really plateaued. There was sort of a limit. And because you had to put all that time and effort into training it, a lot of people just wouldn't bother. If it was something that you did, like David Pogue or like John Syracuse, or you actually really trying to get the maximum benefit out of it, you'll put that time investment in. But for the average person, it was sort of the bar was too high, I think. And I don't think it ever really took off in any large way. It wasn't cheap either, but that probably didn't help. But I think more recently, machine learning and some of that, that has really been taken that to the next level. So there's been a massive shift in the last, I'd say, maybe five years or so, with a lot of these heavily reliant on the machine learning component, a lot of it based in the cloud and not all obviously. So I mean, examples of that, of course, the Apple versions, Siri, 'cause I didn't say the hey in front, so I can say it back to front, so that's okay. Anyway, then of course you've got Cortana by Microsoft, Alexa, which is Amazon's thing, and then the Google Assistant And those ones I think have really improved it significantly because you don't have to train it about to improve its recognition accuracy. - Right, people don't remember that to do voice recognition with something like Dragon, it wanted you to read a bunch of samples out loud. And actually, Siri asks you to do it now when you set up an iPhone, but I believe what it's doing is it's actually trying to do some differentiation so that it can filter out other people's voices and make false positives be reduced and sort of like basically only trigger on your voice. But it doesn't need to do what Dragon did, which was have you read paragraphs of text in order for it to sort of like begin to understand the way that you talked. And that was a pretty big breakthrough, I would say. And this is the moment where it became something a little bit more than just a curiosity and something a little bit more interesting. - Absolutely. I mean, the challenges with voice recognition just to sort of try and round that out and we're gonna move on a little bit. But the problem is that you've got different accents. You've also got people that will slur their speech and you have got local dialects to deal with. And I think one of the biggest challenges for speech in terms of an input method is that there's always gonna be a challenge. It's gonna be noisy environments. And it's not just the fact that there's noise interfering with the signal that you're trying to get into the machine that you're trying to actually do the recognition on. It's also the fact that by speaking to the machine, you're creating noise yourself and that's gonna disturb people around you. So there's sort of a, - Right. - Yeah, and beyond the technical challenges of being able to do it accurately, that will always be an issue. 
And I think that fundamentally as well, in terms of machines being able to do something with that information, it can translate the words for dictation, but then when you wanna take the next step and start using that for command input, tracking the context of the conversation is extremely difficult. And it's, 'cause I mean, we as humans, we have a conversation. We remember what we were talking about five minutes ago. Well, most of us, I'm sure. But you know, it's like that context can make a massive difference rather than me having to backtrack and say the whole thing about, let's see, so it's now five o'clock in the afternoon. It's rather warm outside. Are you gonna put a jumper on? I can just skip to being a human. Like we know all that context. I say, you're gonna put a jumper on. And it's like, you know, intuitively, the rest of it, whereas a computer will struggle. Yeah, it is. I think the possibilities, and we've seen it in the tech world in the last 10 years, the possibilities of voice interfaces are massive, but it is complicated. It is hard. They did maybe jump into this before the technology was good enough, and now we're in this point. I mean, this happens, right? Computer technology, PCs happened when they weren't good enough, but they happened anyway, and then we just kind of had to deal with that until they got better. And this is where we are with this. I have, you know, my two thoughts about this that we haven't covered yet. One is I'm reminded of AppleScript in the sense that I feel like, bear with me, there is an uncanny valley with voice commands, which is for people who don't know Apple scripting language, AppleScript is supposedly written in English. So it's a scripting language that just uses English words and it's supposed to be an English grammar and it's supposed to be readable. You could actually read it out loud and it wouldn't sound like code. It would sound like sentences. That's the idea. But the truth is that the word order matters. And what's worse, sometimes in order to get the command you want, you have to phrase things in a way that uses English words and phrases in ways that don't sound like anything an English speaker would say. So it becomes counterintuitive because it's so familiar on one level, using those tools unfamiliarly is frustrating. And I feel very similarly about a lot of voice command stuff is it's exciting, but I do think you have to get over. And really it needs to be the technology does a better job of parsing. you have to get across that uncanny valley where I realize I can't speak to this thing like it's a person. I can't. I need to instead formulate almost like a command line interface command using words as if I'm speaking to somebody, but I'm totally not because this is not something I would ever say to another human being, but I have to talk this weird way to talk to this computer and I think that is the greatest challenge of voice interfaces is can you push it beyond that so that people can have a conversation with the computer and it can, you know, I think a lot of that is back and forth. It's not just kind of like what's the context, but it's also like how do you pick out the context if you don't get enough information? So it's, you know, I would like to be asked a follow-up question if it doesn't know what I'm trying to say, but right now that doesn't happen. 
So I think that's a big barrier, but I think the upside is enormous, because when you think about how to control a computer, we have not just like the mouse and the cursor on the screen, but we have keyboards and keyboard shortcuts and menus and lots of things that help drive software forward with features that are maybe not obvious, but they're available. And on a touchscreen device, on a small device, especially like a smartphone, I think that the makers of these products are right in sort of thinking there are lots of contexts on those devices, and also like devices without any visual interface at all, like a cylinder that lives in your home, where voice is the shortcut. That instead of a keyboard shortcut, your voice is a shortcut. That you say, "Do this thing." And it's an interface mode that is better suited in some ways, and socially not in others, when you're just shouting things out loud while you're walking down the street. But anyway, I think there's a huge upside there, but there is a leap that we need to make for it to be something that is not in that uncanny valley of, "I'm talking like I'm talking to a human being, but I'm unable to say anything the way I would to a human being," which is where we are now, I feel like. - Yeah, absolutely. And I think that's key, is that when we learn how to speak and we have conversations with people, it doesn't prepare us for having to speak in, as you say, a prescribed order in order for the machine to understand what you're talking about. And so long as that's the case, there's an additional level of training that we need to have as a human trying to interact with a computer, until those algorithms improve and it can track context. Something that occurred to me when you were talking, Jason, is that in a room, if we say, "It's pretty bright," we might point at the curtains, and the suggestion would be, well, it's very bright, could we close the curtains, for example? It's the sort of thing where I feel like, because we also rely on other factors in the room visually as well, in order to be more natural it'll inevitably have to take in gesture tracking and movement tracking and everything, which we'll get to in a minute. I think the biggest leap for us in the near future at least is not the context problem, because I think the context problem is extremely complicated, as you say. It's more just the dictation, and for that it just needs to be accurate. At the moment, accuracy is still really not quite good enough, I think, that's been my experience. And I think that when I was learning typing at school, we had a target of a certain number of words per minute at 95% accuracy. I think 95% is a reasonable sort of a number to pick. If I can dictate to a machine and it gets it right 95% of the time, and it can keep up with my spoken word without me having to slow down, and that's an average of between 150 and 190 words per minute. I'm sure there's fast talkers that can talk faster than that, but I'm not sure that's pleasant to listen to, like the guys that call the race at the racetrack or something like that, or, I don't know, people auctioning stuff, it sounds a bit, - Exactly. - Yeah, but not them, not them, normal people. Anyway, the point is that if it can keep up with an average spoken word without me having to slow down, at that point, that's the inflection point. And suddenly then speech becomes faster than a keyboard might be, for example. And I guess ultimately we're not there yet.
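(Editor's aside: to put rough numbers on the inflection point John describes, here's a small back-of-the-envelope sketch. The speaking rates, the average typing speed and the 95% accuracy figure are the ones quoted in the discussion; the five seconds assumed to correct each misrecognized word is purely an illustrative assumption, not anything from the episode.)

```python
# Back-of-the-envelope: when does dictation beat typing?
# Rates quoted in the discussion: speech ~150-190 wpm, average typing ~41 wpm.
# The 5 seconds to fix each misrecognized word is an illustrative assumption only.

def effective_wpm(raw_wpm: float, accuracy: float, seconds_per_fix: float = 5.0) -> float:
    """Throughput once time spent correcting recognition errors is included."""
    errors_per_minute = raw_wpm * (1.0 - accuracy)
    correction_minutes = errors_per_minute * seconds_per_fix / 60.0
    return raw_wpm / (1.0 + correction_minutes)

for accuracy in (0.90, 0.95, 0.99):
    for raw_wpm in (150, 190):
        print(f"{raw_wpm} wpm spoken at {accuracy:.0%} accuracy "
              f"-> ~{effective_wpm(raw_wpm, accuracy):.0f} wpm effective")
```

Even with a generous correction penalty, 95% accuracy at conversational speed comes out well ahead of the roughly 41 words per minute of an average typist, which is why that threshold feels like the tipping point.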
And it could be five to 10 years away, but the improvement I've seen in this, just in the last decade, has been staggering, it's been really impressive. - Yeah, I agree. - Alrighty, let's move on to touch. And you touched on this briefly. I actually wasn't trying to be funny then, but anyhow, you did touch on that briefly. I did it again. Okay, sorry. Hey, so direct inputs. I thought it might be interesting just to separate them into direct and indirect. So just direct inputs, I thought about, well, we've got directly inputting onto a touch screen. And I thought, what's the first example of that? And if you go way, way back, did you ever play with a light pen at all? - No, no. - I'd like to say you missed out, but you really didn't miss out. So the light pen was just like a large pen with a little LED on the end and a touch sensor on the end. I say touch sensor, it was just like a little round plastic ring that you push down against the screen, and that would close a set of contacts. And that was then cabled back through a cable into the computer. And as you tapped on the screen, you would get a bright cursor, and the bright cursor would scan across the screen from left to right, top to bottom. And it would time how long it took for the cursor to get from the start of the scan to being detected by the light pen, and that would tell you where the pen was on the screen. It was tragically, tragically slow, inaccurate tech. And I mean, anyway, so I just mentioned it because that was 80s tech and it was cool at the time. Not surprised it didn't take off. Anyway, gotta touch on, I've got to stop doing that, resistive touch. Yeah, I know. And you need a stylus really with resistive touch to be accurate. Although I used to use the edge of my fingernail, it kind of worked, but it still wasn't all that accurate. - It's not great, but you can do it. This is the, you know, anybody who had a Palm Pilot, right, knows about resistive touch interfaces. - Yeah, exactly. And the other problem with resistive that I found was that you couldn't do more than one touch at a time. So there was no real way of doing gestures either, because if you had to draw a line with your finger, you had to keep your finger pressing down whilst you dragged it. And it was, I don't know, just awkward. It didn't really work that well. - Yeah, it's not great, not great. It was as if we were waiting for a better technology to come along. - Yeah, exactly. And capacitive, I think capacitive was around for a few years, but when the iPhone came out, they really absolutely nailed it. And I think it was a combination of the fact that they took into account, like, the viewing angle, so where you touched and where you thought you were touching physically, they corrected for that kind of error, and just the accuracy of it, and then adding the simple gestures like pinch, was just transformative. - Yep, yeah. For me, the most important thing about touch interfaces is the removal of that layer of abstraction. For me, that's the thing, that when you're using a computer with a pointing device, you're moving your hand on a surface perpendicular to the screen, and an avatar of your finger or hand, a little floating cursor, is moving around on the screen. And then when you get somewhere, you know, you press down here and something happens up there.
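(Editor's aside: the trick John describes, turning a timing delay into a screen position, is just raster arithmetic. A minimal sketch of the general idea follows; the resolution and pixel clock figures are made-up assumptions for illustration, not from any particular machine.)

```python
# Recovering a light pen position from a raster-scan timing delay.
# The screen is drawn left to right, top to bottom; when the scan passes the pen,
# the elapsed time since the start of the frame tells you which pixel was being
# drawn at that instant. Resolution and pixel clock below are assumed figures.

WIDTH, HEIGHT = 320, 200            # pixels (illustrative)
PIXEL_CLOCK_HZ = 5_000_000          # pixels drawn per second (illustrative)

def pen_position(seconds_since_frame_start: float) -> tuple[int, int]:
    pixel_index = int(seconds_since_frame_start * PIXEL_CLOCK_HZ)
    pixel_index %= WIDTH * HEIGHT                       # wrap within one frame
    return pixel_index % WIDTH, pixel_index // WIDTH    # (x, y)

# A pulse 3.2 ms into the frame: 16,000 pixels in, i.e. the start of row 50.
print(pen_position(0.0032))   # -> (0, 50)
```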
And that moment when you first use a touchscreen device, especially not one that is resistive, where you're, you know, kind of shoving your finger in there or using a stylus, but one where you're just touching with your finger, naturally, finger right on the glass. That is that moment of the direct interface, which is, oh, that abstraction layer that I never even thought about is suddenly gone, where I'm touching the software. Like, the first time I used a calculator app, my friend James Thomson's PCalc app on an iPhone, it was so strange, 'cause it was like, oh, now this is a calculator. Like, it felt, it went from being a piece of software to being a physical object. The phone wasn't a phone anymore, it was a calculator that I was holding and touching the buttons on. And, for people who don't remember that moment of transition, I think that was a remarkable moment of transition, because it's much more intuitive in so many different ways to see something and touch it than it is to see something on a screen and then move somewhere else and move a thing around. Like, that remote control kind of thing that we've grown up with on personal computers, that whole layer simply got ripped out when touchscreens happened. - Absolutely right. And I think that ultimately, that sort of interaction model is very natural, because as we evolve and grow up as individuals, we learn that we push the ball over there and then the ball rolls over there. We crawl over to the ball, we grab the ball, and whatever we do with it, but it's a direct interaction model, which is exactly not how you explain it to a child. You're interfacing with your ball through the direct interaction model, and they'll just look at you and say something like, "Gaga." But anyway, the point is that's how it works for us every day. Having that built into a digital device makes it instantly usable, and there's no training really required. It's just so accessible and straightforward. I think that that's amazing, and that will always be the easiest to use for anybody without any training. That puts it in a special place. But the funny thing is that I actually still think indirect inputs have a massive role to play. So talking about indirect, and you sort of mentioned these previously, well, we've got keyboards, obviously. The original keyboards just mimic typewriter keyboards, and they've essentially been unchanged since the first terminal computers through personal computers, apart from some keyboards with dodgy key switch mechanisms, not mentioning any, but never mind. You know, apart from that, keyboards have been much the same kind of idea for a very long time. But beyond that, we also have the cursor, which you also mentioned: we've got the mouse, the computer mouse, we've got trackpads, trackballs. I had a Toshiba 200CDS, a crummy Windows laptop, back in 1997 or '98 or something like that anyway. And it had a thing they called the AccuPoint. - Yeah, those little pushy stick things you put your finger on. - Oh man, that was bizarre. So, but the funny thing is that all of those things, what they do have in common is that they allow for extremely precise cursor positioning. And I think that that's the sort of thing that you can't as easily get with a touch interface unless you're using something like a stylus. But I'd argue that even a stylus is not quite as accurate as how accurately you can position a mouse pointer, a cursor, with an indirect input method. - Yeah. - So, ultimately, I feel like that technology is all very advanced. It is what it is.
And people say, "Well, it's never going to go away." But there are going to be applications where it's always going to be the best input method. So things like bulk text input, physical keyboards, ultimately, they're still the fastest way. But the problem is that in order for it to be the fastest method of data input, you have to train yourself. You have to go through typing training and learn how to touch type. Just for those that are curious, the fastest recorded typing speed was actually set in 1946, which is a while ago, on an IBM electric typewriter. That was 216 words a minute. In the modern age, the fastest English typing speed, on a Dvorak simplified keyboard, was 212 words per minute, and that was in 2005. The average typing speed is about 41 words a minute. I guess, just circling back briefly to the whole speech recognition thing, it's got a ways to go, but we speak at 150 to 190 words a minute, whereas with typing, the average person, you need training, and you'll still never keep up with that. Once speech recognition reaches that level of 95% accuracy, at that point, I can see keyboards becoming less relevant. - Yeah, so as somebody who types really fast, and grew up, you know, self-teaching how to type by entering in computer programs on my Apple II keyboard, and to this day I type in an unorthodox fashion but very fast, I am not entirely convinced that the platonic ideal of text input is keyboards, is like physical keyboards. I'm not entirely convinced. I can see it, but I'm so close to it that I want to be open to other possibilities, including, yes, including voice input or sub-vocalization through some sort of future technology that allows you to kind of like read your brain and do text. I mean, there may be others, but for now it seems like it's the best one. I do think that we're going to see technologies around things like autocorrect, around vision and cameras and other things that are embedded in our devices. I wrote a column about this at Macworld a few years ago, because Apple seems to be on a quest to reduce the keyboard away to nothing in some ways, and I think that has bitten them in their latest keyboard design. But I could see a scenario, just even as a devil's advocate, I could see a scenario where you put your hands down on a surface, or even just float them in the air, and start typing with your fingers, on glass or in the air. And there's an input method with glass, there's a touch screen, maybe there's haptics to give you some feedback. And then let's throw in that the software knows where your fingers are on the keys, but it also can see with a camera, it can see where your hands are, and you throw in some machine learning for what you write and the cadence with which you type and what words you type in that cadence, and I think it's possible that typing can become something that can happen on any surface and at a very high rate of speed. I think it's possible that something like that could happen, a virtual keyboard. But, you know, what it would really require is some dictation software that would blow it out of the water by really being amazing at interpreting what you're saying and automatically editing on the fly.
I think I think the truth is that that's what it's going to take that that moving your fingers at a certain precision level is just going to be what it is and nobody, the average person is not going to be able to do that more than, know, 60 words a minute or 70 words a minute really. And, you know, I don't know, I don't know, it's, it's one of those things where I'm a big fan of keyboards. And yet, at the same time, I want to be open to the possibility that the keyboard is a phenomenon of our, our, you know, last, whatever, 200 years, 100 years, and not something that is the platonic ideal of text input for, you know, the rest of time. Oh, for sure. And I recall there was a product, I'm sorry, there a few years ago, four or five years ago, and it was a laser projector. - The laser? - Yeah, and it projected the keys on a desk. Did you ever play with one of those? - I think I saw one in action, but never actually used it for more than a few seconds. But yeah, it projected on a desk and it looked at where your fingers were. And then that was the input for it. And I had that thought about what if you made a laptop with a glass keyboard that was a multi-touch surface, but there were haptics to make it feel like your fingers really were hitting the keys and getting feedback back at you. And also what if the laptop screen had a camera in it that was looking at your hands so that even if you were off, like to the left or to the right or up or down a little bit from where the visible keyboard was, that it didn't matter 'cause it knew what your hands were trying to do and could even move the keys underneath your fingers to where they know you think they are, which is kind of a weird idea, but that's not outside the realm of possibility. So there's a lot of strange things that could yet happen to keyboards. But I think the big issue is, you would need to train a whole new generation into basically writing with dictation instead of writing. Put it all in your verbal cue instead of in this kind of whatever the writing cue is. It's a different process in the brain. And part of me wants to just say, if we trained everybody to write with dictation, writers would use dictation and it would be fine. But I'm not sure they're parallel. I'm not sure they use the same parts of the brain. And I say that as somebody who, when I was in graduate school, I got really sick for about a month and I was at home. I had to leave school and go home. And it was right as I had bought my first laptop, but the laptop hadn't been delivered when I had to go home. So I didn't have the laptop. It was sitting two hours away. And so I was writing longhand and it was the most longhand writing I'd ever done since I was, basically ever. certainly since I was a kid in school. And what I found about that experience was that my longhand writing was a completely different process than my typing because my longhand writing was slower. And so my brain, like the way I composed the sentences was completely different than when I was using type. So I have to keep open to the fact that that tactile writing approach may be good, but it may be supplanted and that might be fine. - So I'm glad you brought some of those points up because I wanna circle back to those in a couple of minutes when we start talking about neural stuff, because, you know, 'cause why not? And just to close out on the keyboard piece, I think that the single biggest problem with saying keyboards are, you know, I think keyboards will always exist. 
I think we'll always have some kind of mechanism, whether it's a physical keyboard or projected onto a surface, like you're suggesting, but I think that that as a data entry input works from the, not privacy, it's the, without projecting your voice out to an entire room, there'll always be situations where you're in an office and speech recognition is gonna make it. If everyone did all their work entirely through speech, I think that it would be even more noisy in an office environment than it already is. And I find that, I think that that's gonna be a problem. So ultimately, I think there'll always be a place for it and there'll be people that will prefer that. But in the end, it's gonna take speech recognition to really go to that next level in order for that to happen. So having talked a little bit about the movement and the tracking of the fingers, that's actually the next one I wanna talk about, which was movement. So as an input, tracking a person's movement, whether it's their fingers or their arms or limbs or their face or eyes, that can be used as a method of input. And one of the things that I have played a lot with, and I mean played a lot with because it's on the Xbox, so the Xbox Kinect. Have you played with those? - Yes, I have. - So what was your experience like with that? - It was okay. I played with the 360 Xbox Kinect more than with, I think we had an Xbox One Kinect, but didn't use it so much. It was kind of on the downside at that point. But you know, it was, there were moments where it seemed magical, and there were moments where it seemed like a disaster. And I think the part that really got me was that they said basically, "For this to really work, you need to move your furniture out of your, out of in front of your TV." Like literally, they're like, "We need a space this size." And I thought, "Well, that's basically the size of my living room. So that's going to be difficult to do without." And when we would use the Kinect, we literally would slide the coffee table all the way over to the doors on the side of the room and push one of the couches back so that basically our rug in our living room was a Kinect space because that was the only way we could do it. But that said, there was something kind of exhilarating about using your body positioning to control a device interface? - Oh, absolutely. We had, I had a similar experience. So we've got the Xbox One with the Kinect 2, which was discontinued in 2017, but still. And it was generally okay, but I found it was thrown off when people walked past, either in front or behind when you were trying to do stuff and it would then lock onto the wrong person. And then of course, my kids being, well, my kids would come in and intentionally do stuff while people are trying to do just dance or something like that and throwing it off. So you needed a larger space with not too many things cluttering it up, like you said, couches and so on. Plus, it gets confusing. If it's an average house and you've got people walking in and out, then you're trying to do something, they can't be walking past when you're doing it. So it's sort of, I think from a technology point of view, it certainly is something that's worth adding, but I don't think that it alone is ever going to be a sole input method for basic, I think it's going to be more of an augmentation. So like I was saying before with speech, it's like it can be used to provide context perhaps. 
And there's motion tracking on a smaller scale as well, the ability to track what people are doing at their desk, like their head position, and letting them move the cursor with their eyes. There's a couple of devices out there that can do that as well. But ultimately I was just thinking about where that's going from Microsoft's point of view with the Kinect specifically. I think the Azure Kinect was announced in February this year, and that's just a development kit at the moment, but they're not giving up on it. But it still seems like it needs to improve its accuracy so it's not as easily confused. And I also wonder, from a computer input point of view, I mean, for gaming, sure, it's a novelty, but what activities do we do, standing or sitting, where motion tracking is going to be a useful interface, and for which tasks? I suppose noisy environments, I can see that, because you could gesture to do something that would be essentially silent, but beyond that, I struggle. - Yeah, yeah, I'm with you. - All righty, cool. Let's hop and skip onto the last one I had for inputs, which was neural, because, I mean, it's cool. And why not, right? - Yeah, absolutely. - So I dug into this one a little bit 'cause I've been a little bit out of the loop on this one. So brain-computer interfaces, or BCIs, they usually have in the past used electroencephalograms, or electrodes in caps put on your scalp. I've never actually put one of these on, have you? - No, I haven't done anything like this. I've only read about it or seen videos about them. - Yeah, I just wanted to check. Of course you could also do functional MRIs, because that tracks blood flow in the brain, and that might be more accurate, but it's not easily portable and not very easy to do. So yes, in the future of course, there's always implants, and then we start to sound like we're becoming the Borg, which is both exciting and scary at the same time. But in any case, interestingly, in the mid-nineties there were a few of those neuroprosthetic devices, and a lot of those were just trying to get basic movements, to move a cursor slightly up or slightly down or left or right. And when it comes to actually doing something more useful, in June 2004 a guy called Matthew Nagle had the first implant of the Cyberkinetics BrainGate. And it was trying to overcome the effects of tetraplegia, which is like quadriplegia, so it's restricted movement in all four limbs, and with varying degrees of success. And there've been a couple of iterations of that technology, but ultimately it's still got a very, very long way to go. And I think most publicly recently, Elon Musk was noted because he invested $27 million in a company called Neuralink, and that was three years ago. And they're working on what he likes to call neural lace to interface the brain with a computer system. Yeah, futuristic stuff, but... - Yeah, well, I mean, this is something that's going to take a lot of time, but there have been all of these really encouraging things where you've got something on your head, or in some cases, they've placed electrodes in people's brains as a part of, you know, some brain surgery that they were doing. But the idea of, like, being able to move a mouse or something like that with your mind, it is related, I mean, we talk about accessibility.
You mentioned that with somebody who has a movement disorder, we obviously know about like Stephen Hawking having the ability to communicate with a, I think it was like a cheek twitch was his only kind of like reliable movement that he could do, but that was enough to build a system for him. And the more you can kind of unlock either by mapping to something that the brain can control or even better mapping to something happening in the brain that you have the possibility to do a lot. And then I would throw in, there are these things that potentially get unlocked by machine learning and by sensors that we've got to maybe intuit a little bit more about intention. There was a story a few weeks ago about being able to synthesize a voice based on somebody sub vocalizing a phrase and that it was understandable by, it still sounds really weird, but it was understandable by like 80% of people or 80% of the passages were understandable. And it was this idea of you have a brain scan and people are sub vocalizing a passage of text and it actually kind of comes out and it has to do with sort of like muscle movements and things like that. uh, even though you're not speaking, you're still thinking about how you would say it. And you know, that's all like, this is all really interesting. It makes me, it's a, I used to read science fiction novels about this stuff, like having a sub vocalization to an intelligent agent that spoke in your ear or whatever. And I think that's never going to happen. It's like, it's like a time travel or faster than light travels. Never going to happen. Now I think about it and I think that will probably happen. I don't think it's going to happen anytime soon, but there has been enough research about figuring out ways for us to understand what's going on in our brains. And it doesn't have to be like we've figured out that we can wire to a particular neuron and tap into the brain. It can be things that are kind of gross control levels and yet filtered to the point that it can be it becomes a usable interface of some sort. And then throw in behavioral imaging and things like that, which seems really weird. But yeah, the idea that a computer can like look at you and know what you're thinking or know, you know what you're trying to do and use that as a part of it too. So if you can imagine, you know, any situation where, like, imagine a kid who's fixated on something on the screen that they're trying to move a mouse to on a computer, and the computer can look at them and see where their eyes are looking and see the frustration on their face and go, "Oh, well, that's the target. "I'm gonna move it there." That's maybe not the best example, but like, that, I think all of that stuff is in the realm of possibility, which gets you to an interesting place where, whether they're reading your mind by literally reading your mind, by scanning your brain, or whether they're reading your mind by watching your body and going, "I know what you're thinking." Either way, I think that's going to happen. I think it's just a matter of time, and it may not be in our lifetimes, but it does feel like that is something that is technically possible as we learn more about the brain and about computer vision stuff, and also the equivalent of an fMRI, the idea that we can actually sort of see how the brain is working in real time and then maybe do something with that. - Absolutely, and I do agree. I can see stuff like that happening in the future, but it probably won't be in our lifetimes. 
The thing that's interesting about it is, I don't wanna get down into the whole limbic system or neocortex part of the brain or any of that sort of stuff, but it really actually is quite dangerous. If you wanna do implants into the brain, people could go bad in all sorts of ways. And I think "Black Mirror" sort of had a few episodes where it explored that, how it could go horribly wrong, for example, but there's also a limit if we don't have implants to how much you can actually extract when you've got sensors sitting outside of your skull on your scalp, that's gonna be pretty limited. So if you're gonna need to go to implants to directly connect with neurons to get data in and out at any rate that's ever gonna be useful enough to overtake our conventional senses. And I think that that's really more the point. And until we get to that stage, it's really not going to do anything much at all. It's just going to be a passing curiosity. And I mean, I have no idea how far off that's going to be. But the interesting thing as I thought about just the dictation use case, and this is the part you mentioned before I wanted to circle back to is, you know, no one really knows exactly how fast we think in terms of words, but there were some estimates that it's somewhere between 1,000 and 3,000 words per minute and that is massively broad for starters. But I suppose in terms of writing though, the word thinking rate is just, for the want of a better expression. When you're writing something, you'll read it back, you'll review it, you'll revise it, you'll rewrite parts of it as part of that creative process. And if you don't do that, then you're going to end up with gibberish or something that's really pretty horrible and doesn't make a heck of a lot of sense or is not very polished. And ultimately, it's not necessarily the speed at which it can actually read, it's how long it takes us to process and reprocess and the act of handwriting it and then moving to a keyboard and then moving to speech recognition. It's like all of these are going to increase the throughput speed, but that doesn't necessarily mean that it's going to improve the quality of anything because you still have to go back and do that process of reviewing, revising and yeah. and yeah. Yeah, yeah. It's I read a short story the other day about the it's it's it's very much like a black mirror kind of thing and it's by Ted Chiang who's a great short story writer and the the idea behind it is it's two things happening at once. It's it's this concept of changing human cognition by everybody's wearing a you know video cameras on their bodies all the time, and that all gets recorded. And then secondarily, a search engine exists to index your entire life, and this is the black mirror part of it. And what his argument is, at that point, do we stop worrying about committing things to memory? It's a little bit like how I think even today, having an always-on internet and a web browser at the ready means that there are things that you used to maybe file away for future reference that you just don't bother now because you could just look it up if you need to, and changes how you prioritize processing of information. And, you know, what he says about this is, do we need memory anymore? If you have a thing that's always in your field of vision, you know, it's a connected device, it's attached to you, it's in your eye, you know, it's in your ear, and you can say, you know, what was that time that we had that argument? 
And it can immediately show you the argument, like exactly what you were saying. And so you don't have the, the unreliability of memory, which is great, although there's also an argument that we forget things for a reason and it makes our lives easier to kind of smooth out that stuff. But also does it mean that we no longer have the capacity to think about this stuff and remember it because we now don't need to bother because we know that somebody else is doing it, and we use our brain for something else instead. But what is interesting about this story is it's juxtaposed with a story about a missionary coming to Indonesia to a tribe that doesn't have a written language and teaching a boy who lives there, how to write and what writing is, and how the concept of writing and writing things down was itself a fundamental technological shift that changed the way we think. And I love that story because of that juxtaposition, that he's making an argument about how technology may change how we think, but his argument is also, it's already happened because written language changed how we thought. And I like that because when we talk about this technology interface stuff, I think that's the truth of it is all of this stuff, it has the potential to change how we process information, but the human brain is incredibly adaptable. And who's to say, you know, when you talk about going from writing, handwriting to writing on a keyboard to using dictation, you know, that there may not be other paths that are different, if not better, then different and might be perfectly fine. But it is kind of amazing to see that, but also to think about how we are not the beginning of a process. We are the products of a long period of time where humans have been building technological solutions of various kinds that have changed our perception of language and history and even time keeping over the course of hundreds or thousands of years. And this is just the latest in that line. This is why I love talking to you, Jason, I never would have thought of going there. That's brilliant. That's absolutely brilliant. Thank you for bringing that up. I am actually curious about reading that book now. I'll get a link from you later. So that's, yeah, okay. So you've managed to blow me away there. I need to just, right back on, back on track. I think we've done a lot of the inputs. At this point, we should probably switch to outputs considering how long we've been going so we're just gonna say okay inputs were done. Before we go any further I'd like to talk about our sponsor for this episode and that's Backblaze, a cloud backup solution for your Mac or PC. You might have heard about Backblaze before and if you haven't tried Backblaze yet I strongly suggest you consider it and let me tell you why. Backblaze is an off-premises backup solution with no gimmicks and is truly unlimited. That's right if you have 8 terabytes of storage either in or attached to your computer or more than that. It's still only $6 a month for that machine. Backblaze backs up everything documents, photos, drawings, videos, music, project files of any size, the lot. You can set it to back up automatically once a day between specific hours if you like you're in complete control and Backblaze just backs up and then it stays out of the way until you need it. Once it's backed up you can get access to your data from anywhere via the mobile app via web browser and then you can restore anything from just one file to all of your files. It's up to you. 
If you've got a huge amount to restore and you don't want to download it all, that's fine. Backblaze have got you covered. They call it restore by mail. You can pick a thumb drive or a hard drive and they'll just send it to you with all your files on it, sent by FedEx, and in the US it's sent overnight. When you've got your files back safely, you can just keep that drive. But if you don't like the color, that's fine. Just send it back within 30 days and Backblaze will give you a full refund for that restore. How good is that? Backblaze are backing up over 750 petabytes of user data with over 40 billion files restored so far, and that's a heck of a lot of memories and documents that otherwise could have been lost forever. So why do I love Backblaze so much? Actually it's because they saved a huge amount of my data years ago when one of my hard drives failed. Long before Backblaze were a sponsor I signed up on a recommendation from a friend after one of my 3 external 4TB drives died. A few weeks later my Time Capsule drive also died, and if it hadn't been for Backblaze when one of my remaining drives also died, seriously it did, I would have lost it all. But, because Backblaze had it, I got it back. My only regret was that I hadn't signed up months earlier, because if I had, I wouldn't have lost anything at all. If you visit this URL, backblaze.com/pragmatic, you can sign up for a 15-day no-risk free trial. It's easy to set it up and using that URL means that they'll know you're supporting the Engineered Network as well. They have a great discount for annual plans too, so don't take a chance with your data. Start protecting yourself from a worst-case scenario like I did. And don't wait, start today. Thank you to Backblaze for sponsoring the Engineered Network. Outputs. And the biggest one is obviously our eyes, because our eyes can process a massive amount of information and it doesn't disturb anyone, well, generally speaking, unless of course you're lying in bed and you've got a very bright screen on your phone that's disturbing anyone else who might be there. In any case, generally speaking, screens, okay. So let's talk first about what screens are, just your flat, single, two-dimensional screens at a distance away. And I mean, essentially, the things you'd look at on them would be things like movies, just general shows. I hate saying television, because you're using television in a sentence, but television shows, I guess, for want of a better way of saying it, just general entertainment. But the problem is that it's not as useful as an information-rich display. And I just keep thinking about presentations, presentations that are put on big screens in a room. You can't have lots of text on them, because when you're reading at a distance it's extremely difficult to fit a lot in there at a size that doesn't cause eye strain. And so you've got to have, you know, more visualization and fewer words. And there's a limit. So you can't, it's like working on a computer, sitting on a couch with your laptop, projecting it up onto the big screen. I actually tried that once a long time ago, and it's almost impossible. Have you ever had a go at that? - No. - My advice is don't. - Okay. - Just don't. It's just too difficult. You need to be close to the screen 'cause you need that resolution. And a lot of people in the industry, we like to talk about pixels per inch or screen resolution.
And I started thinking about this going back to the days of Retina. And when Retina first came out on the iPhone, for example, that was almost 10 years ago. And I was thinking back to those days, and displays on smartphones from the very beginning were extremely pixelated. And I found a really interesting blog article from a guy called Phil Plait. - Oh yeah, the Bad Astronomer. - Yeah, there you go. And he was going through the steps of how the human eye tells the difference in terms of that pixel density, so the point where the human eye can't tell the difference between a real object and a picture of a real object. And the measurement at that transition point is one arc minute, which is for someone with 20/20 vision. And for people that don't think about arc minutes, like I don't generally, it works out to 0.0166 recurring degrees. And if you do the math, and you hold a screen in front of you about 12 inches or 30 centimeters away, one foot, choose your imperial or metric measure, that works out to about 286 PPI. And the iPhone, which was the first iPhone with Retina, was it the 4, I think? I'm trying to remember, sorry, I should have researched that one. In any case, it had just over 300 PPI. And I remember- - I think that's right. - Yeah, and Steve Jobs on stage saying that something magical happens at that sweet spot of 300 PPI. And that's why. And it's the sort of thing that with handheld screens, you can hold them that close, and hence that PPI will be sufficient. The further away they go, the less stringent that requirement becomes, and the closer to your eye it gets, the worse it gets. And we'll get to the whole virtual reality thing later on. But the great thing about handheld screens, as opposed to a TV screen though, is that you can adjust the viewing distance really easily. You look up close, pull it closer, push it further away. Whereas if you're in front of a TV, if you want to get a better look at all that text on the screen, you're not going to drag the couch closer. I mean, well, you might, but I'm not going to do that. Anyway, so if screens get too big, you've got to move your head around to see everything, so there's another limit, and the size of an average room is also a limit. So the whole point of all of this is that we need to sort of get our heads around what the optimal interface really is for an output, for a screen, for what tasks, and this is what I'm trying to get to. And I think that the reason that TVs, televisions and large displays, are so popular for watching movies, shows, playing games, is that you can sit in a comfortable chair or a couch. And you can't do that, have a comfortable couch one foot away from a screen, because there's no room, and ultimately that's not very relaxing. So the ability to sit back and relax, I think that is one of the main drivers. And watching movies, shows, and playing games, there isn't a lot of text to read, because reading from a distance is not very relaxing, you're straining your eyes. So it's best if it's a TV, like 10 to 30 feet away, I guess it varies, doesn't it? And you don't have to hold the screen, so there's minimal fatigue. And I guess that's my take on why TVs are best for that. - Yeah, I think that's reasonable. I have definitely, my optometrist suggested that I project my computer screen on a far wall and it would improve my vision. And I was like, no, no, that's not gonna happen.
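(Editor's aside: the arc-minute arithmetic John walks through is easy to check. Here's a quick sketch of the same calculation, using the one-arc-minute figure quoted above; the specific viewing distances are just illustrative.)

```python
import math

# "Retina" arithmetic: the pixel density at which one pixel subtends less than
# one arc minute (roughly the resolving limit of 20/20 vision) at a given distance.

ARC_MINUTE_DEG = 1.0 / 60.0   # 0.0166... degrees, as quoted above

def retina_ppi(viewing_distance_inches: float) -> float:
    pixel_size_inches = viewing_distance_inches * math.tan(math.radians(ARC_MINUTE_DEG))
    return 1.0 / pixel_size_inches

print(f"12 inches (handheld phone): {retina_ppi(12):.0f} PPI")   # ~287, the ~286 above
print(f"24 inches (desktop screen): {retina_ppi(24):.0f} PPI")   # ~143
print(f"10 feet   (TV):             {retina_ppi(120):.0f} PPI")  # ~29
```

The same arithmetic is why a TV across the room gets away with far fewer pixels per inch than a phone held a foot from your face.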
But you're right, I'm struck by how sometimes we focus on, like, having a big TV in a room, when if you held your iPad at that same distance, it would be bigger in your field of vision. And there are ergonomic issues with holding it there, but there's also just the idea of, like, maybe it doesn't matter where those pixels are, as long as you are seeing the full quality and it's in your field of vision at that size. And people use virtual reality to watch a movie, which I've never really understood, it's low resolution, but it is a different way of approaching the, you know, cinema experience. I know, because, yeah, there's other things involved, but other than your eyes, as you said, you're sitting there and relaxing, and you probably enter a different kind of mental state when you're doing that than you would if you were working on that TV set. - Yeah, yeah, exactly right. And I actually hadn't heard of people wearing a VR headset to watch a movie, that's a new one. I'll pass on that as well, for a bunch of reasons. But I suppose the funny thing is that this is about optimal interface. This isn't about, is it possible? Well, yes, it is. Yes, you can hold an iPad screen, and yes, you're right, it would take up more of your field of vision than a 40-inch screen that's tens of feet away, absolutely. And you may even get a better quality result visually, because the pixels per inch on that iPad might be superior to what the TV can provide even at that distance. So ultimately, though, the downside of that is the fatigue. And my problem is that if you think about the vast majority of people on balance, I think the reason that television is so popular is because it is the most relaxing way of doing it, because there is minimal fatigue. I mean, it's the ultimate in laziness, I guess you could also think of it like that, but maybe that's being too harsh, but that's really the point as far as I'm concerned. Yes, you can use an iPad for that. You can actually watch movies on your phone. And yes, if you really are crazy, you can watch them on a virtual reality headset apparently, but that doesn't mean you probably should. And it's the sort of thing that's never really going to be that popular because of all of the other compromises. And I guess, just then, to quickly jump across to the other screens, like our computer screens. Ultimately I think in terms of size there's a limit, because again, as you're closer to the screen, your field of vision is gonna be very different to something that's further away on a wall. So, about between 20 inches and 32 inches. I mean, I'm looking right now, I've got a 28-inch 4K display, and on the left I've got a 24-inch display, which is just normal 1080p. And then there's a laptop screen off to the right, and that's a 13-inch MacBook, so 13.3 inches or whatever that screen size is, off the top of my head I can't remember. And that's one foot to maybe three feet away. And that sort of distance means that you can have lots of high definition images, text, numbers, pretty much any purpose you like. And you can watch movies on that quite comfortably, but you're not gonna be as comfortable as sitting on a couch, but it's perfect for information-dense tasks. Okay, not too much more to go on about that.
The last one, and I wanna circle back to VR and augmented reality at the end, is glanceable information, 'cause this is sort of a newer thing. I mean, having a wristwatch is not a new thing, that's for sure, it's been around a while, but smaller-screen smartwatches lead us down the path of glanceable screens. I think the problem with a big computer screen is that because you can put so much on it and it's so information dense, it's not glanceable. If you want something that's glanceable, you need a smaller screen, and then you need to think about, well, what can I put on it? What am I restricted to? I can't have lots of text. I can't have high definition images. I can't do certain things like that. But what I can do is display things like the time, the weather, use more iconography, use more symbols. Very low information density, but because it's glanceable, it's very, very useful. - Yeah, I like the idea of having small devices that at a glance give you small amounts of information. It's not a computer. I'm not meant to intensively use my Apple Watch. I'm meant to be able to glance at it or maybe tap on it a couple of times, do what I need to do and move on. There's something to be said for that. That's a very different kind of approach than a device that you're meant to use intently. I thought you were also gonna mention, forgive me if I'm skipping ahead there, but I found that one of the great aspects of the Apple Watch for me is its touch output interface, which is that it taps or buzzes my wrist when I need to pay attention to something. And I love that because nobody else can see that, and I can choose to react or not react, and it doesn't make a loud noise. I actually have sound turned off on my Apple Watch. I never wanna hear it. - Me too. - But feeling it, I don't have a problem with. And I think that's a great use of that interface. And if you've ever done that thing where you're driving using Apple Maps and you're wearing an Apple Watch and it taps your wrist when it's time to turn, that's so great. It is an interface, it is an output via touch, and I think it's great. - Absolutely. And yes, you are skipping ahead, but that's totally okay, you're absolutely right. So I don't wanna dwell too much more on watches, we'll keep moving, but from a visual point of view, those are the screen sizes I was thinking about. One of the things just on smartphone screens is that there is a growing preference from some people for larger smartphone screens. And I think that's good for certain use cases, but the problem is that the larger it gets, the less glanceable it gets. So it occurs to me that with the larger screens on the XS Max style plus-sized mega phones, when you look at that screen and try to glance at it, you're going to be searching through that screen trying to find what you're looking for. It's going to take more time because it's a bigger screen. So ultimately, I feel like as more information becomes available glanceably on a watch, you can actually get away with larger phones, but you wouldn't use your phone for glanceable information. You'd use it for things more like reading or watching videos as a compromise, maybe. But in any case. Right, let's move on to sound. Gotta keep moving. So, sound, really quickly, don't wanna spend a lot of time on it: headphones and speakers. First of all, headphones.
The great thing about headphones is they're private, and there's lots of choice. And honestly, you can get headphones without cords now. They can be firmly attached or inserted into your ear canal for exercise, for example. And if you wanna wear them for a long period of time, you can get comfortable ones. To be honest, I think that in group environments, headphones are the absolute winner for sound feedback. When you're in an enclosed space on your own, I think speakers are probably better, 'cause there's less fatigue, nothing to fall out of your ears and so on. I just thought it might be fun to list the headphones that I've got. The ones I'm wearing right now are Audio-Technica M30x's, sound-isolating over-ear headphones. They're great for podcasting. But I also have AirPods that are just hanging on by a thread in terms of battery life, I need to get some new ones. Anyway, they're great for just general exercise, walking to and from the office, working in the yard, whatever. Although interestingly, we shifted to a more "agile", in air quotes, open office seating environment at work, and there's a lot of noise, a lot of distractions. So now I'm thinking about getting another set of headphones that are noise cancelling to try and save me from the noise. - Yeah, I have a lot of headphones too. I think there's a difference between interface and media playback, but still. There was a presentation at Macworld Expo a few years ago about the idea of an intelligent agent. And really, all of these agents that we talked about earlier, it's a two-way street: it's usually an audio interface in return. And the idea is that maybe the smart glasses that everybody talked about, that Google got everybody excited about, maybe those are a lot less necessary than something you could put in your ear that allows an intelligent agent to interact with you and tell you what you need to know. And I think Apple, with AirPods, definitely thinks that's a place this is going: the ability for your devices to talk to each other, and they know where you are, and maybe they know where you're looking and which direction you're pointing. And very subtly they can say, it's time for lunch, cross the street and then go down a block and turn right. And nobody needs to know that. The problem with our eyes, for portability reasons, is that you've gotta shine light onto the back of somebody's eye in order for them to see anything, and that means you need to have something hanging in front of their eye or on their eye. And that's a lot harder to do than sticking something in your ear. So I feel like there's a lot to be said for these audio interfaces in terms of giving you feedback. And I use an audio interface all the time. When I go running, I'm listening to podcasts, but I also have an interval app running that tells me when to run and when to stop, and that's all by audio from my Apple Watch. So I don't have to look at anything, I don't have to keep checking to see if it's time for me to start or stop. I just have to do whatever I've been told to do by the interface, and it tells me when it's time to do the next thing. And that's a very good way to approach that, because when I'm running, I don't really wanna keep checking my watch or my phone to find out when I'm supposed to start or stop. - Oh, for sure, absolutely. And that reminds me, there was an iPod, one of the ones that didn't have a screen on it.
It would go through your playlists and create an audio, sorry, a text-to-speech audio version of what each playlist name was, so that when it downloaded to the iPod you could flick through them, and it would tell you through the speakers or through the headphones what each of them was as you flicked through, so you could then pick. It was purely touch input with totally audible feedback, which was very nifty at the time. Just a little bit on speakers, briefly: I think that speakers are great if we're talking about speakers that are intended to play music. One of the trends I've seen a lot more of recently is smart speakers, and HomePod is one example, though the HomePod is a smart speaker that's almost more about playing music with beautiful sound than it is about speech recognition, although that's probably more of a Siri thing, but never mind. Other smart speakers in the market are more about that input than they are about the output, although technically they do both. The problem with any speaker system is you've gotta be mindful of the people around you, because if you have anything coming through a speaker that other people are going to hear, it's going to annoy them and interfere with what they're trying to do. So headphones are generally a better option. But if you're in the house by yourself, then having speakers playing, and using smart speakers with audio feedback like a HomePod, is still a better option if it's reliable, because ultimately you don't have to have anything inserted in your ear or hanging off your head. So it's more comfortable. - Yep. - Alrighty, let's keep moving. Touch, and you mentioned this one, so we'll start with it: haptic feedback. And I thought about haptic feedback 'cause I absolutely love that on my watch. I am exactly the same as you. I have turned off all of the sound on my phone, and on my watch it is all haptic. So when the phone rings, it never rings. And I love that, because in my life I've had so many ringtones. I mean, a lot. Once you've cycled through a few dozen ringtones in your life, and they're all standard ringtones, other people have their phones and they'll go off and it's like, "Hey, is that my phone ringing?" You have that moment of panic when a phone rings: "Is that my phone ringing?" And of course now it's all haptic on the watch. I know it's my phone 'cause it's only tapping my wrist and no one else knows. It's magic, I love it. - Yep, yep, 100%. - So haptic is fantastic. And I thought about haptics as well, there's also haptic feedback from touchpads, the trackpad, for example. My Magic Trackpad 2 and my laptop trackpad have got haptic feedback, which is fantastic. - Right. - And it's also great on the watch for alarms. I wear my watch to bed as well, and it wakes me up with an alarm in the morning, tapping me on the wrist feverishly to tell me to wake up. Works really well. - Yeah, I really appreciate it for timer alarms. When I'm cooking, I will set one or two, and then the audible alarms on the HomePod or the Amazon Echo will go off and that's fine, but I really prefer when I get that timer just tapping on my wrist, 'cause again, nobody else needs to know. I don't need to invade the shared space that is our audio space in our house. It's just for me, and I got the message and I can act on it, and I like it a lot.
I think software has a long way to go in terms of haptics. I've been disappointed with the haptic support on my Mac and also on my iPhone. On the iPhone it's a little bit better, because you can count on modern iPhones having that haptic engine in there, so the software is a little bit better at it. But I think more could be done using the feeling of a tap as useful feedback. And the Mac stuff I've been disappointed by. Even though I do have a Magic Trackpad, I use it and I love it, the haptics on that, beyond the actual click haptic, have been disappointing in all the apps I've tried that support it. It has not really made a difference in my life, which is too bad, because I would like to believe that software could carefully use that stuff and make it a valuable part of the user experience, but I haven't experienced that myself. - Yeah, I think you've got a point there. I hadn't really thought beyond the trackpad, but you could use the haptic feedback on a laptop for all sorts of different purposes as a feedback mechanism. But I suppose one problem I have with haptics is that there is a finite amount you can do with it. I was gonna say, it's not like you could tap out Morse code. Actually, you probably could, but I'm not sure you'd wanna do that. So there's always gonna be a limit to what it can do. It's great for binary events, and you can have some level of sequencing, like you mentioned the turn signals on the Apple Watch, where it gives you a feverish boom-boom, boom-boom, boom-boom when it's a left, and a boom, boom, boom, boom if it's a right, or something like that, I'm probably mixing those around, but you can get some differentiation between them for sure. And even different applications, like I can tell the difference between a tap from iMessage versus a tap from Outlook, an email has come in versus an iMessage has come in. There are subtle differences there, but there is a limit. It's not like you're ever going to get anything to the sort of resolution of something like text through touch feedback. And then I thought about it, 'cause although my eyesight is terrible, I can still see because I have glasses, so technically I have 20/20 vision, lucky me, and I think we're both in that sort of camp. But the point is that if you are seriously visually impaired, there actually are refreshable Braille displays. You can actually get feedback through Braille, which is technically a touch mechanism, for visually impaired people. Mind you, that's going to take a lot of training and a lot of practice, so it's not exactly like everyone's going to use those, but it's a silent way of reading a screen with your eyes closed. In any case, the technology technically exists. Alright, so moving on. Real quick: smell. Not much, but it's a sense, so I've got to cover it. - No, I've been anticipating this all along.
And what I will say is, occasionally I see a movie or a TV show and I think, I am so glad that nobody has figured out how to communicate smell through broadcast media, because you can very effectively show something and imply that it must smell terrible, but the last thing you want to do is gross out your audience by flooding the room that they're in with a terrible smell. And I feel like smell interfaces are probably not ever going to happen for that reason. I'm also a pretty sensitive smelling person. I actually use unscented laundry detergent and things like that. So I would not really be interested in a random creator flooding my home with weird molecules that smell bad, but it seems like it's just never gonna happen, and I am okay with it. - Yeah, exactly. Interestingly, we had the same problem with some fabric softener. We bought some fabric softener that was on special, and when we tried it, it was so overpowering. It was so sweet to smell, it was overpowering, and we had to throw it out. We just couldn't have it. But yeah, look, absolutely. I think there's a massively narrow field where this is useful. I was thinking about where it would be useful. If you're shopping for fragrances online, I'm guessing that might be useful, but I don't know. I mean, what are we gonna do? - I guess. - Buy a thousand-dollar scent-generating machine to plug in? - Well, yeah, you'd have to have built something with all the molecules in it that it can release in ways to emulate those scents, and then you'd have to have replacement packs. It'd be like an inkjet printer for your smells. Nobody really wants this. Nobody wants this. - I'm sure someone does, but I can't imagine it's very many people in the world. So probably not. And the other thing I thought about with smells is an enclosed theater environment. If I'm walking into a room that has a smell, I can walk out of the room, and that's real life. But if you're in a movie theater and you let a smell into the room, how are you gonna get the smell out of the room when they change scenes in the movie? That smell is going to linger, and there's going to be smell lag. So no matter how much I sliced it, even with virtual reality, like augmenting that, I'm actually glad this technology really isn't very advanced, and hopefully it never really becomes anything, because I'm happy with it the way it is. Anyway. And taste, can I just say, same kind of comment, but there's even less available. - Yeah, I think that's exactly it, which is that the molecule-detecting senses are problematic on one level, and on another level it's fine, right? Like I can see how ultimately you would want taste and smell in a completely immersive virtual reality application or something like that, but that is so far off. When you've got something that is completely realistic in all the other senses, then it's like, "Okay, you know what we gotta do now? We gotta do taste and we gotta do smell." But until then, I just don't think it's necessary, because imagine being in the perfect virtual reality simulation and then your roommate burns some toast. Like, okay, I get it, that would break the spell, but we're a long way from there. - Yeah, and I'm comfortable with that. - Yep.
- In terms of neural, technically again, it is an output as well, but we talked about that under input. So input, output, same thing, I'm not gonna go over that again, we already talked about it. So that leaves us finally with some of the specific devices we've only touched on, the stuff we haven't really talked about, and that's virtual reality and augmented reality. So, the virtual reality stuff, it's been around for 20 or 30 years. And I mean, okay, it was horrifically terrible in the late 80s and early 90s when it was first, like, oh, this is the future and everything. Those things were so heavy, the resolution was terrible, and you couldn't wear them for very long. And I thought about the problem with virtual reality, and ultimately the problem from my point of view is that because it has to wrap around your eyes to place you in that environment, that creates a restriction around your eyes. That means you're not gonna get airflow around your eyes, it's gonna be warm and hot and sweaty if you're exerting yourself, it's not gonna be that comfortable. And that's just from the wearing comfort point of view, so the weight of the headset's a massive problem. The other problem is that because you're trying to simulate the real world, action and reaction, that time lag, that latency, can be really jarring unless it's really, really fast. So I did a bit of research into what the current best systems in VR are today, and the best lag is somewhere between 18 and 22 milliseconds, something like that. Have you played much with virtual reality recently? - I haven't, I have used it a few times. I used an Oculus Rift once and didn't throw up, which was very exciting. And I've also used those Google Cardboard kind of things where you put a smartphone into a pair of goggles, basically, and it uses the gyroscope in the phone to give you kind of a simple VR experience. And I think it's fun. I think it's great for gaming applications and probably other applications. And the pace of mobile technology innovation suggests to me that this stuff will get good. If it's not good already, it will be getting good before long. Oculus is doing a standalone device that, you know, is still not as good as the one that's tied to a $2,000 gaming PC, but we're gonna get to the point where you're gonna have something that you can put on your head that is going to give you a very high frame rate, high quality experience, where it's going to look and sound like you are in a different place. And I think that's great, without cables dragging back and attaching you to a wall somewhere. I think that will be amazing for certain applications. And again, I'm not sure if watching a movie in it is what I want to do, maybe, but definitely not only playing games but other, like, social applications, that could be fun. I feel like it's gonna be real, but I don't think it's ever going to be change-the-world real in the way that maybe it's been sold, or if so, it might be a while. - Yeah, I tend to agree. And I think there are certain limits the technology has to reach or exceed before it's going to even become that appealing. And you mentioned being wireless, that's a big one. Being untethered, I should say, 'cause being tethered is a big issue, and it does break the spell, as it were.
And in order to avoid it, 'cause a lot of people get virtual reality sickness, which is actually a coined term apparently, and which to me is just motion sickness really, 'cause I get horrible motion sickness. I learned that if I put on these special patches, which I can order from an online chemist, 'cause for whatever reason they're not sold in Australia, but never mind, you wear them for a few days and you don't get seasick, and then your eyes go all blurry. But don't worry about that, because at least you're not seasick. Anyway, never mind that. The point is that you need less than about 10 milliseconds of lag at a minimum, and probably less than five, for most people to not get virtual reality sickness. So that's really a necessary step for starters. And that's also gonna require better motion tracking of head position, arms, legs, everything, and movement, and that's also a challenge. And then I thought to myself, well, okay, so we need sub-five-millisecond latency. We need higher resolution screens than we currently have, because at that distance from the eyes the angle between the pixels is exacerbated quite a bit. It needs more responsive motion tracking. And I think ultimately the headsets need to be significantly lighter; they're still too heavy, because after a long period of time you will get fatigue in your neck and shoulders. So at the moment, I reckon all of those problems are two to five years away, maybe. It's still a little ways off, but you can see that the technology is accelerating and we're getting closer, but we're still not quite there. - Yeah, and it's a lot of the same stuff that we use for making smartphones, right? That's the thing: it's small sensors, it's very small, powerful processors and GPUs, it's high resolution displays, the gyroscopes and all of those things. The smartphone revolution has pushed that stuff to the point where it isn't ridiculous that you can stick a phone in a piece of cardboard, attach it to your face, and have the right app sort of give you a VR experience. It's not ideal, and the ideal is gonna be something that's more purpose-built and has the smarts of a smartphone built inside of it. But all of our tech is pushing in that direction, which is very good for people who are excited about VR, because they're gonna be able to reap the advantages of this. Everybody on earth is gonna have a smartphone; not everybody on earth is gonna have a VR rig, but in the end the pressure from the smartphone industry will give that power to the VR industry. - Absolutely, that's true. So that industry is gonna drive VR forward even faster, which is fantastic. And I was just thinking, finally, about VR: what's the real value of it? Beyond gaming, which is obvious, well, I say obvious, I mean that's the market it's been pushed at the most, there's also training simulators for certain tasks, where you wouldn't need to build physical replicas. For example, if you've got a control panel or a plane cockpit or the driver's cab on a train, those sorts of things, even learning to drive, you don't have to build a physical analogue replica of what you're training people with, you can actually do it all virtually. And I think that's actually quite huge in terms of training simulations.
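Circling back to the latency figures above for a moment, here is a rough sketch of why a sub-five-millisecond motion-to-photon target is so demanding, looking only at the display's refresh period (the refresh rates are common panel figures, not numbers from the show):

# Motion-to-photon latency can't be shorter than the time it takes to draw one frame.
# Frame period alone, ignoring tracking, rendering and scan-out delays:
for hz in (60, 90, 120, 240):
    print(f"{hz:>3} Hz refresh -> {1000 / hz:5.1f} ms per frame")
# 60 Hz -> 16.7 ms, 90 Hz -> 11.1 ms, 120 Hz -> 8.3 ms, 240 Hz -> 4.2 ms,
# so a sub-5 ms end-to-end budget implies very high refresh rates on top of very fast tracking.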
But then again, maybe that's also a little bit too niche. In any case, you're restricted in how long you can wear it, and ultimately, because it disconnects you from reality, 'cause that's the whole point, there are far fewer use cases for it. You can't just use it when you're driving. Well, definitely don't use it when you're driving. Anyway. - I was gonna say, you mentioned driving. I think one of the funny aspects of virtual reality, or kind of augmented reality, that we need to think about is this idea of, yeah, a heads-up display. There are cars that will project things from the dash onto the back of the windshield in front of the driver. And that's a case where augmented reality isn't blocking out the world, it's adding layers on top of it. And the idea that you can have a heads-up display that will tell you important things about what you're seeing in front of you, or even enhance it on a dark road, show you where the road curves and what cars are coming and all of that, that's really powerful. And that's just an overlay on top of your existing senses to enhance them. That's an exciting possibility. - Yeah, oh, absolutely. And that's exactly what I wanna talk about next, and that's our last topic, by the way; hey, we're getting close to the end. Because you're overlaying additional information with heads-up displays, you can add all sorts of context, and it matters how you do that. In a vehicle, and I was thinking about this, it's really an operating position: you could be sitting in a car, it could be a train or a bus, you could be the operator of a crane, like an overhead crane or something. With all of that information, you can add more and more context to help people make decisions. It's something they started having for fighter pilots 30 years ago, the heads-up displays in their helmets and so on, showing position and tilt and angle and all that stuff. I'm not a pilot, and I'm really terrible at flight simulators, but that sort of thing has been around for a while, and now it's just getting wider adoption. And that's gonna be a far more useful use case than something like virtual reality, because you're still connected to reality, so it's a lot less dangerous and it's gonna be a lot more widely applicable. But if you're not actually in an operating position, then you need something, as you were saying before about having to get light into the eye, some kind of an attachment that sits in front of your eyes, like on glasses. And let's just talk quickly about Google Glass, because that was intended to be, you know, here is the future. I remember at the time an article from Joshua Topolsky, and the tagline I still remember was, I've seen the future and it is glass. And the thing that's interesting about Google Glass is that the technology was way ahead of its time. The funny thing is that what came out of it was not the tech, it was how people reacted to the tech that was interesting. - Yeah, right. It had a little light that would come on to say that it was recording your social interaction, and there was this feeling that that was creepy, that it broke a lot of social contracts by doing that. And also you were visibly wearing an augmented device, in a way that, you know, maybe a pair of headphones doesn't signal.
It was saying, here I am with this thing that doesn't look like a pair of glasses. You couldn't really go incognito. They tried to do that later, where they added kind of fashion frames, but it was hard to blend in, and that all kind of broke down. Also, Google oversold it, right? That was part of the problem. It was an extremely low resolution display, which made it very limited in what it could do. - Absolutely. Did you ever get to try one at all? - I tried one for a very, very short amount of time, and I found the experience kind of underwhelming. But also my vision isn't great, and that was part of the problem too. I think it was invented by people who've got really good vision, and there was this whole issue of, I'd have to wear my Google Glass on top of my glasses. And again, later they were like, we can do prescription Google Glass. But it's a challenge, and it will be a challenge, this idea that not everybody wears glasses, and basically, for the physics and biology reasons I mentioned earlier, you kind of need to hang something in front of your face. Glasses are a logical way to do it, even though they're maybe a little bit too close; you could miniaturize that and all that. So I think there are a lot of challenges there. But as I mentioned earlier, I am kind of a believer in the augmented reality aspect of it, where you're not blotting out the world, but you are getting your world annotated in a subtle way that only you see. - Absolutely. And I think there are all sorts of interesting use cases. And just on that thought, when you bring up that not everyone wears glasses, exactly. The interesting thing is that I know people that need glasses, but they don't like to wear glasses, they really hate wearing glasses, and they will wear contact lenses over glasses any day of the week. In fact, a lot of people I work with, and other podcasters, are in the same position. So I totally understand that. And if you wanna use augmented reality, unless they come out with it on contact lenses, which is even further off in terms of technology, you're gonna have to put these glasses on if you wanna take advantage of it. Of course, it occurs to me that a lot of the geeks that are developing this technology probably wear glasses, so they don't see that as an issue, but anyway, interesting, never mind. So the point is that you can have all sorts of stuff, like how fast you're travelling, what the current temperature is. If you're out shopping, say in a bricks and mortar store, you could do object recognition, because right now you can take photos of things on your phone, have them looked up in an image search, and it'll tell you what it is. Well, it's the next step to say, "Now I know that this costs this amount of money. This is worth $5 on Amazon, or it's worth $10 at the next shop down the street." You can have all that sort of information. I don't know what that landmark is off in the distance, hundreds of yards away? Have a look at it and it'll tell you what it is. "Oh, that's Coit Tower. Okay, fair enough." Whatever it might be. All of that could be very, very useful information. I think that tech is far more useful and more broadly appealing than virtual reality is. - Yeah, I agree. I think it is. There's huge potential there.
I think the voice whispering in my ear is maybe a better interface for a lot of things, but not for all things. Having the ability to have, yes, recognition: who is this person? What's their name? Remembering everybody you see and being able to tell you who they are when you see them again, which I'm really bad at. Like you said, annotating: where am I? I know Google did a demo not too long ago with Google Maps being able to show you augmented reality walking directions on your phone, so it can actually draw an arrow saying, "Cross this street here," and all of that. I think that's great. We talked about glanceability earlier, and I think that's what augmented reality, done right, is, because it's not replacing your world, it's overlaying it with information. And if we go back even further, to me talking about how our brains process information differently in that Ted Chiang story, this is part of that, which is we end up as kind of augmented humans who have these helpers to give us more information and let us process the world better. And I think there's potentially a lot of power there. - Absolutely. And I think one of the current use cases for augmented reality is just holding up your phone or your iPad in front of, let's say, a table or a room, and then it can overlay things over the top of that. I think there's a limited set of use cases where that's useful. Obviously it makes for a really good demo, and you've got this virtual battle going on in a game on the table, or the virtual Lego demo I think they did at an Apple event not that long ago. But because you're holding it, it gets very tiring, it can be a bit awkward. The one use case I could think of was holding up an iPad in a room for different colour swatches, if you think about painting a room. I think that's actually a really good use case. It's certainly better than a cardboard colour swatch, the old fashioned way of doing it, 'cause it'll give you a much better idea of what the room might look like. But in the end, I think for augmented reality to really take off, it needs to be something where you just put on a set of glasses, or add it to your existing glasses, and it can give you all that information very discreetly. Ultimately though, unlike a tap on the wrist from the watch, which we love 'cause we can choose to ignore it easily, if something comes up as a visual indication, there has to be a lot of care taken by whoever designs the final AR technology that it doesn't obscure or distract or sit in your field of vision in the wrong way, 'cause that could be really dangerous. I don't think it's straightforward, but if you get it right, it could be huge, I think. - Right, well, when I mentioned the car example earlier, that's a good example of how that information does need to be placed very carefully, because the last thing you wanna do is obscure the vision of somebody who's driving a car. But at the same time, if you could enhance the vision of somebody driving a car, that would be very powerful and increase safety. So once again, it's all about the details. - Absolutely right, absolutely. So the only other thing I was thinking about is, how far off is this? And I think that apart from obviously the social contract stuff, which you mentioned, privacy considerations in that respect are really important.
So with recording, I think it probably would be better to have glasses or augmented reality where it's not possible to record. Like, data comes in, it gets processed, information is overlaid on top of it, but it's never recorded. It's the sort of thing where I would expect Apple to simply say, look, we think it's creepy, we think it's an invasion of privacy, it doesn't matter if there's a red light blinking, we don't want it at all. I think that would go a long way to avoiding the Google Glass "Glasshole" thing, or whatever they call people. Yeah, it would avoid a lot of that. But ultimately the only thing technologically stopping it is just miniaturization, and looking at what the Apple Watch has and how far it's come in four or five years, it may only be another year or two before we have a first product out on the market from someone like Apple, maybe. - Could be. I think that it may come very soon. - Well, here's hoping. I would get one 'cause I couldn't help myself, 'cause I'm a geek, but that'd be awesome. Anyway, all right. So ultimately, just to put a bow on this: you can use a single device, I think, for multiple purposes, but it's never gonna be the optimal interface for every use case, and that's not possible. There's no such thing as one device for everything. And it's funny, I hear some of my friends complain that there are now so many different products in Apple's lineup that it's hard to pick, whether that's desktops, laptops, iPhones of different sizes, Apple Watches, the HomePod. There's so much variety. "If I had to spend money, which one should I get?" is a question I know you get asked regularly, and that's a really hard question to answer. And I guess the whole point of going through all of the different optimal interfaces for different things and our different senses is that if you can optimize for each use case, you say, I want the best, most relaxing device for this task, let's say, then hopefully some of the things we've talked about can help focus you on what the optimal interface for this job is. And therefore it's okay for me to go and spend a bit extra and get the best possible display, the best possible iPhone or the best possible watch, because that's gonna satisfy the use case I want. It's gonna be the best interface for glanceable information, I want it to be discreet, you know, that's the way to go. A long time ago, I remember a tradie once said to me, "Right tool for the job." Meaning that you don't use a flathead screwdriver on a Phillips head screw. I mean, you could, you could probably get away with it, but eventually you're gonna gouge the screw head and it's all over, and then you'll be drilling it out. But you know what I mean. So I don't know, what do you think? - Yeah, I think today we're kind of in a transitional phase, but I do think we are departing from the world where you have a device, right? Like we have departed from the world of, I have a computer. That era is over. Now we have lots of devices, but it feels to me very much like we are entering an era with a constellation of devices around us, depending on our needs, right? And that does allow you to have the right tool for the job. And that's a challenge for some people, because they are thinking from a mindset of, you know, every product should be for me.
And that's where you get people, I think, who say, oh, AirPods are stupid, or iPads are stupid, or Apple Watches are stupid. And the truth is, they're not all going to be for you. In the constellation-of-devices world, it is going to be the right tool for the job, and if you don't have the job, you don't need the tool. But what we are going to have is a bunch of small interconnected devices, ideally working together, each doing the parts of the job they're most appropriate for. And, you know, Apple definitely is going in this direction, but I think lots of other companies are too. Even just with my Apple devices, I have a giant screen that I can sit at and work. I've got an iPad, which I can use around the house. I've got my iPhone, which I carry in my pocket. But I've also got the smaller devices: I've got the AirPods in my ears and the Apple Watch on my wrist. And in the future, you know, are they gonna do augmented reality glasses? The idea there is that eventually you've got little bits of tech. It's not all one big primary interface, it's scattered. And the Apple Watch is gonna be way better at parts of the job, if you want those parts, if that's the right tool for the job. And I'm excited, because smaller devices are going to be, I think by their very definition, more appropriate for very specific circumstances, because they don't get in your way, because they're small and they ride along with you and they're part of a larger story. I think that is exciting, and I do think that's where we're going: the right tools for the job, instead of what we used to have, which is, buy a computer and use the computer, and if that interface didn't work for you, too bad, it's all we have. - Yeah, it's interesting, and I completely agree. I used to get frustrated because there were always too many compromises with one-device-fits-all, and I've tried different things, like just having the watch as the only thing I had on me at the time, and a pair of AirPods, to be honest, so I could make phone calls. I tried that. It didn't work out in the long term, because it wasn't the optimal interface for a whole bunch of things. I couldn't do web searches, I couldn't search for anything on it, and that turned out to be a bigger problem than I realized at the time, but you have to try it. In the end, now we've got an amazing choice. I actually look at it as a good thing, because you can actually pick: if you want the best, most glanceable information and you want something that's gonna discreetly notify you when you have notifications, then get an Apple Watch, irrespective of the health component. I think it's great to have that option. So I see that as this diversifies, you can pick the best device for the job that you want, and that's a really good thing. But that's where we are today. The next inflection point is gonna be when speech recognition really becomes fast, reliable, and consistent. I'm sure people have been saying that for decades, but I actually think it's a lot closer than it has been in the past. That'll remove a lot of the advantages of the keyboard, in which case suddenly that equation will change and we can go and reassess, 'cause we'll be able to do mass text entry, well, text entry through speech, through a watch, when that becomes reality, potentially.
So all of that will change again, and we will reassess and we'll refine, and then there'll be a new optimal interface. So this is a never-ending thing as the technology evolves, and I think that's fantastic. - I agree. - All righty. Well then, if you want to talk more about this, you can reach me on the Fediverse at [email protected], or you can follow Engineered_Net on Twitter to see show-specific announcements. And we've recently started a YouTube channel, if you're interested in that sort of thing. If you're enjoying Pragmatic and you want to support the show, you can via Patreon at patreon.com/johnchidgey, all one word, with a special thank you to all of our patrons, a special thank you to our Silver Producers Carsten Hansen and John Whitlow, and an extra special thank you to our Gold Producer, known only as 'R'. Patron rewards include a named thank you on the website, a named thank you at the end of episodes, access to raw, detailed show notes, as well as ad-free, higher quality releases of every episode, with patron audio now available via individual Breaker audio feeds. So if you'd like to contribute something, anything at all, there's lots of great rewards, and beyond that, it's all really, really appreciated. There's also lots of other ways to help, like leaving a rating or review on iTunes, favouriting this episode in your podcast player app of choice, or sharing the episode or the show with your friends or via social media. All those things help others to discover the show and can make a huge difference. I'd personally like to thank Backblaze for sponsoring the Engineered Network. Remember to specifically visit this URL, backblaze.com/pragmatic, to check it out and give it a try. Don't take a chance with your data, start protecting yourself now. And don't wait a few months like I did, start today. There's now a regular Q&A session and a live stream for the show on the network. You can submit questions for the Q&A with the hashtag #EngNetQA on Twitter or the Fediverse, or live in the IRC chat room on freenode.net in the channel #EngNet. I hope you can join us live. Thanks to everyone who did join us live today, really appreciate it. Pragmatic is part of the Engineered Network, and you can find it at engineered.network with a full upcoming live show schedule now included. If you'd like to get in touch with Jason, what's the best way to get in touch with you, mate? - I don't know. You can tweet at me @jsnell on Twitter. You can find all my stuff that I do at SixColors.com, including all the podcasts at TheIncomparable.com and Relay.fm. - Fantastic. Awesome. Well, thank you very much, everybody that tuned in live. Thank you also to the patrons, and a big thank you to everybody else who listens to the show non-live. And thanks for coming back on the show again, Jason. It was a blast. - It was a pleasure, as always. Thanks for having me. - Thank you. [Music]
Duration 1 hour, 41 minutes and 1 second Direct Download
Episode Sponsor: Backblaze

Show Notes

Episode Gold Producer: 'r'.
Episode Silver Producers: Carsten Hansen, John Whitlow and Joseph Antonio.
Premium supporters have access to high-quality, early-released episodes with a full back-catalogue of previous episodes.

People


Jason Snell

Jason appears on The Incomparable podcast each week and for a long time was the Editorial Director of Macworld, PCWorld and TechHive, but now runs his own site Six Colors and also podcasts on Relay.FM.

John Chidgey

John is an Electrical, Instrumentation and Control Systems Engineer, software developer, podcaster, vocal actor and runs TechDistortion and the Engineered Network. John is a Chartered Professional Engineer in both Electrical Engineering and Information, Telecommunications and Electronics Engineering (ITEE) and a semi-regular conference speaker.

John has produced and appeared on many podcasts including Pragmatic and Causality and is available for hire for Vocal Acting or advertising. He has experience and interest in HMI Design, Alarm Management, Cyber-security and Root Cause Analysis.

Described as the David Attenborough of disasters, and a Dreamy Narrator with Great Pipes by the Podfather Adam Curry.

You can find him on the Fediverse and on Twitter.