Pragmatic 115: Use Sparingly, Use Wisely

6 June, 2024


Scott Willsey returns to pull apart the Google AI Overviews feature, the risks of trusting LLMs and generative AI, and the OpenAI Scarlett Johansson saga. Also, John's returning to the USA in September and embracing Dumb Watches, along with a host of other updates.

Transcript available
Pragmatic is a show about technology, contemplating the finer details of its practical application. Welcome to Pragmatic! By exploring the real-world trade-offs, we dive into how great ideas can be transformed into products and services that impact our lives. Pragmatic is entirely supported by you, our listeners. If you'd like to support us and keep the show ad-free, you can by becoming a Premium Supporter. Premium Support is available via Patreon and through the Apple Podcasts channel subscription. Premium Supporters have access to early release, high-quality versions of episodes, as well as bonus material from all of our shows not available anywhere else. Just visit to learn how you can help this show to continue to be made. Thank you. Before we begin this episode, by popular request, official Pragmatic t-shirts are available again for a limited time, with different drinkware and stickers also if you're interested. Visit for details. I'm your host, John Chidgey, and today I'm once again joined by Scott Willsey. How you doing, Scott? Good, how are you, John? I am very well, thank you. It's been a little while since we caught up on Pragmatic. Last time we talked about silicon and stuff. Yep. I had another topic that I know from listening to your other awesome podcast, Friends with Brews, about your AI discussions and tribulations, and we'll get to that. I definitely want to ... That's the main topic for today, but I've got a few updates I just want to quickly go through. One of these you know about ... Actually, you've probably got the notes up. You probably know all about them. The first one is I've been asked to speak at a conference in Prescott, Arizona, which is an odd place, I thought, for a conference. According to Ronnie Lutz, it's a beautiful spot. It's not a very big city. I would have expected somewhere like Phoenix or, I don't know, somewhere bigger than Prescott, but in any case. 
What I was going to do is, since I'm coming a long way over ... I was last in the United States in 2019, just obviously pre-COVID, and I went to a conference in Houston. I didn't speak at that one, but it was a good conference. When I was there, I stopped in at Austin and caught up with Manton Reece, which was cool. I'm trying to gauge interest, if there's any interest in a listener meetup at different spots along the way. I might be able to make some pit stops roughly in that area. I've been teasing out options with different people, and Ronnie's like, "You should stop in Vegas," and I'm like, "I possibly should." Vic's like, "You should stop in Kentucky," and I'm like, "Hmm, it's not really on the way. [LAUGH] It's a fair bit further." We're not laughing, Vic, we promise. No, I have actually costed up doing a side trip to Kentucky. It's not completely out of the question, but it is going to take ... Yeah, it's going to cost ... Because the company won't be paying for it, it's going to be a bit exxy, but that's okay. I had a few requests to stop in Seattle, which is not too far from Portland, but I can't stop everywhere, I guess, is the point. If you're interested, there'll be a link in the show notes. I did a post on Patreon, simply because Patreon ... This is a public post. Anyone can access it, and you can vote and say, "Yeah, this sounds like a good spot," or, "No, suggest your own," whatever, if you are interested in a meetup. If there's enough interest, no promises, but just let me know, and we'll see what we can do. I've done ... I mean, the whole catch up with Manton wasn't technically a meetup, but it was fun, because there were a couple of people there that listened to the show, which is kind of cool. Anyway, we'll see how we go. Let me know what you think. Where do you think I should stop, Scott? Well, I don't know. Yeah, I know. Prescott is kind of an unusual place to have a conference, honestly. It is a nice place. 
There's a lot of, it's just north of Phoenix, so it's pretty close to that area. I know. But there's a lot of interesting things to do there. But where should you stop? You should stop in Portland, which you're not going to do. You should stop in Boston, which you're not going to do. I'll let you decide whether or not you should stop in Kentucky. You should go to Vegas, though. Ronnie's there. And he, yeah, go there. Yeah, so even if I don't make a pit stop directly in Vegas, I'll probably meet Ronnie halfway at the border or some kind of thing like that, like, I don't know, undercover style. Mm-hm. Yeah, I know, I was just thinking of some kind of illegal deal happening at the border. I don't know, whatever. It'd be cool. Some kind of deal that's on the border, yeah, across state lines. I don't know. Anyway, cool. All right, well, it's in September, so it'll be late September when this is happening, or mid to late. So I've got a few months to get this sorted, and there'll be a few episodes between now and then, but yeah, please organize my travel in the next few months, so let me know what you think, and as I say, link in the show notes. Now as I said at the beginning of the show, I was going to give merchandise a break, but Cotton Bureau reached out to me, and by reached out to I mean sent a blanket mail-merged email thing that said, "Hey, we're doing drinkware now." I'm like, "Drinkware, you say? Okay, well, I'll have a look at this." And they're offering made-to-order, at least, so you can do tumblers, cups, water bottles, and the other thing. I posted it, and then I got feedback saying, "Oh, are we going to do coffee mugs?" And the thing is that Cotton Bureau don't do coffee mugs unless you have a storefront, and you don't get a storefront unless you're like a really big, super popular show. 
And as much as I'd like to think that I'm in the same league, you know, that Pragmatic and Causality are popular shows, I mean, they are, but they aren't compared to someone like, let's just take a random example, the Accidental Tech Podcast, you know, they'll sell like 600, 700 shirts, right? I'm selling more like 25 to 50, and Cotton Bureau look at me and say, "Hm, nah." So for the coffee cups, I went back to Teespring, and I didn't realize that Teespring, now called Spring, also now do coffee mugs, and they also do stickers and stuff. So I'm like, "Yeah, fine." I had some requests, so there's also some coffee mugs and stickers up as well. So I only put the popular shirts back up, the ones that sold a year and a half ago when they were last open for the 50th anniversary, the 50th episode of Causality celebrations, I should say. Yeah, so in any case, they're up. If you're interested, just have a look on The Engineered Network store, and yeah, grab one while you can. I'm not going to leave them up forever, maybe a few weeks, maybe a month or two. We'll see how we go. But in any case, if you're interested, there they are. All right, what's next? I sort of talked to Vic last time, Scott, about my annoyance with smartwatch longevity. I think that'd be the way to summarize it, I guess. And anyway, so after a lot of deliberation, I decided to, well, I talked to my wife and she bought it. But anyway, so my wife bought me a, it's a Citizen Eco-Drive stainless steel watch. So that's the one that, it's solar-powered charging, and once it's fully charged, you get something like six, seven months of battery life out of it. And it is, not that I wish to insult it, a dumb watch. And it probably sounds strange coming from me, having been such an Apple Watch advocate for so long. But I've just reached that point where the notifications were starting to really trigger me a bit. I don't know how to put it. 
I was always a little bit anxious and on edge, and I found that since disconnecting just this little bit, I've actually felt more relaxed in myself. And at the moment in my life, it's just, it's working for me better. So I'm stepping back from Apple Watches for a while. And I like this watch. It's a nice watch. It's not ridiculously expensive, but it's also not necessarily cheap and tacky. So it should last. It will probably outlast the battery in an Apple Watch, to be honest, before it needs to be replaced. Oh, for sure. Yeah, I think without doubt. Are you still wearing an Apple Watch, or are you wearing an Apple Watch Ultra? I can't remember how big of a... Yeah, I have an Apple Watch Ultra. Mm-hmm. Yeah, I just, for me it's indispensable for the fitness and I like that aspect of it. Mm-hmm. I did have, I had notifications way dialed back and it was perfect. And then I started helping a friend with his consulting business doing like some server related stuff and some client related stuff. And when I did that, he uses Slack, which is fine, except that he's got all kinds of information feeds going into the Slack. And I can't turn the notifications off on the Slack and I can't find a granular way to control which of those notifies me through Slack. So it's gotten a little more annoying since then because I get all kinds of notifications from Slack I don't care about and some that I do care about. But aside from that, I think if you're a person that's not getting work related notifications, you can lock it down pretty well. The only thing I ever used to get was stuff from my wife or daughter and that was perfect. I wanted that. But yeah, for me, it's the fitness aspect of it. That's why I keep going with it. I should probably clarify that, I mean, I still have both of my Apple Watches. I've got the Series 3 that I was using for sleep tracking and then the Ultra for day wear. 
And by switching to this, I've just disconnected from both of them, but I've still got them. And if I do go to the gym for a workout, I will actually put on my Apple Watch Ultra. Absolutely. And then do the whole thing where you're wandering around the gym trying to find the machine that actually will sync with GymKit. And it's like, "Nah, this one's busted. Nah, this one's busted. Oh, found one that works. Oh, there we go." Because I love the, like, I love the GymKit integration. It's a little thing, but it is really, really cool, really handy. Hmm. So in any case, I am still using it for that. So that's, you know, it's not like I'm not using it at all, but if I only go to the gym a few times a week, then, you know, that's it. So anyway. Alrighty. Yeah, right. So there we go. Not too much else to say, other than I look forward to this watch lasting me more than about six or seven years. All right. There was an Apple event where they had iPads. Did you watch the event at all? I did. Thoughts? I didn't care at all, but I did watch it. That pretty well summarizes how I felt about it. I only, late last year, bought an iPad Air fifth gen, that's got the M1 chip in it and USB-C, Apple Pencil 2, you know, not, God, the second Apple Pencil, but not the latest Apple Pencil. So the tappy, not the squeezy. And I got the Smart Keyboard for it, which is not the one with the trackpad, just the one with the keyboard. Right. And honestly, I use that for editing podcasts. It's my iPad, because I had a work iPad that I'd been using on and off. So it's all mine, using it for editing podcasts and just browsing on the couch and it's fine. I can't see why anyone would need, hang on, let me rephrase that. I can't see why people, how many people would actually get benefit from an M4 chip in an iPad. I don't understand. It's like, because anyway, I'm sure they're out there. I just don't know who they are. 
In any case, good luck to you guys that buy a new iPad. That's great. And I'm not convinced squeezy pencil is any better than tappy pencil, to be honest, except maybe it'll be a little bit more error rejecting, because I will admit sometimes you move the pencil vigorously and it thinks you're tapping it. Yeah. Yeah, the tapping was a great idea when it worked, it was magic, but I always had the problem that other people have talked about where sometimes when I really wanted it to work, it didn't. And then other times all of a sudden it would just think I was tapping it and it just drove me insane. I'd go to write something in GoodNotes and I hadn't noticed that it had jumped over from scribbling to erasing. I'm like, Oh, hang on. I just, okay. Undo, undo, double tap it back again. So yeah, but anyway, that's all right. It's still for the most part good enough. Well and truly good enough for what I need anyway. Anyway, I just mentioned it; moving on from the iPad event. It was what it was. I'm more excited about WWDC coming up in a few weeks time. Mm-hmm. Actually, hang on, is it next week? Next week, I think, right? Wow. Time flies because we're in June. My God. Yikes. All right. So about a month ago I had a situation where I sort of acted on a bunch of requests from different listeners and readers actually. So I've maintained my own blog since, so Tech Distortion has been on various platforms, between WordPress and it was on Statamic for a while. And then I created a version using GoHugo and that's where it is now. Having said that, and that was all since I left Blogger in 2011, 2012, something like that. And anyway, so what I, I got the request of, look, we would, and sometimes you get this, right? Where people say, Oh, we would support you if you had a way that I want to support you. You know, like I'd support you if you were on Spotify, I'd support you if you were on Substack, and sometimes you're like, well, yeah, okay. 
But I mean, you don't really know if that's true until it's actually, even until you've tried it. And so I decided in a moment of, I don't know what, that I would port everything into Substack and I'm not shutting down Tech Distortion, let's just be really clear. I'm going to run the two in parallel and I'm going to essentially post in Substack and then after a period of time, that post will then appear on Tech Distortion. And I'm just going to see if anyone even cares because I don't know if anyone does. And I guess for the people that asked for it, this is your opportunity if you're interested and if you're not, then that'll be that. So we'll run this as a little experiment, just like I did with Spotify. And basically after two years, I think one person supported me for like two months on Spotify and then that was it. So it's like, well, that's that. And it's since been, you know, essentially turned off. So you've got to try these things. So this is my Substack experiment, I guess. So for the moment, there's now actually two articles behind the paywall. Everything else is freely available up until two years from... Hang on. So the archive that's greater than two years old is also behind the paywall. But then again, you know, that's just a setting in Substack, honestly. And if you really want them all just like without going, without just... Just go and look at Tech Distortion. It's all on Tech Distortion. So anyway, so I set it a level of $5 a month US, $8 a month Australian. There's an annual discount and a founding member thing. If you're really, really insane and you really do want to pay that, you can. And if you want to post comments, because Tech Distortion does not have comments, then you'll be able to do that if you're a subscriber in Substack. In any case, I'm not really sure how this is going to go. The first article that I published on there was... I guess we'll talk about it next. 
But in any case, there's also a feature in Substack I might have a bit of a play with, which is a podcast feature. And I was thinking about doing host-read episodes available for Tech Distortion articles behind that as part of the Substack podcast feature. I haven't played enough with it yet. Still teasing out if I want to do that or not. We'll see. Anyway. It's funny, after it went up, someone posted a comment about... It's like, "Oh, Substack's the platform for Nazis." And I'm like, "Isn't that what they're saying about Twitter?" I don't know what that even means. So okay. Do you have any thoughts or opinions on Substack? It's regarding their policies and their stance towards moderation. Yeah. It's basically the stuff that they've decided to moderate versus not moderate. And they refused to -- they're trying to be very hands-off in moderation. And that's fine. I'm with you. It only really works to a point in the real world. I mean, the problem is, in the real world, you just have to moderate. That's all. And, anyway, that's basically what it boils down to. It's their moderation policy with respect to who all gets a voice on their platform and what they're allowed to say. That's what makes people angry. Well, I honestly don't know what to think. And I'm not saying one way or the other whether I understand exactly what their policy is or whether I agree with it or disagree with it, but I understand that that is the heart of the issue. I feel like... The funny thing is, when I actually went live with Spotify, with Causality on Spotify, as a paid premium thing for people to support the show if they wanted to, when I went live, I got something like two or three people come back to me with the hashtag #DeleteSpotify. Like, "How could you do this?" And it's like, I fully expect I'm going to get the same thing with Substack. Mm-hmm. It's going to be, "How could you do this? What are you doing on Substack?" I honestly don't know. 
I figure that for as many people that want it, there are as many people that hate it, and will judge you for doing it. Like, "Oh my god." Sure. But I'm not going to judge you for doing it. Honestly, my biggest issue with the platforms like this is, like, you're doing this in tandem with your existing site. You're keeping ownership of your own stuff. The thing that has always baffled me is, like, journalists and media companies that get on a platform, they use it solely and exclusively, or they build up their business based on it. And then when it goes south on them, they don't know what to do. And that always -- that to me is like, why would you let a platform be in control of you being able to make money? It's interesting. That's ridiculous. Websites are still a thing. I know that nobody believes that. But platforms come and go, and they do terrible things at times, and they do great things at other times. So to only rely on that -- and some of these professional organizations, surely they can figure out how to run a website and own their own content, but they seem to not want to -- you know, they seem to not want to bother. They want to offload that to other people, and then they're surprised when there's downsides to that. I actually listened to Decoder, and Nilay Patel interviewed the CEO of Substack. It was an absolutely fascinating demonstration of a CEO that was absolutely unprepared for all the moderation questions. Right. It was like he'd never listened to -- you know, never listened to the way Nilay asks people questions before. Yeah, I know. I love Nilay Patel. He's awesome. I've followed him ever since the Engadget days. I'm a big Nilay fan. With Substack, it all came to a head more surrounding their Notes feature. When you log into Substack, it's like it's there in your face. I'm like, "I don't care about that. I don't care about Notes. I haven't posted a note. I don't want to post notes." To me, it's just a blog. 
That's what Substack is supposed to be. Blog, subscribe, email newsletter, yada, yada, yada. It's a way that you can pay to support if you want to. That's all. That's all it is. I have no interest in Notes at all. That's where all this flaring up happened, because they make it so front and center that they want you to share notes. It's like, "I don't want it." It's kind of like Substack Notes is the equivalent of YouTube Shorts and Instagram Reels and... not Reels. God, Instagram Stories? No. The short videos, like TikTok. It's kind of ... Had that not happened, I do wonder whether there would have been quite such a flare up. In any case, I should probably link to that in the show notes, actually, because that was a fantastic episode. Yes, even after all that, I'm still trying the whole Substack thing. Now, if it goes nowhere, I'll just turn it off. I still own all my own stuff, and I'll still have Tech Distortion. I'm not getting rid of the domain. We'll see what happens. Support that way if you want to, or wait and everything becomes free anyway, and we move on. It's all good. All right. One of the things that I did post on there, I just want to quickly touch on it. It's not a big deal, but I had this ... I was going to say argument, lively discussion with someone at work, and they were asking me questions, because I drive a Tesla Model 3. Now, I love my car. I do not associate my vehicle with an individual, so my feelings towards the CEO of that company are irrelevant to how good the vehicle drives. Anyway, other people don't feel that way, and that's fine. They're welcome to feel whatever they like, but electric cars have this reputation, and I say a reputation, it's more of a ... because it's not based on much fact. People will say, "Well, the battery degrades over time." You buy a battery, the rechargeable batteries only last three or four years, you throw them out and get new ones, let's say. A car is just a bigger version of that. 
They say, "Well, I've got a car, and I buy a car, and it goes for, let's say, 60,000 miles, 100,000 Ks, whatever, and in a few years' time, it'll still go the same distance." You double that distance it's traveled, you'll still get the same range out of it. Nothing really changes. Then eventually, you'll hit the wear-out mechanisms of the engine, at which point then, you're up for reconditioning the engine, maybe even replacing the engine. With an EV, it's different. With an EV, you go 100,000 Ks or 60,000 miles, and you lose 5% of your range. My Model 3 has 5% less range today than it did when I bought it three years ago. That worries some people, but I don't understand why. Because if you've got plenty of charging infrastructure around, 5% of range is nothing. It's like 20, 30 Ks, or whatever that is, 20 miles. You say, "Well, my car's less usable now than it was." But what's usable mean? Because everyone is going to have a different threshold. If I'm doing long distance backhaul travel, if I'm trying to cross the continental United States or go coast to coast in Australia, the range is probably critical. That's fine. But the vast majority of people aren't doing that. Even if you are doing that, if you have enough charge, it just means you've got to stop and charge more regularly, and that's not the end of the world either. People are struggling with this. I wrote a story doing the comparisons, and by my math, if I say my daily usable range is about 125 miles or 200 Ks, that means I can get into the office, get home again, a couple of stops along the way, and I'll have enough range, I won't go flat. Then I can keep driving my car for 30 years, at which point then, it will have traveled a million kilometers, or 620,000 miles, which is pretty damn good. I'm pretty sure that I won't be driving this car in 30 years, because I've never owned and driven a car for more than 13 years, that's my record. What's the problem? 
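John's back-of-the-envelope usability argument above can be sketched as a quick calculation. The numbers below are illustrative assumptions, not figures from the episode: 500 km of initial range, degradation extrapolated linearly at 5% of original range per 100,000 km driven (real packs tend to degrade faster early and then plateau), and roughly 33,000 km a year, which adds up to about a million kilometres over 30 years.

```python
# Illustrative sketch of the "how long does an EV stay usable" argument.
# All constants are assumptions for the sake of the example.
INITIAL_RANGE_KM = 500.0
DEGRADATION_PER_100K = 0.05     # fraction of original range lost per 100,000 km
ANNUAL_KM = 33_000              # ~1,000,000 km over 30 years
DAILY_NEEDED_KM = 200.0         # the "daily usable range" threshold

def range_after(km_driven: float) -> float:
    """Remaining range after km_driven, under the linear degradation model."""
    lost = INITIAL_RANGE_KM * DEGRADATION_PER_100K * (km_driven / 100_000)
    return max(INITIAL_RANGE_KM - lost, 0.0)

# How many years until the car can no longer cover the daily threshold?
years = 0
while range_after(years * ANNUAL_KM) >= DAILY_NEEDED_KM:
    years += 1
print(years)  # 37 under these assumptions
```

Under those assumptions the remaining range only drops below the 200 km daily threshold somewhere past the 35-year mark, which is the gist of the "I won't be driving this car by then anyway" argument.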
Because what happened is this debate I had at work was, "Well, thank you, John, for convincing me never to buy an electric car." I'm like, "Okay, I guess." Yeah, ever. Yeah. Some people have strong opinions. I know, who would have thunk it? I just don't get it, because if you were to recondition an engine, and/or replace the engine over the same 30 years, it's going to work out approximately the same cost, maybe slightly more, because in time, the cost of batteries is going to come down, whereas the cost of reconditioning an engine is primarily labor, and that's just going to go up. How does that make any sense? I don't know. Anyway. So, yes, EVs are the future, at least the near-term future, until something better happens, which I don't know what that would be. It certainly isn't hydrogen, because hydrogen just wants to escape, being the smallest element. It's very hard to contain it and keep it. >> I just think it would be fun, though. You would see a lot more explosions on the highway. It would be great. Mini Hindenburgs. I've talked about this before. I'm not entirely sure that ... I mean, hydrogen's got a bad rap, thank you, Hindenburg, but my problems with hydrogen have just got more to do with the fact that it's expensive to manufacture clean hydrogen. You can get dirty hydrogen. That's cheap. That's fine. But clean, like green hydrogen, through electrolysis and so on, and other methods that they've got that are ... It's not straightforward, and transporting it, like ... People think, "Oh, hydrogen's a gas, so liquid hydrogen's the same as liquid petroleum gas." It's like, no. They're really different. There's a massive difference between containing hydrogen versus containing LPG. There's a massive difference. Complex hydrocarbons are much heavier, much easier to contain, whereas something that is so light, the lightest element on the periodic table, it will escape through any damn crack, defect, anything. And when I say crack, I don't mean crack. 
Beyond microfissures, but things like all the seals and everything. You've got to have multiple stages of seals and everything, and the cryogenic containment vessels, it's so expensive. And anyway. >> And those same concerns lend themselves not just to transport, but also to storage and distribution. Yeah. Exactly right. So, like, you go to fill up your car with hydrogen, everything in that facility has to be maintained and thought about the same way. Yeah. No, it's just ... Yeah, I can see it for heavy transport, but I can't ... And for industry. You make your hydrogen, you use the hydrogen to fire your furnace or whatever. I can see that, but I can't see it for anything else. It's just too many variables and too expensive. But anyway. Nevermind. All right. A couple more things, and then we can get into the main topic. I spoke before about Spotify, and what I was talking about was a Spotify thing with Anchor, where you put a feed in there and you could support through Spotify, and then you would get access to those episodes. So I ran a parallel feed for a while, now two years, for Causality. And like I said, no one was really interested. Okay. Now what's happened in the meantime is Spotify and Patreon have done some kind of deal, where now if you are a patron of a show on Patreon, if you want to listen to a podcast that you're a premium supporter of, you get the premium feed, and you get all the episodes ahead of time and high quality and all the bonus snippets that don't go out anywhere else. So that stuff. All of that now can be linked into your Spotify account. So you log into Spotify, let's say you're already a Spotify member and you're a patron. What you can then do is link the two, and then the show will show up on Spotify, and you can listen to it through Spotify. Whereas otherwise you've just got to know to copy and paste that RSS feed that is unique for you and add that to your podcast player. Most people that support are geeky geeks, mostly. 
As we said, engineers, software developers, and so on. So that is not a big deal. So I don't honestly know if this is going to be a huge deal or not, but if you are on Spotify now, you link the two and you will be able to listen to the master feed from Patreon on Spotify. So there is that, if that's of value to anybody. For me, it was a no brainer because I basically just linked the two, enabled it in the background, and it just ported the links and the audio across into Spotify's system and it maintains that synchronicity in the background. So it's like I had to do very little, but then again, maybe you're one of these #DeleteSpotify people, so who knows? All right. Last point before we get stuck into the main topic. My fateful Pleroma "feel free to sign up" server that I launched a few years ago is now officially, formally closed. I launched it and had some early reliability issues that Scott, I think you were quite familiar with, because in the early days you were using it. And by the time I'd figured out what was causing those problems, most people had just found other instances and had joined other instances in the Fediverse. So it had 10 users at its peak, of which I was one and the admin was the other. So it's not like it was super popular. And I thought at the time it would be a nice service to offer, but the problem is that it was another thing to maintain, to moderate, and moderating an instance is very different from just posting whatever you feel like. And you'd think I would've learned that lesson, but apparently not. Ultimately though, I think what really didn't work out was simply the reliability issues early on. And yeah, unfortunately I probably should have stood up the instance, worked out all the kinks and then announced it, rather than announcing it and then standing it up. So I probably got those the wrong way around, but irrespective, it is no more. But you found a new home on I think, isn't that right? 
Oh, it's better. Yeah, that's true actually. I hadn't thought of it like that. I like the cu domain because it's cheap basically. But yeah, cost is a feature. But honestly it was... anyway. So look, I apologize for anyone who was still keen to keep using it, but I did message all of the users on it and say, "Hey, are you going to be offended if this service goes away?" and everyone said, "Feel free to kill it. We're all good." Or words to that effect. I know the Aynward Fraser quite liked that, but it was close enough. >> And migrating is easy and possible, so it's not bad. Yeah, cool. Well, so that's that. It's not a big deal, to be honest. Another chapter closes. All righty. Main topic time. Now I know that AI, and I say AI, artificial intelligence, most of the stuff we're talking about is a form of machine learning, and machine learning in the context of large language models, whereby there are massive amounts of text sourced from various places that are basically ingested into a machine learning model. And then you can play the game of what came before and what came after statistically, and it can be useful for all sorts of different things, like predicting text when you're typing, saying, "Oh, the most likely thing to say after this is this," and it helps you with autocomplete and so on, and gets it wrong regularly enough, and so on. But now things are stepping up a bit, and I know that you've been playing with Raycast AI, which maybe ... Let's start with that actually, because you've got ChatGPT and you've got a whole bunch of other different competitors in this space. So just tell me a little bit about what Raycast AI is. Yeah, so for me personally, I do also have OpenAI's ChatGPT Plus account. I also have their API account. 
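John's framing of LLMs as statistical "what came before, what came after" machines can be shown with a deliberately tiny sketch: a bigram word counter over a made-up corpus. This is nothing like a real LLM's neural architecture (the corpus and code here are assumptions purely for illustration), but the core idea of predicting the next token from observed frequencies is the same:

```python
# Toy bigram model: predict the next word from observed frequencies.
# The corpus is made up purely for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which across the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" twice, vs once each for "mat" and "fish"
```

A real model replaces these raw counts with a learned probability distribution over a huge vocabulary and a long context window, but "autocomplete that's statistically often right and regularly wrong" is exactly this mechanism scaled up.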
And I have Raycast Pro, and I'm also paying for the Raycast API. Basically what it is, if you're familiar with Raycast, it falls under what's called the app launcher category. App launcher is a misnomer in my opinion, because it's really the least popular or least used activity for using Raycast. Yes, I do use it to launch apps and to launch links, doing things like using Raycast AI, stuff like that. And what Raycast has done is said, we're going to allow you to access all these different models, and then we're going to try to encapsulate it in a way that provides functionality for you. So you can see here, I have a pop-up that's like Spotlight-ish. It basically pops up a text bar with a few other things. And you start typing stuff in there. If you want to use the Quick AI chat, you can type something in there and hit tab, and it will send that to the AI instead. You can ask the AI things real quick right there. 
And you can do things like set up AI commands. Like, you can say, give me a regex, a regular expression, for whatever I have selected in the front window. And it will operate on that and return you a result. You can do all kinds of things. Then there's also an AI chat, where you have a keyboard shortcut to pop open this window, and then it's very similar to using ChatGPT or something like that. And you have conversation histories, you can have multiple conversations, all the stuff you would expect. So basically what it is, it's a way for them to charge you money, of course, to get access to different LLMs and use whichever LLM you want for a specific task. So you can assign, like in those AI commands I was talking about, you know, spell check this for me. I have some that say spell check, but don't change anything. Don't do anything else. Only spell check. You can be very specific. And then with each of those, you can individually choose which model you want to use. And then with the chat, with the advanced chat, which is a separate window, you can change models as well for individual chats. So it's basically just that. It's flexibility, consistency in the interface, integrated into a tool you're already using, and access to several different models. >> They do. I will say I've never, ever hit those limits. But it's not like I'm only using it once; I use it a few times a day. I usually use it for, you know, simple programming things, like just to remind me, because in this language, how do I do this versus this other language? Or, quick, write me a bash script to do this. Or I'm doing a server admin thing, and I'm like, what does this mean? Where would I look for this thing? And for me, for ChatGPT 4 and 4o, it's very good at doing those things. So that's what I use them for. But that's the extent of how I use it.
I'm not sending loads of giant walls of text back and forth as it tries to maintain its context. And so I don't run into those limits. Okay. >> Well, that's interesting, right? So I have used 4 and 4o and, of course, 3.5, and I tried the -- I think it's a relatively recent feature in ChatGPT where you can throw a photo at it and say, "Describe this photo." And I put in a portrait photo I had recently done of myself, and I just threw that in there, because it's like, now the AI knows what I look like. Anyway. And I got it to summarize it, and I was, frankly, blown away at just how accurate it was. It described my facial expression correctly, the clothes I was wearing correctly, the background bokeh effect, for example, my posture even, and obviously it also got the basics right, you know, a bald man wearing glasses. Well, it's important to get the basics right, too, but it absolutely nailed it. And I thought, "Hey, this is incredibly cool," and it's the sort of thing that I can see as a huge application in a personal context. Like, I take too many damn photos. I mean, I know that I went through this. So my wife's 50th is coming up, and I'm going through all of the photos to pick out all the best ones at different times in her life to do a slideshow, you know, a pretty common thing. And I just thought to myself, I'm going through these, you know, with my eyeballs, like an animal, you know what I mean? And this technology, applied to my entire photo library of however many hundreds of thousands of photos I've got, and I could say, "Find all of these images of Kirsten," and it would then be able to identify and pull them all out for me. That's massive to me, a huge time saver. And so I can see that application. >> Yeah, for sure.
And I know that that's not just -- that's not really exactly a large language model, but in any case, the point is, though, that why I want to talk about this is this has been a rapidly evolving space, and I'm not really interested in the whole which company's doing what in terms of, like, you know, what's OpenAI doing, what's Microsoft doing, you know, what's Apple not doing or doing. We'll see. And it's like all of that is of lesser specific interest to me. I'm more interested in the technology itself, and the way we are trying to practically apply it and where are the boundaries here? Like, where are the edges? Because right now, there's no question this is a massive step forward, but there's a limit to what it's ever going to be able to do. And this is what worries me a bit. So I wrote -- I dumped probably a good half of my thoughts into an article that I posted on Substack, and there'll be a link in the show notes and all that. But just to go over the detail of it here, really, what worries me is if you look at the way that we train AIs and compare and contrast that with how humans learn, like contextual learning, and what worries me is that -- okay, what triggered this is AI -- is Google's AI overviews. So let's just start with that. Have you had a play with the AI overviews at all? Okay. So the AI overviews, for example, there's been a lot of clickbait where people have actually -- what's the word? Yeah, they've modified, they've tampered with the screenshots and created things that look like they're actually like, "Oh, if you ask the AI overview this in Google search, it comes back with some nonsense." And it's like, "Yeah." There was a blog post by Google only, I think, two days ago from time of recording where they pointed out that, "Yeah, actually, a whole bunch of these are fake." Then again, "A whole bunch of these are actually -- yeah, it did actually say that. Oopsies," sort of thing. So I'll have a link to that in the show notes as well. 
But the intent, for people that are listening that don't know what I'm talking about: in the beginning, there was the original Yahoo and Excite, Lycos, and eventually, of course, MetaCrawler, AltaVista, all these different search engines that came and went. They got killed by Google. Google search was all about page referral. So if a page is linking to another page, that is a vote for that destination page in terms of its quality. They had an algorithm for that, PageRank, and you would type in a search term, and it would essentially spit back ranked links and a few ads here and there. Anyway, not going into the detail of that; it evolved over the years. Now, I can't remember exactly -- I don't know, Scott, if you remember exactly when, but Google started offering suggested responses to your query at the top, above the search results and ads. It must have been about four or five years ago. I can't remember the timeline. Do you remember how long ago that was? Yeah, yeah, yeah, that's fair. Yeah, I think it was about four or five years ago. And I mean, I use Google under sufferance, because -- I just -- I use it under sufferance. I've tried all the other ones, as I'm sure you have, like Bing, DuckDuckGo, and a few of the others, but honestly, a lot of defaults send searches to Google, and you just end up using it, if only because you couldn't be bothered changing it. But in any case, the quality of the Google search results -- let's not go there. Point is, four or five years ago, they started to publish a little summary at the top. So it almost performs a function like the I'm Feeling Lucky button, whereby you type in a search, say, I don't know, tell me about John Chidgey, and hit I'm Feeling Lucky, and it'll just take you straight to the top search result that came back.
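The "votes for the destination page" idea John is describing can be sketched as a toy PageRank calculation. This is a simplified illustration, not Google's production algorithm, and the three-page link graph here is invented for the example:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank: links maps each page to the pages it links to.
    Each outbound link is a 'vote' that transfers a share of the
    source page's rank to the destination page."""
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}  # random-surfer baseline
        for src, outs in links.items():
            if outs:
                share = damping * rank[src] / len(outs)
                for dst in outs:
                    new[dst] += share
            else:
                # Dangling page with no outbound links: spread its vote evenly.
                for p in pages:
                    new[p] += damping * rank[src] / n
        rank = new
    return rank

# "home" receives the most inbound votes, so it ends up ranked highest.
scores = pagerank({"home": ["blog"], "blog": ["home"], "about": ["home"]})
```

Real PageRank ran at web scale with sparse matrices and convergence checks, but the voting intuition is the same: inbound links from other pages lift your score.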
And if it turns out that it's linking to a page that's my Wikipedia page -- which, by the way, I don't have a Wikipedia page, and that's fine, I don't really want one -- but whatever, the point is that it would just take you straight there. But instead of doing that, as of four or five years ago, they've provided something contextual, a summary of what would be on that page. And it presents that in very bold, large text, and it'll highlight the section that it thinks you're looking for. Now, all of that was based on its existing search engine technology. And I've had mixed results with this. Like, I'd look at that, and I'm like, yeah, but that's not exactly what I'm asking for. You know, so I will go down, and I'll check each of the links, and I'll go to some of the websites in different tabs, and filter through and make up my own mind. It's like, this is the one that's most helpful to me. So AI Overviews takes that to the next level. And it says, well, I'm going to tell you, in that area at the very top, the AI Overview: this is the answer to your question. I'm not going to give you a suggestion. Like, you can still go way further down the page and look at links, but I'm going to tell you authoritatively at the very top, this is the thing you're looking for. And it is rolled out in the United States progressively for users, and other parts of the world, they say, will be following. And it's all based on their own -- I think it's the Gemini AI that they've been developing, I believe. >> Yeah, I think so. I think they've changed its name once or twice. They tend to do that. Yeah, I lose track. So anyway, generative AI Google Search. So basically, I started thinking about what they're trying to do. My first issue is Google has built up -- I was about to say, and I'm remembering, Scott. I was going to say Google's built up a certain amount of trust. Have they?
I don't know. Yeah. I think for normal people they have. I think they definitely used to have a lot of trust, and I think for a lot of people they probably still do. Yeah, this is the problem is that -- so for people like us, like we see the holes. We see the issues with Google. We see the way that they do business and the whole dropping of the whole don't be evil thing because let's be honest, for the last 10 to 15 years, that really hasn't been true. So I'm glad they just admitted it, that kind of thing. It's like we see the problems and we don't trust everything that Google tells us in search results. We just don't. But we are the skeptical few. There are a lot of people that will just type stuff into Google and just go with the first response as being the truth, and I'll just go with that. And what worries me is that people are going to stop doing that next step of research. They're just going to take that for granted because if the results are convincingly good, for 90% of the questions that you ask it, it will build a cycle of trust and potentially mistrust at some point because that final -- whatever the percentage is where it gets it wrong, and people will just have gotten out of that habit of actually looking for fact. And that actually got me back to what's the difference between fact and truth? Because when I write the article, it's like fact versus truth AI edition, right? It's kind of -- because to me, a fact is something that you can prove, whereas truth is something that the majority of people believe to be true but is not necessarily provable. So with the Venn diagram, you'll have facts that people don't believe but are still facts, and then you'll have facts that the majority of people do believe to be true, and then on the other side of it, when the non-overlap region on the right, you'll have things that people believe that are actually not provable, and that is a truth, not a fact. 
And why on earth I'm obsessed by this is because I look at the information that you feed a large language model, an ML model, and the quality of that information going in is crucial for it to give a sensible result. Because if you've got garbage in, you get garbage out. The same old adage will always apply. And so what worries me is play the long game, where does this end? And all of the information that's available on the Internet, it's like someone has to be the adjudicator of this is fact and this is not necessarily fact. Because otherwise, all your LLM is going to do when you ask it questions, it can't tell the objective difference. And this is what worries me. >> Yeah, I think the difference between a human and a large language model is that humans are also subject to the garbage in, garbage out principle, but over time people also have the capacity to keep evaluating things, and people do eventually change their opinions about things, whereas large language models see everything as equivalent, and whatever sources of data they're being given, to them that is information, that's fact, that's information ripe to be passed on to other people as fact. And so it is troubling, and also, you know, like they're paying Reddit to use Reddit as a source of information. I mean has anyone who's ever used Reddit thought that that would be a wonderful idea, to train AIs on Reddit? I don't think so. You know, stuff like that. So the whole -- there's many problematic things about using the Internet as a training resource. It's obvious why they would want to, because they need so much data, and they want to be able to take it, and, you know, but it's super problematic because the Internet is full of garbage. So you're basically feeding garbage into a machine that doesn't have the capacity for evaluating the relative merits of different information. Exactly. And this is what worries me. 
It's like if we take the context of this technology is no different to some random old person in the street. You ask a question, they may give you the right answer, but they also may not. What worries me is the way it's being presented now in Google as a response. It's like there's too much trust in Google and Google search and result suggestions. And what I worry is that people are going to take its advice as being reality. The funny thing is if I were to look through the list of links that Google gives me today when I say, I don't know, how to tighten the bolt or a nut or whatever, there's going to be probably half the results that are just garbage. They're SEO BS. You know, it's like you can't trust most of the responses. Like some of them you can, some of them you can't, but there's so much SEO optimization going on that's just spamming keywords and different various other, you know, back linking to other SEO sites in order to build page rank and all that other stuff. It's a constant cat and mouse game. And the suggestion that the AI overviews are going to do a better job, I just don't see how that actually manifests. I don't see how that can be true. But if people trust it, then suddenly these results are going to become more impactful in the real world. So I kind of, I figure that the only real way of doing this is to provide some kind of like data quality score of some kind, like some metadata around all data. And this is the funny part is it's like, I got thinking about Encyclopedia Britannica. And a lot of this machine learning large language model stuff, I have an urge to actually buy a complete hard copy of the Encyclopedia Britannica. I hear this is a thing that happens when people get older. I don't know. They want to buy sets of encyclopedias. Have you ever come across this? Do you have such desires or is it just me? 
I think I might -- well, I don't really have that desire, but I think it would be interesting for nostalgic purposes, because we used to have those at a school that I went to when I was a kid. So I'm familiar with them. I like them. I enjoy them. Yeah. Yeah, I think you're right. It would be cool to have those again. Yeah. There is a nostalgia to it. And maybe that's just me and showing my age because I remember my father bought, we didn't have Encyclopedia Britannica. We had World Book Encyclopedia, I think it was. And I remember that he bought them on an installment plan. We weren't that well off. He was a schoolteacher, but he got the entire set. And that was fantastic. Great reference resource. Anyway, why bring it up is Encyclopedia Britannica is probably the oldest institution. I guess that's the right word, I guess, maybe, that's been trying to get to fact and document fact on such a broad range of topics. It's probably been around the longest. And I'm not counting religions because theirs is a search for truth, not necessarily fact. And so you could argue that there are, you know, biblical texts that go back a lot further than Encyclopedia Britannica, which is only a few hundred years. But the whole point of Britannica was to document the world around us and get to fact and document fact. And the links that they go to in order to get truly trusted sources of information is insane. The funny thing is that all of this knowledge has been built up over time. It's a massive body of knowledge. And if you were to rate its data quality in terms of factuality, it would be extremely high. And it's because of all of that strict editorial going to the actual sources of information and such. Why bring that up is I got thinking about Wikipedia. Now, I have linked to Wikipedia for forever, and I continue to go to Wikipedia for reference information. But Wikipedia is not without its problems because it is much quicker and easier to put stuff up there that is frankly BS. 
And it can take a long time for people to say citation needed and then have that content taken down. But it's there and live for a long time, and not all of the information is factually correct. So I can create an account, and I can suggest an edit and say, "It turns out the sun is actually pink." And it could take months for that to be removed. And there is no such mechanism with Encyclopedia Britannica. Because you have an editor, it has to get past the editor to get released, to get published, to get printed. Multiple fact checkers, and so on and so forth. I know that's a ridiculous example, but I recently had a case on an episode of Causality where somebody who had been around for a long time in the industry said, "Oh, I believe that they didn't use this particular interlocking system, and that was the cause of the disaster." They went into the Wikipedia article and added two paragraphs of unsubstantiated information. And all I could do was go in there and flag it as citation needed, because there is no publicly available information, nor insider information, to actually corroborate what they're saying. It's a supposition. So I have a mistrust of Wikipedia. Have you found -- I mean, what are your thoughts on Wikipedia? I'm just curious. It is a useful resource as a starting point. If I want to know general facts about something, but I don't have a deep need to know whether or not it's completely accurate, like I'm not reliant upon that data for anything, I find it pretty useful. It's a good starting point. It's got information about everything under the sun. But I do agree with all your concerns. I think it's a mix. I think it's better than some people think overall, and I think it's worse than a lot of people think overall. I think it's a wild mix. It probably depends on what articles you're looking at and which people are involved in editing them. But I think that it should just be, like anything else, a starting point.
And it certainly cannot be compared to something professionally curated like Encyclopedia Britannica. It can't be seen as anything more than quite subjective, with attempts at removing the subjectivity from it. I think that's a really good point you raise, actually, Scott: it is a mixed bag, and it's probably a mixed bag based on the cross-section of people that are maintaining different areas of Wikipedia. Some people in there are very technically inclined, know what they're talking about, subject matter experts. Other people are just hobbyists that don't really know what they're doing from a technical point of view. And so someone like me, who saw this supposition that was added to the Callide C Wikipedia entry, you have a look at that, and I'm like, "Well, that's not what happened." And the reports say that that's not what happened. So you can flag that, but then will someone else look at that and be the adjudicator of who was right and who was wrong? And so you get areas where it's quite good and areas where it's really terrible, and it is highly uneven, whereas something like Encyclopedia Britannica is far more even and far more level. So why I'm harping on about it is because if you look at the two different approaches to judging information quality before you import it into a body of knowledge, which is really what we're talking about with AI Overviews or any large language model, it's exactly the same problem. It's just that it's done at a machine level. And so who is the arbiter? Who decides? And ultimately, it can't be the majority vote, because the majority vote could be a truth that's not a fact. If someone says the sun is pink and a million people say it and everyone else stays silent, not bothering to post "that's ridiculous", then that will show up in an AI Overview, potentially.
And examples of the insanity: someone asked, "How many rocks should you eat a day?" And the AI Overview responded rather helpfully, saying, "You should have one rock a day for good health." It's like, "What?" And when Google was pushed on this, the response in their blog post just a couple of days ago was, essentially, no one had ever asked a ridiculous question like "How many rocks should you eat a day?", therefore it was an area where it had a blind spot. Right. Blame the user. Well, I know, right? Blame those rock-eating users, John. For the Zelda fans out there, I'll have you know that Gorons love eating rocks. But never mind that. The point is, how many more gaps are there? It's kind of funny when people say, "Oh, that's just a blind spot. It's a gap." There are an infinite number of gaps, and you don't know about them because no one has asked the question yet. So how can you know about them? This is the thing. At what point in the future will something become a gap that was not a gap today? Like, eating rocks might have some kind of new meaning in 10 years' time. Like, for all the teenagers, eating rocks might mean, I don't know, having pancakes for breakfast. "Hey, how were the rocks this morning, man?" And they'll be like, "Yeah, they were great rocks." I mean, you don't know. The usage of the language evolves and changes with time. So what is not a gap today will become a gap tomorrow. It's a fascinating problem, and I honestly don't want to sound like a Luddite, or like I'm getting real old, but I don't see how you solve this. Because no matter what you do, if you were to have some kind of a data quality score, even if you could calculate one, how can you enforce that it's attached to content? Because ultimately, that's the only way you're going to know.
If you have a page of text written by the Encyclopedia Britannica and a page of text written by Johnny Rando, who lives out the back of Uluru, and it looks like exactly the same words, more or less, subtle differences between them, who's going to flag the metadata for each in an authoritative way and say, "Yep, this is absolutely, genuinely 99% reliable as fact"? There's no mechanism for that; nothing like that exists. And if you could create a mechanism for that, technologically, and say, "You can't publish text without a metadata component," well, how do you enforce someone providing an honest answer? Because if I'm a spammer and I want to do SEO optimization, I can set the metadata to whatever I want. The honor system's never going to work. So I just ... I don't know. Maybe ... I don't know. What do you think? Yeah, I don't know how that problem would be solved either. I think what will happen will be either we'll find a better way of training these things, and then it's still up to people whether or not they believe their new little buddy, and to take it like they do a peer at work, where sometimes the peer will tell you things that don't make sense and aren't true, and sometimes the peer will know what they're talking about, and, you know, some people are better than others. But I don't see how you set up a system where you guarantee that you create AIs that are more or less truthful, more or less accurate, more or less knowledgeable, and feeding on good data, without a significant amount of work that I don't believe is going to be done.
And that's one of my biggest issues with AI is I don't believe that humanity, specifically the parts of humanity that are concerned with creating these things, I don't believe that the work will be put in to address these problems, and I don't believe that even if it is possible, let's say, to make it better, I don't, you know, it's never going to be perfect, but let's say we could get it to a place where it's good enough, we can live with it, and it provides way more benefits than harm based on how good it is. I don't trust the people and companies and the state of our society right now to be able to understand it well enough, and/or to put in the work to make it happen, even if they do understand it. Yeah, I'm with you on that. I don't think they will either. And I think that the objectives of some of these companies, some of them are VC-backed, some of them are ... even if they're not, they're ... where cost is your driver, then it will always tend to drive certain outcomes. And I think that the quality line is something that they would push to a point, but then no further, because they'd see no incremental benefit. Because it's unattainable to get 100%. Even the Encyclopedia Britannica has errors in it. So they would accept some kind of a threshold, an error rate threshold, and they say, "Well, this is good enough, people will trust it. They keep using our system because they trust it, and then we get ad revenue or whatever else from it. So it pays for itself now at this point, and this is the pain threshold of incorrect information people are willing to live with." So I don't think that they're motivated to do it either. So I agree with you. I think fundamentally, I don't want people to get the impression that I'm down on the technology, it's never going to work, blah, blah, blah. No, that's not true. 
What I'm pointing out is that this technology has limits, and they are quite concerning: if you don't appreciate those limits and you take it at face value, then you are setting yourself up for inevitable failure, is my point. And I'm not entirely convinced that large language models used in this context, which is the whole ask-me-anything, it'll-give-you-an-answer-that's-probably-right thing -- I don't think that's necessarily, in the broadest context, going to work out. I think there are certain contexts, and I think you've had some more experience with this, and I'd actually like to hear a bit more of your thoughts on this one: what are examples that you've personally used where it's given you probably the best results? I'm just curious what you've found in your experiences with these applications. >> Well, I'm not sure what the cause is, and I'm not sure of the reason for the change, but over time -- like, when I was using GPT-3.5, for example, I would get a lot of partially correct programming answers, and then about the time I switched to using the GPT-4 models, it was getting better. And now, for my use case, and again, I'm not using it all the time every day, but I will use it to help me remember how something is done in a specific language. Like, I need a regular expression, I'm going to replace this with that other thing, and it's going to be in this function. Here's what I'm looking for. Here's what the output's going to look like. Just show me how to do it in this language versus this other one, because I don't remember the details. And for things like that, for things like converting formats, for things like creating regular expressions, for things like server admin tasks -- where do I look for this log? What does this mean? How does this piece of software supposedly interact with these others? -- it's been really good for me at getting me started and generally not providing me with wrong answers.
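The kind of answer Scott describes asking for -- a regex for a format conversion -- looks something like this. The date-format task and the function name are made up for illustration:

```python
import re

def us_to_iso(text: str) -> str:
    """Rewrite US-style dates (MM/DD/YYYY) as ISO 8601 (YYYY-MM-DD).
    The capture groups grab month, day, and year; the replacement
    template reorders them with dashes."""
    return re.sub(r"\b(\d{2})/(\d{2})/(\d{4})\b", r"\3-\1-\2", text)

us_to_iso("Recorded 06/06/2024, published 06/07/2024.")
# -> "Recorded 2024-06-06, published 2024-06-07."
```

This is exactly the class of question where an LLM's answer is cheap to verify: you run it once against sample input and it either matches or it doesn't, which is part of why programming queries are a comparatively safe use case.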
There was something the other day, server admin related, I can't remember what it was, where it gave me a wrong piece of information, but it's pretty rare. And also, for me, it's no different than looking stuff up on the internet and having to weed through the people on Stack Overflow that only kind of know what they're talking about and the ones that do. What I don't like to use it for, though, honestly, is something that, like, Peter uses it for all the time: summarizing things, doing creative things, because, like, he does role-playing games with his friends. And so they'll use it in that context to help generate storylines, to help generate summaries of their activities and stuff like that. Those are the things that annoy me the most, because it's a terrible writer. Like, it's just terrible. It writes the most inane and boring summaries you would ever hope to read. And it cracks me up how bad these large language models are at actual language. Like, if you're playing a word game, give it some letters and say, "Hey, I've got six letters. Here they are. Give me all the words that you can make with these." And it is always wrong. It's putting in letters that aren't there. It's putting in the wrong number of characters. I mean, basic, fundamental -- it can't even count the letters that you gave it, nor remember that you said, "There's two S's and a T," and then it'll just throw some of that out. It's ridiculous. To me, the least impressive aspects of it are the actual language aspects, which is hilarious. But I do find it useful for technical things, for getting me started and quickly pointing me in a direction. And, you know, I use it for things that I'm getting paid to do, and it's honestly paid for itself in that regard. Is it perfect? No. Do I still have to do tons of research of my own on things? Sure. But it gets me started. It's a tool. So that's how I use it, and that's how I find it useful.
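The word-game task Scott describes is trivial to solve deterministically, which is what makes an LLM's failure at it so jarring. A sketch, with a made-up six-letter rack and a tiny stand-in word list (a real game would load a dictionary file):

```python
from collections import Counter

def words_from_letters(letters, wordlist):
    """Keep only words whose letters fit within the supplied rack,
    respecting duplicate counts -- the bookkeeping LLMs routinely fumble."""
    pool = Counter(letters.lower())
    # Counter subtraction keeps positive counts only; an empty result
    # means every letter of the word is covered by the rack.
    return [w for w in wordlist if not (Counter(w.lower()) - pool)]

words_from_letters("sstale", ["least", "tassel", "sass", "tales"])
# "sass" needs three S's but the rack only has two, so it's rejected.
```

Twelve lines of counting do reliably what a model predicting likely next tokens, with no notion of exact letter inventories, often cannot.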
Yeah, definitely. I have to admit, I'm personally still struggling to find a reliable use case for it, for things like ChatGPT where I'm asking it questions. I still prefer to do search, in air quotes, the old-fashioned way. But I have also heard my oldest son, who's studying computer science -- he said that it's really, really helpful. I think we sort of discussed Python, you know, because Python's the scripting language for any old task you might have, kind of a generic toolkit these days, which is good. I mean, I like Python fine. And just trying to remember how something is actually formatted in Python, for example, when you haven't written it in a while -- he says it's amazing for that. So that's a use case I could probably see next time I want to do a Python script, but I haven't had to touch those in a while. But I guess all of this is to say it does have good use cases, and it has got some good data sources. And it's probably no surprise that the programming ones are actually quite good, because it's being developed by programmers, so you would expect that that would be one of the big use cases they personally want to see solved. So they've probably got high-quality data, or they're scraping text from Stack Overflow or something like that, and leveraging some of the upvoting done by humans. But in any case, I'm glad that it's useful for you, and I need to keep testing it with different things to see if I can get use out of it. But it just worries me, like I say, the people that don't have as critical an eye on this stuff as you and I. And it can be beneficial; obviously, if you didn't think it was beneficial, you wouldn't be paying for it. So it's like, well, okay, this definitely can be beneficial in certain circumstances, and it saves me time, and that pays for itself, and that's great. I start worrying when people, you know, search Google and
for things like their legal defense: doing engineering design, getting advice on how they should do their accounting or tax, doing all their books at the end of the financial year, and things like that. It's not going to give you advice that you should rely on; you should not be using it for those purposes. At the very least, as you say, you could use it as a starting point, but that's it, and so long as you treat it with that caution, I suppose, that's probably okay. Right. And you also have to be careful of it both-sidesing things. In general, in programming and server administration, you don't have to worry about that; that's why those use cases are pretty good. But if you start asking it, "What should I do in this situation? What about this belief? What about that belief?" it'll start both-sidesing things. Because again, it doesn't really know the difference between good-quality information from people who can, for the most part, probably be listened to, and people who can safely be ignored, let's put it that way. So it'll both-sides everything, and to me that's a waste of time, because sometimes both-sidesing things is a complete waste of time when one side of those arguments is complete garbage. So that's one of the other things: at the very least, it can waste your time that way. At the very worst, depending on how unaware or unknowledgeable you are, or how prone to going down rabbit holes of misinformation, it can lead you that way as well.
Yeah, for sure. The only other thing I wanted to briefly bring up on the AI theme, and there's actually something else I thought we should quickly touch on, is the impact on education down the road. As an engineer, I thought about this from the point of view of: if you were to ingest all of the engineering design standards into a large language model, would that make it better from an engineering education point of view, or would it then be prone to issues? And maybe not just in engineering; what about the broader context of education? If you have a lecturer, and that lecturer is setting course notes, the lecturer is seen to be authoritative. Let's assume they are trustworthy and authoritative and they're telling you facts. You'll have textbooks and so on that have gone through rigorous, you would hope, peer review by other subject matter experts, so that they are as close to fact as possible, such that when you educate people, they are getting facts. Now, I know through my own experience, even nearly 30 years ago when I was at uni, we had textbooks, and some of them were really trash. I remember one in particular, when we were learning programming in C: on every page, without fail, there was red highlighter over one particular section or one line of code, saying, "This is incorrect; if you type this and try to compile it, it won't, and here's why." And I'm like, why did you pick this textbook if it's so bad? Apparently it was one of the few textbooks at the time, but anyway. I guess my point is: if you were to ingest trusted sources of information, would that then mean that education through university institutions, for example, or tertiary education, or even high school, if you're just relying on LLMs and their ingestion, and you've got trusted information but you can't necessarily trust all of the
outputs? It just has me wondering a little bit, because you would be able to get someone who could interview well based on pre-learning a bunch of responses to queries to a large language model, who to the untrained ear would sound as though they knew what they were talking about, when in reality they probably don't. Anyway, I'm not sure I've got a conclusion on that one, other than that a lot of educational institutions are banning students from using it. But my question is more about: at what point does the value of educational institutions get eroded by this technology and by people mistakenly trusting it? Maybe I'm thinking too far down the road here; I should wait for these things to go wrong rather than predicting that they're going to go wrong. I've been doing Causality too long. Well, but this is the starting point, right? And the fact that Google wants to, instead of giving you the actual sources of information, just summarize them for you. See, this is where I don't have a problem with an AI chat program doing that; it's doing it and not even directing you to the places it's getting information from. That is more harmful, in my opinion, mainly because you don't get to make up your mind for yourself, and also because where are those sources of information going to come from if nobody ever goes to visit them? They're not going to exist. So it's doubly harmful; it's harmful in two ways, and this is where it starts. If that's going to be the approach, if from now on our approach is that we have a voice talking to us and it gives us information, where does this information come from? Who knows? That's probably not great, because at least when we're talking to another human being and they're doing exactly the same thing, we can talk to them further and suss out a little bit more about where their information comes from, and then we can decide how much we trust them.
Also, we have context about that person. We know what kind of stuff they tend to believe and not believe. We know how good a judge of information they are at times. At least we can make an assessment about that. We do that all the time with other people: we always come up with models in our heads of how much we trust a specific co-worker with certain types of information. You can't really do that with a large language model unless it specifically cites all its sources, gives you easy access to them, and promotes that. And I don't think that's the way it's going to go in the future, if current trends are to be believed. I think, honestly, its best uses are, aside from the fact that it annoys me when it does it, things like what Peter is doing, in addition to the stuff I do, which I personally find to be useful. I think summarizing things is fine, because you can give it something, ask it to summarize, and you can test it over time. And after a while, you can say, "It's always giving me good summaries; it's never missed any key points that I really need to know." And by the way, these things vary greatly. Peter did that with GPT and Claude and Llama, and Llama was by far the laziest. We designated it Lazy Llama, which is hilarious, because it sounds like an Ubuntu release. But you can test that, and you know what the information is, because you gave it to it. So you can test it and decide, "Do I rely on this thing for summarizing my stuff or not?" And then you can just give it stuff and let it summarize it. That's a good use case, and it could be a time-saving one. So there are things for which they're useful, but with the whole approach of how we're training them and what we're entrusting them to do, these should really be the front ends for good resources of trusted information, right?
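[Editor's aside: the "test the summarizer over time" idea can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the show: the summaries, model names, and the `keypoint_coverage` helper are all invented. The point is simply that if you keep a checklist of key points that must survive summarization, you can score each model's summary against the same input before deciding which one to trust.]

```python
# Minimal sketch of "test your summarizer before trusting it":
# keep a checklist of key points that MUST appear in any good summary,
# then score each model's output against that checklist.

def keypoint_coverage(summary: str, key_points: list[str]) -> float:
    """Fraction of required key points mentioned in the summary."""
    text = summary.lower()
    hits = sum(1 for point in key_points if point.lower() in text)
    return hits / len(key_points)

# Imagined summaries of the same source text from three different models.
summaries = {
    "gpt": "The outage was caused by a config change; a rollback fixed it.",
    "claude": "A config change caused the outage. A rollback restored service.",
    "llama": "There was an outage.",  # the "Lazy Llama" case
}

required = ["config change", "rollback"]

for model, summary in summaries.items():
    score = keypoint_coverage(summary, required)
    print(f"{model}: {score:.0%} of key points covered")
```

A real version would call the actual model APIs, but the testing loop is the same: known input, known key points, repeated scoring over time.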
Instead of coming up with the information themselves, they should be looking at very specific and trusted sets of data, and then interacting with us regarding those sets of data. That's what I think they should probably be more tuned to do, rather than being free-floating generalists that collect everything they see and then regurgitate it to us. That's a good way of summarizing it. One of the other things: are you across this whole Scarlett Johansson OpenAI thing? Yeah. Because, and this is the thing, it's technically not large language models specifically, but it's OpenAI and what they did, or what they did or didn't do. So, just quickly, what happened? Because I know most of it, but probably not all of it. Well, basically, they wanted to have a voice for this product that interacts with people and can talk back and forth, this AI voice chat. And Sam Altman, being Sam Altman, decided that it would be really cool to have Scarlett Johansson talking to him, and so he asked her. However, he asked her apparently before the product was introduced, but after it had already been worked on; anyway, it was at a point where it was going to be introduced. He asked her if they could use her voice. She said no. I believe he asked her a second time, but they had already either introduced or were about to introduce the product by then. They used a different voice actress who, I think it was Wired that I read it in, was quoted, though not directly, through OpenAI or intermediaries, saying she was upset about it because she doesn't think she sounds like Scarlett Johansson, and nobody who knows her does either. However, the voices have been analyzed, and it does sound a lot like her. I believe it was intentional.
I believe they wanted that sound, and I'm positive that in their minds, if it's not really her and it just so happens to sound like her, it's fair game. That's not necessarily legally sound judgment; people have lost cases over things like that. And the fact that Sam Altman had already asked her, and had tweeted out the word "her," which is the movie she was in, playing the AI, I think is pretty damning evidence, and I think it's just kind of sleazy. At best, it's jerky. It's possibly illegal; they could lose in a court of law over this case, because it's pretty obvious what happened. And I just find it emblematic of the type of attitude these guys have, where they believe that all of the world's information is theirs, and they should be allowed to do whatever they want because it's cool. Yeah. I think Silicon Valley has a real problem right now with pursuing things they thought were cool in their childhoods, or in the case of some of these guys, in their current adulthoods, because they still are children. But they need to focus less on that and more on what the world actually needs. And this is my problem with Silicon Valley in general right now: they're solving all the wrong problems, and we've got a lot of actual problems that could pose existential risks to the human race. Maybe we should focus on those. I just don't trust a lot of the people involved in these efforts. I think that AI, and general AI, AGI and all this, could be great, but not the way we're doing it. It's like we always take everything good and somehow manage to mess it up, and Silicon Valley is incentivized to do so right now, I believe. I feel like a potentially wonderful and cool resource is starting off on the wrong foot right from the start, and it's because the people doing it think they're on a mission, and they really don't care how it gets accomplished.
Yeah. But I do believe that Scarlett Johansson has a valid complaint here, I guess is what I should say. Yeah. So, when I became aware of this... I've been a bit disconnected from the socials. I sometimes get on there, and I sometimes have weeks at a time where I haven't checked my Fediverse feed and so on. But I came across this when I was listening to Core Intuition, actually. Manton Reece posted his commentary on it and his personal belief about whether or not OpenAI stole her voice without her permission, and he got flamed badly; a whole bunch of people cancelled their memberships out of disgust and so on. Really? Huh. Yeah, really. So I was listening to Core Intuition and he was walking through what happened with Daniel Jalkut, and you could tell that Manton was quite upset, because Manton's a very chill kind of guy; not much rattles him. Right. And he's also not going out looking to offend people or cause a huge stir. He's not that kind of personality at all. No, he totally isn't. And so I kind of felt for him, because I've been on the receiving end of backlash over different things like that, perhaps not quite to that extent, but certainly I've had people come back to me and say, "Oh, you support Bitcoin?"
I was like, "Oh my god." Anyway, bottom line, that's how I found out about it, and I listened to the voice, because someone had posted it on YouTube: the voice that OpenAI made, Sky, I think it was called, reading Scarlett Johansson's letter, but in the voice of Sky. And I mean, I'm not a voice analyst, but my god, it sounded so much like Scarlett Johansson it's not funny. I've watched that many movies with her in them, like the Marvel movies; the inflections and everything, it was all very her. So if it was faked, it was convincing, and if it's just someone who happens to sound like her, then she needs to get a job as a voice double, if there is such a thing. Yeah, right. Because it's super impressive. But to me, the bigger issue is that this has nothing to do with generative AI; this has everything to do with the usage of other people's voices and likenesses, and it's going to become more and more of a problem the easier it gets to do. They could sample Scarlett Johansson's voice from all of her movies and from different speaking engagements where she's been recorded, and create a relatively good voice approximation with current technology. In fact, you did something like that, which I thought was utterly hilarious at the time, on a Friends with Brews episode where you were on your lonesome. What was the tool that you used for that? I can't remember the name of it. Oh, I was just trying to remember; I've got to find it. Yeah, I was actually going to bring that up. So Peter wasn't available for an episode, and I really wanted to get one out there. It's ElevenLabs, that's what it is. And what you can do is upload recordings. Yes, that's it. So I uploaded a ton of recordings of Peter, and I played with it until I got a Peter that I liked pretty well.
And I can listen and tell that it's not Peter, because at the ends of sentences the AI-generated voice definitely had a slightly different intonation than the real Peter Nikolaidis. But there was a lot about that voice that sounded like Peter. I did it without telling him I was going to. I used recordings that he and I had made to generate the voice, and I didn't really know how he was going to take it. To be honest, I kind of wondered, "Am I doing something wrong here?" Even though Peter and I are good friends, we've been friends for a long time, we trust each other very much, and he didn't have any issues with it, even then I thought you could make an argument that I was doing something I should not do without asking. You could make an argument that I was absolutely wrong to do that without asking, and I would not necessarily disagree with you. I would probably say, "I see exactly what you're saying, and you're probably right." Yeah. So now imagine doing that to a stranger, or doing it to someone for financial gain, for corporate gain, and I think it becomes even more clear-cut. But yeah, it did remind me of that. I still have that voice; I may have to bust it out again sometime. [LAUGH] Just ask permission this time. The thing is, though, Scott, something that is becoming increasingly common in the voice actor space, and I say "the voice actor space" like I'm some big player; okay, I've been paid to do two audiobooks, and I've done some voiceovers for some medical stuff over in America, basically reading lines for a conversational training piece for a medical subject at a university. So I've done voice acting for that and so on. And Adam Curry says I have a great set of pipes, and a lot of people agree. So I have a voice that people might want to clone, and I get requests from time to time; not often, but I've had two.
So I subscribe to, I pay into, this thing called Voice123. It's a forum-type place where people can post jobs for voice actors. They say, "We want 15 to 20 submissions; here's a script, read this script, and we'll let you know if we want to take the next step." Sometimes they lead to something; sometimes they don't. But I've seen a few more of these popping up where they say, "We're going to pay you $3,000 US," and you're like, whoa, okay, what's this? "And we want you to read this thing that gives us the ability to completely replicate your voice, and we then own the rights to your voice in perpetuity." Think about that for a second. If your job is to be a voice actor, and you let a company take your voice, even if they pay you for it, aren't you doing yourself out of a job tomorrow, once they have it? If they pay and you sign the agreement, does that technically mean they have the rights to do whatever they want with your voice? Your voice could then be used for deepfakes and God knows what else. And for the record, no, I don't need three grand that badly that I would give up my voice or any rights to it, so I did not accept. But I guarantee you there are people who would. And what's going to happen is that people are going to sign up and do this, and a lot of these voice acting jobs are going to start to dwindle, because why pay a voice actor a few hundred dollars to do an audiobook if you can get an AI to generate it for 50 bucks? And they don't get tired; they're completely consistent, because it's computer generated. There are no noise issues; you can make it sound perfect.
And it may not be quite so warm, and you may be able to tell, if you know what you're listening for, that it is not a human being, but the tech is getting so good that it's getting harder and harder to tell. So the whole Scarlett Johansson thing just made me think about that as well: this is happening, and it doesn't necessarily have anything to do with generative AI directly; it's all machine learning. I mean, I guess the voice models may be, to an extent, but it's the sort of thing that is going to be more and more of an issue, and people are going to start having their voices taken without permission. We're going to see stuff like this happen more often. And irrespective of whether or not you think OpenAI did or didn't clone her voice, it is far easier to do than some people think. Well, there's Mastodon. Scott, we'll see you. Yes. Thank you.
Duration 1 hour, 38 minutes and 23 seconds Direct Download

Show Notes

This show is Podcasting 2.0 Enhanced

John’s Coming Back to America: (VOTE HERE)

SubStack Referenced Articles:

Merchandise and Spotify Integration:

Links of Potential Interest:

Episode Gold Producers: 'r', Steven Bridle and Kellen Frodelius-Fujimoto.
Episode Silver Producers: Mitch Biegler, Shane O'Neill, Lesley, Jared Roman, Katharina Will, Chad Juehring and Ian Gallagher.
Premium supporters have access to high-quality, early-released episodes with a full back-catalogue of previous episodes


Scott Willsey


Scott works in the semiconductor industry and is a prolific blogger and podcaster from Portland, Oregon.

John Chidgey


John is an Electrical, Instrumentation and Control Systems Engineer, software developer, podcaster, vocal actor and runs TechDistortion and the Engineered Network. John is a Chartered Professional Engineer in both Electrical Engineering and Information, Telecommunications and Electronics Engineering (ITEE) and a semi-regular conference speaker.

John has produced and appeared on many podcasts including Pragmatic and Causality and is available for hire for Vocal Acting or advertising. He has experience and interest in HMI Design, Alarm Management, Cyber-security and Root Cause Analysis.

Described as the David Attenborough of disasters, and a Dreamy Narrator with Great Pipes by the Podfather Adam Curry.

You can find him on the Fediverse and on Twitter.