[MUSIC] This is Pragmatic, a weekly discussion show contemplating the practical application of technology. Exploring the real-world trade-offs, we look at how great ideas are transformed into products and services that can change our lives. Nothing is as simple as it seems. I'm Ben Alexander and my co-host is John Chidgey. How you doing, John? I'm doing very good. How you doing, Ben? Doing well. Awesome. Today I would like to have a discussion regarding design reviews, and I wrote about this a few months ago on Tech Distortion; the article is called Design Reviews in Name Only. But it's something that sort of came to the forefront more recently in the last few weeks before I went on my Christmas holidays break, which is what I'm on now, and I have another week off, thankfully. It's something that I think is worth fleshing out a little bit, because one of the things that I see is a lot of people approaching design reviews with some very different expectations. I think there's something to be explored there. I just want to start with the basic, perhaps obvious — is it obvious? Maybe it is. If you design something for yourself alone, without any peer review, influence or discussion, then that's incredibly dangerous, because what you'll end up with is something that's essentially only useful specifically to you. I mean, it may be useful to other people, but it can become very difficult for other people to understand how it works, understand how to use it. And that's the sort of thing that is very dangerous. So designing completely for yourself alone, yeah, I wouldn't say it's impossible, but honestly, I do question the point. So I guess a simple example might be if there is no intention for anyone to use it — let's say a simplistic example: if I'm a hermit and I'm building a wheelbarrow, who cares? Right, I'm the only one that's ever gonna see it, I'm the only one that's ever gonna use it. If I've gotta do a handstand when I'm using it to make it work, then who cares, right? But obviously we need to exclude that kind of thing from the discussion. So let's just assume, for the purposes of this discussion, that someone other than the person doing the development or the design is going to have some say or influence or comment on what we're doing. And hence, irrespective of what kind of commentary that is, it's a design review. And how this applies to everyone else is, if you write words on the web, if you have a blog, if you write software, if you, you know, in engineering, if you do an electrical design, or if you do a civil or architectural design or anything like that, it's going to have to be reviewed by somebody at some point. Someone will look at it and they will give you their thoughts and their feedback. So when we're designing something like software, for example, we'll go out to people and do beta testing, and beta testing is essentially a design review. It's getting feedback, but it's solicited, requested feedback from individuals. And one of the things that I found when I was developing for iOS many years ago is that a lot of how effective that is comes back to who is selected to be on your beta testing list. So in engineering, as a parallel though, a customer will come to you and say, "I want you to design this for me." And the customer will say, "I'm going to have a representative from us. They're going to look at this product that you're designing for me. And I don't want to keep my eyes shut for six months while you design it.
I'm going to ask for a review when you think you're about 30% done, 60% done, and then when you think you're fully complete. And I'm going to review it at each one of those milestones." And often in big contracts, they will tie financial incentives to that. So they'll say, let's say your whole project's worth $100,000. Well, they might tie 30 grand to the 30%, another 30 grand to the 60%, and the remaining 40 grand to the final design, let's say. So there's a financial incentive to get through those design reviews. But when you're a person writing a bit of code or an app for somebody, what you're trying to do is — well, essentially what you have is complete control. So you can say, I pick and choose you, you and you specifically to be on my review team. And one of the tendencies, one of the difficulties I see, is that a lot of people will just get all their mates to review. And I think that's a bit dangerous, because just because they're your mates doesn't mean that they're gonna be good reviewers. So some people that do beta testing — I'd put out my app to these people and I'd never hear from them. It'd be just silence. I'd say, "Have you had a chance to install it?" "Yep." "How's it going?" "Yeah, it's good, love it." That's it? Yeah, that's it. Okay, thanks. Great, good, valuable feedback. And then of course you'd have the other end of the spectrum, where you would get the people that say, "Well, you know, I really don't like the shade of blue you used in the top right-hand corner." And you're like, "Okay, trying to keep an open mind. Is that going to affect the usability? I mean, how does it affect usability for you?" "Oh, I just don't like it. I just don't like the way that it flows, how it feels. It's just, yeah, you need to work on this. The font's not quite right." And a whole litany of feedback, which you can argue is kind of valuable, sort of, but in many respects, is that really what you're asking for, right? And the thing that I bring to that discussion is I've been through the wringer that many times in design reviews in engineering that I tried to apply some of that to when I was doing beta testing of my iOS app. So I would say, well, I'm specifically looking for feedback on this feature, so I would like you to please try the following. And you tend to do it as a — sorry, I tended to do it as — an incremental set of beta releases. So I'd say, I'm working on this functionality, get that down pat, and then I would add the next bit of functionality and say, "Okay, now I'm going to work on that." But in the end, you can't stop people from coming back to you with random feedback. You know, and in many respects, you probably don't want to, but it does slow you down. And there can be a lot of noise. So when you go into an engineering review, you have to set boundaries. You say, "Okay, well, this is an electrical design review. I'm not going to sit here and discuss safety systems with you. I'm not going to sit here and discuss the mechanical or the civil components, because they're nothing to do with me. I'm not going to talk about the thickness of the material on the switchboard. I'm not interested in that. I'm only interested in the electrical design, as in, you know, what size the circuit breakers are, where the different switchboards are going to be physically located approximately, how we're going to use the control system to do what we're going to do." You have to set boundaries.
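Coming back to those payment milestones for a second, here's a minimal sketch of that arithmetic in Python; the contract value and the 30/30/40 split are just the hypothetical figures from the example above, not anything from a real contract.

```python
# Hypothetical example only: a $100,000 contract with payments tied to the
# 30%, 60% and final (100%) design review milestones, as described above.
CONTRACT_VALUE = 100_000

# Fraction of the contract value released at each review milestone.
MILESTONE_SPLITS = {
    "30% design review": 0.30,
    "60% design review": 0.30,
    "Final design review": 0.40,
}

def milestone_payments(contract_value: float) -> dict:
    """Return the payment released at each design review milestone."""
    return {name: contract_value * share for name, share in MILESTONE_SPLITS.items()}

for milestone, payment in milestone_payments(CONTRACT_VALUE).items():
    print(f"{milestone}: ${payment:,.0f}")
# 30% design review: $30,000
# 60% design review: $30,000
# Final design review: $40,000
```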
And I think that if you don't set boundaries, then people just feel free to give you whatever feedback they think is relevant. Right. Now, you've been in an environment where you've been through design reviews as well. So, what do you... Yes. Yeah. So can you give me some examples of some of your experiences? Yeah. Let me switch my coffee to scotch here. Oh, OK. Sorry. Not that you asked, but I'm drinking water. Water. There's an in-joke for you. Yeah. No, I'm in total agreement with that. I'm actually a little curious. So in the example you just gave, an electrical engineering design review, will you receive completely subjective — you know, essentially the same kind of things, the same kind of argument as, "Oh, I don't really like that shade of blue," or, "Could you make the font a little bigger?" — will you receive the same sort of nonsensical non-review? I'd hate to call them nonsensical. I believe everyone's opinion is valuable at some level. Yeah, I am actually kind of serious when I say that. Yeah. Yeah, I am. But yes, you will get that. I remember doing a design review — I think we were doing a 35% design review for a telemetry system up in North Queensland — and we went into the review and it was actually about the radio paths and the radio links. And the discussion of construction safety came up, because the area was in a cavern or a ravine, and the ravine had very, very steep sides with loose rocks, and under rainy conditions rocks and even boulders would tumble down the side of the hill and cause damage and, if someone was standing there, injury or death. And this came up while we were trying to talk about radio paths, and the two are completely disconnected. There's no bearing on safety when you're trying to say, "Okay, well, look, the path from here to here has got a fade margin of this, and let's talk about the Fresnel zone here and blah, blah, blah, blah." And none of that had any bearing on the safety aspects of the practical side of installing it. And unfortunately, the whole discussion got derailed. That's just one of a hundred examples where, with people that come to the reviews, you have to set expectations when you walk in and say, we are going to discuss the following; the rest is off limits. Well, and I've had — I guess that's, you know, maybe nonsensical wasn't the right word to use, but I think they can actually be damaging and inherently, you know, of negative value, because you have a limited amount of time and often a limited window of opportunity with the decision makers, who are often the ones providing the feedback from unnamed, non-present third parties. Can you tell I was a web designer? Yeah. Right. And as your field of work gets either more technical or more, I guess, what's the word? Structured. Where you're dealing with more and more standardized systems, I imagine that's going to fade into the background. But when you're out on the fringe, the more direct-to-consumer marketing kind of stuff, that's where literally someone's uneducated opinion can outrank hard data, simply based on the whims of, again, some decision maker who's oftentimes not in a position to make the decision. It puts the onus on the designer to learn how to run the review in a way that actually leads to a good outcome. And that good outcome, unfortunately for the designer, also means, you know, usually means that you have to kind of kill your own ego and play the game to a certain extent. So yeah, it's really challenging.
And I think setting those barriers for what's gonna be talked about, and setting those expectations early on and reinforcing them, is really important, and maybe the most important thing you do in working with a client, because it'll... I've seen things go off the rails and I've seen things work beautifully as a result. So yeah, it's interesting. Yeah, what's often overlooked is a lot of people will simply, with a bit of software, put it out there and just say, "Hey, tell me what you think." And that's sort of carte blanche for people to come back with absolutely anything and everything. Now, maybe what you're doing is straightforward enough that that's okay. I don't know, but even when I was doing my crummy little bedside clock, which I'm happy to call it that, even that had enough complexity to it that I didn't dare say that. The bottom line was that I wanted to test specific functionality, and it was about restricting the sort of feedback I was prepared to — well, was interested in — receiving, because you can't force other people. I can go into any meeting or on any forum and say, guys, I only want feedback about the font, and they'll come back with feedback in other areas. I can't control what other people are going to give me feedback on, but what I can do is say, well, look, I'll take that on board, but I'm only going to be taking your feedback on the fonts for the moment, because that's our critical issue that we're trying to address. So you can at least be diplomatic about it, and you can take it under advisement and say, okay, well, look, I'll park your other suggestions in the parking lot over here and we'll come back to that later. I think... Because if... yeah. Oh, I'm sorry. Well, I think a lot of it again also comes down to who you're picking to do it, as you kind of alluded to at the beginning. And also, I like your suggestion of sort of providing focused areas for review, because I'm thinking about the generally bad job I've done of being a beta tester for people making iOS apps. And I think that one of the big reasons why is that — I mean, there's two sides to it — one, I've never been given a, you know, "here's the app, here's what we've done with it recently, and here's the kind of use case I think you should use it for to try it." You know, I'm missing that third part, that third leg of the stool, as to what really are you looking for me to do with this? Because if I have that, then I can really latch on and I'll sink my teeth into it. But, um, you know, with the apps that I've reviewed, it's basically been, hey, these are pretty good. And what will happen for me is I'll see little things, and they'll be so little that I'll assume someone else is going to catch them. Right. And it's just boring to me. I'm not interested in it. But if I was given a particular task, a particular workflow to go through, where there's that kind of pitbull mind, right? Where you start just grabbing onto everything and you don't let go. Triggering that is where you would get any of the value out of having me in your beta test — getting me to run through something where I can really, really zoom in on it.
But that's... and I think that comes from the experience I've had on the other side of the table, having to, you know, working in the web, working with situations where you're trying to get people to go through these very complex series of steps to do the thing you want them to do, which is oftentimes give you money. And... there's that understanding there that everything on this thing, you know, everything we're doing here, has to have some, you know, pragmatic reason, right? There's a reason we're trying to do these things, and just having a pretty thing to look at or play with, or to feel good that, hey, I'm inside this club, I'm a beta tester, my opinion is valued — that's a different part of your brain. And I want to turn that part off when I'm testing stuff. Absolutely. And I think the bottom line is that you need to consider two pieces of this, and that is that the person that is doing the design, they are giving you an opportunity. And the person that is actually reviewing the design also has an opportunity, to give meaningful, useful feedback. And those responsibilities shouldn't be taken lightly. And if they're going to be taken lightly by either party, you're not going to have a good result. So in the end, a design review is only as useful as the commitment of the people reviewing it. And it's also only as useful as the designer who is submitting it for review; they have to be receptive to the feedback. And the wheels fall off for so many reasons. And I guess I've heard it from all the different angles and perspectives that I'm talking about. For example, starting by restricting the number of people. So you say, "You know what? I'm going to pick a number — 5, 10, 20, whatever the number might be." That's going to vary depending upon a whole bunch of different factors. But sometimes you can fix it and sometimes you can't. In larger engineering projects, for example, you may say to the client, "Well, we want no more than two representatives, or we'd rather just one representative." And sometimes they'll send one from each department — and there's five departments, by the way — or they'll send a manager of one department, a manager of another department, and then a couple of engineers. So you get a mixture of technical and non-technical. If you can restrict it — and when you're doing beta testing, you can — say, "Well, I only want two people that are actual web designers. I don't want ten web designers and one programmer. I just want two web designers and I want two programmers and, you know, two general users, perhaps." I don't know how you want to classify people, but try to set some limits on it — getting back to that limits discussion. If you restrict the number of people attending, on the one hand, you'll get people saying, well, you're not interested in feedback from a greater audience; you're trying to be very restrictive and therefore you're not really all that interested in having a wide variety of feedback. However, the flip side of that is, of course, no, you're actually being selective. You're being very, very thoughtful about who you're going to ask, and you're going to make sure that you've got a much sharper focus on the source of feedback that you're trying to get. I prefer to think of it that way, but I've had clients come back to me and say, "You know what? You can't restrict the number of people that we send, and we're going to send as many as we like."
And it's like, hands up in the air, "Hey, you know, you've got the money, but here's why I advise against it," and sometimes you win, sometimes you lose. But I still think it's the right approach to try and limit the number of people. Right. Yeah. Well, and you know, if you're releasing an app to the store, if you're selling a product on the shelf, I mean, eventually the design review is the one that involves a cash register. Oh, sure. And you're going to hit that point anyway. So you have to decide if you want to hit it potentially prematurely. I think that's one of the things that goes wrong with the beta testing phase: it's easy to time it wrong. You're either actually still in a phase that's closer to alpha, or you're essentially at the point where you should be shipping something already. Timing that right might be more art than science, but I think it's really important. Yeah, I agree. Definitely, both about getting the timing right and the fact that it's an art form. Absolutely. And there's a lot of talk about a minimum viable product, and understanding what a minimum viable design is, whether it's a product or a design. If the design is your product, then there you have it. But you have to have a minimum viable design and you have to say, okay, well, I draw the line in the sand here. This is my list of features that we intend to support; once my product or design meets that list of requirements, then I am essentially done. Everything after that is additional. In contractual terms, you could say, "Okay, here's my design basis, and anything beyond this represents a variation." In other words, you want more, it's going to cost more. So there's always that. That's critical to understand when you're doing design. But one of the other things that I didn't talk about explicitly, I don't think, is that it's not just restricting the number of people — I think I kind of mentioned that — you also want to restrict the experience of the people that attend. So you want to focus on specifics: let's say you want people that have got experience with UI design. You might only want two or three of those, but you want the ones that have been around for, you know, been doing this for 20 years. You know, someone who's got experience on multiple platforms, let's say. From a client, I might say, well, look, this is an electrical engineering design; I want to have at least one person who represents you who is an electrical engineer that understands what they're looking at, so that I get meaningful feedback. And that's the key: you wanna restrict that to the specific areas you want reviewed or need to have reviewed. And honestly, a cynic can say, well, I can handpick people that I know are just gonna nod. And I've actually worked on jobs where that was the case, where people have said, oh, we really like working with such and such from the client, because they just nod their head and agree to anything, 'cause they've got some unrelated background and they just look at it and they go, oh, wow, that looks like a nice design, nod, nod, sign off. You know, and I mean, that leads to a bad result. Not that I'm suggesting that we would ever take advantage of that, but the bottom line is that that can happen. There's such a thing as being too easy. You need to pick people that are appropriate and people that have got relevant experience. Otherwise, the feedback you're gonna get's, you know, not gonna be very good.
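As a rough sketch of that "be selective, set limits" idea — purely illustrative, with made-up roles, caps and experience thresholds rather than anything prescriptive — you could think of the review panel you actually want as a capped mix of roles, something like this:

```python
# Hypothetical sketch: describe the review panel as a capped mix of roles,
# rather than an open-ended "tell me what you think" list.
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    role: str               # e.g. "web designer", "programmer", "general user"
    years_experience: int

# Deliberate caps per role, and a minimum relevant experience for the
# technical roles -- the point is to be selective, not to maximise headcount.
PANEL_LIMITS = {"web designer": 2, "programmer": 2, "general user": 2}
MIN_EXPERIENCE = {"web designer": 5, "programmer": 5, "general user": 0}

def select_panel(candidates: list) -> list:
    """Pick reviewers up to each role's cap, preferring more experience."""
    panel = []
    counts = {role: 0 for role in PANEL_LIMITS}
    for person in sorted(candidates, key=lambda r: -r.years_experience):
        cap = PANEL_LIMITS.get(person.role)
        if cap is None or counts[person.role] >= cap:
            continue  # role not wanted, or that role is already full
        if person.years_experience < MIN_EXPERIENCE[person.role]:
            continue  # not enough relevant experience for a technical role
        panel.append(person)
        counts[person.role] += 1
    return panel
```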
- Well, I like, you know, some of the methods that come from the jobs-to-be-done framework, where you're doing these rather long, pretty detailed interviews of people as to why they bought certain products, why they made certain decisions. After doing a number of those, I found that taking some of those techniques and applying them to the design phase can be pretty useful too. It just requires... it's a lot more intense, right? You need to actually be present usually, and you're oftentimes lying. You're misdirecting people, getting them to think they're doing one thing, and in fact you're looking for their reaction, their physical behavior; you know, you're trying to read the person. Yeah, that's right. And that can provide you a lot of insight into things that you're going to miss if someone has to come back to you with maybe a typed-up punch list of problems, or you're talking over the phone. And I think, you know, you can all kind of just picture it in your head. You know, you're sitting down with a client. You've got a couple of designers sitting behind them. You know, maybe it's — again, just my own background — maybe it's a visual design on a screen. And you see the person mousing around and completely missing your amazing navigational design, or completely avoiding, like, this one beautiful thing you thought that you did such a good job with. It can highlight where you've over-focused and where you've kind of put on the blinders. The longer you work on something, that's the risk, isn't it? Because you get so drawn into it and it's like, "Well, obviously they're going to click on this and tap on that and slide on this." And they're obviously going to use the control system this way. And then you put it in front of them and that's not what they do at all. So obviously, you need to have a sprinkling of zero experience, or rather minimal experience, in your reviews as well. So it's not just "I only want technical expertise"; I also need a mixture of people that have none. So the point is you need to think about it rather than just randomly say whoever. In some cases, you don't get a choice. In some cases, the client will simply say, "You know what? I'm sending Fred, Sandra and Tim." And that's it. And you don't know who they are. And they just show up, and sometimes they don't show up. So what can you do? But anyway, so one of the approaches that I've seen with mixed success is the concept of a minimal or closed review during early development stages. So some people call it an internal review or a peer review. And the concept is that if you're developing a design for something, or anything, you don't go out to a beta test, essentially. You don't go out to a wider audience for their design feedback, because you're still trying to hammer down exactly what you want this design to consist of. And that can be invaluable, because it cuts out a lot of the noise and the static and the overhead of going out to a wider audience too early. And having just extolled the virtues of it, I've also copped a great deal of flak about it as well, because there's also another school of thought that says, well, if you're not giving us an opportunity to review at 30% or 20% or 25%, does that mean you've got something to hide? I mean, why aren't you being more open with your design? Yeah, we're paying for this. Why can't we review this anytime we feel like it?
And it's a difficult line to walk, but if you can pull it off and you have an understanding client, or if you have control of your project, then I highly recommend it before you go out to a full-blown beta test. And I think that's sort of the concept behind an alpha and a beta, although, you know, thank you, Google, for screwing up the definition of what the hell an alpha and a beta is. Is Gmail still beta? No, not anymore. Oh, finally. Is Siri still beta? I think it's not. I don't think it is. Actually, no, it's not beta anymore. But I mean, those things were out in mass use for years by millions of people. Are we up to millions? Yes. Millions of people. I think so. I think we've hit millions. Millions of people, and what did they do? You know, they called it a beta. I mean, how dare they. Come on. Were Apple really soliciting feedback on Siri? Were they really? I mean, they weren't, not really. You know, it's a design review. It's funny — it's not funny, it's tragic — but I worked on a project a few years ago and that actually was a problem. A few other developers and I were talking to the business guys, right? And when we were saying we're going to do some beta testing, we were thinking beta testing, and these guys were thinking essentially, you know, Google-style limited release, or unlimited release, or whatever it means, right? It just means, hey, it might break, but you're still going to pay us money. And I think it really did distort that phrase to the point where it's just meaningless anymore. Like design itself. Yeah, well, that's another good point. But yeah, absolutely. And I think that the dilution of the terminology does not help the definition when you're trying to educate new and upcoming designers and engineers on the right way to do things. Irrespective, whatever you would choose to call a rose, it doesn't matter; it's still going to be a rose. So the point is, whatever you want to call it now — alpha, beta, whatever they may mean — what I mean is a limited, essentially closed review amongst your close peers in very small numbers up front; call that an alpha release if you will. And then a larger-scale beta release to a larger audience only when you're ready, before you go public. And that kind of correlates with design reviews done at 30, 60 and 100% design. Of course, the definition of what a 30% design is, is hotly debated. So I've been through that before. I've had people swear at me black and blue: "That's not a 30% design, you're missing blah, blah, blah." And I'm like, here we go again. Anyway, so I think that there's a couple of general rules that we can follow, but I think the way we need to address it is looking at it from both perspectives. So first of all, from the perspective of the designer. I don't think you should ever call a design review or release anything for a beta until you've got a design that's got an agreed level of completeness — that you're happy yourself that your design is complete enough that it could actually sustain a review. If you're going into review and saying, "Yeah, it's kind of like what I got off the back of a Wheaties box and it's sort of subject to change without notice," insert disclaimer here — why are you circulating that for review? This is a waste of everyone's time. You've got to have something relatively solid before it goes out to anybody. So I think definitely that's critical. Start with that.
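Just to make that staging explicit — and this is only a sketch, with made-up completeness thresholds that loosely mirror the 30/60/100% milestones mentioned above, not a standard — the idea is that the audience grows as the design firms up:

```python
# Hypothetical sketch: a closed peer ("alpha") review first, a wider beta
# only when the design is complete enough to sustain it, and the public
# release last. The thresholds here are illustrative only.
from enum import Enum

class ReviewStage(Enum):
    CLOSED_PEER_REVIEW = "small, closed review amongst trusted peers"
    BETA_REVIEW = "wider review with a selected, larger audience"
    PUBLIC_RELEASE = "everyone; the cash register becomes the review"

def review_stage(percent_complete: int) -> ReviewStage:
    """Map rough design completeness onto the audience it can sustain."""
    if percent_complete < 30:
        return ReviewStage.CLOSED_PEER_REVIEW
    if percent_complete < 100:
        return ReviewStage.BETA_REVIEW
    return ReviewStage.PUBLIC_RELEASE

print(review_stage(25))   # ReviewStage.CLOSED_PEER_REVIEW
print(review_stage(60))   # ReviewStage.BETA_REVIEW
```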
Go through it thoroughly yourself first and make sure that you're happy with it, that it's solid, before you go out for anyone else's review. So when you invite people to do a review, make sure you get people who've got knowledge about what you're actually doing. It may sound simple, but you'd be surprised how often that's not the case. And when you invite people, be clear in the invitation that you're trying to keep numbers down — make it clear that you specifically want them. Because, you know, what can happen is you get this sort of branching thing happen, yeah, where, oh, my mate Bob has it, and Bob has a friend, Steve, who's really into this sort of thing and really wants to look at it. And it may sound weird, but it does happen, even in engineering projects. You'll end up with the whole company in there. Well, yeah, I had this in engineering projects too. In engineering projects, wow. Michael was walking past the desk and he said, oh, I've always been interested in the control system for blah, the electrical design for this. Well, I've got a background — I've tightened a screw once, I know about screwdrivers. So, you know, before you know it, you've got a cast of 50 people in your design review. Well, which is when you realize — or you should, and the sooner you realize it the better — that what you're doing is theater, to a large degree. And you should start to use the techniques of the stage to do the right thing, which I'll ask you about. Well, actually, let me... so here's my own personal thing I've found, one of the fundamental laws of doing these: whatever the worst design you bring is, that is the one the client's going to choose. That's the one the client's going to gravitate towards. So only ever bring one. Bring the version, the vision, that you feel is absolutely the best, because if you're not at a point where you can make that decision yet, where you can edit out those other options, then you're honestly not in a position where you should be leading a design team, right? You need to start out with this position of power and direction — not power, leadership. Otherwise you're going to lose control of the ship, like that. It's going to happen so quickly that you won't even recognize it, but you will no longer be the lead designer. Someone else will be the lead designer. I've actually seen it. I've seen it happen in a clause of a sentence. It happens so quickly. It's just like, some person has a rebuttal — it's not even a complete sentence — and suddenly everyone's looking at them and they're like, "Oh, really?" It's terrible. And especially when you've been told you need to deliver on this, you are the lead, and you're like, okay, sure, here's what I want to do. Anyway, I totally hear what you're saying, and it has happened. Oh dear. Okay. So, I just want to keep going through that list. So, when you do have a design review where you're going to have a meeting afterwards, or a collation of comments — and this is less common on the internet these days, because most of the beta testing is done in different locations.
But in the case where it's a smaller company and there's a peer review and you're in the same building — same software company in a building, or an engineering company in a building, or your client is across town — having a physical meeting in a meeting room is far better than individual solicited comments, because it gives everyone in the room an opportunity to say, "Okay, well, Bob and Susan had the following feedback; there's overlap in their comments and therefore we can address both of their comments at the same time." So that sort of review meeting is essential — whenever you can get it, it's really, really good. Otherwise, you're dealing with everything piecemeal. Now, when you deal with it piecemeal, there's a lot more work for you as a designer. That's reality for a lot of people doing software development, I understand, because the internet is a big place. But if you are going to do that, you have to make sure you circulate your design for review and leave a reasonable amount of time for people to actually review it before they come to the meeting. How many times have people put things out the day before and then expected a complete review on the day? And some people will say, "Oh, that's just a tactic to get people to not review it, and it'll simply slide through on not being reviewed," which is terrible to say, but I've seen people do that as a strategy and it's just sickening. So anyway, if you are having a review meeting in a room, organise someone to take minutes for you that's knowledgeable on the subject. Don't, as the designer, damn well do it yourself if you have a choice, because you need to be concentrating on what's going on. By the same token, the minute taker needs to know what's going on. You can't just have someone that has no idea. You know, like I remember once I was told, "Hey, bring one of the administrative assistants in," and as fantastic as she was — a great typist, she was fast, really fast, and, you know, very switched on — if we go talking about, you know, the fault current on the main bus bar of switchboard 2C, she's going to look at it and type down her best approximation of what that means. But unless you have some knowledge of the subject, you know, you're bound to not get it right. So you find that that becomes additional overhead for you when you finish the design, if you've got to go back over their notes and correct them and polish them up. So I would want that... Yeah, that minute taker should be one of the most senior people in the room. Yes, in fact I've actually seen that work quite well, whereby the senior engineer is actually sitting in the room doing a lot of the note taking, the minute taking, but they're also there as an advisor as well, and the junior engineer takes the lead on the design as part of their training, right? Why wouldn't you do that? And I can't see why that would be any different in any other profession. It makes sense to me. Well, think of the flip side of that: how easy it is to let a really important thing slip, right? Because you just slightly misunderstood the syntax or the context of a discussion point, because, you know, in any complex system, there's going to be things that you just simply are not going to have a complete grasp on yourself as a lead, right? You're going to be delegating authority to other people. And if you miss something that has like a domino effect onto other parts of the project, bad news, right?
I mean, it's just, you want someone who's had lots of experience who can see those problems coming. - Yeah, absolutely right. So the other thing is, of course, that whenever you do get feedback, you do have to acknowledge and accept all feedback. You cannot say anything like "bugger off, I'm not interested." And even if you feel that way, even if the feedback is not constructive, even if the feedback is off topic, you simply can't do that. Because as soon as you do that, especially in an open environment, you shut down everybody. Everyone realizes, oh, well, this person's not really interested in improving the design, they're not interested in my feedback, so I'm not gonna give them feedback. Or in some cases it can go the other way: I'm gonna now be even harsher in my criticism. I'm going to really stick it to this guy because he's not interested. So I'm gonna make sure I'm gonna make him interested. It's just — you're putting this design out there for the world to see. You have to be prepared to accept feedback. And the funny thing is that sentiment applies to a lot of things that we put out there. If you have a blog, if you do a podcast, if you put anything out there in the world, you have to expect that if people are going to read it, then you're going to get feedback. That's just reality. So the next thing is to progress through the design in a methodical way. Solicit your feedback on a section or a functional area at a time. So if you're doing a bedside clock, for example, and it has an alarm functionality, then you test the alarm functionality; you do that as a separate item. And then if you want to talk about how you adjust the size of the time or the structure of the time on the main display, that would be a separate discussion point. So you've got to be methodical about it. And once you deal with a section, deal with it, it's done, and don't come back to it, 'cause you've had an opportunity to get feedback on that section. Don't keep going back over it. Otherwise you just end up with a circular, drawn-out review that ends up taking two or three times longer than it should. And I would argue that a lot of the feedback that you then incorporate is going to be less useful. Ultimately, people are giving you their time, and time is valuable, time is precious, for all sorts of different reasons, in a work environment and in a personal environment. Whatever time we choose to spend doing what we choose to spend it on means less time doing other things. So you have to respect everyone's time that's attending. It's not just your time. And just 'cause you're doing the design or the programming, whatever, yours is not more valuable or less valuable than anyone else's. It's just as valuable as everyone else's in that review meeting. Everyone that's beta testing your product, their time is valuable to them. And you need to respect that and appreciate that. If you're going to have a long meeting, if you're in a meeting situation, then just please, God, have regular breaks. Not everyone has a bladder the size of a massive balloon. And the other problem is, of course, people tend to drift in their concentration. So the longer you go between breaks, the harder it is for people to concentrate. Yeah, if you've got a choice, I find that a bowl of mints or lollies or something in the center of the table actually keeps the blood sugar up of the people that have them, anyway.
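Coming back to that methodical, one-area-at-a-time approach for a moment: a tiny sketch of it, using the bedside clock example — the area names and comments here are purely illustrative — might look like this:

```python
# Hypothetical sketch: walk the review through one functional area at a
# time, and park feedback that belongs elsewhere rather than letting it
# derail the section currently under discussion.
from collections import defaultdict

REVIEW_AREAS = ["alarm functionality", "main display", "fonts"]

def run_review(feedback_items):
    """feedback_items is a list of (area, comment) pairs as they arrive."""
    by_area = defaultdict(list)
    parking_lot = []                 # acknowledged, but parked for later
    for area, comment in feedback_items:
        if area in REVIEW_AREAS:
            by_area[area].append(comment)
        else:
            parking_lot.append((area, comment))
    for area in REVIEW_AREAS:        # deal with a section once, then move on
        print(f"--- {area} ---")
        for comment in by_area[area]:
            print(f"  {comment}")
    print("Parked for later:", parking_lot)

run_review([
    ("alarm functionality", "snooze interval isn't adjustable"),
    ("shade of blue", "don't like it in the top right-hand corner"),
    ("main display", "time is too small in landscape"),
])
```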
Mints are a good one, 'cause then you also get the benefit that anyone with smelly breath has less smelly breath. So when they talk, it's not so bad. Anyway, because low blood sugar will affect concentration. But anyway, that's just a little tip if you've got a real-world meeting that you're dealing with — that I found, anyway. So as the designer, that's my list. Do you have anything you would add to that? - You want to make everyone feel involved. And I think it's the way to avoid having to dismiss — not dismiss — to avoid having to make people feel bad if their comments are essentially not out of line, but not particularly useful or germane to the conversation. Because that happens, and it can have really bad effects given the internal politics of the people you're working with. And I mean, I guess kind of a sort of made-up example of that would be, you know, let's say you're developing an app for a restaurant. I don't know why you'd be doing that, but let's say you are. And while you're in this meeting, going over it with the owners of the restaurant, or whatever — insert the specifics as appropriate — you've found that the single biggest obstacle to you making progress through this design, this application you're developing, is, oh, I don't know, say the internal IT guy. 'Cause that would never happen. How do you bring these people on board in a way that, A, doesn't give up control? Because again, if you're giving it up, then, well, you're not in charge of anything anyways, and why are you worried about it? And B, how do you make this person feel good about giving up their bit of control, or oftentimes their bit of perceived control? And figuring out a way that there can be a role that everybody plays in the process — 'cause it really is, I mean, I wasn't really joking at all when I was saying it's theater. You're doing one thing, and it looks like something else to a lot of people. And for a lot of people, it's this fun, sort of exciting break from their daily activities, and to you it's part of the job, right? It's just something that you're doing. To them, this is special, and it should feel special, and coming away from it, they should feel special. And it's tricky, and it could be as little... you know, it's about understanding people, and it's about empathy. I mean, it is kind of the meta-discussion of what design is all about: you need to design this experience for each of these people going through it so that they come away feeling good about it. Because, who knows, maybe you're in a situation where, yeah, everyone is getting invited, and one of the people getting invited is the secretary, and you're rude to the secretary, and now, for some reason, man, all your calls just don't go through, right? Never be rude to the secretary, right? Exactly. But those are the kind of rookie mistakes people make: they'll shoot something down in a way that's just particularly unkind, and if you don't know who you're doing that to... And it's not even that, it's not even about being rude, it's just, you know, identifying what are all the potential gains we can get from this situation — and they might not be what it says on the memo that we just passed around — what are we trying to do here? And that's — which I guess is really broad and vague — but it really is to go into these things thinking about it like it's a performance and to really put...
Because with what you're suggesting — the 30, the 60, the kickoff meeting, these kind of three big milestones, whatever your deliverables are and whatever your specification is — I mean, it could very well be just simply that 30 is when you've run out of the first 30% of the money, the 60 is when you've run out of the next. That's often how things seem to work out. Whatever these things are, you need to make them really punch, because this is their experience of working with you. The rest of it — it doesn't mean anything; all the hours and all your sweat, all the care and time you put into it, no one cares. It doesn't matter. This is their experience of working with you. And this, as much as anything else, is the product you're giving them. - That's actually a really, really good point. And you're right. When I first heard you say that — the theatrical piece of it — it is a bit like that. And what you do is definitely going to impact the customer's perception of you. And I guess the thing I take away from that, and again, reflected in some people that I've seen, is that there are some people that are just good at design review — at leading a design review — simply because they are more personable. As opposed to — and again, this is going to get stereotypical, but I don't mean to be stereotypical — the individual who spends most of their time working in the cubicle and not actually interacting with a lot of people. And they may be able to fire off thousands of lines of code in a day and be the best troubleshooting, debugging person you've ever met, or the best electrical designer you've ever met. However, you put them in a room full of people and they just sort of sit there and almost sulk a bit and say, "Well, why are you criticizing my design? My design's really good." And it's like, well, this is you interacting with your client or your end customers or whatever, you know, your beta testers. I mean, you need to be able to manage that. And it's funny, because a lot of people that are excellent designers are horrible design review leads. Right. And that's something that's served me well. And I think it's something I would recommend to anybody that's either, A, coming at it from a less technical side — if your career is bringing you from a sales or marketing or kind of a management role deeper into the technical side of things — or coming from a deeper, closer-to-the-metal technical job and moving out towards more client-facing things: find that point where you can kind of exist as a pivot between the people you're talking about, right? If someone does not like going in to be reviewed, if they don't like the process, if they don't want to do it, and most importantly, if they recognize that they're not good at it, then you guys can team up and work together so that you've found... So essentially, we're taking this another step back. So what's the design review before the review? What are the strengths and weaknesses of the individual inputs? And who should even be leading with any of these particular... It's too bad this isn't in a visual format, because I'm doing some really awesome hand motions right now. But, you know, who should even be in that meeting?
Who should be talking? And go all the way down the stack with that, so that you don't end up in these situations where someone who essentially is really just not well suited for this ends up doing it, because I've seen that happen a lot. And it's usually just because no one really thought about it that much. And if you talk about it afterwards, it's like, "Oh, well, that's obvious." Absolutely. I'm not sure what other advice I could give to anyone that is going to lead a design review of any kind, but I guess it is trying to be as nice and as positive as possible. That's one of the reasons why I said earlier on, as a design lead, you need to be able to accept any and all feedback, no matter how frustrating or irritating that is, because honestly, if you don't, then the consequences can be very bad. And the higher up professionally you go in organizations, the more expensive the consequences. So it's one of those things you've got to look out for. So that's the design lead side of it. But the other side we've got to look at it from is the reviewer side. And so many people go on about, oh, it's the designer's design and they've got to take care of all of that; it's their problem. I used to see it that way when I was younger, but now I've been on both sides of the fence, and it occurred to me a while ago that people neglect their responsibility as a reviewer. The older I'm getting, the more it irritates me, because when someone gives you an invitation and says, "I would like you to review this for me, please," you really need to respect that and say, "Thank you so much. I'm glad you value my opinion. Is there any area that you want me to look at?" That sort of thing. Try and be useful, try and be constructive, because those are the sorts of reviews that actually help. I want to be the reviewer that people come to and say, "You know what? I really appreciate your feedback. It was helpful." And they're not just saying it, they actually mean it. And they get on board with some of your suggestions. And other times I've had people come back to me and say, "Look, I agree with you, but we're restricted by this, this, this, and this; it's already been agreed in the contract," to which I'll say, "Hey, that's fine. I understand. It's no big deal." So I guess as a reviewer, the key thing to remember is it's so easy to be nitpicky when you're not the designer, or worse than that, perhaps you're not a designer at all yourself. I find this a lot with people that aren't engineers or aren't designers or programmers: a lot of them tend to be very, very nitpicky about things. Either they say nothing or they're really, really nitpicky. So I guess the point is that you need to keep your feedback focused and relevant, and that's important. You need to respect the designer and their design wherever you can. Obviously, if something's completely screwed up and wrong, you have to tell them and explain why. But in the end, you need to be able to stand behind any of the feedback that you give. So you can't just say, "Oh, this design is terrible. This is not the way I would have done it." That's totally the wrong attitude. You need to go in with, "I respect this designer. They've put their work out there. I respect their design. Here's what I think could be done to improve on it. Or here's what I think is not working the way I would expect." And frame it that way, and be kind. These are people; they're putting their work out there on display.
You need to be nice about it. You can't just be a jerk. And I suppose, you know, karma being karma, eventually, perhaps, if you are a jerk enough times to designers, you'll stop getting asked to be a reviewer. You know, that reminds me of a few weeks ago — a couple of months ago, actually — a buddy of mine sent me a website to review, and I don't know what made me think to do it, but basically, rather than type up a list or just describe it all, I took a few different screenshots and set up QuickTime, the screen recorder on my Mac, and just recorded myself going through and talking about it. And the two things that were really interesting that I noticed are: one, as he commented afterwards, I spent way longer on it than I thought I did. It was maybe like 20 minutes. It was a really decent bit of content, it was a good amount of time, and yet it felt much — and I know it actually was much, much — shorter than it would have been had I had to type it all up. So there was some immediate value I found there. But the flip side was that it made it very, very easy for me as a reviewer to be kind while cutting, right? To say, "Here's what's wrong and here's why, and I know what you were trying to do here, but you need to change this little bit about it," and essentially to make it so that everything was constructive all the way through. And I think the fact that, essentially, I mean, I was using my voice to express what was going on, and I could even use the mouse on the screen to kind of show — basically draw the lines like I was pointing at it. And yeah, you really need to think about it almost like — to me, I try to view it like — you're essentially... you know, the previous team just got hit by a bus and it's now your project, right? And you now need to deal with it. So there's no point — I mean, there's never going to be a benefit — to just tearing down what someone's done. If it's bad, I mean, if it is just objectively bad and you're like, ah, this is wrong, then, you know, give them the benefit of the doubt and say, okay, they missed it. You know, they're not just putting crap out here 'cause they don't care. They missed something, someone hasn't been educated, someone needs to read something — just tell them. Just tell them: here's what you need to do that's different, and here's why. And just giving that, always having it, you know, always spinning in the same direction, right? That you're just trying to get this wheel rolling better, and it's got some dents in it, but you're trying to get it going the same way. And a lot of that can be — so much of that comes from your tone. So much of that comes from the way you end your review, right? You can be incredibly critical all the way through, but if you end on an up note — just because of the way that we act and the way our brains work as humans, we pay way more attention to the ending of things than all the rest of it. So you can be, you know... Even if it's just a little stupid thing, like saying, at the end of every little bit of criticism, "Yeah, so that's pretty good, but that's just basically the thing I would change." Or, "So, you know, almost there, right? Almost got it." Always just peppering those things throughout takes that little bit of the sting out. And it adds up, because it's always painful.
And if you're dealing with a younger designer, if you're dealing with someone that, you know, let's be honest, just hasn't had enough time to really build up a thick skin, you could be making a real big difference in how they respond to you. Because if you push them over the line the other way, then it's going to be bad. Absolutely right. And yeah, I couldn't put it better. It's all about how you deliver it and showing that respect and delivering constructive feedback. Because a good litmus test as to whether your feedback is personal or whether it's professional is to justify it. So, if you cannot justify why you think it should be grey as opposed to pink, let's say, then why should you give that as feedback? That's just an opinion. And people will say, oh, it's just an opinion — but feedback isn't just about an opinion. There has to be an opinion that's driven by an improvement. Change for change's sake is pointless. Anyone can change something for change's sake, but you need to provide some kind of path and reason and rationale behind why you would change it the way you would change it. That's a good way of filtering out whether something is valid feedback or not, in my opinion anyway. Just to quickly get back onto the list — we're almost done. That is, if you have got a group meeting and the design has been put out there a week ahead of time and you've been invited to it, for God's sake, if they've given you a week's notice, don't leave it to the night before the meeting or the day before the meeting to review it, or worse than that, during the meeting. You know how many times I've seen people that have had one week, two weeks' notice for a review, that have come in and I've said, "Okay, so everyone's had a chance to review the documentation and the design," and half the room say, "Oh, no, we haven't had time, we've been busy." Now the same people, the very same people, if I had put it out the night before, would have put their hand up and said, "Oh, how dare you put it out the night before, will you not give us a chance to review it?" It's frustrating as all hell. So as a reviewer, it's your job, because you were picked to do it, to not just rush through your review; take your time. You've been given some time, use it. And if you're not gonna do that, if you're not gonna give it time, then you shouldn't show up at the review, you shouldn't give any feedback, because you don't deserve to. You need to put time aside for this. - Yeah. Yeah, it's tricky, you know; that decision, that valuing of this time, valuing of the process, needs to happen at a high level too. - Oh, that's true. - I mean, for things like, say, app beta reviews — if you're invited to be a beta tester for an app... I guess as we're going through this, what I keep thinking is, doing a good job at reviewing is such a job. Why aren't people getting paid for it? Well, that's a good question, yeah. I mean, if you're in a company, well, I guess you're getting paid anyways, right? So maybe it balances out there. But are you properly trained for it? Is the rest of your workload being adjusted to recognize it? Because it really is a pretty intense thing to do if you're doing it right. Absolutely. And this is the other thing that gets me about people that say, "Oh, I'd love to do a beta test of your app."
And I ask myself a question before I put my hand up. Like, for example, recently a friend of mine on Twitter was doing up an application, and he said, hey, anyone that wants to do a review on this, you know, let me know. And I sort of went back to him and said, look, I don't feel like I can give it the time that I need to at the moment, but maybe in future months I might have some more time and I'll be able to do that. I don't want to go and give someone hope that I'm going to review something if I know I'm just too busy to do a good job of it. And to me, it's fairer to the designer to do it that way. It's unfair to say, sure, yeah, I'd love to see that, then you have a look at it and you say, oh, that's kind of cool, then you go back to TweetBot or whatever and you just carry on in your merry way. Just getting back onto the review meeting thing: if you're in an environment where there's a review meeting, I think one of the things people need to appreciate — and this comes down to the whole paying attention and taking it seriously — is distractions, like phones and laptops and even side conversations. I've had people come to design reviews and they haven't spoken to someone in the room in six months and it's like, "Oh, hey, I haven't seen you in a while. You know, what's been happening with the kids? What's been happening with the boat? Have you been going out on the lake?" Blah, blah, blah. And it's like, you know, that's not really the time, guys. This is my design review here and I need your focus. So, you know, whenever you're doing that, you're wasting everyone else's time and it reduces the overall effectiveness of the review. So you've got to respect it. A lot of being a good reviewer comes back to respect — respecting the time that's been put aside, and so on. And obviously that's all implicitly assuming that your manager or manager's manager has said, you have got a budget of X percent of your week for design reviews, go do it. When you work for large design consultancies, that'll simply be part of your mandate: you have 20% of every week allocated for reviews, and they'll allocate you to reviews. As a senior engineer, I was allocated to three or four different projects that I wasn't actually a designer on, but I was asked to come in to do technical review on. You're allocated a portion of your week. So just to get back to your comment: yeah, management have to buy into this. From a company point of view, if you're a sole proprietor and you're selling your own app, that doesn't matter. It's only as important as you wanna make it. And if you're reviewing someone else's app for free — 'cause it does take time and effort, why don't you get paid for it? I don't know, but I guess it's a goodness-of-your-heart thing, a friendship thing. And it can be a bi-directional thing, because a lot of software developers get other software developers that they are friends with, or become friends with, to review their software for each other. So it's kind of like, you know, AgileBits might get Black Pixel to review some of their software and vice versa, you know, for beta testing. And that's great, because then you get a feeling of everyone sort of sharing the review load, and you've got people that are good at what they do and recognized for what they do in each company, and, you know, it's meaningful, valuable feedback. And I think that that's a good balance.
It makes sense, because you're rarely actually competing with your competition, right? And the flip side is there are a lot of resources to draw on inside any community, whether it's iOS app developers, web developers, graphic design, or people doing industrial design; you can always do that. So I guess that's one approach to it. The other thing, and maybe it's just my experience with the testing I've done, is that again it hasn't been really clear what the expectations are, aside from "just use the thing a bunch". For one app that I tested, I won't name names, people who know me and my friends will know which one, a calculator app, my review was basically: this is good, it works well, make it more like the calculator sitting on my desk so I don't have to look at it, because that's how I use a calculator. And I had some comments like, I would move this here and make this change. That's literally it. Other than that, there's no time I'll ever be using this thing, because that's how I use calculators. I got so used to using the 10-key on a keyboard that I'm never looking at it. So for me, the most important thing is that the buttons are where my fingers feel they should be, and if anything else changed, it was just going to be an also-ran for me. That was it. So there's really no other point in having me involved there, because I'm not the user; that's just how I would use the thing. On the other hand, some other apps I've tested have been much more complex, large-scale applications that I don't think I'd ever be able to put through intensive testing unless they were serious parts of my workflow. And maybe in that case what's meant, a lot of the time, is that you're there to look for crashes. You want to get a lot of people on it so you'll find the edge cases, find the bugs that are going to pop up. So I think if you apply some of the same structure you'd use for an in-person review meeting to how you're doling out your app to your beta testers, then maybe there's a group that really is meant to be hardcore users, focused on a particular aspect of the app or a particular workflow, and then as you work outwards through these concentric circles you get to the point where you're just throwing people at it to try to find bugs, and you're fishing, which is fine, so long as you know which group is which. - Yeah, exactly. The other point, though, is that if you're gonna cast the net wide like that, then the people reviewing it need to take it seriously. There's no good going back to a software developer and saying, "Yeah, it crashed on me a few times." That really does not help in any way at all. I mean, please, God, how did you get to that point where it crashed?
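(As a concrete illustration of the kind of structured report being described here, below is a minimal sketch in Swift. The type and field names are hypothetical, not anything from the show; they simply capture the "what were you trying to do, what did you do, what did you expect, what actually happened" questions that come up next, so "it crashed on me" turns into something a developer can act on.)

```swift
// A minimal sketch of a structured beta-test report. All names are
// hypothetical illustrations; the point is that a crash report becomes
// useful once it answers the four basic questions discussed below.
struct BetaFeedbackReport {
    let appVersion: String          // which build the tester was running
    let whatIWasTryingToDo: String
    let stepsITook: [String]        // the actual actions taken, in order
    let whatIExpected: String
    let whatActuallyHappened: String

    /// Renders the report as plain text a tester could paste into an email.
    func formatted() -> String {
        // Number the steps so the developer can retrace them exactly.
        let steps = stepsITook.enumerated()
            .map { "\($0 + 1). \($1)" }
            .joined(separator: "; ")
        return """
        Build: \(appVersion)
        Trying to do: \(whatIWasTryingToDo)
        Steps: \(steps)
        Expected: \(whatIExpected)
        Actual: \(whatActuallyHappened)
        """
    }
}

// Example: the difference between "it crashed on me" and something actionable.
let report = BetaFeedbackReport(
    appVersion: "1.2 (beta 3)",
    whatIWasTryingToDo: "Convert a result to scientific notation",
    stepsITook: ["Entered 0.00042", "Tapped the display twice quickly"],
    whatIExpected: "The display format to toggle",
    whatActuallyHappened: "The app crashed back to the home screen"
)
print(report.formatted())
```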
Yeah, that's the thing. Working with any sort of software development, web applications, whatever, learning how to elicit that feedback the right way, developing a script that you work through with people, a pattern you're running all the time, helps so much, because you will hear that all the time: "Yeah, it didn't work." - Yeah. - And it's real simple. What were you trying to do? What did you do? What did you expect to have happen? What happened? I mean, go look at Apple's Radar, because it's pretty close to the right script. - Yeah. - But yeah, it's frustrating. - I once had feedback on some control problems we were having, and the operator just dragged me into the room, and I'm like, "OK, so what's going on?" "Well, I was just using it and then it just crashed." And I'm like, "What do you mean, you were just using it?" And he looks at me, waves his hands in front of me and says, "I was just using it. Come on, John." I felt like such a bad programmer. Anyway. OK, so the final point as a reviewer, and then we'll wrap this up, is: be thorough with your review and your review comments, because design is not easy. I don't care what field you're in, design isn't easy. There are a lot of conflicting priorities, and there can be considerable complexity that's not visible to you as an end user, but you are asked to provide feedback as a reviewer as input to the design. You can actually make it a better end result, but you have to actually want to do that. So honestly, I think the reviewer plays just as key a part as the designer in getting a good outcome. - So if you want to talk more about this, you can find John on Twitter @johnchidjee, the same on app.net. And check out John's site, techdistortion.com. If you'd like to send an email, you can send it to
[email protected]. I'm Ben Alexander, and you can reach me on Twitter @fiatluxfm, or you can see show announcements and related materials by following the show account @pragmaticshow on Twitter. Thanks for listening, everyone. Thanks, John. - Thank you. [MUSIC]