SG Ep 37
===
Richard Ellis: [00:00:00] AI may be the most transformative force since electricity, but it's colliding with a workforce that's running out of fuel. Since 2019, global performance has declined across industries, with the sharpest drops in adaptability, creativity, and collaboration, the very traits AI can't replace. BetterUp Labs estimates that the loss of these human performance skills has cost the global economy over $2 trillion.
Richard Ellis: Their research shows that performance depends on a hidden variable, psychological fuel: motivation, optimism, and agency. And right now, all three are in free fall. So while AI expands capability, human capacity is shrinking, and leaders are caught between the two. Welcome back to Some Goodness. In our last episode with Ben Scoones, we explored how teams can assess their readiness for AI.
Richard Ellis: Today we're asking a deeper question. What happens when AI [00:01:00] starts changing not just what we do, but how we think about work itself? Ben is Director of Data Science at Kythera Labs, where he leads AI strategy in addition to developing and monitoring various healthcare data assets. His training in philosophy of science gives him a rare lens on what happens when intelligence, human or machine, meets ambiguity.
Richard Ellis: Ben, it's great to have you back.
Ben Scoones: Good to be back. Yeah. Thank you.
Richard Ellis: Yeah, so we kind of laid the foundation last time, uh, around the nature of AI and where and how do you get started applying it to, you know, workloads and workflows, uh, in a business. But today we wanted to lean into more the nature of work and the human aspect of it.
Richard Ellis: And, uh, I'm ready to dive in if you are.
Ben Scoones: Yep. Let's go.
Richard Ellis: Well, let's just start with this. Uh, a lot of people, especially the younger generation [00:02:00] are, are asking, Hey, will AI take my job? I know that's a complex question, even though it's simple on the surface, but what's your response to that when you hear, oh, AI's gonna take my job, or will I have a job when I graduate?
Ben Scoones: Yeah. Yeah. So I think that, uh, is a common question. It's a bit of a scary question. Maybe we'll get into that in a little bit, kind of the, uh, I guess the existential dread that, uh, that question causes. But I think at least for the moment, and I think long term, there's a fairly clear answer to that.
Ben Scoones: Um, and so what I would say is, the question you should be asking yourself is, how defined and repeatable is my work? Or going back to what we discussed last episode, how algorithmic is what I do? How easy is it for a machine to be able to replicate what I do? And if the answer is pretty easy, then, uh, [00:03:00] yes, potentially that job could be replaced by computers.
Ben Scoones: And I think we have, you know, historical examples of that. So one example is, uh, the term computer was not always used to refer to the computing machines that we use today, right? Maybe a hundred years ago, and even a hundred years prior to that, a computer would've been somebody that performs calculations.
Ben Scoones: So they're doing mathematical calculations manually, and that was the whole job. But then, you know, 1940s, 1950s, when computers start, uh, being developed, and then shortly after that, when they kind of become commercially available and start to become cheaper than hiring people, then that's where you see that term has obviously been applied to them.
Ben Scoones: But also, you're not gonna see people with that job anymore. There aren't gonna be people performing that particular thing as their whole job. Because it's incredibly well defined. It's incredibly repeatable, it's incredibly measurable. It's very easy to tell whether something is, uh, calculated [00:04:00] correctly or not.
Ben Scoones: And so that's a prime candidate for, um, being replaced by the computer. Um, because really there's nothing in that task that a computer can't do. And really a computer is just better at it than people are. So I think that's the question. How similar to that, how repeatable, how algorithmic is what I do or some portion of what I do?
Ben Scoones: Um, and if the answer is, it's algorithmic and very repeatable, then yep, it's gonna be susceptible to that.
Richard Ellis: Uh, that's a great way to put it. And, um, as I think about, you know, the types of jobs that are in our purview, you know, in sales, marketing, customer success, um, it's not all of their job that's defined and repeatable.
Richard Ellis: Right. But there is some subset of people's work that is that way. Right? And, and that's probably the case for many knowledge workers. And, and there's oftentimes where we're saying, Hey, if we could only have, you know, more time [00:05:00] for these value add activities and take off the grunt work or the administrative work off my plate.
Richard Ellis: You know, we'd be that much more effective, efficient, you know, we could scale, et cetera. So I think another way to look at it perhaps is also: what elements of your job are defined and repeatable, that you can use AI to replace, and then free you up to use your expertise and your insights to, um, do more meaningful work, uh, in your job and in your company.
Ben Scoones: Absolutely. Yeah, I think that's a great point, because a lot of jobs, you're not just doing the same thing again and again and again. You're doing a range of activities, and so yeah, I think it's likely for probably most people, some portion of what they do, AI is gonna be useful there.
Richard Ellis: Um, yeah, so I think there's a real opportunity to turn what can be a very scary thought, oh, AI's taking over a bunch of jobs or eliminating jobs, into thinking, well, let me just kind of do some self-inspection, right, on my [00:06:00] job and my workflow, and where's the opportunity to get some value out of AI and enhance my capabilities and, you know, my effectiveness.
Ben Scoones: Absolutely. Yeah. And that's, I think that's true not just for the individual, but also for, you know, kind of from the company perspective as well.
Ben Scoones: Your employees are not just these, uh, automatons performing this one job probably. Um, they're gonna have a range of skills. They do a range of different things in their job. And so, you know, you're probably not thinking about laying off a lot of workers necessarily, just 'cause everyone gets 10% more efficient.
Ben Scoones: That doesn't mean suddenly 10% of your workforce is useless. You've probably got lots of other tasks that you could be allocating those people to, and now you get to do more of them. That's the thing. You get to do more with less, uh, right. Hopefully, anyway. I mean, I know for certain applications, AI systems can be expensive to use and to maintain, but that's the goal.
Ben Scoones: Um, you should be able to do more with less now. And [00:07:00] probably a lot of companies want to do more and they just haven't been able to do it. Um, so, right. Like you said, it's, it's an opportunity. Right. It's, it's how you should be viewing it in that regard.
Richard Ellis: Well, and in contrast to that, there's, uh, you know, some, a, a bit of a negative trend that, that I've found in some recent studies.
Richard Ellis: One in particular done by BetterUp. Their data shows that since 2019, uh, we've had just a steady decline in human performance. Uh, and the steeper decline was in areas of adaptive, agile, and collaborative work, more of those, you know, high-judgment, really human-specific elements of work.
Richard Ellis: What, what does that kind of say to you in light of ai? Because we've all been watching this digital transformation and all these tools and stuff, and so we think, you know, on the surface, oh, everybody and everything's getting more productive. But actually human productivity has been declining for the [00:08:00] past six years.
Ben Scoones: Yeah, I think it's an interesting question. Uh, and I think we're definitely getting a bit more philosophical, a bit more, uh, maybe speculative here. Um, yeah, sure, I'm happy to speculate a little bit, with the caveat of saying as well, I don't know with that particular BetterUp study, um, how did they acquire that data?
Ben Scoones: How are they measuring those things? Um, not sure, but let's just, you know, we'll just assume that that is a trend, which I think I could see that it would be, right? Kind of makes sense. So I think there's some sense of that. There's probably a few different factors that are contributing to that.
Ben Scoones: Um, you saying 2019? That's earlier than I thought. And that just makes me wonder about things like, I guess COVID would've been after that time, but there's definitely a shift towards working remotely. Um, more than there was previously. It's harder to collaborate when you work remotely. It's easier to, uh, slack off as well.
Ben Scoones: Mm-hmm. [00:09:00] Um, and I think slacking off, yeah, it might sound like a good idea, but really, probably what's happening when you do that is, I can't remember the source of the quote, and I can't remember the quote itself, but the sentiment is, um, people have the capability to handle lots of work and work hard, or not at all, but not in between.
Ben Scoones: Um, you just kind of feel stuck between these two states, uh, I guess, of leisure and productivity. Maybe that's a part of it, right? Like if you are slacking off and you're trying to get away with a smaller amount, your interest in doing that probably drops, right? Um, whereas it's easier if you feel invested in something, if you feel committed to something, that investment of time is gonna affect, uh, I think, kind of your spirit towards it as well.
Ben Scoones: So that's probably one element of it. Yeah. Um, I would say another element would be, um, certainly I think AI probably contributes to that. So if people are feeling like those kind of human elements of work are not being valued, what's causing that? Maybe it's just a general malaise, uh, among the population or amongst certain people.
Ben Scoones: They feel like, I mentioned that earlier, the existential dread, perhaps, that AI makes some people feel. I feel it myself a little bit. You're always kind of wondering, like, okay, you're having to ask yourself that question now: how much of my job is algorithmic? Whereas before, you would never really have to ask that question.
Ben Scoones: You could just keep doing what you're doing. So now you've gotta constantly think, okay, am I really doing work that is worth my time? Is my work gonna be, uh, phased out and replaced by machines? Where does my value come from? Am I good at these things that I need to be able to do now? And I think if you're constantly being forced to ask that question, it just puts you in an uncomfortable spot.
Ben Scoones: And it kind of just makes you wanna check out of that system which is requiring you to do that. Right. So again, [00:11:00] speculative, but, uh, I could see that being part of that issue.
Richard Ellis: Well, part of the reason I brought it up is because, um, there've been several studies, McKinsey and others, trying to assess the return and the value of AI initiatives and projects, and do some correlative work back to the real humans that exist in those workflows and are using those AI tools.
Richard Ellis: And one of the things that, uh, has come up is the trend seems to indicate that your A performers, when given AI tools, generate the most value. And I've seen, you know, estimates of eight to 11x performance increase over, you know, low performers. I mean, that's not just like doubling or tripling performance, I mean eight to 11x, that's huge.
Richard Ellis: But [00:12:00] then when they look at low performers, they're hardly getting any value at all with the same AI technologies and tools at their, um, disposal. And so it's just fascinating to me, because, you know, on the surface it seems like, oh, well, we could just, you know, give AI to everybody and suddenly everybody is more effective and productive and performs better, but that's not the case.
Ben Scoones: Yeah, yeah, that's definitely, I mean, it's probably true of anything, isn't it? You have to have buy-in from that individual to, uh, want to work with it, to want to do a good job. And you can't force that. And that's, uh, I mean, that's part of, I think, the value maybe of having a good CEO and having good managers and all that kind of thing: you are investing in the human capital, um, of your company, right?
Ben Scoones: You're making sure that people, uh, feel connected to the work that they're doing. They feel connected to the ethos of their company. They have a reason to do their work, [00:13:00] and they also, hopefully, feel safe within the company, that they are valued. And so you don't have that fear of, is AI gonna replace me?
Ben Scoones: People see my value beyond just my ability to do this rote task. And if that's true, then you feel more comfortable giving that rote task to something else, because you know that the company's not seeing you as equivalent to your, uh, productivity in that particular area. They're seeing you as more than that.
Ben Scoones: And obviously there's, uh, you know, some financial side to that, right? Like, you do have to be doing something. But I think having that freedom, having that agency, uh, and that confidence is gonna be helpful, um, with that. And it's gonna make you feel like that and take that approach to AI rather than the opposite.
Richard Ellis: Right. Well, and if you look at some of what the reports are saying, those that are high performers, and high performers in general, as we know, tend to be more motivated, more optimistic, higher-agency individuals, [00:14:00] they believe they can make a difference. And so they grasp onto these tools and run with it, right?
Richard Ellis: And, uh, but you have to recognize that that's probably the top of your bell curve, right? The middle of your bell curve of performing, you know, uh, team members, you need to maybe coach them, guide them a little bit differently, and not just throw AI at them and expect everything's gonna be suddenly better, but, you know, guide them, show them how this allows them to, you know, make a difference and make an impact and do better and be more creative and do some of that fun stuff.
Richard Ellis: And so I think there's kind of a leadership and coaching implication to just recognizing that our people are different. And you know, just AI washing is not gonna just magically get everybody performing at higher levels.
Ben Scoones: Yeah, yeah, absolutely. I think you mentioned that there, giving training sessions or helping people understand the technology. I think that's a broader part of building a good culture and attitude towards AI within your [00:15:00] company.
Ben Scoones: There are a number of components to that, and maybe we'll get into some of those, but you need to be able to allow people to use it. They need to be able to see the value of it. And that burden should not just be, I think, on the employee. Or, I guess another way of putting it is, if you're a leader in your company, why would you not want to, uh, encourage people?
Ben Scoones: Why would you not want to give them skills to be able to be more performant and more efficient? Why would you not want to make them, um, more productive?
Ben Scoones: Um, so, uh, yeah, I, I, that's, that's a part of that.
Richard Ellis: Well, and I think about AI as being an amplifier, and I think I got that from you, or originally we were kind of brainstorming around the ebook that you wrote.
Richard Ellis: But you know, if you think that it's gonna amplify anything you throw at it, or you attach it to, right? It can amplify the good, but it can also amplify the bad, right? And so if you have mediocrity or broken processes, you know, it can amplify [00:16:00] the ugliness and the risks and the bad stuff, right?
Richard Ellis: And so you just gotta be careful. And that kind of goes back to our, our last session, talking about making sure that you have well-defined processes and workflows so that when you do amplify it, it amplifies all the good stuff you wanna be amplifying. Did I get that from you? Is that, uh,
Ben Scoones: uh, I'm not, I'm happy to attribute it to you, I think.
Richard Ellis: No, that's good. We talk a lot about this stuff, so it kind of blends together. Um, yeah. But one thing that I'm pretty sure you said, though, is that, um, you know, the idea is that organizations need to find a way to flourish with AI, not just adopt it. And so that was a unique, you know, way to think about it, flourishing.
Richard Ellis: So tell me in, in your mind, what, what does flourishing with AI look like?
Ben Scoones: Yeah, so I think with flourishing, kind of, that can be distinguished from just having a [00:17:00] neutral attitude towards it. You're really trying to have the attitude of, how can I make this a good thing for my company, rather than just reacting to it or trying to mitigate the damage that you see it might cause.
Ben Scoones: And I think AI is pretty, um, I guess maybe polarizing in some ways. Some people are kind of afraid of it, other people are excited about it. But really, universally, it should be seen as: this is a tool that's gonna make things more efficient. This is gonna allow us to do more work. We're not getting into some of the more complex moral things, but just professionally speaking, uh, it's gonna be a very good tool.
Ben Scoones: And so it is kind of easy to see why that would be the case, but you still need to be ready for that. You still need to be thinking about how you can make that happen. And I think there are really two things, uh, that a business can do to try and flourish with AI. So I say two things, two categories of things.
Ben Scoones: So the first category is, how are you defining your culture around AI within your business? And then the second is, how are you changing [00:18:00] some specific infrastructure or processes or strategy within your business, to either be ready for AI or be actively working on pursuing some of these initiatives? So starting with the first, I think we've kind of talked about it.
Ben Scoones: The reason you need a good culture is because AI by itself is not doing anything. You need people to identify where to use it. You probably need people to use it in some capacities. You need people to build these processes that it's being integrated into. So just like any tool, someone's gonna actively pick it up and put it somewhere where it needs to be used.
Ben Scoones: So you want people to have a positive attitude towards it. You want people to know how to use it well. You want people to understand what it's good at and what it's bad at. Otherwise, they're not gonna be able to do that. So having that good culture is obviously a very important thing, and I'll say there are probably three different things that you can do to contribute to that.
Ben Scoones: I'm sure there's more, but we'll just say three. One is training, making sure that [00:19:00] people know how to use it. I've read, um, I won't be able to attribute the, uh, the stats, but it maybe takes eight to 10 hours. I think it was Ethan Mollick, actually. He has a blog that I think is called One Useful Thing.
Ben Scoones: I think he mentioned it on there. It takes about eight to 10 hours to get comfortable using, uh, using AI and feeling like it's not an intellectual, or sorry, cognitive burden to be figuring out, right, I have to use it here, and this is how I get good results. So you have to make it past that stage. And not everyone is gonna be willing to do that by themselves, because not everyone is gonna think that the end result is worth it.
Ben Scoones: So training can be helpful there, kind of opening people's eyes to what the technology can achieve. Mm, helping them easily identify what tools are gonna be useful for them and for what purposes, and then showing them how to use those tools. That's all part of the training and how to do that safely as well.
Ben Scoones: [00:20:00] Another component is having a good AI policy, and I think this is something that a lot of businesses are realizing that they need, but there's still definitely a lag in, uh, actually defining these things. You know, again, the technology is new. It's, uh, developing very quickly. Defining a policy is kind of hard for something like that, right?
Ben Scoones: But it's important because you need people to understand how to use it safely. Just mentioned that. Um, so you have to tell them what that looks like. You have to tell them what tools are available and what, uh, how they're permitted to use them. And they need to feel free exploring their own ideas as well about how to use those things.
Ben Scoones: They don't, you don't want 'em to feel restricted or afraid that they're gonna do something wrong and leak company secrets or something like that. If they feel that way, they're not gonna use it. So. Right. They don't only get fired. Yeah, exactly. It's just, it's, it's not worth it. Right. I could get fired in six months because I'm, I'm doing this thing manually, or I can get [00:21:00] fired now because I'm breaking the AI policies.
Richard Ellis: Yeah. Yeah. I, I leaked all our sensitive data.
Ben Scoones: Yeah, exactly. So that's, I think that's an important element that's just gonna give you safety for everybody and clarity over what, what they can do. And then the third, I would say is. And I think this is just in general, this is something you want having a culture where people feel comfortable.
Ben Scoones: Sharing ideas and pursuing ideas that are their own. You shouldn't just be having people at the, there are so many applications for ai. You don't just want one or two people at the top saying, this is what we're gonna do in our business. These are the things that we should be using it for. Everyone does something different.
Ben Scoones: Everyone has a different perspective on things, and you should be wanting to listen to that. And if someone has an idea and they can show that maybe this works, then that's something you should think about. That doesn't mean you go and build a product off of it, and that doesn't mean you ultimately adopt that suggestion, but you are getting a much larger pool of ideas to take from, and it's also the people [00:22:00] that are gonna be directly using it, probably, that are coming up with those ideas. So having a culture that allows that, um, I think is very important.
Richard Ellis: I think that's a great one. I mean, not only is the technology changing so rapidly, but just thinking about different use cases and creative uses of AI, uh, can come from random, you know, experiments or just playing with it.
Richard Ellis: Right. And we recently hosted a, uh, an executive leadership round table here in Dallas. Uh, and the theme was the intersection of AI and the go-to-market engine, and just being able to talk, you know, to other leaders in the area about, you know, what are they experiencing, how are they using AI, what's working for them, what's not working for them.
Richard Ellis: It was just really valuable and productive, and we all got a lot out of it. And so making sure that, you know, the culture, uh, in your organization has [00:23:00] forums like that and opportunities to share, not only within team, within the marketing team, but cross-team. Hey, sales is over here doing these unique things, or the back office is doing these things, so that you can learn from each other.
Richard Ellis: I think that's a great one.
Ben Scoones: Yeah, absolutely. Absolutely.
Richard Ellis: Uh, what are you seeing in terms of measurement, um, of the impact of AI in organizations? 'cause it almost seems like you don't need a business case to go use AI right now. It's like, it's almost like this, Hey, we're gonna be behind if we don't use it.
Richard Ellis: So everybody start using it. And, you know, I hear more, you know, leaders and teams saying, well, we've gotta show leadership, or we've gotta show the board that we're using AI. Nobody's saying we've got to go, uh, show them our anticipated impact and value of using AI. It's just, we need to show that we're using it.
Richard Ellis: So what are you seeing?
Ben Scoones: Well, I'm sure you've seen it as well; if I've seen something on LinkedIn, probably everyone else has, 'cause I'm not on there that much. Um, but a stat that I keep seeing crop up [00:24:00] is that, uh, 95% of AI projects fail and only 5% succeed. I have no idea how that would be measured.
Ben Scoones: Um, but you know, I can see why that would be true, right? We talked in the last episode about how not everyone is doing the hard work upfront to define their workflows or get their business ready for adopting AI. And I just don't see why you would expect anything other than failure if you don't know what you want it to achieve.
Ben Scoones: And if you don't know what job you want it to do. So I think I talk about this in the ebook a little bit, but being a little bit patient. Investing in your company to get yourself ready for using ai. Whether that's bringing in skills, uh, with hiring, or whether that's preparing additional documentation, defining this policy, whether it's defining your processes because you don't have them well defined.
Ben Scoones: All that knowledge is in people's [00:25:00] heads. You gotta get it out of there. Maybe it's thinking about a long term strategy. Maybe it's just waiting for. Someone to develop the tool in the marketplace that you would really benefit from, but that doesn't exist yet and it would just cost you a little bit too much to do it in-house.
Ben Scoones: Um, there's a lot that you can be doing to kind of get ready to use AI before you actually start with it. That doesn't mean it has to take forever. That doesn't mean you have to have all those things in place before you can pursue anything, right? But you just wanna be wise and thoughtful about, um, what are the necessary steps for me to be successful?
Ben Scoones: So: define the workflow, get the data that I need, make sure I know what success looks like, figure out how important and valuable this is for me. How much do I wanna invest into it? What else exists out there that could maybe help me with this? Is this something that I need to build? Um, and once you've answered all those questions, then go and build it.
Richard Ellis: Go for it. Yeah, that's very smart. And, and I do think there's, there's gonna be a lot of goodness in companies that, uh, do have a little [00:26:00] bit of freedom to experi, uh, experiment. Right. And, and you know, I'm all about, you know, failing fast, but you don't want to fail. I. Twice for the same thing. Right? And so there still needs to be a little bit of a governance process around it.
Richard Ellis: Experiment, but know what you set out to accomplish. And if you fail, you know, let's feed that back in so we don't do it again, and we learn from that, right? And I just don't see, you know, just like you were saying around policies, there's not really documented policies right now in organizations.
Richard Ellis: There's not well-framed governance around how we're gonna learn from our trials and our experimentation with AI. So that's an area of maturity I think companies need to kind of move into quickly.
Ben Scoones: Yeah, for sure. And I think, like you said, failing fast, very important thing there as well, 'cause it can be expensive to pursue these things.
Ben Scoones: Um, yeah, absolutely. Very important. Yep.
Richard Ellis: Well, good. Well, um, we're at the end of our time again, so thank you, uh, for coming back on the show. [00:27:00] Yep. Um, as you know, here comes the question: what other goodness? Last time you gave us a great book, which I haven't ordered yet, but I definitely will, 'cause I wanna read it.
Richard Ellis: Uh, but this time, outside of AI, what's some other goodness that, uh, you've experienced in your life?
Ben Scoones: Well, I'll give you another book. Uh, and it's not exactly related to AI, but it is, uh, related to transhumanism, so, oh, interesting. So there you go. Which I think those two things are definitely linked.
Ben Scoones: Um, so the book is, uh, my wife and I have been reading C.S. Lewis's Space Trilogy. So the first book in that series is called Out of the Silent Planet. And, um, for those familiar with Tolkien, the protagonist in the book is actually based on Tolkien. And so Tolkien and C.S. Lewis were friends, um, which is kind of an interesting tidbit. But the book is all about, uh, a guy who, um, is kidnapped and taken to another planet, and that planet has [00:28:00] intelligent life, and he kind of gets to know them.
Ben Scoones: And it's a really brilliant book. The writing is so good. Um, and what's also so good about it is the way he brings up some interesting, uh, philosophical and theological issues as well. So, hmm, he weaves in issues of sin and the nature of man, um, into the narrative in, I think maybe you would call it a slightly heavy-handed way, but it feels like it really fits, uh, with the narrative.
Ben Scoones: So, very good. And I know that the subsequent ones in the series, which we're about to get into, get into that question of transhumanism as well. So what does it mean to be human, and how is that gonna change with the advent of technology? Should that change? I feel like I know the answer to that.
Ben Scoones: I think the answer is no, but there's a lot of people at the moment advocating for, um, the fact that it should, and that that's the natural, [00:29:00] that's our next evolutionary step. And, uh, that's kind of a crazy thing to think about, but it's also maybe becoming more worthwhile thinking about.
Richard Ellis: Fascinating.
Richard Ellis: And what's the name of that book? Uh, the first one in the trilogy,
Ben Scoones: uh, the Silent Planet.
Richard Ellis: Okay. Got it. Got it.
Ben Scoones: Yeah.
Richard Ellis: Well, thanks for that recommendation. Thanks for sharing. That does sound fascinating. And earlier on you were giving me a hard time for getting a little bit into the philosophical or the esoteric.
Ben Scoones: Oh, that was great. I love to do it.
Richard Ellis: Yeah. Well, thanks again for, uh, coming back on the show. Always a pleasure spending time with you.
Ben Scoones: Yeah. Thank you. I've enjoyed it.
Announcer: Some Goodness is a creation of Revenue Innovations. Visit [email protected] and subscribe to our newsletter.