Artificial Intelligence and Business Strategy
In collaboration with
BCG
As CEO of Aboitiz Data Innovation, David Hardoon oversees the operations of a technology conglomerate focused on using data science and AI to support its businesses in a range of sectors, including banking, financial services, utilities, agriculture, and construction in Singapore and the Philippines. In his role, David is leading some unexpected — but practical — uses of artificial intelligence, including using voice and image recognition to detect stress in livestock, and analyzing internet-of-things data to reduce waste and CO2 emissions in the cement R&D process.
David joins this episode of the Me, Myself, and AI podcast to discuss the broad scope of the organizations he’s responsible for, the role of AI regulation and governance in helping to spur innovation, humans’ sometimes problematic role in shaping AI outputs, and how a high school detention led to a career in artificial intelligence.
Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.
Transcript
Sam Ransbotham: Concrete production? Livestock? The Socratic method? Somehow, we talk about all three. Find out how these connect with AI in today’s episode.
David Hardoon: I’m David Hardoon from Aboitiz Data Innovation, and you’re listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of analytics at Boston College. I’m also the AI and business strategy guest editor at MIT Sloan Management Review.
Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities and really transform the way organizations operate.
Sam Ransbotham: Welcome. Today, Shervin and I are excited to be joined by David Hardoon, who holds several senior positions at the Aboitiz Group. David, thanks for joining us.
David Hardoon: Thank you very much, Sam, Shervin.
Sam Ransbotham: Can you first tell us a bit about the Aboitiz Group? Where do you work?
David Hardoon: The Aboitiz Group is a 100-plus-year-old conglomerate that originated in Spain — Catalonia — and relocated to the Philippines. It started in the hemp business but is now quite diversified, from the main business — power generation and distribution across the Philippines — [to] financial services, cement, construction, utilities, [real] estate, airports, food, agriculture. We’re now going through a transformation and becoming — I love this term, by the way — a techglomerate.
Sam Ransbotham: What is Aboitiz Data Innovation?
David Hardoon: About seven years ago, give or take, the bank started with the digitalization of all of its banking services. And what that resulted in, as you would imagine, [was] a tremendous amount of data. The more you engage your consumers digitally, the more digital services you have — well, surprise, surprise — the more data you have.
And the question came as well, how are we really using it? Are we using it? What’s the best way to put it to good use? And that question kind of also went beyond just the bank into the rest of the businesses, as you can imagine. Power has a lot of data; agriculture, airports, etc., have a lot of data.
We were born with a very on-point mandate: operationalizing data, operationalizing AI. Really, how do we put it to good use?
Shervin Khodabandeh: What are some of these uses?
David Hardoon: I mean, there’s the usual financial side, which we all know: hyperpersonalization, financial crime [detection]. And don’t get me wrong — that stuff … always gets me all excited. I spent a few good years in the financial regulator [space] here in Singapore.
But let me give you an oddity: cement, an industry that you wouldn’t really associate with data or AI. We sat down with the CEO at the time, and we said, “Look, even in the world of cement, you have a lot of data.”
How can this work? So let me give you a little tidbit of how the world of cement works. And this is something that was new to me. So basically, it’s like baking. I don’t know if you bake, but it’s like baking. Basically, you have mixtures. You have these kinds of formulas, and you end up with cement, which will have different types of properties. And these properties [are] what’s absolutely critical, depending on what you’re planning to build, whether it’s a mall, a high-rise, a low-rise, residential, etc.
Having said that, as with baking, you kind of need to do a bit of trial and error. You need to try out these different mixtures to make sure you produce the right one. That results in operational overhead. It results in wastage. I mean, as with baking, you stick this stuff into kilns — literally, it’s a furnace — to bake it. Using data, using the information that’s coming from all the devices, the IoT, using AI, we’re able to actually tell the bakers — or, in this case, the chemical engineers — what the output of a mixture is going to be before they even start, while at the same time maintaining that quality control, which is absolutely crucial. Now this is, by the way, not just hypothetical. This has already been operational for the past year in all the plants — about six plants in the Philippines — and has resulted in operational efficiency, a reduction in [the amount of] wastage, and what I like to call quantifiable ESG [environmental, social, and governance]: a 35-kiloton reduction in CO2 emissions. So that’s a nice, unusual example I like to give in terms of how data is used.
Shervin Khodabandeh: Well, I could tell you, Sam and I are going to love that. We’re both chemical engineers.
David Hardoon: Oh, well, there you go.
Shervin Khodabandeh: Actually, when you said baking, I did my Ph.D. in catalyst synthesis, so I spent a lot of my time baking various aluminosilicates to create catalysts. And you’re completely right: You try all these things. Some work; some don’t work. And had there been the ability for me to know ahead of time, I probably would have gotten my Ph.D. in a tenth of the time.
But seriously, this is quite interesting. Now, if you go from personalization and cyber and fraud, and you also have this example of baking cement, then we must believe that there is such a wide portfolio of things that you are considering. So tell us more about what makes it into that portfolio, because there is no end to what you could do. What are the kinds of things you get excited about?
David Hardoon: You’re absolutely right. Being fortunate enough to work in a conglomerate, you kind of wake up every day and discover something new. So there are kind of two dimensions to it. On the one hand — and I’m going to go back to this term operationalization, operationalizing data and AI — it’s stuff that has to make sense to the business: revenue, operational efficiency, risk management. And then we have to look at the things around the corner. We have to experiment. But those may not be things that get immediately deployed. For example, in our agricultural business, we have the animals — swine and poultry. And as part of that process, we want to make sure that the animals have the best possible care provided to them. On the experimental side, we say, “OK, how can we use technology that’s already available but may not have been applied in exactly this particular context, or in Southeast Asia?” So we’re using voice recognition and image recognition for pigs to help identify stress and detect illnesses, so that automatic alerts can go to the caregivers.
Shervin Khodabandeh: What’s the ground truth on that? That would be interesting to know.
Sam Ransbotham: That’s a great question.
Shervin Khodabandeh: What’s the training data?
David Hardoon: So this is the amazing part: Pigs are very expressive animals. When you actually go there with the people who take care of them, they can literally point at one and say, “This animal is distressed,” and you’re constantly recording.
We’re kind of, “OK, is this really something that’s relevant? Does it make sense?” Like, can we have that conversation with the baker, you know, the chemical engineer? Can we have a conversation with the animal keeper, the veterinarian and so forth, or the pole engineer when we’re dealing with electricity cables? It’s extremely important.
And that’s one of the things I realized throughout my career of doing data: Where things failed was where you suddenly had this divergence of exploring scientific research — and I came from the world of science, you know, like an ex-academic — without really seeing that connectivity. And if we go all the way back, even to when radar was invented, the reason things fall apart is in the very, very small gaps of “Well, it’s not quite there; oh, it’s not quite usable.” So that’s the first part.
Then, the second level is seeing, well, is this something that, as much as possible, is truly going to make a difference, either to our internal users — because that’s extremely important — or to many of the businesses within the group, which are actually B2B, like in power, where, essentially, we provide power and wholesale [electricity]? So it’s our internal users in terms of, let’s say, predictive asset maintenance — critically important.
Shervin Khodabandeh: That is really fantastic. I mean, what you’ve said is inspiring on so many levels. One is, let your imagination be the limit, right? Because if the question is “Can something be done better, more effectively? Can you see around the corner there?” and there’s data, then, yes. That’s one thing that’s inherent in all these examples that you gave.
You started with what most would consider quite advanced and interesting things, and we have guests who talk about those all the time: personalization, fraud, cyber. All of those are very important. And then you went to cement. And then you went to pigs. And then you talked about human and AI …
David Hardoon: Yeah.
Shervin Khodabandeh: Which is quite critical too. I just find that very, very energizing.
David Hardoon: Well, it’s the nexus between human and AI. There are two critical things that I believe have to go hand in hand — have to. While this may change in the future to some degree or extent — I mean, who knows what’s going to happen around the corner? Things change so rapidly. But I’ll be the first one to admit this: I truly came to appreciate the criticality of combining governance and innovation when I worked [for] the regulator — surprise, surprise. And I used to get asked this question repeatedly: “Oh, but don’t you think governance inhibits innovation? It stifles us.” And I came to be vehemently against that perspective.
I would argue that not only does it not stifle innovation — it results in more and even better innovation. It’s essentially about just simply having, you know, common sense. I was privileged to be part of the process of coming up with the FEAT principles — fairness, ethics, accountability, and transparency — back at the Monetary Authority of Singapore.
I remember that when it came out — and we deliberately kept it very simple — I showed it to our governor, our managing director, and he was just like, “David, isn’t this just common sense?” And I just smiled and said, “Well, no; even common sense has to … it’s not always that common. It has to be written down.” But it’s critical. That’s No. 1.
And No. 2, to what you were mentioning: Yes, while AI and data can do seemingly miraculous stuff, it’s critical that this combination with us humans, and how we use it, is baked in at the very beginning. And even now, obviously, everyone’s talking about ChatGPT, but remember: All the data that it’s trained on comes from us, to a certain extent.
Shervin Khodabandeh: Yeah. You can’t take humans out of the loop, because after a while, they will lose what makes them human.
Sam Ransbotham: But we have examples of that. I mean, that’s OK in some places. I mean, neither of you know how to navigate by the stars, I’m guessing, unless, Shervin, you’ve got some tricks up your sleeve that I haven’t learned yet. I mean, most people don’t drive a manual transmission; that seems to be a skill that’s … well, OK, maybe one or two of us do here. But the point is, we don’t have to retain all possible skills. We just have to be savvy about which ones we hang on to.
David Hardoon: That’s exactly [right], what you said: It’s some, not all. But sometimes you see this trend of, “Oh, look what it can do. Everything gets automated.” And I remember, if I go back to my early days as a consultant — you know, I used to be a consultant doing AI — a lot of times you would find that potential clients and people you spoke to, even if they didn’t say it explicitly, were trying to achieve, “Oh, just do everything automatically with AI.”
And you need to have almost this natural inclination to ask, “OK, is it contextual? Does it make sense?” Like you said, maybe I want to pick up star navigation because I’m interested in it. I want to learn about astrology or astrophysics or whatnot. Great. But you see, it now becomes a niche topic that some people pick up. The general public doesn’t need to know how to do it. But we need to be able to identify that decision point rather than just go [into a] “No, everything now, AI galore” kind of situation.
Shervin Khodabandeh: Well, I mean, what you’re saying is, there’s value in the ongoing dialogue. There’s value in the ongoing challenge. Even back in Socrates’s time, dialogue is what elevated the conversation. And you’re rightly pointing out that the moment you say AI is the be-all and end-all is the moment you are under-delivering on AI, and then you’re for sure under-delivering on the human potential.
David Hardoon: Well, you’re losing a potential answer. Let me give you two examples. In the financial sector, we have Union Bank of the Philippines, amongst others. While AI governance regulation is not yet — yet — a requirement, let’s say, in the Philippines, we’ve set up a working group, which is an interesting combination of people: your risk officer, legal, compliance, and then marketing, customer engagement, experience. What happens is, while you still have the traditional process of model validation, etc., from a statistical, mathematical, data point of view, the models are presented in this working group for us to have a debate. Because a model may pass all the statistical tests, but if the model goes wrong — even that 10% or 5% [of the time] — there is significant reputational risk at play, or there’s a potential impact on consumers. That debate is important because if you just looked at it from that statistical, even potentially automated, process, you would miss it.
Now, the resolution, interestingly enough — and I’ll honestly tell you, maybe eight out of 10 times so far — isn’t data, isn’t AI. The resolution a lot of times is process, which is people. And that actually makes us wiser in understanding, “OK, how do we use it, and how do we engage with it? And when do we allow” — Sam, to your point — “that automation?” And when we say no, I retain the veto to overrule, to a certain extent. So that’s one example.
The other one is, if I go back to my cement [example] — and in fact, we did this very deliberately at the very beginning because we didn’t want our colleagues, the chemical engineers, to think, “Oh, great. Why do you need me? You’re just going to automate the whole thing.”
No. The whole point was, we absolutely need them because there may be new types of mixtures that we haven’t considered. You will still need to have that experimentation. The whole goal is providing information.
What it has resulted in is efficiency. If I swing again to another example — when ChatGPT came out, I got asked straightaway by a few boards, “What does this mean?” And my instinctive reaction, rather than going into a whole lengthy explanation, was just to respond by saying, “It means that every one of us can have the productivity of 10 people.”
So this is what this stuff means, and that’s what that nexus — the dialogue, the integration, the augmentation — means: We now have the ability to be far more productive, whatever “productive” means in that context. Some people may say, “I just want to work two hours but [appear] as if I worked the whole day.” Some may say, “I want to work the whole day.” … It may differ. But that’s what it means, because now we’re able to take all this data.
I’m sure some of you remember, back in 2000, these memes online of “Getting information off the internet is like drinking from a fire hose.” It’s still true. We’re inundated with information, with data, but it’s about distilling it down to something that’s relevant to me, usable — something I can do something with and get that gain, essentially.
Sam Ransbotham: I think one thing that’s coming out of this conversation — Shervin used the word Socratic, and, David, you used the word dialogue — is that it drops this hubris that I feel like I see in a lot of machine learning. Machine learning seems to be about humans teaching machines. So it’s this sort of “We know all. We make the machines emulate us, and if they do, they pass the Turing test, and, yes, everything’s golden.” But then you get pushback, and you say, “Oh, no. The machine can teach us things we’ve never known before.” Well, that just switches the direction. It still has that same directional hubris, but the things that you’re both talking about are much more [oriented around] the Socratic and dialogue.
When you think about what that group can form together — Shervin and I have some results from last year’s research showing that about 60% of people are thinking about AI as a coworker. And that strikes me as that sort of relationship, because between the two, yes, you find some new compound that maybe someone wouldn’t have tried. I don’t know what the chemical engineering equivalent of the Fosbury flop is. Do you remember the Fosbury flop, where Dick Fosbury learned a different way of jumping over the high bar, and then suddenly everyone else adopted that technique? That sort of idea seems like it could come out of this approach.
David Hardoon: It’s actually really interesting you bring that up. I mean, I’d love to say, “Oh, yeah, we had this all intended in the very beginning,” but I’ll be very honest and say, I think it’s more of a nice consequence that wasn’t fully intended at a point in time.
But I want to go back to those FEAT principles. One of the principles resulted in a lot of discourse — and I mean a lot — because, amongst all of them, we had a statement that said we should hold AI to at least the same standard as human decisions. So AI-based decisions should be held to at least the same standard as human-based decisions. And the debate was phenomenal, and [people] said, “Oh, no, we should hold it to a higher one,” etc.
But what that principle intended to say is … so let me go back again to, let’s say, a financial [example]: loan provisioning. If [you’re] using an AI algorithm and you’re finding that “oh, we’re discriminating,” OK, yeah, absolutely, that’s something that needs to be addressed, reviewed, and corrected. But hold your horses there. Take a step back. Take the AI out of the equation. Had you been discriminating before the AI? That’s really the question, because … I remember I had long debates with many regulators. Maybe debate is the wrong word: discussions with many regulators. And I was actually a bit opposed to regulating AI, and I’ll explain what I mean by that.
I’m not opposing regulation. But when they said “regulating AI,” I got a bit defensive. I said, “What I’m worried about is that we’re like, ‘OK, well, since AI now is showing me all this stuff that I don’t want to know about, then I’m just not going to use AI.’ And we’re going to go back to the same procedures previously, which, guess what — it’s the same problem. You just weren’t paying attention to it because that information, that knowledge, wasn’t bubbled up to the surface.”
So what I kept on arguing is that, yes, the regulation has to be in play. And yes, there may be certain scenarios whereby AI requires higher scrutiny. But the regulation is still on the outcome. The regulation is still about the fact that, for example, it’s a case of discrimination: You should not be discriminating; whether you’re using a human-based process or an AI-based process is kind of beside the point. But I just want to emphasize that point, Sam, because it really goes back to what you were saying: It’s now teaching us things that we may have been, let’s say, sometimes consciously ignorant of, sometimes inadvertently unaware of.
Shervin Khodabandeh: David, tell us about your background. How did you end up where you are?
David Hardoon: If I roll back all the way to the beginning — and I’m going to say this with a big smile — how did I end up where I am? Detention. That’s how I ended up here.
I must have been, what, 14, 15, 16 years old, and I got sent to the library for detention. And, you know, if you’re in a library, you have nothing better to do. I picked up a book on Prolog. Don’t ask me why, of all the books I could have picked up, I picked up one about Prolog. And this was really before I knew anything about the whole world of, well, I guess in that case, it would be expert-based systems. And I started reading, and I just couldn’t put it down. And that kind of triggered this exploration of, how can we better capture knowledge? How can we better learn?
And that obviously resulted in learning a bit more about neural networks, AI. In fact, I was one of the first two students who took the degree in computer science with artificial intelligence. It was literally brand-new, from that perspective.
My Ph.D. thesis was about semantic models — literally the representation and encapsulation of knowledge and information, effectively; [it] was on learning musical patterns and generating music from brain patterns. And the whole idea of that is essentially providing expert-based systems knowledge, if you think about it in that way, for people who, say, can’t sit in front of a piano and play but are fully capable cognitively.
So that’s kind of what brought me here. I know it’s a very weird kind of journey. But yeah, I need to thank my literacy teacher: Thank you for sending me to detention.
Sam Ransbotham: OK, so we’ve got a segment where we’re going to ask you some quick questions. What are you proudest of in terms of artificial intelligence? What have you done that you’re proudest of?
David Hardoon: Where to begin? What I’m most proud of is the way we’ve been able to graduate — and I literally mean that — from the academic world to the industrial world.
Sam Ransbotham: What worries you about AI? You’ve mentioned some worries today. But what worries you?
David Hardoon: What worries me is that I don’t think we’re fully appreciating what we’re creating. I think we need to face head-on the realization of what we’re creating and what possibilities we’re seeding, for good and for bad.
Sam Ransbotham: What’s your favorite activity that does not involve technology?
David Hardoon: SUP: stand-up paddling. Being on the water and just paddling away. It’s extremely soothing. It’s actually phenomenal exercise, for those who haven’t tried.
Sam Ransbotham: I’ve tried and I’ve missed the stand-up part. I’m OK with the paddling, but the stand-up seems to lead to trouble. What’s the first career you wanted while you were sitting in detention? What did you want to be when you grew up?
David Hardoon: I wanted to be an astrophysicist.
Sam Ransbotham: What’s your greatest wish for AI in the future? What are you hoping we can gain from this?
David Hardoon: I don’t know. Self-actualization? I hope we learn more about ourselves. It’s already giving us capabilities. I mean, for example, I’m dyslexic. Thank heavens for auto spell-checkers!
Sam Ransbotham: Well, thank you, David. There’s a lot in what you’ve mentioned. I think we can go back even to examples of food 100 years ago: We had a terrible food cleanliness [problem], and now we have a supply chain we can trust. Perhaps we can build that same sort of supply chain with data. Thank you for taking the time to talk with us today. It’s been a pleasure.
David Hardoon: Thank you, Sam, Shervin.
Shervin Khodabandeh: Yeah. Thank you.
David Hardoon: And maybe, if I may just add another note, I think that’s really the critical thing: It’s AI trust. It’s about trust. Thank you very much.
Sam Ransbotham: Thanks for listening. Next time, Shervin and I talk with Naba Banerjee, head of trust product and operations at Airbnb, about how the travel platform uses AI and machine learning to make travel experiences safer.
Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn specifically for listeners like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.