Me, Myself, and AI Episode 303

Predicting Volatility and Risk: Nasdaq’s Doug Hamilton



Douglas Hamilton works across business units at Nasdaq to deploy artificial intelligence anywhere the technology can expedite or improve processes related to global trading. In this episode of Me, Myself, and AI, he joins hosts Sam Ransbotham and Shervin Khodabandeh to explain how the global financial services and technology company uses AI to predict high-volatility indexes specifically and to offer more general advice for those working with high-risk scenarios.

Read more about our show and follow along with the series at https://dev03.mitsmr.io/aipodcast.

Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.

Transcript

Sam Ransbotham: In AI projects, perfection is impossible, so when inevitable errors happen, how do you manage them? Find out how Nasdaq does it when we talk with Douglas Hamilton, the company’s head of AI research.

Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities across the organization and really transform the way organizations operate.

Sam Ransbotham: Today we’re talking with Douglas Hamilton. He’s the associate vice president and head of AI research at Nasdaq. Doug, thanks for joining us. Welcome.

Doug Hamilton: Thanks, Sam and Shervin. Great to be here today.

Sam Ransbotham: So our podcast is Me, Myself, and AI, so let’s start with … can you tell us a little bit about your current role at Nasdaq?

Doug Hamilton: In my current role, I head up AI research for Nasdaq at our Machine Intelligence Lab. The role itself here is a little bit unique, since many, many roles within Global Technology, which is our engineering organization, are very much business-unit-aligned, so they’ll work with one of our four core business units, whereas this role really services every single area of the business. That means that we’re servicing market technology, which is the area of Nasdaq’s business that produces software that powers 2,300 different companies in 50 different countries, powers 47 different markets around the world, as well as bank and broker operations, compliance, and [regulatory] tech for making sure that they are compliant with their local authorities.

We service, of course, our investor intelligence line of business, which is how we get data from the market into the hands of the buy and sell side, so they can build products and trading strategies on top of those. We service, of course, the big one that people think about mostly, which is market services, which is the markets themselves; that’s our core equities markets and a handful of options and derivatives markets as well. And then finally, corporate services — that actually deals with the companies that are listed on our markets and their investor relationship departments.

So really, we get to work across all of these different lines of business, which means that we get to work on a huge number of very interesting and very diverse problems in AI. Really, the goal of the group is to leverage all aspects of cutting-edge artificial intelligence, machine learning, and statistical computing in order to find value in these lines of business, whether it’s through productivity plays, differentiating capabilities, or just continued incremental innovation that keeps Nasdaq’s products leading edge and keeps our markets at the forefront of the industry.

In this role, I have a team of data scientists that are doing the work, writing the code, building the models, munging the data, wrapping it all up in optimizers, and creating automated decision systems. So my role, really, I think, day to day, is working with our business partners to find opportunities for AI.

Shervin Khodabandeh: Doug, maybe to bring this to life a bit, can you put this in the context of a specific use case?

Doug Hamilton: I’ll talk about one of our favorite use cases, which is a minimum volatility index that we run. So the minimum volatility index is an AI-powered index that we partnered with an external [exchange-traded funds] provider, Victory Capital, on. The goal of this index is to basically mimic Nasdaq’s version of the Russell 2000 — it’s a large and mid-cap index — and then essentially play with the weights of that index, which are normally market-cap-weighted, in such a way that it minimizes the volatility exposure of that portfolio. What made that project really difficult is that minimizing volatility is actually a fairly easy and straightforward problem if you want to treat it linearly. That is, you look at a bunch of stocks, you look at their historical volatility performance, you pick a bunch of low-volatility shares, you slap them together, boom — you get a pretty low-volatility portfolio.

And that’s actually fairly straightforward to solve using linear methods, numerical programming, etc., and you can wrap linear constraints around it to make sure that you’re not deviating too much from the underlying portfolio. You’re still capturing the general themes of it. You’re not overexposing yourself to different industries. That’s actually fairly easy to do. However, where this becomes really interesting is: wouldn’t it be cool if you found two stocks that worked against each other, so they could actually be quite volatile, but the portfolio, when mixed together, actually becomes less volatile than even two low-volatility shares, because they’re constantly working against each other? That is, they have this nice contravarying action that cancels each other out so you can capture the median growth without the volatility exposure. That’d be great.
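
For readers who want to see what that linear version looks like concretely, here is a minimal Python sketch. The returns, weights, and deviation bounds are all hypothetical; this illustrates the general technique, not Nasdaq’s actual methodology.

```python
# A minimal sketch of the "linear" minimum-volatility problem: minimize
# portfolio variance w' C w, stay fully invested, and keep each weight near
# the parent (market-cap) weight. All numbers here are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.02, size=(250, 5))    # fake daily returns
cov = np.cov(returns, rowvar=False)                  # sample covariance matrix
parent = np.array([0.30, 0.25, 0.20, 0.15, 0.10])    # parent portfolio weights

def variance(w):
    return w @ cov @ w                               # portfolio variance

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
bounds = [(max(p - 0.05, 0.0), p + 0.05) for p in parent]  # stay near parent

result = minimize(variance, x0=parent, method="SLSQP",
                  bounds=bounds, constraints=constraints)
print("min-vol weights:", result.x.round(4))
```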

Now, that becomes a nonlinear problem. And it becomes a very noisy, almost nonconvex problem at that point too. But you still have all these constraints you need to wrap around it, so you end up reaching for stochastic optimizers: simulated annealing, genetic algorithms, [Markov chain Monte Carlo]-style optimizers. And those behave pretty well when we have soft constraints that generally guide the solutions back into the feasibility zone. The problem they have is when you give them hard constraints. They don’t like hard constraints; they break a lot. So what we had to do is rearchitect a lot of these algorithms to be able to handle these hard constraints as well.

Shervin Khodabandeh: What would be a hard constraint?

Doug Hamilton: I’ll give you an example of each. A soft constraint: it would be really nice if, when you go to rebalance a portfolio, its total turnover is less than 30%, let’s say, because it gets really expensive to rebalance otherwise. A hard constraint might be that no holding can vary by more than 2% between the optimized portfolio and the parent portfolio. So if the parent portfolio is 10% Microsoft, let’s say, then the optimized portfolio has to be between 8% and 12%, right? That’s an example of a hard constraint. If it’s 7.9%, we’re in violation of the governing documents of the index, and everybody gets into a lot of trouble.
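
To make the distinction concrete, here is a hedged sketch of how these two constraint types might be expressed in code: the soft constraint as a penalty added to the objective, and the hard constraint as a strict pass/fail feasibility check. The weights and thresholds are made up, not taken from any index’s governing documents.

```python
# Soft vs. hard constraints, as toy functions. The soft constraint contributes
# a penalty to the objective; the hard constraint is a strict pass/fail test.
import numpy as np

def turnover_penalty(new_w, old_w, cap=0.30):
    """Soft: penalize (rather than forbid) one-way turnover above the cap."""
    turnover = np.abs(new_w - old_w).sum() / 2.0
    return max(0.0, turnover - cap)   # 0 when satisfied, grows when violated

def holdings_feasible(w, parent_w, band=0.02):
    """Hard: every weight must sit within the +/-2% band around the parent."""
    return bool(np.all(np.abs(w - parent_w) <= band))

parent = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
candidate = np.array([0.32, 0.22, 0.21, 0.15, 0.10])
print(turnover_penalty(candidate, parent))    # 0.0: soft constraint satisfied
print(holdings_feasible(candidate, parent))   # False: 0.22 vs. 0.25 breaks band
```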

Shervin Khodabandeh: Got it. That’s a good one. OK. So you’re saying hard and soft constraints together form a tougher problem.

Doug Hamilton: A considerably tougher problem. These algorithms deal well with nonlinearity, but these [Markov chain Monte Carlo]-style algos in particular do not deal well with hard constraints, where they must meet these criteria. And when you have — I think in that one, we had 4,000 constraints, something like that — almost nothing meets them. So if you take this hard culling approach, then you’re left with no viable solutions to gain density around. So we had to spend a lot of time working with the team to figure out what the appropriate solution architecture should be — algorithmically, etc. — to overcome that challenge: how we set up those experiments, what sort of experiments we need to set up, how we test it, and, of course, how we actually communicate to the client that the solution is better than what they currently have.
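
The episode doesn’t detail how the team rearchitected its optimizers, but one common way to keep a stochastic search inside hard constraints, rather than culling infeasible candidates until nothing is left, is to repair each proposal by projecting it back into the feasible region. Here is a toy simulated-annealing sketch under that assumption:

```python
# A toy simulated annealer that satisfies hard constraints by *repairing*
# proposals (projecting into the +/-2% band) instead of discarding them.
# This is an illustration of one possible approach, not Nasdaq's code.
import numpy as np

rng = np.random.default_rng(1)
parent = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
returns = rng.normal(0.0005, 0.02, size=(250, 5))    # fake daily returns
cov = np.cov(returns, rowvar=False)

def repair(w, band=0.02, iters=5):
    # Alternate clipping into the band and renormalizing; a few passes land
    # close to both "sums to 1" and the per-holding band in this toy setup.
    for _ in range(iters):
        w = np.clip(w, np.maximum(parent - band, 0.0), parent + band)
        w = w / w.sum()
    return w

def anneal(steps=2000, t0=1e-4):
    w = repair(parent.copy())
    best_w, best_v = w, w @ cov @ w
    for i in range(steps):
        temp = t0 * (1.0 - i / steps) + 1e-12        # linear cooling schedule
        cand = repair(w + rng.normal(0.0, 0.005, size=w.shape))
        dv = cand @ cov @ cand - w @ cov @ w
        # Metropolis rule: accept improvements, occasionally accept worse moves.
        if dv < 0 or rng.random() < np.exp(-dv / temp):
            w = cand
            if w @ cov @ w < best_v:
                best_w, best_v = w, w @ cov @ w
    return best_w

print("annealed weights:", anneal().round(4))
```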

Shervin Khodabandeh: Doug, this example that you talked about on volatility — is [it] one of hundreds of use cases that your team does, or one of tens of use cases? [I’m] just trying to get a sense of the scale of the operations here.

Doug Hamilton: Within Nasdaq, what we represent is the center of excellence for artificial intelligence. So this is one of … I’d say it’s in the dozens of use cases that are either live or that we’re exploring at any point in time. On top of that, obviously, we have robust relationships across the business with third-party vendors that help us with all sorts of internal use cases — where maybe it’s not something we’re looking to sell to the outside world, or something where we can leverage existing technology in a better way than building it in-house — and those are certainly part of our AI story as well.

Sam Ransbotham: I was thinking about your example of finding the matching [stocks]. We think about digital twins; it’s almost a digital un-twin stock that you’re trying to match with. That has to change, though, at some point. How often are you revisiting these? How are you keeping them up to date so that you don’t end up with things suddenly moving together when you thought they were moving the opposite [way]?

Doug Hamilton: The nice thing about the world of indexing is that it’s almost statutory how you do that. When we look at other models that we have in production, we usually do this in one of two ways: either in an ad hoc way, through telemetry — looking at model performance and looking for some sort of persistent degradation in performance — or through some sort of regularly scheduled maintenance, which we have for many of our products. For indexes, we’re basically told, “Here’s how often you rebalance, and here’s how often you’re allowed to make the change.” So in this case, we rebalance twice a year, so every six months is when we go back and take a look.
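
As a rough illustration of the telemetry approach (flagging persistent degradation rather than a single bad prediction), here is a minimal monitor sketch; the baseline, window size, and tolerance are all hypothetical:

```python
# A minimal sketch of telemetry-based monitoring: flag *persistent*
# degradation (a whole recent window running hot vs. the deployment-time
# baseline), not a one-off bad prediction. All thresholds are made up.
import random
from collections import deque

random.seed(0)

class DriftMonitor:
    def __init__(self, baseline_error, window=100, tolerance=1.25):
        self.baseline = baseline_error        # error level at deployment time
        self.errors = deque(maxlen=window)    # most recent prediction errors
        self.tolerance = tolerance            # allowed ratio over the baseline

    def record(self, error):
        self.errors.append(error)

    def degraded(self):
        if len(self.errors) < self.errors.maxlen:
            return False                      # not enough evidence yet
        mean = sum(self.errors) / len(self.errors)
        return mean > self.tolerance * self.baseline

# Simulated stream: healthy errors, then a regime change that degrades them.
stream = [abs(random.gauss(0.10, 0.02)) for _ in range(150)]
stream += [abs(random.gauss(0.20, 0.02)) for _ in range(150)]

monitor = DriftMonitor(baseline_error=0.10)
for t, err in enumerate(stream):
    monitor.record(err)
    if monitor.degraded():
        print(f"persistent degradation detected at step {t}; schedule a review")
        break
```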

Sam Ransbotham: Let’s switch a little bit to say, how did you end up doing this? What in your background led you to be able to do all those things?

Doug Hamilton: I’m fortunate in that I got my first data science job in 2015. I’ll tell you how I ended up there. My very first job was in the Air Force. I was enlisted in the Air Force in an operational position as an electronics technician; I spent a lot of time shocking myself. It was not the most fun thing in the world, but I was 22, so it was hard not to have fun. And what I realized … I had this exposure to an operational world and was able to gain some leadership experience early on through that as well.

I used the GI Bill to go to school — the University of Illinois — [where] I finished an undergraduate degree in math. I was very convinced I wanted to go become a professional mathematician, a professor. I had some great professors there that I was working with and was on the theoretical math track: real analysis, topology, etc. And that was great until the summer before I graduated: I had this wonderful internship in an astronomy lab, where we were studying a star in the last phase of its life, and it was going to have no earthly application whatsoever, and I was just bored and realized I didn’t want to be in academia.

As many people do who are in quant fields and faced with such an existential crisis, I decided I was going to go become a software developer. And what being a software developer mainly helped me figure out was that I didn’t want to be a software developer, so I went to MIT to study systems engineering and management, and I really focused a lot of my effort on operations research while I was there. I had a classmate who was at Boeing and was looking to start up a data science group, so he suggested my name, and that’s how I got started working at Boeing in manufacturing quality, standing up an advanced analytics and data science group there.

I worked there for a couple of years and then, like many people who go and try to operate in the real world, became a little disillusioned by the real world and decided to retreat into the world of finance, where I found Nasdaq. I worked as a data scientist here for a few years before moving into a management position. I think that’s the story in a nutshell.

Shervin Khodabandeh: So Doug, from airplanes to financial markets, it seems like all of the examples you gave are where the stakes are quite high, right?

Doug Hamilton: Yes.

Shervin Khodabandeh: I mean, the cost of being wrong or an error or a failure — maybe not a catastrophic failure, but even that — any kind of error is quite high. So how do you manage that in the projects and in the formulation of the projects?

Doug Hamilton: I’m really glad you asked that, because this is my opportunity to talk smack about academic AI for a little while, so I’m going to start off doing that.

Sam Ransbotham: Be careful. There’s a professor here, so —

Shervin Khodabandeh: Keep going. Sam would love that. Keep going.

Doug Hamilton: Really, I think it all starts with being more concerned about your error than your accuracy. One of the things I’ve been really disappointed about in academic AI over the last couple of years — really, it’s related to this AI ethics talk that we have these days — is that people were shocked to find out that when you build a model to, let’s say, classify some things, and you look at some minority cohort within the data, the model doesn’t classify that cohort all that well. And it’s like, “Yeah” — because that’s oftentimes, if you’re not careful about it, what models learn. And you’re absolutely right; the stakes here are quite high, so what we want to be very conscious of is not just trying to get the high score — which, when I read a lot of papers, it seems like we’re in high-score land rather than in utility land.

Even when I talk to many entry-level candidates, a lot of them talk about trying to get the high score through juicing the data rather than being really careful about how they think about the modeling process. They’re very focused on the score: “What’s the accuracy? What’s the accuracy? How do we get the accuracy higher? Let’s get rid of the outliers; that’ll make the accuracy higher.” Well, it turns out the outliers are the only thing that matters.

So, what we are very concerned about, of course, is making sure our accuracy is very high, making sure our [R-]squared scores, whatever, are very high; making sure that the metrics that are associated with business value are incredibly high. However, in order to make sure we’re hedging our risks, what is as important, if not more important, is being keenly aware of the distribution of the error associated with your model.

No matter what project we’re working on, whether it’s in our index space, whether it’s in our corporate services space, whether it’s in productivity and automation, or if it’s in new capabilities, we want to make sure that our error is distributed very uniformly, or at least reasonably uniformly, across all the constituent groups that we might be unleashing this model on — making sure that if there are areas where it doesn’t perform well, we have a good understanding of the calibrated interval of our models and systems, so that when we’re outside of that calibrated interval, frankly, at the very least, we can give somebody a warning to let them know that they’re in the Wild West now and they should do this at their own risk. And maybe it’s a little caveat emptor at that point, but at least you know.
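
Both checks described here are straightforward to sketch: per-cohort error distributions, and a warning when an input falls outside the calibrated interval. Here is a minimal illustration on synthetic data:

```python
# Two toy checks: (1) compare error distributions across constituent groups;
# (2) warn when an input leaves the range the model was calibrated on.
# All data below is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(2)

# (1) Error should be distributed reasonably uniformly across cohorts.
errors = np.abs(rng.normal(0.0, 1.0, size=1000))   # synthetic model errors
groups = rng.choice(["A", "B", "C"], size=1000)    # synthetic cohorts
for g in ("A", "B", "C"):
    e = errors[groups == g]
    print(f"cohort {g}: mean={e.mean():.3f}  p95={np.quantile(e, 0.95):.3f}")

# (2) Calibrated interval: flag inputs outside the range seen in training.
train_features = rng.normal(0.0, 1.0, size=(1000, 3))
lo, hi = train_features.min(axis=0), train_features.max(axis=0)

def in_calibrated_interval(x):
    return bool(np.all((x >= lo) & (x <= hi)))

x_new = np.array([0.1, -0.5, 7.0])                 # third feature is way out
if not in_calibrated_interval(x_new):
    print("warning: outside the calibrated interval; use at your own risk")
```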

Really, I think those are the two most important things to help manage those risks: being eminently concerned about the distribution of your error, and being really, really aware of where your model works and where it doesn’t. There are a number of other things that everybody does these days around [personally identifiable information] protection and making sure that there’s a robust review process involved. More recently, we’ve been able to make sure that every single project we’re working on has at least one other person on it, so that two people have to agree that this is the best path forward and that these are the right numbers that are coming out.

Shervin Khodabandeh: So you gave a very good series of examples about algorithmically and technically and mindset-wise some of the steps that folks need to take to manage and understand the errors and be ahead of them rather than being surprised by them. I mean, on one hand … so you have to have an eye toward the riskiness of it and how that could be managed. And on the other hand, you talked about being the center of excellence and the place within Nasdaq where the state of the art in this space is being defined. How do you balance the need to watch out for all those pitfalls and errors and conservatism, with pushing the art forward? In terms of a managerial orientation, how do you do that?

Doug Hamilton: I think it comes down to preaching that conservatism internally to your own team. When I first started, I had this great manager at Boeing. On the one hand, when she was reviewing our work, she was always very, very critical of what we were doing — very careful about making sure we were being cautious. And then, as soon as we went to a business partner or a client: “Oh, this is the greatest thing ever. You’re not going to believe it.” I think that’s a very important part of this; those two angles of internal conservatism and external optimism are really very necessary to making sure that you don’t just build high-performing, risk-averse AI systems, but also that you see rapid and robust maturation and adoption of the technology.

Sam Ransbotham: Well, it ties back to what you said about understanding the error distribution. You can’t really get ahold of that balance unless you do understand that error distribution well.

Shervin and I have been talking recently — it’s come up a few times; he’ll remember it better than I do — about this whole idea of noninferiority: that the goal of perfection is just unattainable, and if we set that out for any of these AI systems, then we’re never going to adopt any of them. And the question, like you say, is a balancing thing of “How much off of that perfection do we accept?” We certainly want improvements over humans, but we only need those improvements eventually. It doesn’t have to be improvement right out of the gate, if you think that there’s some potential for that.

Shervin Khodabandeh: Let me use that as a segue to ask my next question. So you’ve been in the AI business for some time. How do you think the state of the art is evolving, or has evolved, or is going to evolve in the years to come? Obviously, technically it has been [evolving], and it will. But I’m more interested in [the] nontechnical aspects of that evolution. How do you see that?

Doug Hamilton: When I first got started, the big papers that came out were probably [on] the [generative adversarial network] and [residual neural network]; both came out actually about the same time. [In a] lot of ways, to me that represented the pinnacle of technical achievement in AI. Obviously, there’s been more since then, obviously we’ve done a lot, obviously a lot of things have been solved. But at that point, we figured a lot of things out. And it opened the door to a lot of really good AI and machine learning solutions. When I look at the way the technology has progressed since then, I see it as a maturing ecosystem that enables business use.

So whether this is things like transfer learning, to make sure that when we solve one problem, we can solve another problem, which is incredibly important for achieving economies of scale with AI groups, or it’s things like AutoML that help to make everybody at least … this kind of idea of a citizen data scientist, where software engineers and analysts can do enough machine learning research or machine learning work that they can prove something out before they bring it to a team like ours or their software engineering team. I think these are the sorts of maturing technologies that we’ve seen come along that make machine learning much more usable in business cases.
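
As one generic illustration of the transfer-learning idea (solve one problem, then reuse most of that solution on another), here is a common PyTorch pattern: freeze a pretrained backbone and retrain only a small task head. The model choice is illustrative, not anything the episode attributes to Nasdaq.

```python
# A generic transfer-learning pattern: freeze a pretrained backbone, retrain
# a small task head. Running this downloads public ResNet weights; the model
# and the 4-class task are illustrative stand-ins.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                          # keep learned features

backbone.fc = nn.Linear(backbone.fc.in_features, 4)  # fresh 4-class head
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
# ...train only backbone.fc on the new task's (typically small) dataset...
```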

I think, beyond that, historically what we’ve seen is that the traditional business cases for artificial intelligence have all been scale plays. These maturing technologies are allowing us to mature models, reuse them, and achieve economies of scale around the AI development cycle. As these get better and better, we’re going to see more use cases open up for “Computers are good at it.” And we’ve certainly seen it when we look at how hedge funds and high-frequency traders operate. They’re all using machine learning all over the place, because it’s better for research purposes than ad hoc trial and error and ad hoc rules. By the same token, we’ve seen it in game-playing machines for years. So the idea that we’ll have more and more of these situations where [the] computer is just better at it — I think we’re going to see that more and more.

Certainly, this is, I think, the thesis behind self-driving cars, right? Driving is the thing that people do worst, that we do most often, and, provided that you can work out the edge cases, which is really hard, there’s no reason why computers shouldn’t be better at driving than people are.

Shervin Khodabandeh: I was going to ask, what about those problems where computers alone or humans alone can’t be as good, but the two of them together are far better than each of them on their own?

Doug Hamilton: When there is a computer-aided process or an AI-aided process, we can usually break that down into at least two processes: one that the person is good at doing, and one that the computer is good at doing. If you imagine computer-aided design, there are many things a computer is good at that help the person. Coming up with creative solutions and creative ways to draw out the part they’re trying to design is not one of them, but it’s very good at things like keeping track of which pixels are populated and which aren’t, the 3D spatial geometry of it, etc. That’s what it’s good at — and then the actual creative part is what the person’s good at.

Maybe a person is not so good at generating new and novel designs for, let’s say, furniture. Maybe you’re Ikea and you want to design new furniture. So maybe people aren’t particularly good at generating these things out of the blue, but they’re pretty good at looking at a design and saying, “Well, hang on a second. If you design the chair that way, it’s got a giant spike in the back, and it’s going to be very uncomfortable, so let’s get rid of that, and then let’s try again.” So there’s this process of generating and fixing, or generating and editing, that we can break the work down into. And the computer might be better at generating, and the person is better at editing for these real-world — these latent — requirements that are very difficult to encode.
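
That generate-and-edit split can be reduced to a toy loop: a cheap generator proposes many candidates, and an editor (here a stand-in function for human review) rejects the ones that violate latent, hard-to-encode requirements.

```python
# A toy generate-and-edit loop. The generator proposes candidate designs; the
# "editor" is a hypothetical stand-in for human review catching a latent
# requirement (the uncomfortable spike from the chair example).
import random

random.seed(3)

def generate_design():
    # Hypothetical chair design reduced to three attributes.
    return {"height_cm": random.uniform(35, 55),
            "back_angle_deg": random.uniform(90, 120),
            "has_spike": random.random() < 0.3}

def human_edit(design):
    """Stand-in for the human editor: reject designs with a spike."""
    return None if design["has_spike"] else design

candidates = [generate_design() for _ in range(20)]
accepted = [d for d in candidates if human_edit(d) is not None]
print(f"generator proposed 20 designs; editor kept {len(accepted)}")
```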

Sam Ransbotham: All right. Well, thanks for taking the time to talk with us and to tell us about all that you, and in particular Nasdaq, are doing. We’ve heard about, for example, how you select projects and balance risk. We learned how important understanding error is and about all the different possible cases you see for artificial intelligence. It’s a pretty healthy bit to cover in just one session. We appreciate your input on all those topics.

Doug Hamilton: Thanks, Sam. Thanks, Shervin. It’s been a pleasure speaking with you.

Sam Ransbotham: Please join us next time. We’ll talk with Paula Goldman, chief ethical and humane use officer at Salesforce.

Allison Ryder: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn’t start and stop with this podcast. That’s why we’ve created a group on LinkedIn, specifically for leaders like you. It’s called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We’ll put that link in the show notes, and we hope to see you there.

