Global Salon: Are We Better Or Worse Off With AI?

Lou Steinberg is founder of CTM Insights, a research lab that focuses on solving cybersecurity’s toughest problems. He previously served as the chief technology officer of TD Ameritrade, in charge of technology innovation, risk management and cybersecurity. He discusses the business impact of generative AI.

Global Finance: What’s the low-hanging fruit for generative AI in corporate finance?

Lou Steinberg: Today’s big application is improving client interaction in various channels like chat and voice. We all hate IVR [interactive voice response] systems and dumb robotic chat agents. Making them better would improve client satisfaction.

Tomorrow’s opportunity is in personalization. In banking, we often promote “the next best offer.” We think you might be interested in this product. We can get way, way, way better at not only interacting with you to offer what you want, but personalizing the offer for you as well.

It’s the same data and the same back-end systems, but everybody will have their flavor of an offering, and we can customize it without having a tremendous amount of variation on the back end. That’s the real upside here.

There’s ultimately a third piece, which is when we get to open banking, data monetization, and things like that. Our ability to understand you and leverage your data, not only within a bank but across other financial products, starts to come into play. That’s today, tomorrow and the next day.

GF: Will generative AI result in a sea change for a private bank’s office of the CIO, in terms of client relationship management and product offering selection?

Steinberg: It’s certainly a shifting market. Private banking has always been about personalization. We build a relationship with you, and we can improve that by using new technologies. It used to be data and analytics for mass customization; now it’s AI doing personalization at scale. With data and analytics, we would find people like you and then do clustering. The CIO drove that conversation because the CIO knew how to mine data. With AI, we can build communities and clusters of one. So, we’ll know about you and your interests and what’s the next best offer for you.

This is shifting the center of gravity further toward the CIO because the CIO is the one who has the technology underpinnings to do this personalization.

The downside of using AI, particularly with generative AI, is that generative AI “generates” answers. It makes things up based on what it has learned in the past. It’s right there in the name. That’s good if you do creative work like writing stories or coding software. It isn’t good if you try to apply it to something where made-up answers are harmful.

Applying generative AI to private banking makes sense as a channel and for personalization. Still, we have to be careful about what they call hallucinations, which are the things it makes up.

GF: Will it change the internal composition of the industry?

Steinberg: The short answer is, the product organizations aren’t going to let go of their important role, which is defining where they’re headed with clients. What are they going to bring? But increasingly, how we get there is shifting to the tech teams because they’re the ones with the original data and analytics skills, and now AI skills. As the tech teams pull and integrate more sources of training data into their models, requirements will still come from the product side, but product is losing control over the how.

GF: What are the dangers of hallucinations?

Steinberg: The point of generative AI is to generate. If you ask a fact-based question, you might get something that sounds very real—but it’s really wrong. You could ask an AI chatbot, “What are my tax liabilities for 2022?” It will go out, understand all kinds of tax nuances, come back and give you an answer, but it might not be a correct answer. It might make up a statute in the tax law. And if you file a tax return based on it, I’m not sure the IRS [Internal Revenue Service] will give you a pass.

A New York law firm, which I won’t name, recently filed a case and cited legal precedents that they had researched with generative AI. It made them up. The judge got extremely angry and sanctioned them for making up fake cases. The same issue happens if you ask about banking products.

GF: Would the same hold true for software code?

Steinberg: Creativity in coding is fine; you want it to make up new software, or else you’re stealing someone else’s IP. The risk here is that you have to look at the training data. Most of generative AI’s training data comes from open source, which is full of bad code. Some of it is malware, and some of it is just poorly written.

If you teach your AI how to code leveraging GitHub and the GitHub repos are full of bad code, you’re essentially generating bad code. Of course, there are a lot of examples of good code on GitHub, and you can get some reasonably good code out of generative AI. But just like the legal or advice case, you have to verify it. You have to “trust but verify” because your AI might have been taught by bad coders and hackers and may generate code in their style.

And, of course, it’s not just code that gets generated. AI can generate realistic audio and video that is fake.

GF: How do companies fight this? How can people tell if it’s an AI-generated person on the phone, not the person they think they’re speaking with?

Steinberg: This scares the daylights out of me because the generated version might say things you would never say. What I’m terrified about, and we’ve already seen it in a fake phone call, is an AI-generated, synthesized voice that sounds like my kid and says, “Dad, I’ve been kidnapped.” Or it might sound like the CEO saying, “Hey, I’m at a conference and can’t talk. Please wire $10 million for this business transaction.” And it might not be a call; it might be on Zoom, except it’s not the CEO you see and are talking to; it’s an AI fake.

We’ll see a ton of fraud coming out of the space. How do you combat it? There are two different approaches people are taking to fight these “deep fakes.” Most people are trying to say, can I detect a badly generated version of you? The hands look funny. The eyes don’t blink right. The mouth is wrong. The problem with that is AI keeps getting more realistic.

A second approach is to say, I don’t want to detect the bad stuff. AI could generate an infinite number of bad versions of you. I want to detect whether this is a good version instead, because that’s a much easier problem. My research lab has taken the second approach of “how do we verify that it’s legit?” as opposed to “how do we detect bad?”

Unfortunately, most people confirming it’s legit focus on initial authentication. Do you know your password, and can you receive a texted PIN? Then they assume it stays legit.

The problem is bad actors can intercept the connection after it’s set up and insert their deepfake content. Instead of verifying the envelope, meaning the channel and the initial engagement setup, we’re continuously verifying your content as you speak. We’re asking, “Did these words actually come from this person?” We can sign spoken words and put them in the cloud, where anyone can check them in real time. And it works; we have prototypes of plugins for things like Zoom and voice apps that, as you are talking, detect if something changes your meaning, your words. Proving content is legit is how you really solve this problem.
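Steinberg doesn’t spell out the mechanics, and CTM’s implementation isn’t public, but the idea can be illustrated with a minimal sketch: hash each chunk of the transcript as it is spoken, sign the hash with the speaker’s private key, and let listeners re-derive the hash from the words they actually hear and check it against the published signature. Everything below, including the chunking, the key handling and the function names, is an assumption for illustration, not CTM’s protocol.

```python
# Illustrative sketch of continuous content signing; not CTM's actual
# protocol. Function names and key handling are hypothetical.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Speaker side: keys would be provisioned ahead of time in practice.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_chunk(transcript_chunk: str) -> bytes:
    """Hash a chunk of transcribed speech and sign it. In the scheme
    Steinberg describes, the signature would be published to the cloud
    so anyone can check it in real time."""
    digest = hashlib.sha256(transcript_chunk.encode("utf-8")).digest()
    return private_key.sign(digest)

def verify_chunk(transcript_chunk: str, signature: bytes) -> bool:
    """Listener side: re-derive the hash from the words actually heard.
    A deepfake that changes the words produces a different hash, so the
    published signature no longer verifies."""
    digest = hashlib.sha256(transcript_chunk.encode("utf-8")).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

sig = sign_chunk("Please wire the funds after the audit completes.")
assert verify_chunk("Please wire the funds after the audit completes.", sig)
assert not verify_chunk("Please wire $10 million now.", sig)
```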

GF: Has this technology led to the next generation of cyberattacks? Are they already happening?

Steinberg: Yes, they are happening at a small scale. We’ve seen impersonation attacks, as I mentioned. Broader attacks against data are limited in financial services today. We’re seeing those at [the US Department of Defense].

But data attacks are a big problem. My nightmare scenario when I was the CTO of TD Ameritrade was a new kind of ransomware that would change my databases. If you encrypt my files, I have backups, know which ones are encrypted, and know how to put them back. I don’t know what’s right or wrong if someone randomly changes data in my databases. All I can do is roll back the entire database, but if I do that, all of my transactions for that day or that period get lost and become collateral damage.

So, the nightmare was attacks against data integrity. There’s a variant to this, which is attacks against AI training data so your AI learns to make mistakes. I can teach your AI to make bad trades in, say, energy. You’ll never know why your model isn’t good at energy trading. You just know overall, you’re making money, and you’re happy. But the attacker will take the other side of your energy trades all day long. All because they biased your unprotected training data.
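Steinberg doesn’t prescribe a defense here, but a common way to catch this kind of silent tampering, whether of database rows or training examples, is to store a keyed integrity tag (an HMAC) alongside each record, with the key held outside the data store so an attacker who alters records can’t recompute valid tags. A minimal sketch, with hypothetical names and data:

```python
# Minimal sketch of per-record integrity tags. Assumes the key lives in
# a separate secrets manager, out of reach of whoever alters the data.
import hashlib
import hmac

INTEGRITY_KEY = b"fetched-from-a-separate-secrets-manager"  # hypothetical

def tag_record(record: str) -> str:
    """Compute a keyed tag when the record is written."""
    return hmac.new(INTEGRITY_KEY, record.encode("utf-8"), hashlib.sha256).hexdigest()

def record_is_intact(record: str, stored_tag: str) -> bool:
    """Recompute and compare before trusting the record."""
    return hmac.compare_digest(tag_record(record), stored_tag)

# A trade record, or equally a training example, is tagged on write...
row = "2023-06-01,energy,BUY,10000"
tag = tag_record(row)

# ...and checked on read. A silently altered row fails verification.
assert record_is_intact(row, tag)
assert not record_is_intact("2023-06-01,energy,SELL,10000", tag)
```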

And, there have been direct attacks against data. We’ve seen attacks against voting records in the last US presidential election. On a small scale, voter registration lists were altered. The governor of Florida was affected; somebody hacked a database and tried to invalidate his right to vote by changing his address. They fixed it. We’ve seen similar attacks against data that controls things in the physical world, such as water-treatment plants. I haven’t seen many data integrity attacks in financial services, but we know they’re coming.
