Hosted by top 5 banking and fintech influencer, Jim Marous, Banking Transformed highlights the challenges facing the banking industry. Featuring some of the top minds in business, this podcast explores how financial institutions can prepare for the future of banking.
The Promise and Peril of ChatGPT in Banking Part 1
With the rise of generative AI (ChatGPT) and its increasing impact on various sectors, including banking, it's crucial to examine how these advancements are transforming the landscape, both for better and for worse.
From exploring the impact of generative AI on customer experience and engagement to discussing the ethical considerations and regulatory challenges, it is more important than ever to understand the transformative power of AI in banking.
My guest for part one of this important Banking Transformed podcast interview on generative AI and ChatGPT in banking is Brian Roemmele, President of Multiplex. Brian equips listeners with the knowledge and understanding needed to embrace the promise of generative AI and ChatGPT while mitigating associated risks.
From unveiling the secrets behind cutting-edge AI models to discussing the ethical considerations surrounding their usage, our guest provides deep insight into the ever-evolving landscape of generative AI.
So, what is it that I do? I'm a researcher. I absolutely love technology, and I also love the convergence of humanity and technology, trying to understand better ways to piece it together so that we can better coexist with the technology that we're creating, particularly AI.
And so, that has led me into all sorts of computer occupations. I started out soldering hardware together as a kid, then I started designing software. Some of the very earliest software I built we would now call expert systems.
But back then, I had the fantasy that I was creating the beginnings of AI. Expert systems had great domain knowledge and would appear to be very smart in the very narrow domains that they were expert in.
And so, that was the Commodore 64 era so it's ancient times, and over the years I've just grown with the technology. A lot of times I've put it on a shelf and did not bother my mental capacity with it because it stalled. Technology like this tends to stall at times and it requires new thinking.
And that led us up to Alexa and Siri. And a lot of people got their first taste of what they believed was AI, but it was really not. It was a proto-AI, not that fascinating, because you really needed to know what to ask for, it couldn't figure out what you wanted.
So, that's when I started really diving into my garage lab and building what I believe is the only thing on the planet like this. I call it the intelligence amplifier. And this is a concept of AI that is designed to amplify human intelligence.
I actually turn AI on its ear: IA, intelligence amplification, as a way to pay homage to the fact that the intelligence is generated by the human, and it's just being amplified by the machine. And so, that's where we are today.
But the problem is, humans have a very nebulous definition of what intelligence is. We have to see some kind of novelty or surprise coming from the output, something that was not necessarily expected that a machine would generate.
And by the time we saw GPT-3 released, I think people were kind of like, "Yeah, that's good, but it's not really that good." When ChatGPT was released and there was an experiment by a mass population, we started seeing incredible outputs that people did not expect. They were shocked and awed, is really what took place.
And I wouldn't just say the average person, I would say across the entire technical world. It was really ground zero for a lot of people. It was a little bit of a delay for me because I'd been working with these models from the moment they came out, from the inception of OpenAI when Elon was with the company.
Exactly. And so, I've always trained people to be what we call prompt engineers. And a lot of people feel like, oh, that's like doing a Google search, like somebody who just puts in search terms.
The people who I've found, and corporations are finding, they're hiring these folks as soon as they are trained, are people with linguistics backgrounds and backgrounds in even poetry, ancient history, philosophy, psychology.
Even with the intelligence amplifiers that I have around me, devices that have been following me for the better part of 20 years, with about 10 years of my context and life, they cannot predict everything that I'm going to want, and they certainly can't predict what my question is going to be in a lot of cases.
In fact, if you see the press as we're recording about ChatGPT today, there is analysis by Stanford University that GPT-4 has gotten less capable and less well received in the questions that it's answering.
So, AI is a constant moving target and they're constantly changing it. And with those changes in a model, you have to change the way you prompt the system. And if you don't do that, you're going to get something entirely different.
Is the difference between 3.5 and 4, where you said it actually dropped in its ability to answer questions, because it changed the way it interpreted what we were asking, or some other reason?
There's a lot of questions about that. One of the primary fundamental problems with, let's call it, cloud-based AI (I'm a proponent of open-source local AI for companies and for individuals) is that cloud-based AI has to be all things to all people.
And that's what's going on with the large language models at major corporations: they have teams doing things called alignment and safety, and in fact, those teams are now larger than the groups that are working on actually advancing the AI.
So, the alignment and safety teams are there to try to make AI safe and aligned to human values. It sounds really great, and one could go down an Orwellian hole and call that sort of thing doublespeak. I'm not going to say that at this moment, but one can speculate.
I've called it an AI lobotomy in a sense. And this is being brought about through a lot of mechanisms. One is most definitely political, another one is psychological, and another one is fear of regulation.
So, what companies like OpenAI and Google are trying to do with AI today is limit it from even answering questions that somebody could pose to a search engine. And to me, that's sort of ridiculous and a fool's errand.
Now, if it's on the dark web, which this AI was not trained on, and it produces results coming from the dark web, which is not really wholesome for society, I absolutely agree with that. But by doing all this work to make AI please everybody, they're ultimately going to please nobody. And that's where OpenAI is.
And if you're sharing corporate data, there's a good chance that that corporate data, if it's unique enough, will become part of a future model. No matter what the documentation says, there's no way of completely stopping that.
So, in a corporate setting, in a financial setting, in a banking setting, I am 100% about developing your own local AI. Now, it's not going to be as powerful day one as ChatGPT, but it will be trained on your data and only your company will have access to it if you don't put it in any network or any cloud.
So, Brian, when I'm interacting with ChatGPT and I'm asking the questions, is it learning more about what I'm looking for so that I can maybe get a little bit shorter on my request? So, right now, I ask it all kinds of things, but I ask it in very compartmentalized ways to get the results I want.
At this point, there's one AI system with a context window of 1 million tokens, which is probably equivalent to 800,000 words. It's a very, very large context window. In fact, The Great Gatsby was put into an AI model called Claude, the Claude 100K version, which has 100,000 tokens in its context window.
And it was asked to write the next chapter of The Great Gatsby. I tested on it all the time, because The Great Gatsby is small enough, and there's enough space in memory, that you can have it give you a sort of creative output based on the characters and its understanding of the storyline.
So, it's really interesting in that creative sense. And again, I call this creative and we can go down that path on what creativity is, what consciousness is, what intelligence is. All these things are going to have to be redefined or defined more accurately.
But in the case of us just prompting ChatGPT, the context window is about 5,000 to 8,000 tokens, according to how it's being used. So, within that context window, that's as much memory as you have before amnesia starts taking place and it starts forgetting the original elements of the prompt.
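The amnesia effect described here can be sketched in a few lines of Python: when the running conversation exceeds the model's token budget, the oldest turns are dropped first, so the original elements of the prompt are forgotten. The 4,096-token budget and the rough four-characters-per-token estimate are illustrative assumptions, not any provider's actual accounting.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text (assumption).
    return max(1, len(text) // 4)

def trim_to_context(turns: list[str], budget: int = 4096) -> list[str]:
    """Keep the most recent turns that fit in the token budget.

    Older turns are dropped first: the 'amnesia' effect, where the
    model forgets the earliest parts of the conversation.
    """
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):          # walk from newest to oldest
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break                         # oldest turns fall off here
        kept.append(turn)
        used += cost
    return list(reversed(kept))           # restore chronological order

# A long conversation of 100 equally sized turns, for illustration.
conversation = ["turn %d: %s" % (i, "x" * 400) for i in range(100)]
window = trim_to_context(conversation, budget=4096)
print(len(window))  # only the most recent turns survive
```

Any real chat front end does something similar (often summarizing rather than discarding), which is why early details of a long session quietly disappear.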
We train people how to super prompt, to become very powerful, not only to maybe save your job and make your job 10X more valuable (because you now are standing on the shoulders of somebody stronger), but also maybe even to become a prompt engineer out and about in your career. And the people qualified for that are the least likely candidates, as I said before.
So, that's today. What we do with local AI, and we do this with an open-source free product called GPT4All, is we create a local vector database and we feed our questions and answers back into the vector database. So, it remembers that you actually asked that question before and remembers the context of that question.
So, slowly but surely, it remembers who you are. And over time, if you feed all of your email, all your communications, all the podcasts you ever did, text, speech-to-text, it will have a really good idea about what you think about the world and how you may answer a question. And this is part of why intelligence amplification's so powerful.
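This local-memory loop, feeding each question-and-answer pair back into a vector database so the system slowly "remembers who you are," can be sketched with nothing but Python's standard library. Real deployments use learned embeddings; the bag-of-words vectors and cosine similarity below are illustrative stand-ins, and the sample questions and answers are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a learned embedding: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Toy vector store: every Q&A pair is fed back in, then retrieved
    by similarity so later prompts can carry earlier context."""

    def __init__(self) -> None:
        self.rows: list[tuple[str, str, Counter]] = []

    def add(self, question: str, answer: str) -> None:
        self.rows.append((question, answer, embed(question + " " + answer)))

    def recall(self, query: str, k: int = 2) -> list[tuple[str, str]]:
        qv = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(qv, r[2]), reverse=True)
        return [(q, a) for q, a, _ in ranked[:k]]

store = MemoryStore()
store.add("What is my mortgage rate?", "Your fixed rate is 4.1 percent.")
store.add("When does my CD mature?", "Your certificate matures in June.")
store.add("What podcasts do I host?", "You host a weekly banking podcast.")

# A later question pulls back the most relevant earlier exchange.
print(store.recall("remind me about my mortgage", k=1))
```

The retrieved pairs would be prepended to the next prompt, which is how the assistant appears to remember earlier sessions despite the model itself being stateless.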
So, the final part of this is what happens to your question when it goes up to these models. It's very nebulous. You do sign off that they will train their model based on your question-answer pairs. This is called fine-tuning.
Their particular training was taking essentially everything that was on the internet and then fine tuning it on the question-answer pairs that are found on Twitter, that are found on Facebook, that are found on Reddit.
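The question-answer-pair fine-tuning described here is usually prepared as one JSON record per line (JSONL). The chat-style field names below follow a common convention, but any given provider's exact schema may differ, and the two sample pairs are invented for illustration.

```python
import json

# Hypothetical Q&A pairs of the kind harvested from forums or support logs.
qa_pairs = [
    ("What is a routing number?",
     "A nine-digit code that identifies a bank in U.S. transfers."),
    ("How do I dispute a charge?",
     "Contact your card issuer within 60 days of the statement."),
]

def to_jsonl(pairs: list[tuple[str, str]]) -> str:
    """Serialize question-answer pairs into chat-style JSONL records,
    one training example per line."""
    lines = []
    for question, answer in pairs:
        record = {"messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_jsonl(qa_pairs)
print(jsonl.splitlines()[0])
```

A fine-tuning job then consumes thousands or millions of such lines, which is why any sufficiently unique text you submit can plausibly end up shaping a future model.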
Like, we have an insurance client where we took all of their data. Everything that was ever generated by that company was digitized at one point, and we actually extended that digitization. And that's part of the training that we're doing on a model.
Now, that model is not baked yet, meaning it's not fully trained on GPUs; it's on a vector database. But even there, they're able to ask questions about the company that no single person or even group could have answered, because it now knows everything about that company.
Needless to say, I think anybody listening to me realizes that that model should never be on a cloud anywhere. It has to be cut off from the world, because anybody able to hack it could severely jeopardize that company.
And so, I'm not here to scare people, but this is the direction it's going in. And so, we're at a fork in the road. We've reached, I believe, peak cloud for AI, before we start realizing our data is so valuable.
The persona shapes the way that question's going to be dealt with. I like using a university professor persona, and I like creating a motif where you have to make a presentation to the UN about this discovery.
It forces the large language model to pick a neuron passageway through its neuronal connections that are much more constrained, much more laser targeted, and the elucidations are near the genius level.
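This persona-plus-motif "super prompt" is, mechanically, just disciplined string assembly. The template and wording below are an invented illustration of the pattern, not Brian's actual training material.

```python
def super_prompt(persona: str, motif: str, question: str) -> str:
    """Compose a constrained prompt: persona, then motif, then task.

    The persona and motif narrow the model's path so the answer is
    far more targeted than a bare question would produce.
    """
    return (
        f"You are {persona}.\n"
        f"Context: {motif}\n"
        f"Task: {question}\n"
        "Answer with precise, well-structured reasoning."
    )

prompt = super_prompt(
    persona="a university professor of monetary economics",
    motif="you must present this discovery to the UN General Assembly",
    question="Explain how generative AI could change retail banking.",
)
print(prompt)
```

Swapping the persona or motif line while keeping the question fixed is a quick way to see how strongly these framing constraints steer the output.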
In other words, let's say it was a financial institution and we want to build a database on an individual customer level on what they've asked, what the answers were, the dynamics of that relationship.
But the bespoke way that a financial organization could interact with their clients based upon enveloping all of the customer touching experiences and maybe real-world experiences that they might garner from that customer.
Again, I would tread carefully, but people have a public persona. I would say that if you do it with care, and you do it with dignity and permission, by offering a value to the client, by understanding more of the dimension of the milestones in life that that client is going through, your ability to finely tailor an output is phenomenal.
Yeah. And because if you look at what frustrates customers right now, it's having to reinform the financial institution, or the airline, or whatever it may be about what happened in the past because it's not easily accessible in today's world.
And to your point, the value transfer makes it so there's less concern about privacy and security. Not that it doesn't matter, it still matters, but the concern level goes down because the value proposition has gone up.
Absolutely. And I would say every client is going to be minimum 10X more valuable to an organization when AI is being utilized correctly. And I'm not just speaking of things that we know, but the things that we don't know.
I would imagine, let's look at it from this point of view. Let's imagine that within a corporation, every client has their own AI model that is distinctly their own, and that we use a grand corporate AI to poll those models to try to find situations where the company can offer much higher value.
My results in doing this, and we've been doing this quite a while, we've probably been doing it longer than anybody. And we do it as a crack team. We go in there, we establish models within corporations. We don't have business cards that even say that we do this.
And we go in there and we just basically look at what they've been doing within the cloud. We take every interaction from customer service that they have in the cloud (and we know that the typical cloud providers that are out there) and we put it in these models. And within hours, we're getting insights that nobody has seen.
And again, doing it the right way. The wrong way is kind of the way that technology's been used thus far: the opaque algorithm that Google uses, or that Netflix uses, or Amazon. We're past those days.
I think as people realize the power and the potential dangers of AI, the more transparent and the more inclusive you are with that client, "Hey, this is your AI, we're building it for you. This is going to know everything that you could possibly want to know about your financial profile and maybe just your phase in life profile intermixed with that."
And getting the client on board with that is going to be the trailblazing companies. And I don't care how old the company is, it's whether or not they take this mission. And frankly, it's a tough mission.
And I'm like, "They're going to know at some point. So, let's just open the windows, turn on the lights, let them see it, let them have access to it, and if they don't want it, turn it off. And then they just have a human operator that they interact with."
Or ask questions that can then make it so that the solution that's proposed is a more overarching, better answer than would come just from the data that is currently under the roof of the financial institution?
Absolutely, Jim. This is going to be a phenomenal aspect of it. The interactivity of building a model that is really concisely understanding where that person is. There's no two customers that are exactly alike.
I mean, I have one client, an insurance company we're working with, that is similar to what we're talking about. They are proposing using voice-enabled dial-out systems to call the client, or to text the client, when they see something that they think is valuable for the individual.
And again, the dialogues that we need to create in this AI can't be corporate speak, it needs to be much closer. And the way we can get away with that in a corporate setting is that this is your personal AI.
So, it's going to be more like something on your shoulder saying, "I think it's a good time that you consider dropping this particular card because it has a high interest rate and we can cut your payments down by $250 a month if we move your funds across this as a refinance." Things like that. Those kind of things.
We know this, Jim, we know people don't like talking about medical problems and financial problems with anybody. It's like the hardest thing. Even with their doctor, it's, "Yeah, I got this." And they're like, "Well, no." And definitely medical and financial problems.
And here's what we do know, the tests are already 100% clear. People are more willing to disclose psychological issues and medical issues to an AI system that is dialoguing and interacting with them than to any human being, to a high percentile. It's like to the 78th percentile. And this has been shown in three studies now, across different universities.
But the reality is, if you build more and more trust ... I mean, it's kind of like the trust I have with Amazon and everybody has with Amazon. We pay Amazon every year to use our data to our benefit, and it makes our buying decisions easier.
Well, in the same case, in financial services or any industry, if it learns over time and it asks me questions that make it perform better, eventually, I'm going to come to the realization that I want my financial institution or my generative AI to understand that I have deposit accounts elsewhere, I have credit accounts elsewhere, and what my internal challenges are emotionally with, maybe, the way the market's performing.
Great question. I would say that it's a double-edged sword. I think that what happens, from a regulatory standpoint, is that we have overly broad, potentially damaging regulation that could make the United States, or any other country that subscribes to overregulation, fully incapable of competing on a grand scale with countries that are taking a more decidedly metered approach.
How can we make this better? I think financial institutions could lead in this. I believe that if they lead with open, transparent AI usage, they will become the gold standard, which they should be, in how this technology can be deployed. And not just in financial services, but in every other aspect.
And I believe that industry, when doing it the right way, actually does a better job than regulators. And the problem is there's a conservatism that comes from the financial industry that we know about, right, Jim? And we're always challenged with it.
When I was doing a lot of consulting in banking and payments, I said, "Apple Pay's coming, guys," three years before it came. Nobody would listen. I said, "This is your dog in the race. You can actually lead by shaping it the way you want." And the conservatism held it back.
And I'm not saying this is going to bypass personal AI that's going to be making financial decisions. I'm saying there's a good chance that you're going to have multiple AIs that you're going to interact with, and maybe just your AI will interact with somebody else's AI.
But if a financial institution could say, "Okay, we're going to embrace artificial intelligence to the betterment of our clients. And here is our declaration, here's how we're going to use the data. The data is your data. It is not our data. You can take your data back at any moment, at any time, and we don't have your data any longer."
So, take the higher ground, let people have control and ownership of their data, but give it to them in a way that is so valuable, so delightful to interact with like any other experience. Make it a delightful experience and so valuable that they would never want to leave you.
So, instead of using the stick, keep the carrot and always just use the carrot. And not only will that forestall draconian regulation, it would allow that company to be a beaming leader in an industry that's considered maybe a laggard in technology.
But where else can it be applied better than in the financial realm, where a lot of people have their finances in, what I would say, a very disordered fashion? They're all over the place. Even for the most ordered person, the studies over the years show, and I'm sure you know this from being in it so long, that people's finances are all over the place and there needs to be a consolidation.
So, do you see then the future of ChatGPT and generative AI with regard to customer experience being something that becomes an evolving, let's call it, brochureware or content that becomes very specific to that individual, where, as the learning process goes on, it will point you in a direction that is best for you, in more of a consultative perspective?
Absolutely. And although it will not always be maximized for the highest profit to be garnered out of each individual, you're best to let this device, this software, this system normalize the relationship.
So, what happens is net over time, and I can show this with some of my research and this statistically, if you do right by that client, they're going to make much more money. The company is going to make much more money, and their cost to maintain a customer is going to go through the floor.
Because they're not going to need to acquire as many new customers as ferociously as they do today; retention becomes a struggle of just a percentage point or two with the top players. And this could mean major double-digit percentage differences in capability.
So, with that in mind, and we're still so early in this whole process of evolution, are you familiar with any notable success stories of banks or credit unions using generative AI and ChatGPT to improve maybe the customer experience, the engagement level, or even innovation?
Jim, this is wonderful, and it's kind of depressing for me, because a lot of companies have barred the use of AI within the company, specifically ChatGPT, for the valid reasons we talked about earlier. Are you giving out private information? What are the legal limits? Things of that nature.
That, unfortunately, throws the baby out with the bathwater. There are a lot of executives who contact me directly, and they can't quite get to the C level to open up the prospect of using AI outside of known vendors.
The known vendors are taking their time utilizing AI; these are the cloud providers. And of course, they are proposing AI from their perspective: as a customer service tool, maybe as a way to replace employees.
And that's what we do at promptengineer.university, is we train people to be empowered so that they can actually go out into the world and go to their managers, go to their executives or executives go to other executives within their organizations and say, "Look what I discovered."
And this is how I equate it, and we're old enough to remember some of this. The Apple II became popular for one primary reason: a thing called a spreadsheet. Later on it became Lotus, but we had Multiplan and we had all these different sorts of spreadsheets.
Now, the very first spreadsheet brought into companies was a guy hauling an Apple under one arm and a monitor in the other arm, and maybe a software box on his head, doing his spreadsheet work in the company and taking his computer back home.
The data processing departments of major corporations back when the Apple was taking off, were absolutely rejecting the use of personal computers in the company. Everything had to go through the mainframe.
And they saw the spreadsheet as a joke. They said, "Why would you want a spreadsheet? We'll do a job run on our COBOL system and we'll get back to you in about six days." Whereas a guy or a gal could play with the numbers and see the differences.
And luckily, a lot of very wise companies said, "Oh, what the heck, let Joe or Lisa bring their computer in. As long as we're not paying for it, they can play with their spreadsheets." That fundamentally changed every single corporation in the world.
This is a thousand times more powerful than the spreadsheet. And did we need to fire people who are accountants when the spreadsheet came? Did an executive say, "Oh, I read an article that spreadsheets are going to make accountants redundant. Let's fire them all."
The positive way to do this, and the way your stock is going to take off, is when the world realizes you not only did not fire anybody, you hired more people, and they now are an army empowered with the corporate data, the AI, and the ability to use it in a safe, effective manner that is not dangerous to anybody.
Whereas if the story is, "I temporarily cut 5,000 jobs because AI replaced them," good luck with that, because one of your competitors is going to tell the other story we just talked about, the one about empowering people.
So, AI is a moment of human empowerment if used correctly. And that's one of my missions when I go into a company. In fact, one of the agreements that most companies make with me is that they do not fire a single person because of what we've been training on AI.
And they make that commitment. It's not legally binding. But at this point, dozens and dozens of companies have all hired people to utilize AI and to train their staff even better, beyond our training.
Brian, thank you very much. That was part one of our interview with Brian Roemmele around ChatGPT, generative AI, and AI in general. Be sure to catch the second part of this interview on the next Banking Transformed podcast.
If you enjoyed today's interview, please take some time to give our show a positive review. Also, be sure to catch my recent articles on The Financial Brand and check out the research we're doing for the Digital Banking Report.