Every new technology brings new risks, and artificial intelligence (AI) is no exception. But dealing with those threats is a broad responsibility that extends well beyond the credit union's IT staff.
As credit unions increasingly leverage AI for tasks such as fraud detection, loan decisioning, and much more, risk managers must take proactive measures to address compliance issues, bias, and cybersecurity threats.
Below, Chase Clelland, Senior Vice President for Risk Management at Grow Financial Federal Credit Union in Tampa, Florida, describes how the credit union is developing governance frameworks and procedures even as it explores and deploys new use cases. The goal: enhance operational efficiencies while addressing the vulnerabilities that can emerge if AI is not properly managed.
What AI applications are in use at your credit union?
We’re in the infancy of applying AI at our credit union. We’ve opened up use of Bard/Gemini, ChatGPT, QuillBot, and Copilot and other Microsoft products. We’re also evaluating several others, including AbleAI and Pienso.

What do you see as the biggest risks around artificial intelligence?

Exposure of personally identifiable information (PII) is the biggest risk. We’re seeing new ways emerge to use, for instance, ChatGPT with security controls on the back end that engage with a private cloud. That could be a great way to run large language models (LLMs) in a safe environment, but it’s only been out for a few months and it’s still unproven. As our credit union’s Risk Officer, that’s a focal point of my due diligence around LLMs.

So is cost modeling. Anytime you get involved with tokenization of data, those minuscule basis points on a dollar really add up.

And then there’s the risk of malicious attacks. There have already been hundreds of instances of malicious code discovered embedded in downloaded LLMs. The bad actors are importing themselves into these new technologies. It reminds me of when Google first emerged. AI is naturally going to affect the way we work, but we’ve got to make these new technologies safe as we embrace them. We’re trying to balance the scales of justice here, if you will.

What are you doing about those risks?

We’ve formed a cross-functional AI governance team, chartered a governance policy, and we’re now in the process of ideating use cases to bring these tools to a greater population at Grow Financial Federal Credit Union. Making sure that we don’t have account name structures or anything else that could help identify a member is at the crux of what we’re doing, and we’re making sure that whatever tools we use are either on-premises or in our cloud instance. Nothing goes external.

Right now, about 50 of our nearly 600 people have access to the tools and are experimenting with them. That gives us the opportunity for testing and learning. We also think of our governance charter as containing four pillars, with a group focusing on each: training; ideation and use cases; security and safety; and compliance and ethics.
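To illustrate the cost-modeling concern raised above, the back-of-envelope arithmetic for per-token LLM pricing looks roughly like this. All figures below (request volumes, token counts, and the per-1K-token price) are hypothetical assumptions for the sake of the example, not Grow Financial's numbers or any vendor's actual pricing:

```python
# Hypothetical back-of-envelope LLM cost model.
# All volumes and prices here are illustrative assumptions,
# not actual vendor pricing or credit union usage figures.

def monthly_llm_cost(requests_per_day: int, tokens_per_request: int,
                     price_per_1k_tokens: float, days: int = 30) -> float:
    """Estimate spend from per-token pricing over a given number of days."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1000 * price_per_1k_tokens

# A fraction of a cent per 1K tokens looks negligible per call...
per_call = monthly_llm_cost(1, 2000, 0.002, days=1)    # 0.004 dollars
# ...but at organizational scale, those small fractions add up.
at_scale = monthly_llm_cost(10_000, 2000, 0.002)       # 1200.0 dollars/month
print(f"${per_call:.4f} per call, ${at_scale:,.2f} per month")
```

The point of the sketch is simply that a price that rounds to zero for one request becomes a four-figure monthly line item once thousands of employees or automated workflows are calling the model daily.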
This article appeared originally on CreditUnions.com.