
As companies implement AI, others focus on AI security

(NewsNation) — As companies rush to implement cutting-edge artificial intelligence (AI) systems, others are rolling out tools to protect those same systems from themselves.

Earlier this week, Arthur — an AI monitoring platform — introduced a first-of-its-kind firewall for “large language models” (LLMs).

LLMs, a type of artificial intelligence that learns skills by analyzing massive amounts of text, have already been shown to boost productivity, but they also come with vulnerabilities.

When OpenAI released its artificial intelligence chatbot, ChatGPT, in November, users quickly realized it could generate inaccurate and sometimes toxic responses. Those issues may not matter for someone looking for a great soup recipe, but they make a big difference at the corporate level.

“In a business context, where there’s billions of dollars at stake, we better be very sure (the AI response) is accurate before we return it to the user,” said Arthur CEO and co-founder Adam Wenchel.

Rather than accept that mistakes sometimes happen, Arthur’s platform can intervene and block prompts where errors are likely. That intervention could be especially important in a healthcare or legal setting, where lives are on the line, Wenchel pointed out.

There are also privacy and data leak concerns.

As companies implement AI, they’ll have to use massive troves of data to train their systems. For example, a bank’s model may include investment data that contains sensitive information that neither the public, nor the company’s own employees, should be able to access.

Arthur’s firewall helps filter that data by analyzing the AI’s response before it’s sent to the user. Wenchel said the tool can flag responses that include Social Security numbers or individual health records and block the AI from presenting that information.
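The response-screening step described above can be illustrated with a minimal sketch. This is not Arthur’s actual implementation, and the pattern list, function name, and refusal message are all invented for illustration; it only shows the general idea of inspecting an LLM’s draft reply for sensitive data before it reaches the user.

```python
import re

# Hypothetical example patterns for sensitive data; a real firewall
# would use far more robust detection than a single regex.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # U.S. Social Security number format

def screen_response(response: str) -> str:
    """Return the LLM's response unchanged if it looks safe;
    otherwise substitute a refusal rather than leak the match."""
    if SSN_PATTERN.search(response):
        return "[blocked: response contained sensitive data]"
    return response
```

A harmless reply passes through untouched, while one containing a Social Security number is replaced before the user ever sees it.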

The monitoring platform is already being used by the Department of Defense and some of the top U.S. banks. Like the artificial intelligence systems it’s designed to protect, Arthur’s platform is continuously learning and improving as it analyzes other LLMs.

The additional security tools will come as welcome news for companies that have already grown more cautious about AI.

Just this week Samsung banned employees from using generative AI tools like ChatGPT after staff uploaded sensitive code to the platform, Bloomberg reported. Earlier in the year, some Wall Street banks did the same.

But those bans may end up being temporary. With additional AI security in place, Wenchel thinks businesses will feel more comfortable integrating the technology going forward.

“It’s a whole new world,” he said. “Companies that don’t normally move with incredible speed are moving pretty quickly, which is pretty amazing to see.”
