
AI deep fakes, mistakes, and biases may be unavoidable, but controllable

Nov. 13, 2023 | Hi-network.com

As generative AI platforms such as ChatGPT, DALL-E 2, and AlphaCode barrel ahead at a breakneck pace, keeping the technology from hallucinating and spewing erroneous or offensive responses is nearly impossible.

Especially as AI tools get better by the day at mimicking natural language, it will soon be impossible to discern fake results from real ones, prompting companies to set up "guardrails" against the worst outcomes, whether they be accidental or intentional efforts by bad actors.

AI industry experts speaking at the MIT Technology Review's EmTech Digital conference this week weighed in on how generative AI companies are dealing with a variety of ethical and practical hurdles even as they push ahead on developing the next generation of the technology.

"This is a problem in general with technologies," said Margaret Mitchell, chief ethics scientist at machine learning app vendor Hugging Face. "It can be developed for really positive uses and then also be used for negative, problematic, or malicious uses; that's called dual use. I don't know that there's a way to have any sort of guarantee any technology you put out won't have dual use.

"But I do think it's important to try to minimize it as much as possible," she added.

Generative AI relies on large language models (LLMs), a type of machine learning technology that uses algorithms to generate responses to user prompts or queries. The LLMs draw on massive troves of information from databases or directly from the Internet, and are governed by millions or even hundreds of billions of parameters that determine how that information is shaped into responses.
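
To make the prompt-to-response flow concrete, here is a minimal sketch using the open-source Hugging Face transformers library with a small publicly available model (gpt2 is used purely to keep the example lightweight; it is far smaller than the models discussed in this article). The sampling settings shown are assumptions for illustration, not the settings any vendor actually uses.

```python
# Minimal illustration of prompting a language model: the model's learned
# parameters map a text prompt to a probability distribution over the next
# tokens, which is sampled to produce a response.
from transformers import pipeline

# "gpt2" is a small open model chosen only to keep the example lightweight;
# production LLMs have billions to hundreds of billions of parameters.
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models generate text by"
result = generator(
    prompt,
    max_new_tokens=40,   # cap the length of the generated continuation
    do_sample=True,      # sample from the distribution rather than always taking the top token
    temperature=0.7,     # lower values make the output more deterministic
)
print(result[0]["generated_text"])
```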

The key to ensuring responsible research is robust documentation of LLMs: how their datasets were developed, why the models were created, and watermarks that identify content created by a computer model. Even then, problems are likely to emerge.

"In many ways, we cannot guarantee that these models will not produce toxic speech, [and] in some cases reinforce biases in the data they digested," said Joelle Pineau, a vice president of AI research at Meta AI. "We believe more research is necessary...for those models."

For generative AI developers, there's a tradeoff between legitimate safety concerns and transparency for crowdsourcing development, according to Pineau. Meta AI, the research arm of Meta Platforms (formerly Facebook), won't release some of the LLMs it creates for commercial use because it cannot guarantee there aren't baked-in biases, toxic speech, or otherwise errant content. But it would allow them to be used for research to build trust, allow other researchers and application developers to know "what's under the hood," and help speed innovation.

Generative AI has been shown to have "baked-in biases," meaning that when it is used for the discovery, screening, interviewing, and hiring of candidates, it can favor people based on race or gender. As a result, states, municipalities, and even nations are eyeing restrictions on the use of AI-based bots to find, interview, and hire job candidates.

Meta faces the same issues AI developers experience: keeping sensitive data private, determining whether an LLM can be misused in an obvious way, and trying to ensure the technology will be unbiased.

"Sometimes we start a project and intend it to be [open sourced] at the end of it; we use a particular data set, and then we find at the end of the process that's not a dataset we should be using," Pineau said. "It's not responsible for whatever reasons - whether it's copyright issues or other things."

LLMs can be fine-tuned with specific datasets and taught to provide more customized responses for specific enterprise uses, such as customer-support chatbots or medical research, by feeding in descriptions of the task or by prompting the AI tool with questions and best answers.

For example, by including electronic health record information and clinical drug trial information in an LLM, physicians can ask a chatbot such as ChatGPT to provide evidence-based recommendations for patient care.
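
As an illustration of the prompting approach described above, the sketch below combines a task description, a few retrieved context snippets, and a physician's question into a single prompt before sending it to a model. The llm_complete stub, the snippet text, and the question are all hypothetical placeholders, not real clinical data or any vendor's API.

```python
# Hypothetical sketch: assemble a task description, retrieved context
# (e.g., EHR excerpts or trial summaries), and a user question into one prompt.

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to any chat/completion API or local model."""
    return "(model response would appear here)"

task = (
    "You are a clinical assistant. Answer using only the context provided, "
    "and cite which snippet supports each recommendation."
)

context_snippets = [
    "[EHR] Patient has type 2 diabetes; HbA1c 8.2%; on metformin 1000 mg daily.",
    "[Trial] Study X reported improved glycemic control when adding drug Y to metformin.",
]

question = "What evidence-based options exist to improve this patient's glycemic control?"

prompt = "\n\n".join(
    [task, "Context:\n" + "\n".join(context_snippets), "Question: " + question]
)
print(llm_complete(prompt))
```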

What a generative AI model spits out, however, is only as good as the software and data behind it, and the tools can be used to produce "deep fake" images and video - that is, bad actors can manipulate real photos and footage to produce realistic fakes.

Microsoft's Copilot move

In March, Microsoft released Copilot, a chatbot based on ChatGPT that's embedded as an assistant in Office 365 business applications. It's called Copilot because it was never intended to perform unattended or unreviewed work, and it offers references for its work, according to Jared Spataro, corporate vice president for modern work and business applications at Microsoft.

"Especially on specifics like numbers, when Copilot spits out 'You grew 77% year-over-year in this category,' it will give you a reference: this is from this report," Spataro said. "If you don't see a reference, you'll be very sure it's making something up.

[Image: Jared Spataro, Microsoft (MIT Technology Review)]

"What we're trying to teach people, this thing is good, but just as people make mistakes you should think right now of this as a very talented, junior employee you don't trust," he said. "It does interesting work, but you'll have to trust, but verify."

Even when generative AI isn't perfect, it does help with creativity, research, and automating mundane tasks, said Spataro, who spoke at the conference via remote video. When asked by an audience member how he could prove he was real and not an AI-generated deep fake, Spataro admitted he couldn't.

Watermarks to the rescue?

One way to combat fake news reports, images and video is to include in the metadata what are essentially watermarks, indicating the source of the data. Bill Marino, a principal product manager at generative AI start-up Stability AI, said his company will soon be integrating technology from the Coalition for Content Provenance and Authenticity (C2PA) into its generative AI models.

C2PA is an industry association founded in February 2021 by Adobe and other technology companies with the mission of developing standards for identifying metadata that certifies the source and provenance of digital content, including content produced by generative AI.

Stability AI last month released StableLM, an open-source alternative to ChatGPT. C2PA's metadata standard will be included in every image that comes out of Stability's APIs, "and that provenance data in the metadata is going to help online audiences understand whether or not they feel comfortable trusting a piece of content they encounter online," Marino said.

"If you encounter the notorious photo of the Pope in Balenciaga, it would be great if that came with metadata you could inspect that tells you it was generated with AI," Marino said.

Stability AI trains LLMs for various use cases and then provides them as open-source software for free (it may monetize its APIs in the future). The LLMs can then be adapted through prompt engineering for more specific purposes.

Marino said the risk associated with deep fakes, malware, and malicious content is "utterly unacceptable. I joined Stability, in part, to really stomp these out. I think the onus is on us to do that, especially as we shift our attention toward enterprise customers - a lot of these risks are non-starters."

Like others at the MIT conference, Marino believes the future of generative AI is in relatively small LLMs that can be more agile, faster with responses, and tailored for specific business or industry uses. The time of massive LLMs with hundreds of billions of parameters won't last.

Stability AI is just one of hundreds of generative AI start-ups using LLMs to create industry-specific chatbots and other technologies to assist in a myriad of tasks. Generative AI is already being used to produce marketing materials and ad campaigns more efficiently by handling manual or repetitive tasks, such as culling through emails or summarizing online chat meetings or large documents.

As with any powerful technology, generative AI can create software for a myriad of purposes, both good and bad. It can turn non-techies into application developers, for example, or be trained to test an organization's network defenses and then gain access to sensitive information. Or it could be used for workload-oriented attacks, to exploit API vulnerabilities, or to upload malware to systems.

Hugging Face's Mitchell credited Meta for gating its release of LLaMA (Large Language Model Meta AI) in February because that forces anyone seeking to use the technology to fill out an online form with verifiable credentials. (LLaMA is a massive foundational LLM with 65 billion parameters.)
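
Gated distribution of this kind has become a common pattern on model hubs: the weights download only after the user has accepted terms on the model page and authenticated with credentials tied to a named account. The sketch below shows what that flow typically looks like with the huggingface_hub library; the repository name and the HF_TOKEN environment variable are placeholders, and this illustrates the general mechanism rather than the specific form Meta used for the original LLaMA release.

```python
# Illustration of gated model access: the download succeeds only if this
# account has accepted the model's terms and presents a valid access token.
import os
from huggingface_hub import login, snapshot_download

login(token=os.environ["HF_TOKEN"])  # credentials tied to a named account

local_dir = snapshot_download(
    repo_id="example-org/gated-llm",  # placeholder; a real gated repo requires
                                      # accepting its license on the model page first
)
print("Weights downloaded to:", local_dir)
```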

"This now puts in things like accountability," Mitchell said. "This incentivizes good behavior, because if you're not anonymous, you're more likely not to use it for malicious uses. This is something Hugging Face is also working on.

"So, coming up with some of these guardrails or mechanisms that somewhat constrain how the technology can be used and who it can be used by is an important direction to go," she added.

Democratization of generative AI models can also prevent just one or two companies, such as Microsoft and Google, from having a concentration of power in which the priorities of their people - or the mistakes of those who created the models - are embedded in the software.

"If those models are deployed worldwide, then one single error or bias is now an international, worldwide error," Michell said. "...Diversity ensures one system's weaknesses isn't what everyone experiences. You have different weaknesses and strengths in different kinds of systems."

Tags: Artificial Intelligence, Technology Industry, Research and Development, Emerging Technology
