
Nvidia teams up with Snowflake for large language model AI

June 27, 2023 Hi-network.com

At Snowflake's user conference in Las Vegas on Monday, Snowflake Summit 2023, the cloud database maker announced a partnership with chip giant Nvidia to combine forces on processing so-called foundation models for AI.

Under the arrangement, Snowflake customers will be able to rent cloud GPU capacity in Snowflake's data warehouse installations, and they'll use that capacity to refine neural networks with Nvidia's NeMo framework, introduced last fall. Foundation models are very large neural networks, such as large language models, that are customarily "pre-trained" -- that is, they have already been developed to a level of capability.

Also: AI has the potential to automate 40% of the average work day

A customer will use its own data, held in Snowflake's data warehouse, to develop a custom version of a NeMo foundation model suited to its needs.
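The companies didn't publish implementation details, but the workflow described here -- pulling a customer's own records out of Snowflake and staging them as training examples for a NeMo fine-tuning run -- might look roughly like the following sketch. The account, table, and column names are hypothetical, and the announced integration runs inside Snowflake's platform rather than through an external script like this one.

```python
# Hypothetical sketch: export customer text from Snowflake and write it as JSONL,
# the kind of prompt/completion dataset a NeMo fine-tuning job typically consumes.
# Connection parameters, table, and column names are placeholders.
import json
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="my_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="CUSTOMER_DB",
    schema="SUPPORT",
)

cur = conn.cursor()
# Pull question/answer pairs that will serve as fine-tuning examples.
cur.execute("SELECT question, answer FROM support_tickets LIMIT 10000")
rows = cur.fetchall()
cur.close()
conn.close()

# One JSON object per line -- a common layout for instruction-tuning data.
with open("finetune_data.jsonl", "w") as f:
    for question, answer in rows:
        f.write(json.dumps({"input": question, "output": answer}) + "\n")

# A NeMo fine-tuning run (for example, one of the framework's example training
# scripts) would then point at finetune_data.jsonl and a pretrained foundation model.
```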

"It's a very natural combination for the two companies," said Nvidia's vice president of enterprise computing, Manuvir Das, in a press briefing. Das continued:

"For Nvidia and Snowflake to get together and say, well, if enterprise companies need to create custom models for generative AI based on their data, and the data is sitting in Snowflake's data cloud, then why don't we bring Nvidia's engine for model making, which is NeMo, into Snowflake's data cloud so that enterprise customers, right there on their data cloud, can produce these models that they can then use for the use cases in their business."

Also: Databricks' $1.3 billion buy of AI startup MosaicML is a battle for the database's future

The announcement is part of a growing trend to employ AI, and especially generative AI, as a business tool. On Monday, Apache Spark developer Databricks stunned the tech industry with a $1.3 billion acquisition of startup MosaicML, which runs a service to train and deploy foundation models.

Snowflake will implement the service by procuring Nvidia GPU instances from the cloud service providers with whom it already works. "Now, we are just talking about an extension of that [relationship] to include GPU-based instances," said Das. 

In a separate release on Tuesday, Snowflake said it will extend its Snowpark developer platform with what it calls Snowpark Container Services, currently in private preview. The company is "expanding the scope of Snowpark so developers can unlock broader infrastructure options such as accelerated computing with Nvidia GPUs and AI software to run more workloads within Snowflake's secure and governed platform without complexity," according to the release, "including a wider range of AI and machine learning (ML) models, APIs, internally developed applications, and more."
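Because Snowpark Container Services was still in private preview at the time, its GPU-specific interfaces aren't shown here; the sketch below only illustrates the existing Snowpark Python pattern the release builds on -- opening a session and pushing computation down to tables inside Snowflake -- with hypothetical connection parameters and table names.

```python
# Minimal Snowpark Python sketch of running work inside Snowflake's platform.
# Connection parameters and the table name are placeholders; the GPU-backed
# container workloads described in the release would layer on top of this.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

connection_parameters = {
    "account": "my_account",   # hypothetical
    "user": "my_user",
    "password": "...",
    "warehouse": "ANALYTICS_WH",
    "database": "CUSTOMER_DB",
    "schema": "SUPPORT",
}

session = Session.builder.configs(connection_parameters).create()

# A Snowpark DataFrame is evaluated lazily inside Snowflake, so the filter
# below executes on the warehouse rather than on the client machine.
tickets = session.table("support_tickets")
recent = tickets.filter(col("created_at") >= "2023-01-01")
print(recent.count())

session.close()
```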

In response to a question about how customer data would be protected in the arrangement between the two companies, Das indicated the main responsibility lies with Snowflake.

"Snowflake has a design construct to ensure that when a customer chooses to do computation on the Snowflake data cloud, it remains within the boundaries for that customer," said Das, "and then the NeMo engine just fits into that model."

Added Das: "Certainly, there is a responsibility for NeMo as well" for security, "and that's why it's joint engineering work."

Also: Nvidia unveils new kind of Ethernet for AI, Grace Hopper 'Superchip' in full production

The partnership follows a recent announcement by Nvidia and ServiceNow to use NeMo for ServiceNow's customers in IT services. Where the Snowflake arrangement is "general purpose," said Das, the ServiceNow partnership "is more the ISV (independent software vendor) sort of model." ServiceNow is using the NeMo code to train custom models for each of its customers, "so that when each of their customers does their IT work, and opens [trouble] tickets, they'll get responses that are specific to that customer."

Nvidia CEO Jensen Huang has positioned software as an important growth vector for his company, which makes billions selling the GPU hardware used to develop neural networks. NeMo is part of the enterprise software stack the company is promoting, in large part through partnerships with cloud providers.

In March, Nvidia CFO Colette Kress told investors at a Morgan Stanley conference, "Our software business right now is in the hundreds of millions [of dollars of revenue] and we look at this as still a growth opportunity as we go forward."

Also: AMD unveils MI300x AI chip as 'generative AI accelerator'

See also

  • How to use ChatGPT to write Excel formulas
  • How to use ChatGPT to write code
  • ChatGPT vs. Bing Chat: Which AI chatbot should you use?
  • How to use ChatGPT to build your resume
  • How does ChatGPT work?
  • How to get started using ChatGPT

Tags: Artificial Intelligence, Innovation
