
Nvidia CEO Jensen Huang: AI language models as-a-service "potentially one of the largest software opportunities ever"

Sep 21, 2022 Hi-network.com

Nvidia's co-founder and CEO, Jensen Huang, opened the company's fall GTC conference by announcing that its new "Hopper" GPU will be generally available next month in systems from Dell and other vendors. The keynote also featured computers for healthcare, robotics, industrial automation, and automotive uses, as well as several cloud services, including an Nvidia-hosted cloud service for deep learning language models such as GPT-3.


As reported yesterday, Nvidia co-founder and CEO Jensen Huang opened his company's fall GTC conference with numerous product and service announcements, including the introduction of two cloud computing services the company will operate.

In a press conference Wednesday, Huang said that the two services will be "very long-term SaaS platforms for our company."

One service, Large Language Model Cloud Services, lets a developer take a deep learning artificial intelligence program such as GPT-3 or Nvidia's Megatron-Turing 530B and tune it to particular applications, making it specific to a task while reducing the effort required of the customer.
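As a rough illustration of what "tuning a pretrained model to a particular application" can look like in practice, the sketch below fine-tunes a small open GPT-style model (GPT-2, standing in for hosted models such as GPT-3 or Megatron-Turing 530B, which are not openly downloadable) on a handful of task-specific examples using the Hugging Face transformers library. The task, data, model choice, and hyperparameters are illustrative assumptions, not Nvidia's service API.

```python
# Minimal sketch: adapting a pretrained causal language model to a narrow task
# (here, turning short product notes into one-line summaries) with a handful
# of labelled examples. GPT-2 stands in for hosted models such as GPT-3 or
# Megatron-Turing 530B; the data and hyperparameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # small open model used as a stand-in

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.train()

# A few task-specific examples in "prompt -> completion" form (hypothetical data).
examples = [
    "Note: battery drains in two hours under load.\nSummary: Short battery life under load.",
    "Note: customers report the hinge squeaks after a month.\nSummary: Hinge wears quickly.",
    "Note: screen stays readable in direct sunlight.\nSummary: Good outdoor visibility.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# A brief fine-tuning pass: each example serves as its own label, so the model
# learns to continue prompts in the task's style.
for epoch in range(3):
    for text in examples:
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# After tuning, prompt the model with a new note in the same format.
model.eval()
prompt = "Note: the fan gets loud when two external monitors are attached.\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(
        **inputs,
        max_new_tokens=20,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
    )
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

In a hosted service, the customer would supply the examples and the provider would handle the training and serving; the mechanics of adapting the model to the task are the same.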

The second service, Omniverse Cloud Services, is an infrastructure-as-a-service offering by Nvidia that will let multiple parties collaborate on 3-D models and behavior.
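Omniverse's data layer is built on Pixar's open Universal Scene Description (USD) format, which stores a scene as composable layers. As a local-file illustration of the kind of 3-D data plus business metadata such a service would hold, the sketch below uses the open-source pxr Python bindings to create a stage, define a simple object, and attach custom supplier and cost attributes; the attribute names and values are hypothetical, and the Nucleus Cloud API itself is not shown.

```python
# Minimal sketch of the kind of layered 3-D data Omniverse stores, using
# Pixar's open-source USD Python bindings (pip install usd-core).
# This writes a local .usda file; Omniverse's cloud service exposes shared,
# versioned storage for the same format.
from pxr import Usd, UsdGeom, Sdf

# Create a new stage (the root layer of the scene).
stage = Usd.Stage.CreateNew("factory_cell.usda")

# Define a transform and a simple geometric prim beneath it.
world = UsdGeom.Xform.Define(stage, "/World")
robot_base = UsdGeom.Cube.Define(stage, "/World/RobotBase")
robot_base.GetSizeAttr().Set(0.5)  # edge length, illustrative

# Attach non-geometric metadata such as supplier and cost.
# These are hypothetical custom attributes, not a standard USD schema.
prim = robot_base.GetPrim()
prim.CreateAttribute("acme:supplier", Sdf.ValueTypeNames.String).Set("Example Robotics GmbH")
prim.CreateAttribute("acme:unitCostUSD", Sdf.ValueTypeNames.Float).Set(1250.0)

# Save the root layer to disk; in Omniverse this would live on a shared server.
stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())
```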

Also: Nvidia CEO Jensen Huang announces 'Hopper' GPU availability, cloud service for large AI language models

Huang was asked how big the SaaS [software-as-a-service] business could be for Nvidia over many years.

Huang said it was difficult to know, but that the large language model service has such broad applicability that it could be one of the biggest opportunities in all of software.

Here is Huang's response in its entirety:

Well, it's hard to say. That's really, kind of, the answer. It depends on what software we offer as a service. Maybe another way to take it is just a couple at a time. This GTC, we announced new chips, new SDKs, and new cloud services. And this is what you're asking about. I highlighted two of them [cloud services]. One of them is large language models. And if you haven't had a chance to look into the effectiveness of large language models and their implication on AI, please really do so. It's really important stuff. Large language models are hard to train, and the applications for large language models are quite diverse. It's been trained on a large amount of human knowledge. And so it has the ability to recognize patterns, but it also has within it a large amount of encoded human knowledge, so that it kind of has human memory, if you will. In a way, it's encoded a lot of our knowledge and skills. And so, if you wanted to adapt it to something that it was never trained to do - for example, it was never trained to answer questions, or it was never trained to summarize a story, or release breaking news, or paraphrase, it was never trained to do these things - with a few additional shots of learning, it can learn these skills. This basic idea of fine-tuning, adapting for new skills, or zero-shot or few-shot learning, has great implications in a large number of fields, which is the reason why you're seeing such a large amount of funding in digital biology. Because large language models have learned to structure the language of proteins and the language of chemistry. And so, we put that model up. And how large can that opportunity be? My sense is that every single company in every single country speaking every single language has probably tens of different skills that their company could adapt our large language model to go perform. I'm not exactly sure how big that opportunity is, but it's potentially one of the largest software opportunities ever. And the reason for that is because the automation of intelligence is one of the largest opportunities ever.

The other opportunity we spoke about was Omniverse Cloud. And remember what Omniverse is. Omniverse has several characteristics. The first characteristic is it ingests, it can store, it can composite physical information, 3-D information, across multiple layers or what is called schemas. And it could describe geometries and textures and materials, properties like mass and weight and such, connectivity. Who's the supplier? What's the cost? What is it related to? What is the supply chain? I would be surprised if - behaviors, kinematic behaviors. It could be artificial intelligence behaviors. And so, the first thing that Omniverse does is, it stores data. The second thing it does is, it connects multiple agents. And the agents can be people, can be robots, can be autonomous systems. And the third thing that it does is, it gives you a viewport into this new world, another way of saying, simulation engine. And so, Omniverse is basically three things. It's a new type of storage platform, it's a new type of connecting platform, and it's a new type of computing platform. You could write an application on top of Omniverse. You can connect other applications through Omniverse. Like, for example, we showed many examples of Adobe being connected to Autodesk, applications being connected to, you know, various applications. And so, we're connecting things, and you could be connecting people. You could be connecting worlds, you could be connecting robots, you could be connecting agents. And so, the best way to think about what we've done with Nucleus [Nucleus Cloud, a component of Omniverse Cloud, is a facility for developers to work on 3-D models using the Universal Scene Description specification], think of it as the easiest way to monetize that, is probably like a database. And so, it's a modern database in the cloud. Except this database is in 3-D, this database connects multiple people.

And so, those were two SaaS applications that we put up. One is called large language model. The other one is basically Omniverse, or a database engine, if you will, that we're going to put up in the cloud. So, I think these two announcements - I'm really happy that you asked - I'll get plenty of opportunity to talk about it over and over and over again; I'm going to talk about it over and over again. But these two SaaS platforms are going to be very long-term SaaS platforms for our company, and we'll make them run in multiple clouds and so on and so forth.
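Huang's point about zero-shot and few-shot learning, getting new skills out of a pretrained model without retraining it, is straightforward to illustrate. The sketch below builds a few-shot prompt for a paraphrasing task, one of the examples Huang mentions, and sends it to the same small stand-in model used earlier; the prompt format and example sentences are assumptions for illustration, not part of Nvidia's hosted service.

```python
# Minimal sketch of few-shot prompting: the model is never retrained; instead,
# a handful of worked examples are placed in the prompt and the model is asked
# to continue the pattern. GPT-2 again stands in for a large hosted model, and
# the example sentences are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

few_shot_prompt = (
    "Paraphrase each sentence.\n"
    "Sentence: The meeting was moved to Friday.\n"
    "Paraphrase: The meeting now takes place on Friday.\n"
    "Sentence: Sales grew faster than expected last quarter.\n"
    "Paraphrase: Last quarter's sales beat expectations.\n"
    "Sentence: The new GPU ships next month.\n"
    "Paraphrase:"
)

inputs = tokenizer(few_shot_prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,                      # greedy decoding keeps the demo deterministic
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The larger the pretrained model, the better it tends to follow such in-context patterns, which is why a hosted very large model can pick up "tens of different skills" per customer from prompts alone.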

Tags: Artificial Intelligence, innovation
