
AI models are getting smaller and more powerful

Jul 08, 2023 | Hi-network.com

Large language models (LLMs) have been growing steadily in size and complexity: GPT-3 has 175 billion parameters, and GPT-4 is reported to have nearly six times as many. However, the escalating costs and resource requirements of these models are becoming a problem.

By some estimates, training a frontier model could cost more than a billion dollars by 2026, and the stock of high-quality text available for training may be exhausted around the same time. Running costs scale too: a larger model is more expensive every time it answers a query, not just when it is trained.

Researchers are now shifting their focus towards making models more efficient rather than simply bigger. One approach is to trade parameters for data: cut the parameter count but train on more tokens, the insight behind DeepMind's Chinchilla model. Another is quantisation, rounding weights to lower-precision number formats to reduce memory and hardware requirements.
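
To make the rounding idea concrete, here is a minimal sketch of symmetric int8 quantisation in Python with NumPy. The function names and the toy weight matrix are illustrative assumptions, not any particular library's API; production systems for 8-bit and 4-bit LLM inference use more sophisticated per-channel and outlier-aware schemes.

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Round float32 weights to int8 with one shared scale factor,
    chosen so the largest-magnitude weight maps to +/-127."""
    scale = max(np.abs(w).max() / 127.0, 1e-12)   # floor avoids division by zero
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximately recover the original weights from int8 storage."""
    return q.astype(np.float32) * scale

# Toy example: int8 storage needs a quarter of the memory of float32.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max rounding error:", np.abs(w - dequantize(q, scale)).max())
```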

Fine-tuning, low-rank adaptation (LoRA), and teacher-student training (knowledge distillation) are also being explored as ways to get better performance out of smaller models; sketches of the latter two follow below. Improving code and developing specialised hardware are further strategies for making AI models more efficient.
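
Low-rank adaptation can be sketched in a few lines. The module below, assuming PyTorch, freezes a pretrained linear layer and trains only a small low-rank correction; the class name and the hyperparameters r and alpha are illustrative defaults, not a specific library's interface.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # pretrained weights stay frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # small random init
        self.B = nn.Parameter(torch.zeros(d_out, r))        # zero init: update starts at 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The effective weight is W + scale * B @ A, applied to x
        # without ever materialising the full d_out x d_in update.
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} of {total} parameters")  # 12,288 of 602,880
```

The payoff is the last line: instead of updating all 600,000-odd weights of the base layer, training touches only about 2% of them.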
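The teacher-student idea usually comes down to a blended loss: a small student model is trained to match the softened output distribution of a large teacher as well as the true labels. Below is a minimal sketch, again assuming PyTorch, following the standard recipe from Hinton et al.'s distillation paper; the temperature T and mixing weight alpha are conventional but illustrative values.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 2.0, alpha: float = 0.5) -> torch.Tensor:
    """Blend a soft loss (match the teacher's softened distribution)
    with the ordinary hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                        # rescale gradients for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: 4 examples, 10 classes.
s = torch.randn(4, 10, requires_grad=True)   # student outputs
t = torch.randn(4, 10)                       # teacher outputs, no gradient needed
y = torch.randint(0, 10, (4,))
print(distillation_loss(s, t, y))
```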

