

Taming AI Frontiers with Cisco Observability Platform

Sep 05, 2023 | Hi-network.com

The Generative AI Revolution: A Rapidly Changing Landscape

The public unveiling of ChatGPT has changed the game, introducing a myriad of applications for Generative AI, from content creation to natural language understanding. This advancement has put immense pressure on enterprises to innovate faster than ever, pushing them out of their comfort zones and into uncharted technological waters. The sudden boom in Generative AI technology has not only increased competition but has also fast-tracked the pace of change. As powerful as it is, Generative AI is often provided by specific vendors and frequently requires specialized hardware, creating challenges for both IT departments and application developers.

This is not a unique situation among technology breakthroughs, but the scale and potential for disruption across all areas of business are truly unprecedented. With ChatGPT prompt engineering making proof-of-concept projects easier than ever to demonstrate, demand for building new products on Generative AI exploded. Companies are still walking a tightrope, balancing the risk of compromising their intellectual property and confidential data against the urge to move fast and leverage the latest Large Language Models to stay competitive.

Kubernetes Observability

Kubernetes has become a cornerstone of modern cloud infrastructure, particularly for its capabilities in container orchestration. It offers powerful tools for the automated deployment, scaling, and management of application containers. But with the increasing complexity of container and service deployments, the need for robust observability and performance monitoring tools becomes paramount. Cisco's Cloud Application Observability Kubernetes and App Service Monitoring tool offers a solution, providing comprehensive visibility into Kubernetes infrastructure.

Many enterprises have already adopted Kubernetes as a primary way to run their applications and products, both on-premises and in the cloud. When it comes to deploying Generative AI applications or Large Language Models (LLMs), however, one must ask: is Kubernetes the go-to platform? While Cloud Application Observability provides an efficient way to gather data from all major Kubernetes deployments, there's a hitch. Large Language Models have "large" in the name for a reason: they are massive, compute-intensive systems. Generative AI applications often require specialized hardware, GPUs, and large amounts of memory to function, resources that are not always readily available in Kubernetes environments, and the models themselves are not available in every region.
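For teams that do choose to run inference workloads on Kubernetes, GPU capacity has to be requested explicitly through the scheduler. Below is a minimal sketch using the official Kubernetes Python client; the image name and resource sizes are illustrative assumptions, and the cluster is assumed to have the NVIDIA device plugin installed so that the nvidia.com/gpu resource is schedulable.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (use load_incluster_config() inside a cluster).
config.load_kube_config()

# Hypothetical inference container; the image and sizes are placeholders, not real artifacts.
container = client.V1Container(
    name="llm-inference",
    image="registry.example.com/llm-inference:latest",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "8", "memory": "64Gi"},
        limits={"memory": "64Gi", "nvidia.com/gpu": "1"},  # GPUs are requested via limits
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="llm-inference", labels={"app": "llm"}),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

# The pod stays Pending if no node can satisfy the GPU request.
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

If no node advertises a free GPU, the pod simply sits in Pending, which is exactly the kind of capacity gap that pushes teams toward managed or API-based model hosting.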

Infrastructure Cloudscape

Generative AI applications frequently push enterprises to explore multiple cloud platforms such as AWS, GCP, and Azure rather than sticking to a single provider. AWS is probably the most popular cloud provider among enterprises, but Microsoft's partnership with OpenAI and the availability of GPT-4 as part of Azure's cloud services was groundbreaking. With Generative AI it is not uncommon for enterprises to go beyond one cloud, often spanning services across AWS, GCP, Azure, and hosted infrastructure. Meanwhile, GCP and AWS are expanding their toolkits from the standard pre-GPT MLOps world to fully managed Large Language Models, vector databases, and other emerging concepts, so we will potentially see even more fragmentation in enterprise cloudscapes.

Troubleshooting distributed applications that span clouds and networks can be a dreadful task, consuming engineering time and resources and affecting the business. Cisco Cloud Native Application Observability provides correlated full-stack context across domains and data types. It is powered by Cisco Observability Platform, which provides building blocks to make sense of complex data landscapes with an entity-centric view and the ability to normalize and correlate data with your specific domains.

Beyond Clouds

As Generative AI technologies continue to evolve, the requirements for using them efficiently are also becoming increasingly complex. As many enterprises have learned, getting a project from a very promising prompt-engineered proof of concept to a production-ready, scalable service can be a big stretch. Fine-tuning and running inference on these models at scale often necessitates specialized hardware, which is both hard to come by and expensive. The demand for specialized, GPU-heavy hardware is pushing enterprises to either invest in on-premises solutions or seek API-based Generative AI services. Either way, the deployment models for advanced Generative AI often lie outside the boundaries of traditional, corporate-managed cloud environments.

To address these multifaceted challenges, Cisco Observability Platform emerges as a game-changer, wielding the power of OpenTelemetry (OTel) to cut through the complexity. By providing seamless integrations with OTel APIs, the platform serves as a conduit for data collected not just from cloud native applications but also from any applications instrumented with OTel. Using the OpenTelemetry collector or dedicated SDKs, enterprises can easily forward this intricate data to the platform. What distinguishes the platform is its exceptional capability to not merely accumulate this data but to intelligently correlate it across multiple applications. Whether these applications are scattered across multi-cloud architectures or are concentrated in on-premises setups, Cisco Observability Platform offers a singular, unified lens through which to monitor, manage, and make sense of them all. This ensures that enterprises are not just keeping pace with the Generative AI revolution but are driving it forward with strategic insight and operational excellence.
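As a concrete illustration of that OTel path, the sketch below instruments a hypothetical Generative AI gateway with the OpenTelemetry Python SDK and exports spans over OTLP. The collector endpoint, service name, and span attribute names are assumptions chosen for the example, not values prescribed by the platform.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Identify the service so the backend can correlate its telemetry with other entities.
resource = Resource.create({
    "service.name": "genai-gateway",            # hypothetical service name
    "deployment.environment": "production",
})

provider = TracerProvider(resource=resource)
# Assumed endpoint: a local OpenTelemetry Collector that forwards to the observability backend.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("genai.gateway")


def call_model(prompt: str) -> str:
    # Stand-in for the real inference call (Azure OpenAI, a self-hosted model, etc.).
    return f"echo: {prompt}"


def answer(prompt: str) -> str:
    # Wrap each LLM call in a span; attributes carry model metadata for later correlation.
    with tracer.start_as_current_span("llm.completion") as span:
        span.set_attribute("llm.vendor", "azure-openai")  # illustrative attribute names
        span.set_attribute("llm.model", "gpt-4")
        span.set_attribute("llm.prompt.length", len(prompt))
        return call_model(prompt)
```

The same spans can be emitted from services running in any cloud or on-premises; because they share OTLP and a common resource model, the backend can stitch them into a single correlated view across environments.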

Cloud Application Observability dashboard

Bridging the Gaps with Cisco Full-Stack Observability

Cisco Observability Platform serves as a foundational toolkit to meet your enterprise requirements, regardless of the complex terrain you traverse in the ever-evolving landscape of Generative AI. Whether you deploy LLMs on Azure OpenAI Service, operate your Generative AI API and authorization services on GCP, build SaaS products on AWS, or run inference and fine-tuning tasks in your own data center, the platform enables you to cohesively model and observe all your applications and infrastructure, and it empowers you to navigate the multifaceted realm of Generative AI with confidence and efficiency.

Cisco Observability Platform extends its utility by offering seamless integrations with multiple partner solutions, each contributing unique domain expertise. But it doesn't stop there: it also empowers your enterprise to go a step further by customizing the platform to cater to your unique requirements and specific domains. Beyond Kubernetes, multi-cloud, and Application Performance Monitoring, you gain the flexibility to model your specific data landscape, as sketched below, turning the platform into a valuable asset for navigating the intricacies of your Generative AI endeavors.
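One way to picture that kind of domain modeling is to enrich the telemetry itself with attributes that describe your Generative AI landscape, for example GPU utilization tagged by deployment. The sketch below uses the OpenTelemetry metrics SDK; the metric name, attribute keys, and endpoint are assumptions for illustration, and the fixed reading stands in for a real probe such as NVML.

```python
from opentelemetry import metrics
from opentelemetry.metrics import CallbackOptions, Observation
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.resources import Resource

# Assumed endpoint: an OpenTelemetry Collector reachable from the inference host.
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="http://otel-collector:4317", insecure=True)
)
provider = MeterProvider(
    resource=Resource.create({"service.name": "llm-inference-node"}),
    metric_readers=[reader],
)
metrics.set_meter_provider(provider)
meter = metrics.get_meter("genai.infra")


def read_gpu_utilization(options: CallbackOptions):
    # Placeholder reading; a real implementation would query the GPU driver (e.g., via NVML).
    yield Observation(0.72, {"gpu.index": "0", "llm.deployment": "fine-tune-batch"})


meter.create_observable_gauge(
    "gpu.utilization",
    callbacks=[read_gpu_utilization],
    unit="1",
    description="Fraction of GPU capacity in use, tagged with domain-specific attributes.",
)
```

Because the attribute keys are yours to define, the same pattern extends to whatever entities matter in your domain, from model versions to fine-tuning jobs.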

 

Get started with Cisco Observability Platform


Tags: featured, Artificial Intelligence (AI), Kubernetes, Amazon Web Services (AWS), Cisco Full-Stack Observability (FSO), Generative AI (genAI), Large Language Models (LLM), Cisco FSO Platform, ChatGPT
