The name New Relic is an anagram of founder Lew Cirne’s name. Arguably he could have called the company Crew Line instead, keeping the anagram (it uses the same letters) while giving the company a name more in line with the organization’s mission of being a platform for all-in-one observability. An Application Performance Management (APM) specialist, New Relic now aims to bring together all members of a software engineering team (the crew) in a more unified way to help manage the production, deployment and lifecycle of applications throughout the software supply chain (the line).
Why do we need AI APM?
The company this month announced the launch of New Relic AI Monitoring, an APM service for AI-powered applications. But why do we need APM in AI, how does it differ from “normal” APM… and does it necessarily have to be smarter?
“Almost every company is deciding how they are going to integrate AI into their operations and product offerings,” said Manav Khurana, chief product officer of New Relic. “Observability is fundamental to the operation and growth of AI. With AIM, we give engineers the visibility and control to navigate the complexities of AI and build applications safely and cost-effectively.”
Khurana sums it up pretty succinctly: he says we need to know what data flows are happening in AI applications so we can correlate and manage them and – fundamentally – that of course means we need to be able to see which large language model (LLM) injects its knowledge into the code flow in order to assess its value, robustness and security.
Today, we see New Relic positioning AI Observability with AI Monitoring to provide software engineers with visibility across the entire AI stack, making it easier to troubleshoot and optimize their AI applications. The company’s AI monitoring technology is said to be capable of monitoring any AI ecosystem, with over 50 integrations across the AI stack, including popular LLMs such as OpenAI’s GPT-4.
Just for clarity, the New Relic AI Monitoring product is known as AIM and comes in the form of an APM solution. At the risk of suggesting a reinvention of enterprise naming conventions, it might have been better labeled AI-M or AI-monitoring, or even AIMAPM. But we digress: we asked whether AI APM differs from “normal” APM.
How AI monitoring mechanisms work
Addressing this point, New Relic reminds us that AI-based technology stacks introduce new complexity, as AI components such as LLMs and vector data stores are often a black box for engineers, with the potential to provide inaccurate (or biased) results and to generate volumes of telemetry data that must be tracked and analyzed… and which even introduce security issues.
“[With AI monitoring] engineers can access a single view to troubleshoot, compare, and optimize different LLM prompts and responses for performance, cost, safety, and quality issues, including hallucinations, bias, toxicity, and fairness. It gives engineers complete visibility into all components of the AI stack as well as services and infrastructure so they have the data they need to demonstrate compliance with AI regulations,” notes the company in a technical statement.
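To make the idea concrete, here is a minimal sketch of the kind of per-call record such a single view aggregates: model, prompt, response, latency, token counts and an estimated cost. Everything here is illustrative – the `record_call` helper, the naive whitespace token counter and the per-token rate table are hypothetical stand-ins, not New Relic’s actual instrumentation or any vendor’s real pricing.

```python
import time
from dataclasses import dataclass

# Illustrative per-1K-token rates only -- NOT real vendor pricing.
RATES_PER_1K = {"model-a": 0.03, "model-b": 0.002}

@dataclass
class LLMCallRecord:
    model: str
    prompt: str
    response: str
    latency_s: float
    prompt_tokens: int
    completion_tokens: int

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

    @property
    def est_cost(self) -> float:
        # Cost estimate from the illustrative rate table above.
        return RATES_PER_1K.get(self.model, 0.0) * self.total_tokens / 1000

def record_call(model, prompt, call):
    """Wrap one model invocation: time it and count tokens naively (whitespace split)."""
    start = time.perf_counter()
    response = call(prompt)
    latency = time.perf_counter() - start
    return LLMCallRecord(model, prompt, response, latency,
                         len(prompt.split()), len(response.split()))

# Stub model call so the sketch runs offline.
rec = record_call("model-a", "Summarize our Q3 incident report",
                  lambda p: "Three outages, all resolved.")
print(f"{rec.model}: {rec.total_tokens} tokens, ${rec.est_cost:.5f} est, {rec.latency_s * 1000:.2f} ms")
```

Collecting records in this shape is what lets an observability backend compare two prompts or two models side by side on cost and latency rather than on anecdote.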
Key features and use cases here include the previously mentioned AI stack integrations to enable engineers to monitor an entire AI stack with quick-start integrations for LLMs, vector databases, popular orchestration frameworks and machine learning libraries. Technologies integrated here include:
- Orchestration frameworks: LangChain
- LLMs: OpenAI, PaLM 2, Hugging Face, MosaicML
- Machine learning libraries: PyTorch, Keras, TensorFlow
- Model serving: Amazon SageMaker, Azure ML
- Vector databases: Pinecone, Weaviate, Milvus, FAISS, Zilliz
- AI infrastructure: Azure, AWS, GCP, Kubernetes
The company also talks about visibility across the entire AI application stack, providing a holistic view of the application, infrastructure and AI layers, including AI metrics such as response quality and token usage, as well as the so-called APM golden signals (latency, traffic, errors and saturation), all without any additional instrumentation required.
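The four golden signals mentioned above are simple to compute once requests are being recorded. The sketch below derives them from a window of request records – the `Request` shape, the window length and the `capacity_rps` ceiling are assumptions made for illustration, not part of any New Relic API.

```python
from dataclasses import dataclass

@dataclass
class Request:
    latency_ms: float
    error: bool

def golden_signals(requests, window_s, capacity_rps):
    """Compute the four APM golden signals over one window of requests.

    latency: mean response time; traffic: requests per second;
    errors: fraction of failed requests; saturation: traffic as a
    fraction of an assumed capacity ceiling.
    """
    n = len(requests)
    return {
        "latency_ms": sum(r.latency_ms for r in requests) / n,
        "traffic_rps": n / window_s,
        "error_rate": sum(r.error for r in requests) / n,
        "saturation": (n / window_s) / capacity_rps,
    }

# Four requests observed over a 2-second window, against a 10 rps capacity.
window = [Request(120, False), Request(340, True), Request(95, False), Request(210, False)]
signals = golden_signals(window, window_s=2.0, capacity_rps=10.0)
print(signals)
```

The point of the “no additional instrumentation” claim is that an APM agent gathers these inputs automatically; the arithmetic itself is as simple as shown.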
“Using AI to ensure that AI applications meet security, quality, safety and cost standards will save development teams time in terms of monitoring the complexity of these applications, meeting compliance standards and benchmarking performance, and will help protect organizations against vulnerabilities,” said Stephen Elliot, group vice president at IDC. “Any company that provides these solutions ultimately enables developers to deliver better products and better customer experiences.”
An Amazon Bedrock base
Alongside this development, New Relic also announced that the New Relic AI Monitoring product is now integrated with Amazon Bedrock, a fully managed Amazon Web Services (AWS) service that makes foundation models (FMs) from leading AI companies available via an API to build and scale generative AI applications. AWS customers can now use New Relic to gain greater visibility and insights across the entire AI stack, making it easier to troubleshoot and optimize their applications for performance, quality and cost.
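As a rough sketch of what such an integration does under the hood, the helper below wraps a Bedrock-style `invoke_model` call and emits basic telemetry alongside the result. The `invoke_with_telemetry` function and the stub client are hypothetical illustrations – in production the client would be a boto3 `bedrock-runtime` client and the telemetry would flow to an observability backend rather than a dict; the model ID is shown only as an example.

```python
import json
import time

def invoke_with_telemetry(client, model_id, payload):
    """Invoke a Bedrock-style model and return (result, telemetry).

    `client` is anything exposing invoke_model(modelId=..., body=...);
    in production that would be a boto3 'bedrock-runtime' client,
    here it is easily stubbed so the sketch runs offline.
    """
    start = time.perf_counter()
    resp = client.invoke_model(modelId=model_id, body=json.dumps(payload))
    latency = time.perf_counter() - start
    result = json.loads(resp["body"].read())
    telemetry = {"model_id": model_id, "latency_s": latency,
                 "output_chars": len(str(result))}
    return result, telemetry

class StubClient:
    """Offline stand-in mimicking the invoke_model response shape."""
    def invoke_model(self, modelId, body):
        class Body:
            def read(self_inner):
                return json.dumps({"completion": "ok"}).encode()
        return {"body": Body()}

result, tel = invoke_with_telemetry(StubClient(), "anthropic.claude-v2", {"prompt": "hello"})
print(result, tel["model_id"])
```

Capturing latency and output size at the call site is what lets the monitoring layer attribute cost and performance to a specific foundation model behind the shared API.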
As we said recently, it makes perfect sense to use controlled, robust, and responsible AI to help build applications. Equally and oppositely, it makes sense to use AI to ensure that AI applications are run with the right ingredients (in the form of large language models, AI logic engines, and connections to other data services and application sources) and under the right operational conditions – and that’s the essence of application performance management.