Monitor Your LLM Model Using Azure ML Prompt Flow (Preview) — Part 9
This post continues my series on Azure Machine Learning Prompt Flow. Here are the previous parts:
Part 1 — Getting started with Azure Prompt Flow
Part 2 — Create, update and test Azure Prompt Flow locally
Part 3 — Chat With Custom Data — FAISS Index & Azure ML Prompt Flow
Part 4 — Integrating Azure AI Search with Azure Prompt Flow
Part 5 — Use multiple prompts to analyze LLM response — Prompt Variant
Part 6 — Deploy And Consume LLM App Using Azure ML Prompt Flow
Part 7 — Evaluate Azure ML Prompt Flow Using Built-in Methods
Part 8 — Integrating LangChain With Azure ML Prompt Flow
So far, I’ve covered many topics related to Azure Prompt Flow, and we’ve seen the power of LLMs to generate text, hold conversations, and much more. But how do we ensure that our LLMs are performing optimally and delivering accurate results?
To be confident in the quality of the output, we need some form of monitoring for our LLM models. Let’s take it a step further and see how we can monitor a model that is already deployed to production.
I’ve created a video demonstrating how to monitor an LLM model using Azure ML Prompt Flow, and I’d recommend watching it for more insights.
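For readers who prefer to see the idea in code: programmatically, monitoring a deployed prompt flow comes down to creating a monitoring schedule with a generation safety and quality (GSQ) signal attached to the deployment. Below is a minimal sketch using the Azure ML Python SDK v2 (azure-ai-ml). Since this feature is in preview, class names such as GenerationSafetyQualitySignal and the metric keys may change; the subscription, endpoint, deployment, and connection values, the app_traces data asset, and the column names are all placeholders you would replace with your own.

```python
# pip install azure-ai-ml azure-identity
from azure.identity import DefaultAzureCredential
from azure.ai.ml import Input, MLClient
from azure.ai.ml.constants import MonitorTargetTasks
from azure.ai.ml.entities import (
    AlertNotification,
    BaselineDataRange,
    CronTrigger,
    GenerationSafetyQualityMonitoringMetricThreshold,
    GenerationSafetyQualitySignal,
    LlmData,
    MonitorDefinition,
    MonitoringTarget,
    MonitorSchedule,
    ServerlessSparkCompute,
)

# Connect to the workspace that hosts the deployed flow (placeholder values).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Monitoring jobs run on serverless Spark compute.
spark_compute = ServerlessSparkCompute(
    instance_type="standard_e4s_v3", runtime_version="3.3"
)

# Point the monitor at the production prompt flow deployment.
monitoring_target = MonitoringTarget(
    ml_task=MonitorTargetTasks.QUESTION_ANSWERING,
    endpoint_deployment_id="azureml:<ENDPOINT_NAME>:<DEPLOYMENT_NAME>",
)

# Alert when fewer than 70% of sampled responses pass a quality metric.
thresholds = GenerationSafetyQualityMonitoringMetricThreshold(
    groundedness={"aggregated_groundedness_pass_rate": 0.7},
    relevance={"aggregated_relevance_pass_rate": 0.7},
    coherence={"aggregated_coherence_pass_rate": 0.7},
    fluency={"aggregated_fluency_pass_rate": 0.7},
)

# Production data: inference traffic collected from the deployment.
# Assumes an "app_traces" data asset; map its columns to prompt/completion/context.
production_data = LlmData(
    input_data=Input(type="uri_folder", path="azureml:app_traces:1"),
    data_column_names={
        "prompt_column": "question",
        "completion_column": "answer",
        "context_column": "context",
    },
    data_window=BaselineDataRange(
        lookback_window_size="P7D", lookback_window_offset="P0D"
    ),
)

# The GSQ signal uses an Azure OpenAI connection to score sampled traffic.
gsq_signal = GenerationSafetyQualitySignal(
    connection_id="<FULL_RESOURCE_ID_OF_YOUR_AOAI_CONNECTION>",
    metric_thresholds=thresholds,
    production_data=[production_data],
    sampling_rate=1.0,  # score 100% of the collected requests
)

monitor_definition = MonitorDefinition(
    compute=spark_compute,
    monitoring_target=monitoring_target,
    monitoring_signals={"gsq-signal": gsq_signal},
    alert_notification=AlertNotification(emails=["you@example.com"]),
)

# Run the monitor daily at 10:15 and create (or update) the schedule.
monitor_schedule = MonitorSchedule(
    name="gen_ai_quality_monitor",
    trigger=CronTrigger(expression="15 10 * * *"),
    create_monitor=monitor_definition,
)
ml_client.schedules.begin_create_or_update(monitor_schedule)
```

Once the schedule has run, the computed metrics and any threshold violations show up under the endpoint’s Monitoring tab in Azure ML studio, which is what I walk through in the video.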
Happy prompting!
Related articles
Getting Started With Azure Prompt Flow | by Shweta Lodha | Jan, 2024 | Medium
Create, Update & Test Azure Prompt Flow Locally | by Shweta Lodha | Jan, 2024 | Medium
Evaluate Azure ML Prompt Flow Using Built-in Methods — Part 7 | by Shweta Lodha | Feb, 2024 | Medium
Integrating LangChain With Azure ML Prompt Flow — Part 8 | by Shweta Lodha | Feb, 2024 | Medium