by rajeshr6r • Uncategorized
A demo server integrating a local LLM with an MCP server for product recommendations.
Generate product recommendations using a local LLM.
Integrate LLM capabilities with a product catalog backend.
A demo setup combining a FastAPI server and a SQL database for MCP.
This project demonstrates a setup in which a local large language model (LLM) server interacts with an MCP server to provide product recommendations. It includes Ollama running the llama3.2 model, a Python FastAPI MCP server, and a SQL database that holds the product catalog. The system emulates a retrieval-augmented generation (RAG) flow: the LLM communicates with the MCP server to retrieve catalog data and uses the returned products as context when generating recommendations.
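To make the retrieve-then-generate flow concrete, here is a minimal sketch of how such a setup might look: a FastAPI endpoint that pulls candidate products from an assumed SQLite `products` table and forwards them, RAG-style, to a local Ollama llama3.2 model via Ollama's default `/api/generate` endpoint. The database path, table schema, and `/recommend` route are illustrative assumptions rather than details taken from this project, and the sketch models the flow as a plain HTTP endpoint rather than the full MCP protocol.

```python
# Sketch only: assumed products.db schema and /recommend route, not the project's actual code.
import sqlite3

import httpx
from fastapi import FastAPI
from pydantic import BaseModel

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
DB_PATH = "products.db"                             # assumed SQLite product catalog

app = FastAPI()


class RecommendationRequest(BaseModel):
    query: str              # free-text description of what the user wants
    max_products: int = 5


def fetch_candidates(query: str, limit: int) -> list[dict]:
    """Naive keyword lookup against an assumed products(name, description, price) table."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.row_factory = sqlite3.Row
        rows = conn.execute(
            "SELECT name, description, price FROM products "
            "WHERE description LIKE ? LIMIT ?",
            (f"%{query}%", limit),
        ).fetchall()
    return [dict(row) for row in rows]


@app.post("/recommend")
async def recommend(req: RecommendationRequest) -> dict:
    # Retrieval step: pull candidate products from the catalog.
    candidates = fetch_candidates(req.query, req.max_products)

    # Generation step: pass the candidates to the local LLM as context and
    # ask it to pick and justify the best matches (RAG-style prompting).
    context = "\n".join(
        f"- {p['name']}: {p['description']} (${p['price']})" for p in candidates
    )
    prompt = (
        f"Customer request: {req.query}\n"
        f"Available products:\n{context}\n"
        "Recommend the most suitable products and briefly explain why."
    )
    async with httpx.AsyncClient(timeout=60) as client:
        resp = await client.post(
            OLLAMA_URL,
            json={"model": "llama3.2", "prompt": prompt, "stream": False},
        )
        resp.raise_for_status()

    return {"candidates": candidates, "recommendation": resp.json()["response"]}
```

In the actual project, the same retrieve-then-generate pattern would sit behind the MCP server's tool interface rather than a bare REST route, with the local LLM calling the MCP server to obtain catalog data before composing its recommendation.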