Overview:
Farfalle is an open-source AI-powered search engine that can answer queries with either locally run models or cloud-hosted models. It supports multiple search providers and lets users pose questions to cloud or local models, enhancing the search experience.
Features:
- Multiple Search Providers: Utilize search capabilities from different providers like Tavily, Searxng, Serper, and Bing.
- Answer Questions with Models: Answer questions with cloud models such as OpenAI gpt-4o, OpenAI gpt-3.5-turbo, and Groq/Llama 3, or with local models such as llama3, mistral, gemma, and phi3.
- Custom LLM Support: Users can plug in custom large language models (LLMs) through LiteLLM for tailored search experiences.
- Agent-assisted Search: Conduct searches with an agent that plans and executes the search for more accurate results.
Installation:
Getting Started Locally:
Prerequisites:
- Docker
- Ollama (if using local models)
- Download and start any supported local models: llama3, mistral, gemma, phi3
- Start the Ollama server:
ollama serve
- Get API keys for optional providers like Tavily, Serper, OpenAI, Bing, Groq
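The local-model prerequisites above can be sketched as two Ollama commands (a minimal example; `llama3` stands in for any of the supported models):

```shell
# Download one of the supported local models (llama3, mistral, gemma, or phi3)
ollama pull llama3

# Start the Ollama server so Farfalle can reach the model
# (by default it listens on localhost:11434)
ollama serve
```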
Quick Start:
- Modify the .env file with API keys (optional when using Ollama)
- Start the app and visit http://localhost:3000
- For custom setup, refer to the custom-setup-instructions.md file.
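A minimal sketch of the quick-start steps, assuming the repository ships a Docker Compose file (the key name shown is an example; check the repo's .env template for the exact names):

```shell
# Add provider keys to .env (all are optional when using local Ollama models)
echo "OPENAI_API_KEY=sk-..." >> .env

# Start the app (assumption: the repo provides a docker-compose.yml)
docker compose up -d

# Then open http://localhost:3000 in your browser
```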
Deploy:
Backend:
- Deploy the backend to Render
- Copy the web service URL (e.g., https://some-service-name.onrender.com)
Frontend:
- Deploy the frontend with Vercel
- Use the copied backend URL as the NEXT_PUBLIC_API_URL environment variable when deploying
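For the frontend deployment, the environment variable entry would look like this (using the example backend URL from above):

```
NEXT_PUBLIC_API_URL=https://some-service-name.onrender.com
```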
Using Farfalle as a Search Engine:
- Configure Farfalle as your default search engine in your browser settings
- Create a new search engine entry using this URL: http://localhost:3000/?q=%s
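In the search-engine URL template, the browser substitutes `%s` with the percent-encoded query. A small sketch of that expansion (the helper function name is illustrative, not part of Farfalle):

```python
from urllib.parse import quote

def search_url(query: str, template: str = "http://localhost:3000/?q=%s") -> str:
    """Expand a browser search-engine template: %s becomes the
    percent-encoded query, as browsers do for custom search engines."""
    return template.replace("%s", quote(query))

# A query with spaces is URL-encoded before substitution
print(search_url("open source search"))
# http://localhost:3000/?q=open%20source%20search
```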
Summary:
Farfalle is a versatile search engine that combines AI models with multiple search providers. Users can query cloud or local models and integrate custom LLMs through LiteLLM. With agent-assisted search and support for several providers, Farfalle offers a comprehensive experience for users looking for accurate and efficient results.