Welcome to WebLLM
GitHub | WebLLM Chat | NPM | Discord
WebLLM is a high-performance in-browser language model inference engine that brings large language models (LLMs) to web browsers with hardware acceleration. With WebGPU support, it allows developers to build AI-powered applications directly within the browser environment, removing the need for server-side processing and ensuring privacy.
It provides a specialized runtime for the web backend of MLCEngine, leverages WebGPU for local acceleration, offers an OpenAI-compatible API, and includes built-in support for web workers to keep heavy computation off the UI thread (see the sketches below).
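As a minimal sketch of what the OpenAI-compatible API looks like in practice, assuming the `@mlc-ai/web-llm` npm package; the model ID shown is illustrative and would be replaced with any ID from WebLLM's prebuilt model list:

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Load a model; the ID here is an example from the prebuilt model list.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
    // Reports model download and compilation progress.
    initProgressCallback: (report) => console.log(report.text),
  });

  // OpenAI-style chat completion, computed entirely in the browser.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "What is WebGPU?" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```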
Key Features
In-Browser Inference: Run LLMs directly in the browser
WebGPU Acceleration: Leverage hardware acceleration for optimal performance
OpenAI API Compatibility: Seamless integration with standard AI workflows
Multiple Model Support: Works with Llama, Phi, Gemma, Mistral, and more
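The web-worker support mentioned above can be wired up roughly as follows. This is a hedged sketch assuming the `WebWorkerMLCEngineHandler` and `CreateWebWorkerMLCEngine` exports of `@mlc-ai/web-llm` and a bundler that supports module workers; the chat API on the main thread stays the same, while inference runs off the UI thread:

```typescript
// worker.ts — hosts the engine inside a web worker.
import { WebWorkerMLCEngineHandler } from "@mlc-ai/web-llm";

const handler = new WebWorkerMLCEngineHandler();
self.onmessage = (msg: MessageEvent) => handler.onmessage(msg);
```

```typescript
// main.ts — same chat interface as before, now backed by the worker.
import { CreateWebWorkerMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  const engine = await CreateWebWorkerMLCEngine(
    new Worker(new URL("./worker.ts", import.meta.url), { type: "module" }),
    "Llama-3.1-8B-Instruct-q4f32_1-MLC", // illustrative model ID
  );

  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```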
Start exploring WebLLM by chatting with WebLLM Chat, then build web apps with high-performance local LLM inference using the following guides and tutorials.