Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU https://ift.tt/c8CEYe5

I spent the last few days building a nicer ChatGPT-like interface for running Mistral 7B and Llama 3 fully within the browser (no dependencies or installs).

I've used the WebLLM project by MLC AI for a while to interact with LLMs in the browser when handling sensitive data, but I found their UI quite lacking for serious use, so I built a much better interface around WebLLM. I've been using it as a therapist and coach, and it's wonderful knowing that my personal information never leaves my local computer.

It should work on desktop with Chrome or Edge. Other browsers are adding WebGPU support as well; see the GitHub repo for details on how to get it working on them.

Note: after you send the first message, the model will be downloaded to your browser cache. That can take a while depending on the model and your internet connection, but on subsequent page loads the model is served from the IndexedDB cache, so it should be much faster. (A rough sketch of the underlying WebLLM call is included below.)

The project is open source (Apache 2.0) on GitHub. If you like it, I'd love contributions, particularly around making the first load faster.

GitHub: https://ift.tt/CZIlF7G
Demo: https://secretllama.com

May 4, 2024 at 02:56AM
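For context, here is a minimal sketch of how an in-browser chat like this is typically wired up with WebLLM, assuming the @mlc-ai/web-llm package's CreateMLCEngine API; the model ID is illustrative and the exact names may differ from what this project uses.

```typescript
// Minimal sketch: run a chat completion entirely in the browser via WebGPU.
// Assumes the @mlc-ai/web-llm package; the model ID below is illustrative.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Bail out early if the browser doesn't expose WebGPU.
  if (!("gpu" in navigator)) {
    throw new Error("WebGPU is not available in this browser.");
  }

  // The first call downloads the model weights into the browser cache;
  // subsequent page loads reuse the cached weights, so startup is faster.
  const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f16_1-MLC", {
    initProgressCallback: (report) => console.log(report.text),
  });

  // OpenAI-style chat completion, running locally on the GPU.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Summarize WebGPU in one sentence." }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```

Because the model weights are cached client-side (IndexedDB / browser cache), only the first load pays the download cost; prompts and responses never leave the machine.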
