@prem_k ollama? Yeah it’s pretty good, they’re very on top of adding new models
@prem_k oh fantastic! I’ve played with https://github.com/mlc-ai/web-llm but I didn’t know about the llama.cpp port, that’s awesome

@simon #WASM is such an interesting development for web apps that can run locally in the browser, even when offline. 😃 On a slightly related note, MotherDuck’s 1.5-tier architecture powered by WASM is pretty cool too, especially when you’re able to join between tables in your browser and in your cloud in a single SQL query. Wonder what else WASM will bring. 🤞🏼

@prem_k I love how easy WASM makes it to run tools like Tesseract - I built this OCR tool using Tesseract.js and PDF.js and it works really well https://tools.simonwillison.net/ocr

@simon wow! Didn’t know about Tesseract.js. This could potentially remove the need for RPA 😄
@simon no, wllama is the WASM binding for llama.cpp and can run inference on the GGUF files of the models within the browser itself. It is an alternative to Transformers.js (ONNX files) and WebLLM (bin shards).
https://github.com/ngxson/wllama
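For anyone curious what "inference in the browser itself" looks like in practice, here is a minimal sketch. It assumes the `@wllama/wllama` npm package and its `loadModelFromUrl` / `createCompletion` methods as described in the repo's README; the WASM paths and model URL below are placeholders for illustration, so check the README for the current API before relying on it.

```typescript
// Hedged sketch: in-browser GGUF inference via wllama (llama.cpp -> WASM).
// Assumption: API names per https://github.com/ngxson/wllama README.
import { Wllama } from '@wllama/wllama';

// Where the package's WASM binaries are served from (hypothetical
// paths - adjust to wherever your bundler/CDN puts them).
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': '/wasm/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': '/wasm/multi-thread/wllama.wasm',
};

async function run(): Promise<void> {
  const wllama = new Wllama(CONFIG_PATHS);

  // Fetch a GGUF model file over HTTP and load it into the WASM
  // runtime - no server-side inference involved.
  await wllama.loadModelFromUrl('https://example.com/model.gguf'); // placeholder URL

  // Generate a short completion entirely client-side.
  const output = await wllama.createCompletion('Hello, my name is', {
    nPredict: 16,
  });
  console.log(output);
}

run();
```

Everything here runs client-side, which is what makes it an alternative to Transformers.js and WebLLM: same browser-only model, different weight format (GGUF vs ONNX vs bin shards).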