@simon no, not yet.
I haven't looked into the model files yet, but if they're available as GGUF or ONNX, it should be possible to run them with llama.cpp or wllama (for GGUF) or Transformers.js (for ONNX).
GGUF files can also be imported into Ollama, so that's another option.
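For the Ollama route, importing a local GGUF is just a one-line Modelfile pointing at the file (a sketch; `model.gguf` and `mymodel` are placeholder names, not from the actual release):

```
# Modelfile: point Ollama at a local GGUF file
FROM ./model.gguf
```

Then `ollama create mymodel -f Modelfile` registers it and `ollama run mymodel` starts a chat with it.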