a1ba
@rayslava have you tried that new deepseek?

My favorite test is sending them decompiled code and asking them to explain what it does. So far, this one has at least figured out what the code does.
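
Roughly this kind of thing, if you want to script it (a minimal sketch: the endpoint and model name assume Deepseek's OpenAI-compatible API, and the Ghidra-style snippet is just an illustrative placeholder):

```python
# Rough sketch of the test: feed decompiled output to a chat model and ask
# what it does. Endpoint/model name assume Deepseek's OpenAI-compatible
# API; the decompiled snippet below is a made-up placeholder.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="sk-...")

decompiled = """
undefined4 FUN_00401560(int param_1) {
  int iVar1 = 0;
  while (*(char *)(param_1 + iVar1) != '\\0') iVar1++;
  return iVar1;
}
"""

resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{
        "role": "user",
        "content": "Explain what this decompiled function does:\n" + decompiled,
    }],
)
print(resp.choices[0].message.content)  # a good answer recognizes strlen()
```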
rayslava

@a1ba tried a 14B version locally; it worked much better than the codellamas for simple cases. Do they have a cloud version with a large model too?

rayslava

@a1ba Okay 😀
Will check a bit later then, sounds interesting.

Alexey Skobkin

@rayslava @a1ba
It's no surprise since Codellama is based on LLaMA 2. It's so old now. Even LLaMA 3.0 looks bad compared to LLaMA 3.1 or 3.3 🤷‍♂️

burbilog

@rayslava @a1ba No, you didn't. 14B means you tried Qwen 14B distilled from Deepseek R1 (the reasoning one). It became much better than the original Qwen, but it's still not Deepseek.
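
For reference, a minimal sketch of loading that distill yourself, assuming the Hugging Face checkpoint name deepseek-ai/DeepSeek-R1-Distill-Qwen-14B and enough memory for a 14B model:

```python
# Load the R1 Qwen-14B distill locally -- assumes the Hugging Face
# checkpoint deepseek-ai/DeepSeek-R1-Distill-Qwen-14B and enough RAM/VRAM
# for a 14B model (quantized GGUF builds exist for smaller setups).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto")

# Build a chat-formatted prompt and generate a reply.
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Explain what this decompiled loop does."}],
    tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens and print only the model's answer.
print(tok.decode(out[0][inputs["input_ids"].shape[1]:],
                 skip_special_tokens=True))
```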

Unless you have an insane amount of VRAM, you can't run Deepseek V3 or R1.
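
Quick back-of-the-envelope on why, using the published 671B total parameter count:

```python
# Weight memory for Deepseek V3 / R1 (671B total parameters). It's a MoE
# with ~37B active params per token, but that doesn't help with storage:
# all the experts have to be resident.
params = 671e9

for name, bytes_per_param in [("FP16", 2), ("FP8 (native)", 1), ("4-bit", 0.5)]:
    gb = params * bytes_per_param / 1e9
    print(f"{name:>12}: {gb:,.0f} GB for weights alone")

# Output:
#         FP16: 1,342 GB
# FP8 (native): 671 GB
#        4-bit: 336 GB   <- still far beyond any single consumer GPU
```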

They really screwed up the model naming by publishing the distilled models as sub-versions of Deepseek itself.

rayslava

@burbilog @a1ba Okay, I tried chat.deepseek.com and it's a whole new level.
It still wasn't able to give me working code, but it was much (like MUCH) closer to what I wanted initially.

Given that this model is released as open source, I can only join all the people calling this a new stage in the LLM race.

burbilog

@rayslava @a1ba Yeah, it is much better now. But please, don't call it open source. They did not publish the SOURCE dataset, only the BINARY model. We don't call freeware .exe files open source, do we?
