Avieshek@lemmy.world to Technology@lemmy.world · English · 11 days ago
Edward Snowden slams Nvidia's RTX 50-series 'F-tier value,' whistleblows on lackluster VRAM capacity (www.tomshardware.com)
Jeena@piefed.jeena.net · 11 days ago
Exactly, I'm in the same situation now, and the 8GB in those cheaper cards doesn't even let you run a 13B model. I'm trying to find out whether I can run a 13B one on a 3060 with 12 GB.
The Hobbyist@lemmy.zip · 11 days ago
You can. I'm running a 14B deepseek model on mine. It achieves 28 t/s.
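The sizing question above can be answered with back-of-the-envelope math. This is a sketch, not from the thread: it assumes 4-bit (Q4) quantized weights, which is what ollama pulls by default, and a rough 20% overhead for the KV cache, activations, and runtime at a modest context length.

```python
def estimate_vram_gb(params_billions: float,
                     bits_per_weight: float = 4.0,
                     overhead: float = 1.2) -> float:
    """Rough inference VRAM footprint in GB.

    params_billions * bits / 8 gives bytes per 1e9 params,
    which conveniently equals GB for the whole model; the
    overhead factor is an assumed fudge for cache/runtime.
    """
    weights_gb = params_billions * bits_per_weight / 8
    return weights_gb * overhead

for size in (8, 13, 14):
    print(f"{size}B @ Q4 ~= {estimate_vram_gb(size):.1f} GB")
# 13B @ Q4 ~= 7.8 GB and 14B ~= 8.4 GB, so both fit a 12 GB
# card with headroom, matching the experience reported here.
```

The same model at 8-bit would roughly double those numbers, which is why the quantization level matters as much as the parameter count.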
Jeena@piefed.jeena.net · 11 days ago
Oh nice, that's faster than I imagined.
levzzz@lemmy.world · 11 days ago
You need a pretty large context window to fit all the reasoning; ollama defaults to 2048 tokens, and a larger window uses more memory.
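One way to raise that default is to rebuild the model with a Modelfile. This is a sketch using ollama's Modelfile syntax; the base model and the 8192-token window are just illustrative values, and the extra KV-cache memory the larger window needs comes out of the same VRAM budget:

```
# Modelfile: derive a variant with a larger context window
FROM deepseek-r1:14b
PARAMETER num_ctx 8192
```

Then create and run the variant with `ollama create deepseek-r1-8k -f Modelfile` followed by `ollama run deepseek-r1-8k`.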
manicdave@feddit.uk · 11 days ago
I'm running deepseek-r1:14b on a 12GB RX 6700. It just about fits in memory and is pretty fast.