tau@lemmings.world to Technology@lemmy.world • Local AI is one step closer through Mistral-NeMo 12B · 4 months ago
Just beware that, like AMD, Intel GPUs take a performance hit when running LLMs, because frameworks like llama.cpp are most heavily optimized for CUDA.
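If you want to gauge what the backend is costing you, here is a rough sketch using llama-cpp-python (the model path and prompt are placeholders; the throughput you measure depends entirely on which backend your llama.cpp build targets, e.g. CUDA, SYCL, Vulkan, or HIP):

```python
# Rough tokens-per-second check with llama-cpp-python (pip install llama-cpp-python).
# Whether the GPU is actually used depends on how the underlying llama.cpp was built.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-nemo-12b-q4_k_m.gguf",  # placeholder: any quantized GGUF file
    n_gpu_layers=-1,  # try to offload all layers to the GPU
    n_ctx=4096,
)

start = time.time()
out = llm("Explain GPU offloading in one paragraph.", max_tokens=128)
elapsed = time.time() - start

n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.1f} tok/s")
```

Running the same script against a CUDA build on an NVIDIA card versus a SYCL or Vulkan build on an Intel/AMD card is the simplest way to see the gap for yourself.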