Ironically, thanks in no small part to Facebook releasing Llama and kind of salting the earth for companies trying to build proprietary equivalents.
Nowadays you either have gigantic LLMs with hundreds of billions of parameters, like Claude and ChatGPT, or you have open models that are sub-200B.
I personally think the really large models are useless. What's genuinely impressive is the small ones that somehow manage to be good. It blows my mind that so much information can fit in 8B parameters.
Llama 3.1 405b has entered the chat