A lot of Lemmy is very anti-AI. As an artist, I'm very anti-AI. As a veteran developer, I'm very pro-AI (with important caveats). I see its value; I see its threat.
I know I’m not in good company when I talk about its value on Lemmy.
Completely with you on this one. It's awful when used to generate "art", but once you've learned its shortcomings and never blindly trust it, it's a phenomenal help for learning, for assisting with code, and for finding something when you're struggling to come up with the right words. And beyond generative use cases, neural networks are also phenomenally useful for assisting with tasks in science, medicine, and so on.
It's just unfortunate we're still in the "find out" phase of the information age. It's like industrialization ~200 years ago, just with data… and unfortunately the lessons seem to be equally rough. All this generative tech will deal painful blows to our culture.
That's a view from the perspective of utility, yeah. The downvotes here are likely also from an ethics standpoint, since most LLMs are currently trained on other people's work without permission, all while using large amounts of water for cooling and energy from our mostly coal-powered grid. That's not even mentioning the physical and emotional labor required of the many untrained workers who sift through these LLMs' datasets, removing unsavory material for extremely low wages.
A smaller, more specialized LLM could likely deliver the same functionality with much less training, on a more selective dataset (probably only a couple of terabytes at its largest, I'd wager), and would likely be small enough to run on most users' computers after training. That'd be the more ethical version of this use case.
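For a rough sense of scale: a compact code model of around a billion parameters already runs on ordinary consumer hardware. Here's a minimal sketch using the Hugging Face transformers library; the model name and sizes are my own illustrative assumptions, not anything the comment above specified.

```python
# Sketch: running a small, specialized code model locally.
# Assumption: bigcode/starcoderbase-1b, a ~1B-parameter code model,
# stands in for the kind of compact, domain-specific LLM described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigcode/starcoderbase-1b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Complete a code snippet entirely on the local machine;
# no data leaves the user's computer at inference time.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A model this size occupies a few gigabytes in memory, and quantized variants shrink the footprint further, which is what makes the "runs on most users' computers" part plausible.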
And we can see by the ratio that this was in fact a hot take.