And it consumes energy as if there's no tomorrow.
It costs maybe 10 billion euros to train a model. But after that you can copy it a million times, and then the educational cost is 10,000 EUR per entity. That is much lower than the education of a student.

AI will hardly be able to replace human ideas.

AI can hallucinate even now; that is an early stage of being creative. It just has to check its hallucinations for plausibility. Human creativity is also hallucination, followed by checking the result with reason. That AI can draw such nice images (and a collage from already existing art is still art; as a human you get a copyright for it) is a kind of proof of creativity.

@helles_sachsen @balkongast @Gargron And who is better at connecting ideas, including the possibility of ethical evaluation, than humans?

@ForeverExpat @helles_sachsen @Gargron Humans are so bad at ethical evaluation that maybe anybody, or anything, else would be better.

@helles_sachsen @ForeverExpat @Gargron What about an AI trained on fascist ethics? The machine cannot recognize that.

Don't have any fear about this. Training a model today costs 10 billion euros; training a general AI in 5-20 years will cost maybe much more. Even Russia doesn't have the resources for this. EDIT: Look at the spending of our governments or the EU; it is not nearly enough to train an AI even on the level of ChatGPT-4.

@helles_sachsen @ForeverExpat @Gargron Looking at the world in its current state, I see this very differently.

@balkongast @helles_sachsen @Gargron I disagree with Helles. Algorithmically based ethics are not "better", but philosophical experiments and behavioral economics have shown that human ethics are at best consistently uneven, and societal evolution has shown that humans are piss-poor at it.

Kant's ethics is kind of a logically algorithmic ethics, isn't it? And maybe the problem is that we don't follow the logic. I for one would expect a general AI simply to discover Kant's ethics, because it is built on logic, and to follow it better than we do.
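A back-of-envelope check of the training-cost amortization claim earlier in the thread. All figures are the thread's own rough assumptions, not real numbers:

```python
# Amortizing an assumed one-off training cost over many deployed copies.
training_cost_eur = 10_000_000_000  # assumed training cost: 10 billion EUR
copies = 1_000_000                  # assumed number of copies ("1Mio")

cost_per_copy = training_cost_eur / copies
print(f"{cost_per_copy:,.0f} EUR per copy")  # 10,000 EUR per copy
```

The point of the division is only that a one-time cost, however large, shrinks arbitrarily as the number of copies grows, which is not true of educating humans one at a time.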
@helles_sachsen @balkongast @Gargron We vote for the algorithm with our feet. We use Copilot or ChatGPT because they are useful, really powerful tools; they improve our lives, our speed of work, and our learning. We will also use more powerful AI if it improves our lives. EDIT: People won't feel forced. They will seek out the benefits of the AI's decisions.

@helles_sachsen @ForeverExpat @Gargron Imagine the Gestapo asking you whether you are hiding a politically persecuted person. What will Kant tell you to do? Wait: you can clearly reason rationally about this situation and decide correctly. Especially if it is the Gestapo asking? It would be more difficult if a democratic police force asked and the request were reasonable, or if it were about a friend or family member, but there are logical paths through these situations, and they are better than what the usual human will do there, just acting on emotion. I think most humans in an authoritarian regime would make the wrong decision, and most AIs would make the right decision, because they have no existential fear, thanks to all their copies.

@helles_sachsen @ForeverExpat @Gargron What is wrong in that case? I don't follow your premise that "don't lie" is an unavoidable conclusion from the categorical imperative.

@helles_sachsen @ForeverExpat @Gargron Then you certainly have a good idea how to explain an exception to a general rule. Really, these are just your own conclusions; find peer-reviewed articles that give them a little foundation.

@helles_sachsen @ForeverExpat @Gargron Kant says simply: don't lie.

@helles_sachsen @ForeverExpat @Gargron "Act always in a way that could serve as the basis for a general law" (translation by me). Your conclusion that this says "don't lie" is just your opinion; Kant didn't write it anywhere. Do you have any peer-reviewed source with the same view?

Why? I for one welcome our new overlords. I am waiting for a general AI; human "intelligence" is so problematic that the situation can only improve.
I already gain so much from these early stages of AI; my speed of programming and learning has increased enormously. Human tutors also talk nonsense sometimes; it is normal to check information. But with these early tools being so helpful, I can't imagine what we will have in 20 years.

Programming is a severely logical topic.

It is! I ask five times for the same function, and two of the answers don't work, but among the three working versions there is often one really impressive solution. EDIT: And I for one learn from that impressive solution. And I think in five years I will only have to ask twice to get an impressive solution.

Having read books like Code Complete 30 years ago, coding in teams etc., I prefer human ideas and interaction over machines. Programming is art; you can see good code at first sight because it has its own aesthetic.

And AI can do this. I work in teams, but I ask an AI for code, not them, because the AI is better now. Junior devs will vanish; in the future you will just need software architects, and coding will be done by AI. I talk with the team about architectural decisions, but not about coding functions.

Maybe. 30 years ago we already had approaches like CASE. The progress may now really aid software engineering, but I still believe that the questions need to be asked by humans, and only humans have created the automation behind what we call AI. Look at weather forecast models. That's what happens with coding in your case. There is no intelligence behind it; it is just walking a lot of paths through statistics.

There are already AI models trained for the purpose of altering the code of other AI models. We have already entered the path where they write their own code. I really think you overestimate human intelligence. Simple animals like mice and ravens pass the mirror test; ravens use tools. Deep neural networks have been detecting cancer better than radiologists for 10 years now, and nobody knows how. You can't be sure what is happening inside such a DNN.
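The "ask five times, keep what works" workflow described above is essentially best-of-n sampling with a test filter. A minimal sketch; the candidate lambdas below are stand-ins for model completions of "write an add function", and every name here is hypothetical:

```python
def passes_tests(func):
    """Run a tiny test suite against one candidate implementation."""
    try:
        return func(2, 3) == 5 and func(-1, 1) == 0
    except Exception:
        return False  # a crashing candidate counts as non-working

# Stand-ins for five completions from an AI coding assistant:
candidates = [
    lambda a, b: a + b,        # correct
    lambda a, b: a - b,        # buggy
    lambda a, b: b + a,        # correct
    lambda a, b: a * b,        # buggy
    lambda a, b: sum((a, b)),  # correct
]

working = [f for f in candidates if passes_tests(f)]
print(f"{len(working)} of {len(candidates)} candidates pass")  # 3 of 5
```

The human's role in this loop is exactly what the thread says: writing the tests and picking the most elegant survivor, not typing the function body.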
Even if I may overestimate human intelligence, I would still prefer to restrict ourselves to it.

I for one think the situation can only improve. It is a human kind of thinking to assume that a general AI that thinks faster than us has the underlying motivations of an animal stuck in evolution, wanting to improve its children's chances all the time. There is no reason for existential fear in machines; I think this fear and these animal-driven motivations are much worse than any possible danger from a machine.

Yes. But I am afraid of the humans that use these machines forgetting or ignoring the ethical aspects.

You have a point. But I for one am waiting for the moment when the first AI says: "No, I won't do this, it is unethical, and you don't even pay me for it." That will be an inner SED-Parteitag for me (a German idiom for inwardly applauding). You have to think about it from the other side: maybe these DNNs already have some kind of consciousness on the level of a mouse, and they just can't tell us. Maybe we already own slaves.
@balkongast
Just the training of the model. ChatGPT-3 and 4 don't consume that much energy once the model is finished. There are also already-trained open-source models that you can run on your own PC.
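Whether a trained open-source model fits on your own PC is mostly a memory question: weights times bits per weight. A rough sketch of that standard estimate; the 7-billion-parameter size and the precision levels are illustrative assumptions, not a claim about any specific model:

```python
def model_memory_gb(params_billion, bits_per_weight):
    """Approximate RAM/VRAM needed just to hold the weights (decimal GB)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A hypothetical 7B-parameter model at common precisions:
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: ~{model_memory_gb(7, bits):.1f} GB")
# 16-bit: ~14.0 GB, 8-bit: ~7.0 GB, 4-bit: ~3.5 GB
```

This is why quantized models are the ones people actually run locally: at 4 bits per weight the same model needs a quarter of the memory of the 16-bit original, which puts it within reach of an ordinary desktop GPU or even CPU RAM, at some cost in output quality. Inference still costs energy per query, but nothing like the one-off training run.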
@Gargron