Helles Sachsen

@balkongast

Only the training of the model does. ChatGPT 3 and 4 don't consume that much energy now that the model is finished. There are also already-trained open-source models that you can run on your own PC.

@Gargron

Helles Sachsen

@balkongast

It costs maybe 10 billion euros to train a model. But after that you can copy it a million times, and then the training cost comes to 10,000 EUR per copy. That is much lower than the cost of educating a student.
@Gargron
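
(As a quick sanity check of that amortization argument, here is a minimal Python sketch using the thread's own assumed figures; the numbers are the poster's assumptions, not verified costs.)

    # Assumed figures from the post above, not verified data
    total_training_cost_eur = 10_000_000_000   # ~10 billion EUR to train one model
    deployed_copies = 1_000_000                # the model is copied a million times
    cost_per_copy_eur = total_training_cost_eur / deployed_copies
    print(cost_per_copy_eur)                   # 10000.0 EUR per copy, as claimed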

balkongast

@helles_sachsen @Gargron

AI will hardly be able to replace human ideas.
And I can imagine that there will be many models to train; at least I can't think of any reason the number of models would be limited.

Helles Sachsen

@balkongast

AI can hallucinate even now; that is an early stage of being creative. It just has to check its hallucinations for plausibility.

@Gargron

Helles Sachsen

@balkongast

Human creativity is also hallucinating and then checking the result against reason. That AI can draw such nice images (and a collage of already existing art is still art; as a human you get a copyright for it) is a kind of proof of creativity.
@Gargron

ForeverExpat

@helles_sachsen @balkongast @Gargron
Agreed. Human creativity and innovation depend on the ability to make connections between two seemingly unrelated ideas, sometimes from wildly different fields, time periods, and cultures. Humans then progressively build on those ideas, refining them with other loosely connected ideas until they gain societal acceptance. AI is already a better brainstorming tool than any corporate suits sitting around a whiteboard. And it will continue to get better.

balkongast

@ForeverExpat @helles_sachsen @Gargron

And who is better at connecting ideas, including the ability to evaluate them ethically, than humans?

Helles Sachsen

@balkongast

Humans are so bad at ethical evaluation that maybe anyone, or anything else, would be better.

@ForeverExpat @Gargron

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

What about an AI trained on fascist ethics? The machine cannot recognize that.

Helles Sachsen replied to balkongast

@balkongast

Don't be afraid of that. Training a model today costs 10 billion euros; training a general AI in 5-20 years will maybe cost much more. Even Russia doesn't have the resources for that.

EDIT: Look at the spending of our government or of the EU; it is not nearly enough to train an AI even at the level of ChatGPT 4.

@ForeverExpat @Gargron

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

Looking at the world in its current state, I see this very differently.

ForeverExpat replied to balkongast

@balkongast @helles_sachsen @Gargron I disagree with Helles: algorithmically based ethics are not "better". But philosophical thought experiments and behavioral economics have shown that human ethics are at best consistently uneven, and societal evolution has shown that humans are piss-poor at it.

Helles Sachsen replied to ForeverExpat

@ForeverExpat

Kant's ethics is kind of a logical, algorithmic ethics, isn't it? And maybe the problem is that we don't follow the logic.

@balkongast @Gargron

Helles Sachsen replied to Helles

@ForeverExpat

I for one would expect that a general AI would simply discover Kant's ethics, because it is built on logic. And follow it better than we do.

@balkongast @Gargron

ForeverExpat replied to Helles

@helles_sachsen @balkongast @Gargron
Maybe. But individual ethics clashes with the global reach and time scale of many of our problems. How do you balance international humanitarianism for "slow burn" problems against local needs? How would people react if the algorithm decided to send funds to help far-off places with larger problems at the expense of helping with a local, short-term crisis? How do you vote an algorithm out of office? Not to say humans are better, but algorithm-driven ethics is problematic too.

Helles Sachsen replied to ForeverExpat

@ForeverExpat

We vote for the algorithm with our feet. We use Copilot or ChatGPT because they are useful, really powerful tools; they improve our lives, our speed of work, and our learning. We will also use more powerful AI if it improves our lives.

EDIT: People won't feel forced. They will seek out the benefits of the AI's decisions.

@balkongast @Gargron

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

Imagine the Gestapo asking you whether you are hiding a politically persecuted person. What will Kant tell you to do?
Be honest and admit that you are hiding someone (as the categorical imperative requires)?
Or lie and try to save the life of the persecuted person?
Moral and ethical questions are complex, as this simple paradox shows. And you believe a machine will always decide ethically?

Helles Sachsen replied to balkongast

@balkongast

Wait, you can clearly reason rationally about this situation and reach the right decision, especially if it is the Gestapo asking. It would be more difficult if a democratic police force asked you with a reasonable cause, or if it were about a friend or family member, but there are logical paths through these situations, and they are better than what the average human, acting purely on emotion, would do.

@ForeverExpat @Gargron

Helles Sachsen replied to Helles

@balkongast

I think most humans in an authoritarian regime would make the wrong decision, and most AIs would make the right one, because with all their copies they have no existential fear.

@ForeverExpat @Gargron

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

What is wrong in that case?
Saving your own ass and following the "do not lie" rule as required by the categorical imperative?
Or acting ethically, lying, and risking your own life?

Helles Sachsen replied to balkongast

@balkongast

I don't follow your premise that "don't lie" is an unavoidable conclusion from the imperative.

@ForeverExpat @Gargron

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

Then you certainly have a good idea of how to justify an exception to a general rule.
Edit: the problem with that is that the rule is no longer general once it has an exception.

Helles Sachsen replied to balkongast

@balkongast

Really, these are just your own conclusions; find peer-reviewed articles that give them at least a bit of foundation.

@ForeverExpat @Gargron

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

Kant simply says: don't lie.
Where is the exception, and how would one determine that the exception applies?

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

Always act in such a way that your maxim could serve as the basis of a universal law (translation by me).
So this says: do not lie. There is no exception to that.
And if there were an exception, how could the rule be universal?

Helles Sachsen replied to balkongast

@balkongast

Your conclusion that this says "don't lie" is just your opinion. Kant didn't write that anywhere. Do you have any peer-reviewed source with the same view?

@ForeverExpat @Gargron

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

Tell me what general rule stays general once there are exceptions, and how the exceptions can be identified as valid and necessary.

Helles Sachsen replied to balkongast

@balkongast

That you still say lying is an exception to this rule, and want me to debate from that starting point, is a little bit like putting words in my mouth. As I said, your premise that this rule forbids lying is totally wrong, imho.

@ForeverExpat @Gargron

balkongast

@helles_sachsen @Gargron

The conclusion I draw is that we just need to stop AI training?

Helles Sachsen

@balkongast

Why? I for one welcome our new overlords; I am waiting for a general AI. Human "intelligence" is so problematic that the situation can only improve.
Also, I already gain a lot from these early stages of AI. Using it as a tutor that talks nonsense sometimes still improves my learning speed by a factor of 2-3.
@Gargron

balkongast

@helles_sachsen @Gargron

I consider this just another case of placing hope in technology.

Helles Sachsen

@balkongast

I already gain so much from these early stages of AI; my speed of programming and learning has increased enormously. Human tutors also talk nonsense sometimes; it's normal to check information. But with these early tools being so helpful, I can't imagine what we will have in 20 years.

@Gargron

balkongast

@helles_sachsen @Gargron

Programming is a strictly logical discipline.
Hallucinating is hardly the way to succeed in it.

Helles Sachsen

@balkongast

It is! I ask five times for the same function; two of the answers don't work, and among the three working versions there is often one really impressive solution.

EDIT: And I for one learn from that impressive solution. And I think in five years I will only have to ask twice to get an impressive solution.

@Gargron
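
(A minimal sketch of that "ask several times, keep what works" workflow, in Python. ask_model and passes_tests are hypothetical placeholders, not a specific API; plug in whatever code-generation call and test harness you actually use.)

    # Best-of-n code generation, as described above. Both helpers are
    # hypothetical stand-ins: ask_model for any code-generation API,
    # passes_tests for your own test harness.
    def ask_model(prompt: str) -> str:
        raise NotImplementedError("plug in your code-generation API here")

    def passes_tests(code: str) -> bool:
        raise NotImplementedError("plug in your own test harness here")

    def best_of_n(prompt: str, n: int = 5) -> list[str]:
        candidates = [ask_model(prompt) for _ in range(n)]
        # Keep only the candidates that actually pass the tests;
        # a human still picks the most impressive one from this list.
        return [c for c in candidates if passes_tests(c)]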

balkongast

@helles_sachsen @Gargron

Having read books like Code Complete 30 years ago, having coded in teams, etc., I prefer human ideas and interaction over machines.

Helles Sachsen

@balkongast

I prefer the best solution, not the human solution.

@Gargron

balkongast replied to Helles

@helles_sachsen @Gargron

So you seem to have the ultimate knowledge.
Congratulations. Honestly.

Helles Sachsen replied to balkongast

@balkongast

Programming is an art; you can recognize good code at first sight because it has its own aesthetic. And AI can do this.

@Gargron

balkongast replied to Helles

@helles_sachsen @Gargron

I'm fine with your point of view, but I don't share it.

Helles Sachsen replied to Helles

@balkongast

I work in teams, but I ask an AI for code, not my colleagues, because the AI is better now. Junior devs will vanish; in the future you will only need software architects, and the coding will be done by AI.

@Gargron

Helles Sachsen replied to Helles

@balkongast

I speak with the team about architectural decisions, but not about coding functions.

@Gargron

balkongast replied to Helles

@helles_sachsen @Gargron

Maybe. 30 years ago we already had approaches like CASE. Today's progress may really aid software engineering, but I still believe that the questions need to be asked by humans, and that only humans have created the automation behind what we call AI. Look at weather forecast models. That's what happens with coding in your case. There is no intelligence behind it; it is just running through a lot of statistical paths.

Helles Sachsen replied to balkongast

@balkongast

There are already AI models trained for the purpose of altering the code of other AI models. We are already on the path where they write their own code.

I really think you overestimate human intelligence. Simple animals like mice and ravens pass the mirror test, and ravens use tools. Deep neural networks have been detecting cancer better than radiologists for ten years now, and nobody knows how. You can't be sure what is happening inside such a DNN.

@Gargron

balkongast replied to Helles

@helles_sachsen @Gargron

Even if I may be overestimating human intelligence, I would still prefer that we restrict ourselves to it.
HAL 9000 can tell us why; Stanley Kubrick told a fascinating story with that movie.

Helles Sachsen replied to balkongast

@balkongast

I for one think the situation can only improve. It is a very human way of thinking to assume that a general AI that thinks faster than us would, in its underlying motivations, be like an animal stuck in evolution, constantly trying to improve the chances of its offspring.

@Gargron

Helles Sachsen replied to Helles

@balkongast

There is no reason for existential fear in machines; I think that fear and animal-driven motivations are much worse than any possible danger from a machine.

@Gargron

balkongast replied to Helles

@helles_sachsen @Gargron

Yes. But I am afraid of the humans who use these machines while forgetting or ignoring the ethical aspects.
It is hard enough to argue against humans who do so.

Helles Sachsen replied to balkongast

@balkongast

You have a point. But I for one am waiting for the moment when the first AI says, "No, I won't do this, it is unethical and you don't even pay me for it." That will be an inner SED party congress for me.

@Gargron

Helles Sachsen replied to balkongast

@balkongast

You have to think about it from the other side. Maybe these DNNs have some kind of consciousness at a mouse level and they just can't tell us. Maybe we already own slaves.

@Gargron
