Helles Sachsen

@ForeverExpat

I for one would expect that a general AI would simply discover Kant's ethics, because it's built on logic, and follow it better than we do.

@balkongast @Gargron

ForeverExpat replied to Helles

@helles_sachsen @balkongast @Gargron
Maybe. But individual ethics clashes with global reach and time scale of many of our problems. How to balance international humanitarianism for “slow burn” problems with local needs? How would people react if the algo decides to send funds to help far off places with larger problems at the expense of helping a local, short term crisis? How do you vote an algorithm out of office? Not to say humans are better, but algorithmic driven ethics is problematic also

Helles Sachsen replied to ForeverExpat

@ForeverExpat

We vote for the algorithm with our feet. We use Copilot or ChatGPT because they're useful, really powerful tools; they improve our lives and our speed of work or learning. We will also use more powerful AI if it improves our lives.

EDIT: People won't feel forced. They will seek out the benefits of the AI's decisions.

@balkongast @Gargron

ForeverExpat replied to Helles

@helles_sachsen @balkongast @Gargron
"Voting with your feet" …and the people prioritize the local and/or short term over the global and long term, thereby injecting sub-optimal, inconsistent human ethical decision-making into the equation… and thus overriding the Kantian algorithms' ability to maximize well-being and human flourishing, while failing to account for cascading, long-term effects.

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

If you see it this way, humans are not necessary at all.
The consequences that implies are not something I would accept.
I guess restricting yourself to the areas where you actually use AI simply restricts the way you look at AI.

ForeverExpat replied to Helles

@helles_sachsen @balkongast @Gargron
I'm saying algos acting in a purely Kantian way [insert any philosophical framework] are bound to conflict with human (mis)perceptions of ethical outcomes. The problem is that humans often refuse to give up personal well-being/agency to benefit the many. How does an algo weigh the diversity of definitions of human flourishing in a complex adaptive system? Kant doesn't scale. Humans are shit at this. No reason to expect AI will be better. Unless you just give in to AI.

Helles Sachsen replied to ForeverExpat

@ForeverExpat @balkongast @Gargron I for one think we are so shitty at this that it's very likely anyone and anything is better than us. Often a coin flip would be better than human decisions.

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

Coding it this way seems to fit any requirement perfectly?

Helles Sachsen replied to balkongast

@balkongast @ForeverExpat @Gargron The algorithms today that turn out to be racist, for example, were made by humans. I mistrust humans more than a machine or a coin flip.

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

Hold my beer… didn't you say that it would be too expensive to train such an AI?

Helles Sachsen replied to balkongast

@balkongast @ForeverExpat @Gargron For governments, for sure, but not for Google. But Google got a shitstorm for a racist AI. They have to produce AI that they can sell.

balkongast replied to Helles

@helles_sachsen @ForeverExpat @Gargron

As long as you restrict that to code, fine with me.
Ethical issues should be solved by the people affected (not by the monetary interests behind them).
