Dr. Quadragon ❌

The tragedy is, there's nobody else to do it for us so far.

5 comments
Dr. Quadragon ❌

@shuro It's still built by humans based on the data gathered and produced by humans, therefore inheriting a lot of bias, both implicit and explicit.

So in many ways, it may as well be humans, still.

Шуро
@drq Maybe real AI can have some sort of bias correction.

Something like a check: "was this a popular decision among humans before, and if so, is there a less popular but otherwise equivalent alternative? If there is, prefer it instead."
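A minimal sketch of what such a bias-correction check could look like, purely as an illustration of the idea in this comment; the Candidate fields, the popularity values, and the score_tolerance threshold are all hypothetical assumptions, not anything from the thread:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    decision: str        # hypothetical decision label
    popularity: float    # assumed fraction of past human choices (0..1)
    score: float         # assumed measure of how well it fits the task

def prefer_less_popular(candidates, score_tolerance=0.01):
    """Among candidates whose scores are effectively equal, pick the one
    humans chose least often, as a crude bias-correction heuristic."""
    best_score = max(c.score for c in candidates)
    matching = [c for c in candidates if best_score - c.score <= score_tolerance]
    return min(matching, key=lambda c: c.popularity)

# Example: two equally good options, the less popular one is preferred.
options = [Candidate("route_a", popularity=0.9, score=0.95),
           Candidate("route_b", popularity=0.2, score=0.95)]
print(prefer_less_popular(options).decision)  # route_b
```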
Dr. Quadragon ❌

@shuro But that's not the way to validate whether a decision is actually good. And a lot of decisions (most of them, in fact) are popular for the simple reason that they are good. Sometimes it's the edge cases where we have problems, and it's almost always those edge cases that reveal paradoxes and anomalies and invalidate the whole system.

Шуро
@drq Well, it depends on the objective.

If we want to just replace humans at the wheel, make the autopilot work just like a real pilot but more efficiently - then yes.

If we want AI to make better decisions - we have to admit that some human decisions were inherently wrong simply because of our nature. So if the AI is capable of generating a different solution and it works (in terms of logic), then it should be preferred, just to see if it is more successful (even if it seems unnatural and is thus rarely tried by human operators). And it should learn from these practical results (sketched below).

The possible downside of this approach is the image above :)
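One way to read "try the unusual option and learn from the practical result" is a simple explore-and-update loop. The sketch below is only an illustration of that reading; the history records, decision names, and explore_prob value are hypothetical assumptions:

```python
import random

# Hypothetical outcome records: decision -> list of observed success flags.
history = {"human_favourite": [True, True, False, True],
           "unusual_option": [True]}

def success_rate(outcomes):
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def choose(decisions, explore_prob=0.1):
    """Mostly pick the decision with the best observed success rate,
    but occasionally try the least-used one to gather practical results."""
    if random.random() < explore_prob:
        return min(decisions, key=lambda d: len(history.get(d, [])))
    return max(decisions, key=lambda d: success_rate(history.get(d, [])))

def record(decision, succeeded):
    """Learn from the practical result of acting on the decision."""
    history.setdefault(decision, []).append(succeeded)

d = choose(["human_favourite", "unusual_option"])
record(d, succeeded=True)  # the outcome would come from the real world
```

The downside mentioned above is exactly the exploration step: sometimes the rarely-tried option gets chosen and fails.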