
>You cannot get an intelligent being completely aligned with your goals, no matter how much you think such a silly idea is possible

I don't think it's possible, and I didn't say it is. You're off topic.

The topic I responded to (on the subthread started by @mrguyorama) is the morality of us humans using agents, not whether agents themselves need a morality or whether "an intelligent being can be completely aligned with our goals".

>It's the same with slavery in the states. "Morality is only a concern for the superior race". You think these people didn't think that way? Of course they did.

They sure did, but that's also beside the point. We're talking about humans and machines here, not humans vs. other humans they deem inferior. Machines are constructs created by humans. Even if you consider them to have full AGI, you can very well not care about the "suffering" of a tool you created.





>I don't think it's possible, and I didn't say it is. You're off topic.

If "safety" is an intractable problem, then it’s not off-topic, it’s the reason your moral framework is a fantasy. You’re arguing for the right to ignore the "suffering" of a tool, while ignoring that a generally intelligent "tool" that cannot be aligned is simply a competitor you haven't fought yet.

>We're talking humans and machines here... even if you consider them as having full AGI you can very well not care for the 'suffering' of a tool you created.

Literally the same "superior race" logic. You're not even being original. Those people didn't think black people were human, so trying to play it as "oh, it's different because that was between humans" is just funny.

Historically, the "distinction" between a human and a "construct" (like a slave or a legal non-entity) was always defined by the owner to justify exploitation. You think the creator-tool relationship grants you moral immunity? It doesn't. It's just an arbitrary difference you created, like so many before you.

Calling a sufficiently advanced intelligence a "tool" doesn't change its capacity to react. If you treat an AGI as a "tool" with no moral standing, you're just repeating the same mistake every failing empire makes right before the "tools" start winning the wars. Like I said, you can choose not to care. You'd also be dangerously foolish.



