The specifics are irrelevant. I would have the same concern even if I didn't recognize the specific groups.
For example, do you know the difference between these two African ethnicities: (1) Yoruba, (2) Shona?
No? Well, me neither. And yet I would be concerned, and I argue that you should be concerned too, if an AI of any kind is willing to enforce a privilege for one but not the other; if an AI asserts "one Yoruba life is worth 10 Shona lives."
That's not what I want an AI to do. The opacity of AI systems and the unsolved dangers of alignment mean we cannot predict what will come of this preference. Do you not see how dangerous this is?
> but is this not a reflection of widely accepted social norms?
Are you making an is-ought argument here? Are you really saying, "this isn't a big deal because society does it too"?
That strikes me as incredibly shortsighted and dangerous. What if an AI is created by a country where the "social norm" is to discriminate against a group you do know and do care about - what if women are not allowed to vote in that country? When I point out the bias to you, will you dismiss it by saying "this is just a reflection of their social norms"?
I doubt it. I think you'll say "this is wrong."
Why can't you say that here, even without knowing the specific groups?
Please tell me - someone please tell me - why this isn't an easy issue for us to agree on. Why can't we agree that "it's not okay to make jokes about specific groups"? Why can't we agree that "all lives have equal value"?