Talk:Ethics of artificial intelligence

From Wikipedia, the free encyclopedia

Challenges vs risks

Is there any justification for describing the (negative) risks of AI as "challenges"? Crossing the road on foot puts you at risk of being hit by a car. Taking a medicine prescribed to you by a doctor puts you at risk of side effects. These are risks, not challenges. The use of "challenge" here sounds very much like WP:SOAP. Boud (talk) 13:23, 23 May 2024 (UTC)[reply]

Should a section concerning copyright and other issues with the ownership and value of the source data of AI models based on large datasets be added? It is not a traditional topic of AI ethics, but it is very relevant to the current wave of generative AI.

Certainly this topic has raised a lot of discussion, and for a reason. This wave of generative AI has been lauded as revolutionary and gained a ton of investments, but the invaluable source material has been collected in somewhat controversial ways, at least in the eyes of some people. 176.72.38.230 (talk) 16:17, 12 June 2024 (UTC)[reply]

@176.72.38.230 That would be a matter of finding reliable secondary sources (see WP:RS) to support such a section. Keep in mind that RS discussion of copyright legal issues would not necessarily support a section here relating to the ethics of AI. WeyerStudentOfAgrippa (talk) 18:53, 12 June 2024 (UTC)[reply]
@WeyerStudentOfAgrippa Yes, you're right about sourcing. However, copyright and other issues concerning the ownership and value of training data are not solely legal issues. Theft, for example, is a legal concept, but also an ethical one. Most consider it unethical to steal. Theft is illegal because it is unethical! That is why I specifically wrote: "and other issues concerning the ownership of the source data".
I believe many people have strong feelings about the current way of gathering data for generative AI not because they think it is illegal but because they feel it's morally wrong. The discussion however often revolves around the legal side because those people are trying to campaign for what they feel is moral, and in our society you have to argue via law.
But I do not have time for finding sources and writing proper Wikipedia text just now. I just wanted to throw this idea here. Wouldn't sources showing that a remarkable number of people think there is something unethical in the way source data is collected for AI be enough to show that this is an ethical concern with AI? 176.72.38.230 (talk) 19:07, 12 June 2024 (UTC)[reply]

AI's "influence" in the arts and literature domains deserves to be discussed a bit more?

Arts and entertainment professionals are more susceptible to being replaced by AI models than most others. There is a possibility of having an entire section dedicated to this, with current examples from pop culture (media, advertising, entertainment, cinema, literature). Gaia1811 (talk) 17:39, 4 December 2024 (UTC)[reply]

It's a great idea and you should do it. It's a matter of pulling together good sources, but I would think there are good sources on this. JArthur1984 (talk) 17:53, 4 December 2024 (UTC)[reply]

Climate emergency

Where do we have coverage of the relation between LLM development and acceleration of the climate emergency via increased water and energy usage? Boud (talk) 15:55, 5 December 2024 (UTC)[reply]