| aliases | tags |
| --- | --- |
|  | ethics |
Are we all the same from the point of view of AI? What stereotypes are reproduced and perpetuated by AI, and how can we counter this? Examples: the chess-playing man as a symbol of intelligence and abstract thinking; women associated with physical appearance (voice, looks, whiteness) and with care work -> yet care robots are often depicted as animals instead, e.g. the strong bear. Why?
AI and "whiteness" -> How is AI represented? How can we understand "data", and how can we prevent bias? What infrastructure do we want?
"AI has some things to do with intelligence, but hardly anything to do with understanding and nothing remotely to do with reason." (Precht) What does this mean for our concept of intelligence? Are other forms of intelligence, such as social intelligence, becoming more important? Is intelligence no longer just the privilege of a chosen few?
Are AIs training themselves to become ever more "intelligent"? Could they then develop a self-image, a consciousness, and interests of their own? Will a strong AI, a "superintelligence," emerge that autonomously, in feedback loops, computes an ever better model of the world and of itself? What do we need AI for? What do people really want to use computers for? Who, in fact, is "the human being" that the IT gurus are concerned with? What sociological concept or theory of human coexistence stands behind this?
"The purpose of the penetration of the world and people with the help of AI is not to make human lives better, but to make profits. Such a highly expansive economy and the accompanying transformation of our living world are unquestioningly considered a given by most decision-makers and visionaries of the digital future." (Precht)
Are people themselves becoming a commodity? Platforms are growing in importance and becoming markets themselves. (A new form of governance?) What new policies do we need? (Transparency, monopolization, data protection) AI is based on a continuous stream of data that has so far flowed primarily in one direction, from the individual to the provider. What can services look like that strive for traceability, transparency, and fairness in data use? Are people the subject or the object in the development of AI? What role does China play in an international comparison?
Is there anything wrong with seeing work as a constituent value of our society? Opportunity and risk? What does AI mean for our concept of labor?
Kierkegaard: Individual happiness might in fact depend much more on the question "What does technical progress mean for each individual?" than on "What does it mean for mankind?" People feel happiness not through objective knowledge of the world or of the self, but through the way they relate to themselves. Does AI change anything in the mental state of the individual? How does AI change social coexistence? Changed identities - the domains of individual, human identity formation and socialization: To what extent AI will gain access to this most intimate, and therefore most important, level of society depends technically on how much data we generate about ourselves, how we handle it, and to what extent we understand how AI will use this data. -> Privacy? -> What effects will AI have on the formation and development of identity? -> The manipulative character of AI
"To have to optimize oneself as a species because, on the one hand, it is deficient and 'obsolete' and, on the other hand, it is doomed to expand, is a dangerous ideology that bears some similarities to the totalitarianisms of the 20th century." (Precht) AI-based technologies are enabling a new level of comfort and convenience. How can this new user experience be designed in such a way that users retain a basic level of autonomy and capability despite maximum convenience?
Can something be of value that itself feels no values? "Artificial intelligence does not feel any values. Even if you try to program so-called values into it, it has none. For a value that is not felt at the same time is not one." (Precht) Can concepts of value be programmed at all? (Racism and sexism are reproduced in programming.) If AI does not feel values of its own, who bears responsibility for its decisions? (What is responsibility in a complex sociotechnical system anyway?) Autonomous decision-making: to what extent, and when, may machines make autonomous decisions on their own, and what are the consequences? Where is the limit of what we are allowed to do? How should we deal with concrete dilemma situations?
When machines become creative, do humans cede one of their last territories to technology? Is that problematic at all? Doesn't this conceal a very anthropocentric view of the world, with the human as the only creative being? How does artificial creativity come about, and how can it be used? Where does it meet its limits? Can algorithms be empowered to generate ideas and artifacts on their own? Since creative ideas and artifacts always require positive selection, i.e., they must be evaluated as "interesting" or "valuable," but algorithms have difficulty evaluating their own output, artificial creativity reaches its limits here. Does randomness, the surprising and the unpredictable, disappear as algorithms become more accurate at predicting and suggesting what to do? Can AI generate creativity and curiosity when every curiosity is immediately satisfied and everything humans compose, craft, and conceive is done better by the machine? Do the alternative/artistic/critical approaches to AI and robotics have any impact, or do they fizzle out or reach only an intellectual elite? Is there an exchange between business and artists?

How should we deal with dilemma situations? "Nowhere else than in morality is it so clearly evident that humans are the 'other' of artificial intelligence. Morality without subjectivity is not morality, and subjectivity without morality is not subjectivity. Moral judgments do not consist only of results or even 'solutions'; the path, the act of deciding, is itself of paramount importance." (Precht) Decision-making by computers and robots is per se extra-moral. How can decision-making processes be made transparent? "The more self-learning technology determines us and our coexistence, the less freedom there is for individual decisions and thus for the creation of meaning." (Precht) How should we shape the relationship between AI and human freedom?
"Do that, Google." The smart assistance systems based on speech recognition that are currently advertised everywhere will change our conversational behavior. Do we simply accept these changes in communication, or do we use AI to communicate more empathetically, kindly, and openly with our environment in the future?