AI and your Critical Thinking Curriculum

Educational systems need to prepare individuals to be professionally proactive in a world whose functioning is sustained by invisible machine learning algorithms. We need to focus on an IT and Critical Thinking curriculum that prepares teachers and students for rapid progress in a digitally driven world.

In our workshop, Unlearning Learning, at the Ambient Revolts conference, we are exploring these topics from various perspectives, with the ethics of collecting data and the impact of opaque AI algorithms on one’s life as central themes. We also offer a concise background on the field and visions of the future classroom.

Would a Robot Teacher be loved by students? Would deep gamification based on thorough quantification make learning more engaging and fun? Finally, will learning become a solo trip or will collaborative engagement define the future of learning?

We believe experiences are central to learning, which is why for this project we devised a social experiment. BGCON participants engaged with the ultimate question: should your life be sorted by machine learning algorithms without your agency?

There have been many advances in the field of artificial intelligence (AI) in recent years, leading to inventions we previously never thought possible. Computers and robots now have the capacity to learn how to improve their own work, and even make decisions. (This is done through an algorithm, of course, and without individual consciousness). All the same, we must not fail to ask some fundamental questions. Can a machine think? What is an AI capable of at this stage of its evolution? To what degree is it autonomous? Where does that leave human decision-making?

More than ushering in a Fourth Industrial Revolution, AI is provoking a cultural revolution. It is undeniably destined to transform our future, but we don’t know exactly how yet, which is why it inspires both fascination and fear.

To better understand AI, we have to learn what an algorithm is. According to a glossary provided by the UNESCO Courier “Artificial Intelligence: The Promises and the Threats” (July–September 2018), the word algorithm “is derived from the name of the ninth-century Persian mathematician, Muhammad ibn Musa al-Khwarizmi, who introduced decimal numbers to the West. Today it refers to a series of instructions that must be executed automatically by a computer. Algorithms are at work in all areas, from search-engine queries to recommendation systems, and financial markets.”
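To make that definition concrete, here is a minimal Python sketch of one such series of instructions: a toy recommendation routine that ranks items by the tags they share with a user’s history. The data and scoring rule are invented purely for illustration.

```python
def recommend(user_history, catalogue, top_n=3):
    """Rank catalogue items by how many tags they share with what the
    user has already consumed -- a toy recommendation algorithm."""
    seen_tags = {tag for item in user_history for tag in item["tags"]}
    scored = [
        (len(seen_tags & set(item["tags"])), item["title"])
        for item in catalogue
        if item not in user_history
    ]
    return [title for _, title in sorted(scored, reverse=True)[:top_n]]

history = [{"title": "Intro to AI", "tags": ["ai", "education"]}]
catalogue = [
    {"title": "Ethics of Data", "tags": ["ai", "ethics"]},
    {"title": "Baking Bread", "tags": ["cooking"]},
]
print(recommend(history, catalogue))  # ['Ethics of Data', 'Baking Bread']
```

Real recommendation systems are vastly more elaborate, but the principle is the same: data in, a fixed sequence of steps, a ranking out, with no human judgment in the loop.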

Ethical Dilemmas

Should AI be in charge of your education and career path?

Miles Berry, principal lecturer in computing education at the University of Roehampton in the UK, imagined this conversation:

Teacher: “The AI says your daughter should do this course.”

Parent: “Why?”

Teacher: “Nobody knows.”

Opaque AI algorithms could one day pick the courses we take, thereby suggesting the career path that would be best for a student to follow. But what does ‘best’ mean in this context, and whose interests are actually served? Even if a student could decide to follow their heart instead of an AI’s suggestion, how likely would they be to do so? Also, what would happen to less profitable occupations – would they disappear?

And what if this proves bad for us in the long run?

COLLECTING DATA

While Western European and American educators are still discussing how AI can change education and where the grey areas lie, China is already testing those areas. Hangzhou No. 11 High School, located in the eastern part of the country, uses advanced technologies to constantly monitor students’ behavior.

A camera placed atop the blackboard scans the classroom every 30 seconds, trying to pick up every pupil’s mood by reading their facial expressions. It registers whether someone is happy, sad, afraid, upset, angry or disgusted. “The information collected by the system is analyzed and reported to teachers so they can better supervise the performance of their students,” the Chinese media reports.
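Media reports do not describe the system’s internals, but in broad outline such a monitoring pipeline could look like the Python sketch below. Every function here is a hypothetical placeholder standing in for components we can only infer (camera access, face detection, emotion classification); this is not the school’s actual software.

```python
import time
from collections import Counter

SCAN_INTERVAL_SECONDS = 30  # the reported scan frequency
MOODS = ["happy", "sad", "afraid", "upset", "angry", "disgusted"]

def capture_frame():
    """Hypothetical placeholder: grab an image from the classroom camera."""
    raise NotImplementedError

def classify_faces(frame):
    """Hypothetical placeholder: detect faces and guess each pupil's mood.

    Would return a mapping of pupil id -> one of MOODS."""
    raise NotImplementedError

def report_to_teacher(summary):
    """Hand the aggregated moods to the teacher's dashboard."""
    print("Mood summary for this scan:", dict(summary))

def monitoring_loop():
    while True:
        frame = capture_frame()
        moods = classify_faces(frame)               # pupil id -> mood
        report_to_teacher(Counter(moods.values()))  # e.g. {'happy': 17, 'sad': 3}
        time.sleep(SCAN_INTERVAL_SECONDS)
```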

One of the students is quoted saying:

I don't dare be distracted since the cameras have been installed in the classrooms. It's like a pair of mystery eyes that are constantly watching me.

A student’s disgust towards a teacher or their subject could even have lifelong implications. The experiment carried out at Hangzhou No. 11 High School might at some point become part of China’s social credit system, which was launched as a pilot scheme in 2014. The system assesses a person’s reputation by looking at their behavior. Citizens with a good score can see a doctor without lining up or can get a loan. They could also find a better mate on dating websites, because such platforms encourage people to share their score.

Several researchers who have analyzed surveillance have found that when we know we are being watched, we tend to alter our behavior. “In surveillance capitalism, rights are taken from us without our knowledge, understanding, or consent, and used to create products designed to predict our behavior,” Shoshana Zuboff, professor of business administration at Harvard Business School, said in an interview.

Also, when a powerful state adopts a surveillance technology, other countries may follow its example, using the precedent to legitimize their own decisions.

“In an age of terror, our government has shown a keen willingness to acquire this data and use it for unknown purposes,” wrote Neil M. Richards, professor of law at Washington University School of Law, in his paper The Dangers of Surveillance. “Although we have laws that protect us against government surveillance, secret government programs cannot be challenged until they are discovered.”

Beta: Open Source Library

A collection of material to help explain AI and the development of technology.

Who controls AI?

Entities such as corporations or governments with large resources might want to control how people are educated. A country’s dictator could decide, for instance, to alter school curricula so that more people are funneled towards the military. With the use of AI, political systems could also more efficiently influence young people’s ideologies and their understanding of what is right and wrong. History offers plenty of examples of this, but the less transparent nature of some algorithms might make such actions more difficult to prove.

“If,” as Neil Selwyn, a Professor in the Faculty of Education at Monash University, says, “implementing an automated system entails following someone else’s logic then, by extension, this also means being subject to their values and politics.”

Should we let opaque machine learning algorithms change our behavior?

Mathematician Cathy O’Neil talks about opaque algorithms in her book, Weapons of Math Destruction. She says that such tools “define their own reality and use it to justify their results”.

She discusses a system used to assess teachers’ performance based on opaque criteria. Some teachers, suspecting that their students’ grades would affect their own employment prospects, artificially inflated their classes’ results. Teachers who were honest about their students’ results, often in poorer areas, lost their jobs because they were rated as low performers.

“An algorithm processes a slew of statistics and comes up with a probability that a certain person might be a bad hire, a risky borrower, a terrorist, or a miserable teacher,” writes O’Neil. “That probability is distilled into a score, which can turn someone’s life upside down. And yet when the person fights back, “suggestive” countervailing evidence simply won’t cut it.”
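As a concrete, entirely invented illustration of that pipeline: the sketch below folds a few proxy attributes into a probability with a logistic function, collapses the probability into a score, and makes a consequential decision at a hard threshold. The feature names, weights and threshold are assumptions made up for the example, not taken from any real scoring system.

```python
import math

# Invented proxy features and weights -- not from any real scoring system.
WEIGHTS = {
    "zip_code_risk": 1.2,    # where you live stands in for who you are
    "employment_gaps": 0.8,
    "unpaid_bills": 1.5,
}
BIAS = -2.0
THRESHOLD = 0.5  # above this, the person is flagged

def risk_score(person):
    """Combine proxy features into a probability via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * person.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def decide(person):
    p = risk_score(person)
    return f"score={p:.2f} -> " + ("flagged" if p > THRESHOLD else "cleared")

print(decide({"zip_code_risk": 1.0, "employment_gaps": 1.0, "unpaid_bills": 1.0}))
# score=0.82 -> flagged
```

The person never sees the weights, and pointing out that the proxies are poor stand-ins for their actual behaviour is exactly the “suggestive” countervailing evidence O’Neil says won’t cut it.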

Could we teach AI algorithms to make moral decisions?

Several voices within the technology community have spoken about teaching engineers and data scientists philosophy, psychology, and ethics.

“[I]f we have Stem education without the humanities, or without ethics, or without understanding human behaviour, then we are intentionally building the next generation of technologists who have not even the framework or the education or vocabulary to think about the relationship of Stem to society or humans or life,” Mitchell Baker, executive chairwoman of the Mozilla Foundation, told The Guardian.

People building algorithms should pledge a Hippocratic Oath.

Mathematician Cathy O’Neil claims that people building algorithms should pledge a Hippocratic Oath, just like doctors do. She takes inspiration from two financial engineers, Emanuel Derman and Paul Wilmott, who sketched a few principles following the market crash in 2008:

  • “I will remember that I didn’t make the world, and it doesn’t satisfy my equations.
  • Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.
  • I will never sacrifice reality for elegance without explaining why I have done so.
  • Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.
  • I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension”

How can we avoid the use of algorithms by malicious entities?

In recent years, different actors have been accused of altering election results, spreading fake news, and manipulating large populations. A country’s education system is connected to its ability to innovate – what if a powerful entity were able to interfere with a country’s education system?

Experts from several institutes including the Future of Humanity Institute, the Centre for the Study of Existential Risk, and OpenAI have published a study in which they present some of the malicious ways in which AI could be used. These researchers believe that technologists, policymakers, ethicists, and the general public need to discuss the implications of algorithms. AI researchers should acknowledge how their work can be used maliciously, they say, urging that ethical frameworks for AI need to be created and followed.

Social Experiment

Do you want algorithms to decide your career path?

The aim of the experiment was to show how opaque algorithms orchestrate processes that can decide a person’s course in life, feeding personal information into presumptive, biased models.

Social positions and stereotypes shape our lives. Coming from a broken home or a poor neighbourhood can be a ticket to jail or towards the military. Owning more than one gadget and liking frozen pizza may qualify you for Silicon Valley. In our social experiment, even if you give the maximum number of yes answers to the questionnaire, you’re not in for a prize – you’re going straight to jail. Does jail in this context act as a metaphor for misfits, or is the algorithm in fact bogus?

Myers–Briggs-style tests are already widely used in HR. But more advanced, quantified individual mapping will increasingly become the norm.

Methodology

We gathered 30 random people at BGCON and asked them to stand in a line. A moderator sat in front of them and read out the questions. Participants had to take a step forward if their answer was yes and a step back if it was no – just like in the Game of Privilege.


Over the course of the experiment, they became scattered across the courtyard. An assigned Divider decided the borders of the three groups: Jail, the Military and Silicon Valley.

Then the selected people discussed in focus groups, with the aid of Mitigators, their reasons for being there and their satisfaction or dissatisfaction with the impact these algorithms had had on their lives.

Rationale behind our algorithm

When working on our algorithm, we wanted to sort the group of volunteers into three distinct groups: the military, jail and Silicon Valley. Why these groups? We consider them helpful, if extreme, representations of common life paths.


These groups were defined by some key attributes and assumptions. For example, people entering the army might have a tendency to play with guns, like to have things in order (making their bed every day), or enjoy playing military games. We therefore created a set of questions representative of each group that might help us cluster our participants better.

Based on their answers to these questions, we were able to split participants into these distinct groups. This division was by no means perfect, but that was all part of the plan. We wanted to highlight the fact that the algorithms affecting our daily lives can seem like opaque black boxes to the people affected by them. And these people have little way of altering their behavior or determining the course down which they are pushed.
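For illustration, here is a minimal Python sketch of the kind of sorting logic we had in mind. The questions, their group tags and the tie-breaking rules below are invented stand-ins (the live version relied on physical steps and a human Divider), but the sketch shows how a handful of yes/no proxies can push someone onto a life path.

```python
from collections import Counter

# Invented stand-in questions, each tagged with the group it is meant to
# indicate -- not the exact questionnaire used at BGCON.
QUESTIONS = [
    ("Do you make your bed every morning?", "Military"),
    ("Have you ever played with guns?", "Military"),
    ("Have you taken part in an illegal protest?", "Jail"),
    ("Have you ever skipped paying a fare?", "Jail"),
    ("Do you own more than one gadget?", "Silicon Valley"),
    ("Do you like frozen pizza?", "Silicon Valley"),
]

def sort_participant(answers):
    """answers: one boolean per question, in the order of QUESTIONS.

    Tally the yes-answers per group and return the group with the most.
    The deliberately bogus edge cases (saying yes to everything, or a tie)
    send people straight to Jail, echoing the experiment's design."""
    if all(answers):
        return "Jail"
    tally = Counter(group for (_, group), yes in zip(QUESTIONS, answers) if yes)
    if not tally:
        return "Jail"
    best = max(tally.values())
    winners = [group for group, count in tally.items() if count == best]
    return winners[0] if len(winners) == 1 else "Jail"

print(sort_participant([True, True, False, False, True, False]))  # Military
```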

Welcome to the Military!


It’s unfair, we should rebel! It depends what military we are in... Maybe like a guerrilla group. Can we all collectively decide that?

“What is the military? What brought us here? Survival, solving problems, maybe we said we collaborate well.”

“I am not too surprised, but I don’t want to join, sorry!”

“It makes sense. I did some operational research. So the search suggested that I buy some tactical shoes.”

Most of the people in this group were surprised to have been chosen for the military. They suggested that free will and consent should play an important role. Some attempted to make sense of which answers had brought them there; they were trying to decipher the algorithm. But what if one day the country calls you to arms? Is there still time for questions when the algorithm has picked you?

Welcome to Silicon Valley!

I feel like I can't tell what the algorithm was imagining.

 


“This experience seems similar to a focus group. Responding to questions about a product, but the product is unclear.”

“The result also reminds me of a scatter plot graph; I just could not tell what the axes were.”

“Many of us ended up fairly close to the dividing line. Feels like in real life, people get divided by very little.”

People seemed to be really confused at first, especially those who didn’t consider themselves tech-savvy. Generally, they had a pretty vague idea of what belonging to Silicon Valley might mean (especially given that they did not know what the other groups were). They felt that the questions they were responding to were somewhat random. Members also felt that it would have been quite easy to end up in a different group, which they found scary: small changes in the way they responded to questions might starkly affect their destiny. Some members also compared the experiment to parallel situations in focus groups or a similar experiment called ‘What is Privilege?’.

Welcome to Jail!

Participating in an illegal protest doesn’t make so much difference between myself and perhaps someone who didn’t. I also never played with guns. And I am here. None of the questions you asked were reason enough to get someone in jail.

 


“Humans are very complex. You can do one thing and do another thing that is good, bad, legal or illegal. And this inconsistency is on the one hand what makes humans very beautiful but also makes it seem that you cannot base it purely on several past decisions because you can’t anticipate how humans will behave in the future. It’s not fair in that sense.”

“None of those questions really answered the question of whether or not someone is a good person. That’s really unclear to me. There might be some good people there or vice versa.”

“I have a university degree and I stepped forward.”

“I don’t have one and I’m still here.”

People got it. There is little distinction between what is considered punishable, bad behaviour and what isn’t. People who are capable of good things are capable also of bad, and vice versa. Algorithmic bias feels arbitrary. And it is!

Gamification

One of the most common uses of algorithms in educational apps, but also increasingly in the classroom, involves what is referred to as the gamification of learning. Gamification is a term drawn from behavioral economics to describe the application of a stripped-down series of game features in non-game contexts. While this form of edutainment can be found in earlier educational video games, such as the 1983 Math Blaster!, gamification is a growing trend in the technologically-enhanced classroom with the ideological push for “personalized” learning tracks. In this context, students are encouraged to engage with algorithms, whether on “game-based” learning platforms, such as Kahoot!, on language-learning applications such as Duolingo, or in “multiplayer classrooms.”

A typical feature of gamification is the use of points, badges, reputation systems and even leaderboards (through which students rank their progress against their peers). This feature is depicted as supporting personalized tracks (even neoliberal “fast tracks”) toward educational goals. What is downplayed, however, is the way this victory-oriented mindset supports an ethos of competition and works against collaborative learning in the classroom among students of different academic levels, skill sets and degrees of emotional maturity. Algorithm-driven learning furthermore assumes that knowledge is pre-given, hierarchized in ‘levels’ and can be mastered through repetition, rather than enabling innovative instruction through which students might reach learning objectives along various paths and with diverse methods. Gamification also valorizes a low-level kind of activity/interactivity (clicking, swiping, moving up levels along limited, predetermined pathways), thus potentially devalorizing moments of inactivity, self-reflection, even boredom, as well as different modes of connection and inter/activity.
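Technically, these mechanics require remarkably little. The sketch below, with invented point values and badge thresholds, is enough to turn logged learning activity into points, badges and a peer-ranked leaderboard.

```python
from collections import defaultdict

# Invented point values and badge thresholds -- assumptions for illustration.
POINTS = {"quiz_correct": 10, "lesson_completed": 25, "daily_login": 5}
BADGES = [(100, "Bronze"), (250, "Silver"), (500, "Gold")]

scores = defaultdict(int)

def record(student, event):
    """Award points for a logged learning activity."""
    scores[student] += POINTS.get(event, 0)

def badges_for(student):
    """Badges are just thresholds crossed on the points total."""
    return [name for threshold, name in BADGES if scores[student] >= threshold]

def leaderboard():
    """Rank students against their peers -- the competitive core of gamification."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

record("amira", "lesson_completed")
record("amira", "quiz_correct")
record("ben", "daily_login")
print(leaderboard())        # [('amira', 35), ('ben', 5)]
print(badges_for("amira"))  # [] -- no badge yet
```

Everything pedagogically substantial (why an answer was wrong, what a student might explore next) sits outside this loop, which is precisely the critique above.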

Gamification should be distinguished from games, and especially the genre of “serious games”, which have much to offer future schools; indeed, some schools have already organized their curriculum around game play. It is not surprising that gamification has been critiqued by game designers themselves for employing the least creative aspects of contemporary games, including repetitive “grinding” (the performance of the same moves over and over to, say, kill AI-produced monsters and unlock content or advance one’s character, also referred to as “treadmilling” or “farming”), the competitive emphasis on achievement and the adrenaline-driven focus on goals. Gamification, which is based on fixed objectives and automated feedback loops, is not about “play”, which is a much more open, unpredictable, uncertain and potentially actually “fun” process (not just less boring than filling out a workbook page).

CONTEMPORARY USES OF AI IN EDUCATION

It is expected that artificial intelligence in U.S. education will grow by 47.5% between 2017 and 2021. At present, AI is used in a wide variety of ways in the classroom, and technology companies claim these tools can help lighten teachers’ workloads to allow them to focus on helping kids. Meanwhile, there is a shortage of teachers in often underfunded schools, with fewer people electing to go into education and many leaving the profession. AI is helping teachers to grade work and transcribe their words in real time, to help kids who are missing school keep up and to spot cheating in tests. There are apps and robots to help isolated kids or kids who have autism, and the University of Michigan’s ECoach now provides personalised advising assistants to help students succeed in their classes.

Beyond these specific uses, there is also an educational ethos emerging in Silicon Valley which not only harnesses AI-aided learning but also looks to respond to the ways in which artificial intelligence, and technology more broadly, is reshaping the world, and to ensure the preservation of the elite status of some families’ offspring. These schools specialise in “personalisation” – tailoring education to the minute-by-minute needs of each child through individualised video feeds, reading materials and tests. The child’s advancement in each subject is visualised, incentivising them to stay ahead. Whether not-for-profit, private or even public, these ‘AI schools’ prioritise the different skills deemed necessary in the new world: Tahoe Expedition Academy, for example, trains students in “constructive adversity” to help them become the ones to rule the machines and not be ruled by them. Their social skills, empathy and ability to get on well with others, in a world where machines monopolise knowledge and skills, will be what sets them apart. Mark Zuckerberg and his partner Priscilla Chan are leading the push to create an education as individual as each child, aiming to expand these experiments beyond the confines of Silicon Valley.

Existing Uses of AI

Tools

  • The Presentation Translator translates what teachers say in real time so that students can read it
  • The University of Michigan ECoach provides personalized advising assistants that help students succeed in specific university classes and degree paths
  • Apps like Zipgrade and Essay-Grader mark students’ work
  • Globalising education to connect with classrooms in other countries
  • Robots to help children keep learning when they are too ill to go to school
  • Helping children with autism
  • No Isolation is a Norwegian app that helps children with long-term illnesses as well as people struggling with loneliness in the general population
  • Spotting cheating in applications for jobs and in tests

Individualized education

The form of education driven by advances in AI emphasises skills and social abilities over and above the knowledge that resides with a teacher:

  • Zuckerberg and his partner Priscilla Chan are leading the push to create an education as individual as each child, aiming to expand the experiments beyond the confines of Silicon Valley
  • These personalised schools already exist in the Bay Area in not for profit, public and private forms
  • Playlists of tailored videos, reading materials and tests
  • Interestingly, it seems to benefit most those students who are already tearing ahead: Tahoe Expedition Academy offers students “constructive adversity” to help them become the ones to rule the machines and not be ruled by them
  • At the core of Tahoe’s philosophy is the idea that ‘knowledge’ and rote learning will be monopolised by machines – so humans in this upper crust need to excel in interpersonal communication in order to retain a leading edge
  • Tom Hulme, General Partner at Google Ventures, advocates that kids corner the market on empathy
  • Real-time data collection drives the student’s learning, gamifying the process of education so that they are prompted to remain on top of subjects where they are lagging behind or struggle

Detecting facial expressions

  • This technology is already used in selecting people for certain jobs: the startup Human, founded in 2016, analyses people’s expressions to select them for certain roles based on inferences about their personalities
  • Chinese schools are using AI to detect unfocused students, to see if they are napping, listening, or not keeping up
  • They get real-time scores for their attentiveness, which are shared with their teachers

The future of the classroom

What will classrooms of the future look like? As outlined in the Encyclopedia of Science Fiction, scifi writers and filmmakers of the 20th century conjured up then-outlandish ideas of “learning pills” or direct knowledge transmission through microchip implantation devices.

These writers imagine curricula of “quadratic religion…complex defamation…construction of viable planets.” They fear that classrooms of the future will create too much obedience and promote too much reliance on technology, cutting out interpersonal connections, but they also depict classrooms as factories of “future astronauts and soldiers.”

“Revolution in the Classroom (part 1)” by Paul Garland is licensed under CC BY-NC-ND 4.0

While the 21st century has not yet seen the invention of the education pill, technological innovations in AI and their current applications in education are bringing many prior utopian fantasies and dystopian nightmares to life, creating hybrid human-machine classrooms reliant on robo-teachers and assistants, where grading can be done without human intervention, cameras track how closely students pay attention and engage with lecture materials, and open social student modeling allows students to compare their “quantified selves.”

Researchers looking at the use of AI in education have been positive about its potential to assist and inspire students to achieve their education goals. But are the results unambiguously positive? What ethical dilemmas are involved in reconfiguring classrooms around AI? Does imagining the endpoints of some of the current developing uses – scifi style – help us respond to such questions?

Consider the following potential future classroom scenarios, which build on current uses of AI in the classroom:

  • Entirely online and individualized “classrooms” where a student logs into an educational platform from home and begins a sequence of courses in which performance in specific tasks at each level leads the student into one of a set of designated careers. The student is guided through his or her personal sequence by robot instructors and e-coaches until mastery/employment. Once the algorithms are set up, such an education could be delivered for free, without cost to the student or the state, and could easily be adapted into other languages to suit people anywhere in the world.
  • A total surveillance/”quantified self” classroom where students are required to wear trackers that count minutes studying specific topics, physical movements and emotional states in order to optimize their performance in class. AI would not only help teachers figure out how well students have prepared, what they engaged with most and what they need to work on, providing activity readouts for each student in each class that would guide teacher feedback, but this information would also be directly given to students for self-optimization. Cameras could be extensively employed so students and teachers could go back to any moment in class for further clarification or analysis.
  • Classrooms as VR-enabled “portals”: History “classrooms” that are virtual time-machines, taking students via VR technologies back into different moments of the past where they “experience” history through sense immersion and even speak with AI avatars of historical figures who can inform students about the period. The same could be done for geography, anthropology, politics classes, with “portals” taking students underwater or to the top of Everest, to different parts of the world, etc.
  • Classrooms as joint gatherings in many possible spaces where students are freed from learning basic math, different languages and geographical and historical “facts”, i.e. tasks and information that can be completed or provided by AI personal assistants, and can instead focus on building socio-emotional capacities, for instance aiding self-flourishing through the creative arts and communal flourishing through collaborative social problem-solving. When students need “information” for this process, they simply ask their AI personal assistants, which provide the relevant data; the work of “learning” comes to be about deciding what kinds of questions to ask, what kinds of projects to take on, and how to assemble and transform “facts” into imaginative architectures, especially through social skills.

And now you: What are the pros and cons of each scenario?

Credits and License

This project was conceived at the 2018 annual conference of the Berliner Gazette AMBIENT REVOLTS. The guests were Kerry Bystrom, Laura Burtan, Júlio do Carmo Gomes, Géraldine Delacroix, Alina Floroi, Andrada Fiscutean, Anja Henckel, Monisha Caroline Martins, Penelope Papailias, Catherine Sotirakou, Rachel Uwa, Erik Vaněk. The workshop was moderated by Claudia Núñez & Cristina Pombo.

All texts and images are licensed under CC BY 4.0. The images were taken by Norman Posselt at the AMBIENT REVOLTS conference.
