Week 4: Discourse and Surveillance
Week 4 marks the last week of “History and Theory” discussions and also the completion of one quarter of the course! While I won’t get into time as a concept, the experience of time right now feels quite different. This could be because of the changes in the academic calendar, environmental discontent, or a host of other things, but I digress.
With surveillance as the theme for the week, I used lecture time to speak to the discussion posts I had received from the class around the feeling of being Black in the digital age, which for many of the students is closely tied to feelings of surveillance, fetishization, and commodification: a sense that, when engaging within Black communities online, you never know whom to trust. Is this person actually a fellow community member? Is this a bot? Or is this an interloper Blackfishing? Many times when we discuss surveillance, we view it through the lens of the state, and for the past few weeks we have discussed things like predictive policing apps, facial recognition software, and most recently the use of AI in healthcare. However, within the setting of the university, I think it is also interesting to consider how algorithms and AI are being utilized in the classroom and in online education.
Throughout the semester I collect articles that are relevant to class discussion and share them on a discussion board called “Culture Talk”. Recently, I came across an article on the use of algorithms in the classroom to help teachers grade papers. Despite its clickbait title, the article served as the basis for my lectures on how surveillance is embedded not only within our society but also in the systems and technologies that we create. As an educator, I can imagine that allowing an algorithm to help grade papers is convenient for instructors who are overburdened with other aspects of their job and want a faster turnaround time on grading. At the same time, the fact that technology has been created to grade students in various ways sheds light on the belief that grading is a subjective process that requires an objective technological eye. Similar to the belief that judges would make better decisions in the courtroom if they listened to the recommendations of a computer, there seems to be a sense that educational AI is not only stepping in to give teachers an extra set of eyes but also positioning technology as the arbiter of truth and rationality in an area as presumably murky and biased as grading.
I wondered if my students would have any concerns about an algorithm quantifying their engagement and participation outside of the classroom, as many learning management systems are programmed with the capability to do so. Did they believe that their participation could be quantified in such a way, or was there more to their engagement than how many documents they opened or how much time they spent uploading their assignments and responding to discussion posts? In the first week’s lecture, we discussed the fact that technology is not neutral or objective. Because of this, it is important for us, as individuals, to be transparent about the role that our standpoints might play in how we process and engage with the society and technology around us. As an instructor, my transparency in grading comes through creating rubrics and clear instructions for assignments. Given all that we don’t know about any particular algorithm, would that same thought and transparency be given to educational AI? What are the expectations of transparency if and when instructors use these technological capabilities in their own classrooms?
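For the technically curious, here is a minimal sketch of what such quantification might look like. It is purely hypothetical: the field names and weights are invented for illustration and do not reflect any real learning management system’s scoring logic.

```python
from dataclasses import dataclass

# Hypothetical proxies an LMS might log for each student.
# Field names and weights are invented for illustration only.
@dataclass
class ActivityLog:
    documents_opened: int      # course files the student clicked
    minutes_logged_in: float   # total time on the platform
    discussion_posts: int      # replies to discussion boards
    assignments_uploaded: int  # submissions made

def engagement_score(log: ActivityLog) -> float:
    """Collapse a student's logged activity into a single number.

    Anything the platform does not log -- reading a printed article,
    talking through ideas offline, thinking -- contributes zero.
    """
    return (
        1.0 * log.documents_opened
        + 0.1 * log.minutes_logged_in
        + 2.0 * log.discussion_posts
        + 3.0 * log.assignments_uploaded
    )

# Two very different students can produce identical scores:
clicker = ActivityLog(documents_opened=30, minutes_logged_in=120,
                      discussion_posts=0, assignments_uploaded=2)
writer = ActivityLog(documents_opened=6, minutes_logged_in=40,
                     discussion_posts=10, assignments_uploaded=6)
print(engagement_score(clicker), engagement_score(writer))  # 48.0 48.0
```

The point of the sketch is precisely its reductiveness: whatever falls outside the logged proxies is invisible to the metric, no matter how much it matters to actual engagement.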
Overall, there was not much concern about this quandary, as most of the students’ posts did not respond to the role of AI in education. The one post that did respond to the topic assumed that AI would be a helpful upgrade for teachers. This assertion reminded me of many readings: Marshall McLuhan’s assertion that technology has the power to extend the body and the senses, Donna Haraway’s discussion of “situated knowledges” and the “god trick” of believing in the objectivity of technology, as well as Tarleton Gillespie’s writing on the relevance of algorithms. Despite our discussions of the biases embedded within society and technology, there was still a somewhat techno-utopian belief being communicated: that anything created with the intention to help could only further improve our society.
Taking this view into consideration, two of the discussion questions for this week were: “What is the digital panopticon for Black people or within Black communities?” and “How does surveillance map differently onto Black bodies?” The responses were quite varied: the fetishization of the bodies of Black women and girls within media and film (i.e., surveillance as another form of the male gaze), the ways that police utilize social media, a brief analysis of the relationship between HeLa cells, Tuskegee, and concerns around the development of COVID-19 vaccines, corporate and self-censorship online, as well as the ways that all of the technology we engage with utilizes some form of surveillance. Even more so, many students spoke to their own feelings around racialized surveillance and the sense of having to perform a certain version of the self both online and offline. Fears around saying or doing the wrong thing, in the wrong place, at the wrong time were prevalent.
I was fascinated that these concerns did not translate into the discussion of educational AI. Wouldn’t the use of AI in the classroom also trigger these feelings of being surveilled in one’s performance, or instill a need to perform for the algorithm in particular ways to attain the maximum number of points? Then again, this lack of concern made me realize that perhaps students have always felt that grading comes with a sense of surveillance and policing, and that by the very nature of being graded they were always performing a self that required a certain type of behavior to create a specific outcome, i.e., good grades. I now sit here wondering whether grading is also an extension of policing, disciplining subjects into specific positions. As we move into next week’s discussion of algorithms and the role of recognition in how we relate to technology, it will be interesting to continue thinking through these questions around the construction of subject positions and the performance of the self both within and outside of the classroom.
Readings
Browne, Simone. “Race and Surveillance.” In Routledge Handbook of Surveillance Studies, 72–79. New York: Routledge, 2012.