Friday, 12 February 2021

Film 1952 - The Social Dilemma

Intro: Let's jump forward 152 films and come back to the present. Or at least to last year.
I took advantage of the round number of the previous review to set aside the old list of films for a while - still substantial, but by now considerably thinned out - and make some progress on the more recent titles which, since I have been here in Dublin, have certainly been influenced by a series of peculiar circumstances, led by the university experience and the pandemic. This has inevitably meant more viewings of films available on Netflix or sitting on my external drive, which I had long meant to catch up on but had never managed to.

Precisely to highlight this combination of particular circumstances, I thought I would start with a title that fully represents this Irish moment of mine: a Netflix film I watched for the Understanding Social Media course and then used as a starting point to argue, in my end-of-semester essay, how our experience of the internet and social media is shaped by algorithms. An extract follows.
  
Film 1952: "The Social Dilemma" (2020) di Jeff Orlowski
Visto: dal computer portatile
Lingua: inglese
Compagnia: nessuno
In sintesi
Building Reality
Recently released on Netflix’s streaming platform, “The Social Dilemma” (Orlowski, 2020) is a documentary that aims to alert the audience to the numerous threats posed by social media today.
According to the movie, human society is on the verge of a cliff: algorithms are shaping reality and our perception of it, changing our habits one small step at a time, while most of us are not even aware that behind Google, Facebook or Twitter there are machine-learning systems able to influence the way we experience the internet, and therefore our everyday life (Nguyen, 2020).
But is society really doomed, as Netflix’s documentary so strongly suggests?


To understand what algorithms do, it is first necessary to establish what they are.
Different interpretations of algorithmic technology have been offered by scholars, all of whom agree that an algorithm is a process (or a series of steps) performed by a computer that achieves a desired outcome through the analysis of data. Consequently, automation can be defined as “a device or system that accomplishes (partially or fully) a function that was previously, or conceivably could be, carried out (partially or fully) by a human operator (Parasuraman, Sheridan and Wickens, 2000, p. 287)”.
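To make that definition concrete, here is a minimal sketch (my own illustration, not taken from any of the cited works): a hypothetical routine whose fixed series of steps turns raw interaction data into a desired outcome, in this case a ranking.

```python
# A minimal, hypothetical "algorithm" in the sense defined above: a fixed
# series of steps a computer performs on data to reach a desired outcome
# (here: ranking topics by how many interactions they received).

def rank_by_interactions(interactions: dict[str, int], top_k: int = 3) -> list[str]:
    """Step 1: read the data; step 2: sort it; step 3: return the top items."""
    ranked = sorted(interactions, key=interactions.get, reverse=True)
    return ranked[:top_k]

print(rank_by_interactions({"cats": 42, "news": 7, "music": 19}))
# -> ['cats', 'music', 'news']
```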
Algorithms present several key characteristics:
- They are actants, as they “are not alive, but [they] act with agency in the world (Tufekci, 2015, p. 207)”;
- They are “neither neutral nor objective (Diakopoulos, 2019, p. 18)” and extremely literal in the way they act (Luca, Kleinberg and Mullainathan, 2016);
- They are built to learn how to make new decisions while evaluating data (Diakopoulos, 2019);
- They can predict people’s future behaviours with great precision by creating behavioural patterns (based on data harvested by social media platforms);
- They are black boxes (Gillespie, 2016, p. 53): their code “is changed routinely (almost every week) (Tufekci, 2015, p. 206)” and it is unclear how it functions;
- They filter or select “what information is considered most relevant to us (Gillespie, 2014, p. 167)” based on the data collected. By doing so, they act as gatekeepers of the online information flow.
All these characteristics have direct consequences on how the internet and social media are presented to and perceived by users, and each one of them results in a slightly different experience of the web and its tools.

The internet is not the same for everyone.
Different people get different results when searching on Google, even if the query is the same. When you post on Facebook, most of your friends will be able to see what you shared, but that does not mean it will be displayed to all of them. Your Instagram feed will prioritise pictures and videos from some of the accounts you follow, but not from all of them. The reason behind this selection of content is, of course, algorithms, which have “apparent power, agential capacity and control (Neyland, 2015, p. 119)” over our internet experience and, to a certain extent, our lives.


As mentioned before, algorithms make decisions based on the analysed data, which is provided by the platforms that employ them. They are built to consider which YouTube channels get most of our attention, which of our friends we tend to reward with more likes, whom we chat with most on Messenger, which hashtags we use, which Facebook pages we follow, how much time we spend on someone’s Instagram account, what we search for on Google, and so on.

Algorithms are “meaningless machines until paired with databases on which to function (Gillespie, 2014, p. 169)”; therefore, as Matzner explains, they are “very much data-driven (2019, p. 125)”. Social media platforms collect all the available information about what people do online in the form of data and metadata - a practice that has raised concerns about users’ privacy - so that algorithms can read it and translate it into something useful for the platform itself: a behavioural pattern.
By profiling users, the algorithm is able to differentiate each user’s online experience, shaping it around the information stored about them. In this sense, “the computer not only calculates or represents ‘reality’ but generates it (Totaro and Ninno, 2015, p. 147)”.
Content, news, search results and the overall social media experience will be different for everyone, as some content will be prioritized and some overlooked in order to increase users’ engagement and keep them on the platform longer.
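As a deliberately simplified sketch of the mechanism just described - invented for illustration, and nothing like the scale or sophistication of a real platform - a personalised feed amounts to scoring each post against the affinity profile stored for a user and sorting:

```python
# Toy sketch of profile-based feed ranking (illustrative only).
# A user's "behavioural pattern" is reduced here to per-topic affinity
# scores derived from past engagement; the same pool of posts is
# reordered differently for every profile.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str

def personalised_feed(posts: list[Post], affinity: dict[str, float]) -> list[Post]:
    # Topics the user engaged with before float to the top; topics absent
    # from the profile score 0.0 and are effectively overlooked.
    return sorted(posts, key=lambda p: affinity.get(p.topic, 0.0), reverse=True)

posts = [Post("alice", "politics"), Post("bob", "cooking"), Post("carol", "sports")]
print(personalised_feed(posts, {"cooking": 0.9, "sports": 0.4}))  # bob, carol, alice
print(personalised_feed(posts, {"politics": 0.8}))                # alice first
```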
This prioritization serves the companies’ main goal: exposing users to ads. The longer users stay on a platform or website, the more ads they encounter and the more likely they are to click on them. These ads are designed to “target particular users who are likely to buy specific products (Tufekci, 2017, p. 136)” and they are highly effective because they are tailored around the collected data and metadata. The success of tailored ads is key to explaining how platforms support themselves: companies like Google and Facebook base their business models on algorithms and their ability to target users with the perfect - and therefore most effective - advertisement.
This cycle has no end: the more people engage with and participate on the platforms, the more successful, popular and powerful those platforms become, and the more inclined advertisers will be to display their ads there and pay good money for it.
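The arithmetic behind that incentive can be made explicit with a toy calculation (every number is invented for illustration): expected revenue scales linearly with the time a user spends on the platform.

```python
# Back-of-the-envelope model of the incentive described above
# (all numbers invented): more time on the platform means more ad
# impressions, hence more expected clicks and more expected revenue.

def expected_ad_revenue(minutes_on_platform: float,
                        ads_per_minute: float = 1.5,
                        click_through_rate: float = 0.01,
                        revenue_per_click: float = 0.30) -> float:
    impressions = minutes_on_platform * ads_per_minute
    expected_clicks = impressions * click_through_rate
    return expected_clicks * revenue_per_click

print(expected_ad_revenue(20))  # ~0.09 per session
print(expected_ad_revenue(40))  # ~0.18 - double the time, double the revenue
```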


Additionally, by analysing and processing the huge amount of data harvested by social media, algorithms are able to predict the future. Or, to be more precise, they can guess with great accuracy how people will behave or react in response to what they see, read and/or hear on the internet, to the degree that even emotions can be part of the prediction.
These predictions are then sold to business customers interested in human futures - what Shoshana Zuboff calls a new type of marketplace, driven by a new economic logic, which leads to informational and surveillance capitalism (The Age of Surveillance Capitalism, 2019).
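As a crude stand-in for the far more sophisticated models this refers to - purely illustrative - a behavioural prediction can be as simple as a frequency estimate over a user’s engagement history:

```python
# Toy behavioural "prediction" (not a real model): estimate the
# probability that a user engages with a topic as the share of their
# past engagements devoted to that topic.

from collections import Counter

def engagement_probability(history: list[str], topic: str) -> float:
    if not history:
        return 0.0
    return Counter(history)[topic] / len(history)

history = ["sports", "sports", "politics", "sports", "music"]
print(engagement_probability(history, "sports"))  # 0.6
```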
Yet errors and mistakes may occur. This happens because algorithms are very literal in the way they follow their step-by-step process, to the extent that an algorithm does “exactly what it’s told and ignores every other consideration (Luca, Kleinberg and Mullainathan, 2016)”, unable to detect any implied subtext.
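A classic toy example of this literal-mindedness (my sketch, not from the essay) is a naive banned-word filter: it does exactly what it is told, so it cannot distinguish an innocent substring from a genuine violation.

```python
# Illustration of algorithmic literal-mindedness: a naive banned-word
# filter follows its instruction to the letter and flags an innocent
# substring match as a violation (a well-known failure mode).

BANNED_WORDS = {"sex"}

def is_blocked(text: str) -> bool:
    lowered = text.lower()
    return any(word in lowered for word in BANNED_WORDS)

print(is_blocked("Middlesex County fair this weekend!"))  # True - false positive
print(is_blocked("A perfectly ordinary sentence."))       # False
```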
The strictness with which algorithms follow their instructions is used by platforms and companies to promote the idea that what people are using is an essentially objective and neutral technology. As Gillespie straightforwardly explains:

“[...] this is a way to deflect responsibility: “Google’s spiritual deferral to ‘algorithmic neutrality’ betrays the company’s growing unease with being the world’s most important information gatekeeper. Its founders prefer to treat technology as an autonomous and fully objective force rather than spending sleepless nights worrying about inherent biases in how their systems [...] operate.” (2014)

Promoting algorithms as a super partes entity, detached from possible biases or inequities, shields companies from the consequences of their technology’s actions or outcomes and, at the same time, extends the idea of impartiality to the companies themselves. Once companies are perceived as fair and neutral, it is more difficult to hold them accountable for their errors and mistakes.
We are led to believe that our algorithm-mediated experience of the internet is bias-free, but in practice platforms and websites are subject to a form of content restriction that goes beyond the technology itself. Facebook, Twitter, YouTube - all of them will remove content that involves profanity, child abuse, “threats of violence, copyright or trademark violations, impersonation of others, revelations of others’ private information, or spam (Gillespie, 2011)”. Since algorithms do what they are told to do, they follow the set of rules decided by the companies: “algorithms are created by people and reflect [...] biases of their designers (Berlatsky, 2018)”. This disrupts the idea of neutrality from the very beginning. Naturally, through its policies and guidelines each company attempts to prevent, or at least minimise, biases and wrongdoing by users, in order to provide as safe and hospitable an online environment as possible:

“Our Community Guidelines are designed to ensure that our community stays protected. They set out what's allowed and not allowed on YouTube, and apply to all types of content on our platform, including videos, comments, links and thumbnails.” (YouTube Community Guidelines and policies, no date) 

And yet, as Tufekci concludes, Community Guidelines have “significantly different impacts depending on the community involved (2017, p. 143)”.

When the algorithms are wrong.
Algorithms are not infallible, and thus neither are platforms. Moreover, algorithms’ outcomes may differ from what they were originally intended to be.
Common missteps include the removal of content erroneously deemed inappropriate, issues with a platform’s trending topic list (Porter, 2020), and websites that become “choked with low-quality ‘click-bait’ articles (Luca, Kleinberg and Mullainathan, 2016)”. The reasons behind these missteps vary. Some may be connected to the Community Guidelines:

“Community policing means that the company acts only if and when something is reported to it and mostly ignores violations that have not been flagged by members of the community.” (Tufekci, 2017, p. 143)

This means that a platform like Facebook - which counts over 2 billion users - mostly relies on users flagging posts rather than proactively checking everything posted on the platform. While feedback from subscribers is a powerful tool, it can itself be put to wrongful use: what happens when a person or an organization becomes the target of unfair attacks by other users? Is the algorithm able to detect it, or will it simply allow the misconduct to happen? Tufekci notes that groups such as social movements or the LGBTQ community are frequent targets of this kind of misconduct (2017).
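A minimal sketch of this report-driven model (threshold and data structures invented for illustration) makes the blind spot visible: content that nobody flags is never reviewed at all.

```python
# Sketch of "community policing" moderation as described above: nothing
# is checked proactively; a post reaches human review only after enough
# users have flagged it. The threshold is invented for illustration.

from collections import defaultdict

REVIEW_THRESHOLD = 5
flags = defaultdict(int)   # post_id -> number of user reports
review_queue = []          # posts awaiting human review

def flag_post(post_id: str) -> None:
    flags[post_id] += 1
    if flags[post_id] == REVIEW_THRESHOLD:
        review_queue.append(post_id)  # a human looks at it only now

for _ in range(5):
    flag_post("post-123")
print(review_queue)  # ['post-123'] - unflagged violations stay invisible
```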

Other missteps may be connected to the trending section of the platforms. In September 2020 Twitter released a statement to explain the company’s decision to change how its algorithm picks the trending list. This happened after the question “Why is this trending?” had been tweeted more than half a million times over the previous twelve months (Cortés, 2020), expressing users’ confusion about the various and sometimes random topics trending on the platform. The problem was so persistent that it led many to believe that the representation of “reality” seen through the lens of Twitter’s trending topics could distort, and thus compromise, the national debate ahead of the November 2020 U.S. presidential elections (Ingram, 2020).
Twitter’s Trending Topics are now “decided by a combination of algorithms and human curation, and [...] trending descriptions [are] entirely human-curated (Porter, 2020)”. In addition, algorithms are deployed to prevent spam or abusive tweets from appearing in the trending section.


Similarly, Facebook faced a wave of backlash over its trending topic section, but with a different outcome. How the platform picked its trending list was questioned for years, with many criticising its tendency to boost fake news.
Examples ranged from a conspiracy theory revolving around the September 11 terrorist attacks (Ohlheiser, 2016), to the false report that Fox News anchor Megyn Kelly had been fired (Alba, 2017), to the story of an alleged Muslim terrorist attack that gained momentum in Slovakia (Frenkel, Casey and Mozur, 2018) while Facebook was “testing a feature that separates users’ posts from content from professional news sites (Ong, 2018)”.
The rise of fake news on the platform’s trending list has been linked to Facebook’s decision to favour automation over human employees in filtering its news-gathering operations:

“While employees are still involved in the process of vetting and pinning popular topics to Facebook’s sidebar, the process became far more hands-off in late August, both to increase its scale and to answer accusations of bias from human editors.” (Robertson, 2016)

After years of criticism and struggle over “the reliability of any news being distributed through its platform (Kastrenakes, 2018)”, in mid-2018 the company decided to remove the trending list for good.
Many other platforms and companies, such as Instagram (Smith, 2020), Google (Cadwalladr, 2016), Amazon (Johnson and Pidd, 2009) and Netflix (Breznican, 2020), have faced algorithm-related criticism; the music streaming service Spotify has been accused of making “people into more conservative listeners, a process aided by its algorithms, which steer you towards music similar to your most frequent listening (Hann, 2019)”.

[...]

Are we really doomed? Not just bad algorithms.
Although our digital experience is greatly influenced by algorithmic logic, this technology is not used solely to trick or spy on users: it is often deployed for practical and useful purposes.
As previously mentioned, social media platforms can be instrumental in enhancing users’ participation in the public sphere, something so relevant that internet access is now starting to be considered not just a luxury good, but a basic human right (Bode, 2019).
On the practical side, algorithmic technology has been employed in different fields and for new functions: examples include a system that exploits “social media to automatically produce local news (Schwartz, Naaman and Teodoro, 2015, p. 407)”, the use of automated writing technology to produce earnings reports (Diakopoulos, 2019), and the employment of “pattern recognition algorithms [that] are meant to detect suspicious or abnormal events (Matzner, 2019, p. 134)” in smart CCTV systems. Though useful for each of these activities, algorithms still require human supervision to perform their tasks properly.
So, to answer the initial question: is society doomed, as “The Social Dilemma” urges us to believe?


Although it is undeniable that social media and its algorithms influence the way we perceive reality and how public discourse is built, I believe the movie’s overdramatization of events may be misleading. Interviewees often refer to social media’s strong effects on users, who are depicted as powerless and defenceless against the invincible persuasive force of the technology, in a kind of communication “too powerful to give room for the recipients to process the information received otherwise (Wogu et al., 2020, p. 323)”. Television was once described in the same way.
What is important to remember is that the human use of technology is not a passive process: as the Uses and Gratifications theory asserts, people’s use of a medium is connected to the gratification of specific needs, which makes them active agents with respect to the media they choose to consume.
This alone may not be enough to face the numerous challenges connected to the increasingly powerful influence held by digital technologies, yet it looks like a good place to start.

Cast: Tristan Harris, Aza Raskin, Justin Rosenstein, Shoshana Zuboff, Jaron Lanier, Skyler Gisondo, Kara Hayward, Vincent Kartheiser, Anna Lembke.
Box Office: /
Worth it or not: The documentary itself is interesting, but I did not always find effective the choice to devote part of the running time to a fictional storyline that frankly was not needed, and that emotionally overloads a story that would already be apocalyptic enough in the interviewees' words alone.
In any case, a film to reflect on - and one that makes you reflect.

Awards: /
Keyword: Algorithms.

Trailer
#HollywoodCiak
Bengi

#HollywoodCiak 1952 #TheSocialDilemma #JeffOrlowski #TristanHarris #AzaRaskin #JustinRosenstein #ShoshanaZuboff #JaronLanier #SkylerGisondo #KaraHayward #VincentKartheiser #AnnaLembke #documentary #Netflix #SurveillanceCapitalism #algorithms #Google #Facebook #Twitter #Instagram #YouTube #Pinterest #socialmedia #capitalism #advertising #Firefox #Uber #society #fakenews #news #trending #trends #audience #data #metadata #TheAgeofSurveillanceCapitalism #SundanceFilmFestival #SIFF #followme
