Recently I have gotten caught in some debates related to technology and new media that are giving me whiplash. In my archives circles everyone is panicked: How are we ever going to get control over the massive amounts of information being created in digital formats? How will we preserve it as technology rapidly changes? How can we make sure that everything happening in our time will be recorded and REMEMBERED? We have a sacred duty to researchers! To future publics! To history!
In my tech policy circles everyone is panicked: How can we lead private lives when corporations and governments can scrutinize every online action and detect patterns that lay those lives bare with a few elegantly simple lines of code? How do we prevent the tracking of our everyday lives in an age of smartphones and geolocation? How can we make sure that information about us gets FORGOTTEN? We must remain vigilant against surveillance! Companies do not have the right to access and sell our data! Just 25 years ago secret police like the Stasi used private information to manipulate and control people! Such sentiments are even more pronounced in Europe.
Are any of us even living in the same reality? Why does it seem like the “good guys” are helplessly wading around in masses of data, losing our precious cultural heritage in the black hole of cyberspace, while the “bad guys” are reducing our complex humanity to bits of data that can be analyzed, manipulated, and sold for a profit?
It all comes down to a question of control and context. As we move more of our lives online, from knowledge-seeking, to communication, to banking, to creating, to paying taxes, to healthcare, we want to know who has control over our online identity. Over our humanity. Is it us? Is it a third party? If it is a third party, what are they doing with our data? Are they trustworthy? What are their agendas and criteria for preserving and providing access to data?
The truth is, we use tech platforms like Google and Facebook because we like that a third party is curating our content. We make a choice to use these platforms because somehow they are useful, be it as a place to connect with people, spread a message, or kill time. And frequently the content that we access (online or in analog format) has been filtered by algorithms or archivists or “up votes” or editors. These filters are largely helpful and beneficial. In an age of information abundance, content filters are necessary to wade through the masses of information out there and increase the chances that we find the content we want to find. People seem to be more disconcerted by the idea that this content filter is a faceless Facebook algorithm rather than a kindly reference librarian.
The move from human filters to automated filters is generally a good thing. It increases accuracy and neutrality by removing human inconsistency and bias from the equation, and it works more efficiently at scale. There is, however, a trust issue. Information professionals, such as archivists and librarians, are trained to preserve and provide access to information that is credible, accurate, and uncorrupted. The profession is infused with a legal and moral mandate to prioritize information that represents “objective proof” to the greatest extent possible. It is also largely insulated from corporate influence. It is naive, however, to assume that any information filter is entirely devoid of bias and ulterior motives, even when the bias is accidental or the motives well-intentioned.
The issue of trust and fear could be largely mitigated by increased transparency on the part of information filters. We can make more intelligent decisions about which information services we opt to use if we know what criteria are being used to preserve and prioritize content. Some of that information is currently available — it is just a question of us doing our due diligence as consumers and educating ourselves. But there is still more that companies could do to increase transparency. The uncomfortable truth is that once information is released on public platforms we have very little control over whether it is remembered or forgotten. The best we can do is produce and amplify good information and access information through portals that we like and trust.
I gave a presentation last weekend on this issue at the first-ever Being Human Festival in London. I was one of ten early-career researchers giving a five-minute “Ignite!”-style presentation, and it was great fun. My slides are below, and I will post the video recording here when I have access to it.
* I realize that I throw the “royal we” around liberally in this post. If you do not agree with any of these sentiments, please assume that the “we” only covers myself and all the microbes living in my gut and keeping me alive. (My microbes are very amenable and generally agree with me.) If you agree with these sentiments, you are by all means welcome and included in all of my “we’s.”