Kaspersky Security Bulletin

Privacy predictions 2022

We no longer rely on the Internet just for entertainment or chatting with friends. Global connectivity underpins the most basic functions of our society, such as logistics, government services and banking. Consumers connect to businesses via instant messengers and order food delivery instead of going to brick-and-mortar shops, scientific conferences take place on virtual conferencing platforms, and remote work is the new normal in an increasing number of industries.

All these processes have consequences for privacy. Businesses want better visibility into the online activity of their clients to improve their services, as well as more rigorous know-your-customer procedures to prevent fraud. Governments in many countries push for easier identification of Internet users to fight cybercrime, as well as “traditional” crime coordinated online. Citizens, for their part, are increasingly concerned with surveillance capitalism, a lack of anonymity and dependence on online services.

Reflecting on the previous installment of our privacy predictions, we see that most of them have indeed become big trends this year. Most notably, privacy-preserving technologies were among the most discussed tech topics, even if opinions on some of the implementations, e.g. NeuralHash or Federated Learning of Cohorts, were mixed. Nevertheless, things like on-device sound processing for Siri and Private Compute Core in Android are big steps towards user privacy. We have also seen many new private services, with many privacy-focused companies taking their first steps towards monetization, as well as a bigger push for privacy – both in technology and in marketing – on both iOS and Android. Facebook (now Meta) moved towards more privacy for its users as well, providing end-to-end encrypted backups in WhatsApp and removing the facial recognition system from Facebook entirely.
While we hope 2022 will be the last pandemic year, we do not think the privacy trends will reverse. What will be the consequences of these processes? Here, we present some of our ideas about what key forces will shape the privacy landscape in 2022.

  1. BigTech will give people more tools to control their privacy – to an extent.

    As companies have to comply with stricter and more diverse privacy regulations worldwide, they are giving users more tools for controlling their privacy as they use their services. With more knobs and buttons, experienced users might be able to configure their privacy to the extent that suits their needs. As for less computer-savvy folk, do not expect privacy by default: even when legally obliged to provide privacy by default, enterprises whose bottom line depends on data collection will continue to find loopholes to trick people into choosing less private settings.

  2. Governments are wary of the growing big tech power and data hoarding, which will lead to conflicts – and compromises.

    With governments building their own digital infrastructures to allow both simpler and wider access to government services and, hopefully, more transparency and accountability, as well as deeper insights into the population and more control over it, it is not surprising that they will show more interest in the data about their citizens that flows through big commercial ecosystems. This will lead to more regulation, such as privacy laws, data localization laws, and rules on what data is accessible to law enforcement, and when. The Apple CSAM scanning privacy conundrum shows exactly how difficult it can be to find a balance between encryption and user privacy on the one hand and pinpointing criminal behavior on the other.

  3. Machine learning sure is great, but we are going to hear more about machine unlearning.

    Modern machine learning often entails training huge neural networks with astounding numbers of parameters (while this is not entirely accurate, one can think of these parameters as neurons in the brain), sometimes on the order of billions. Thanks to this, neural networks not only learn simple relationships, but also memorize entire chunks of data, which can lead to leaks of private data and copyrighted materials, or to the reproduction of social biases. Moreover, this raises an interesting legal question: if a machine learning model was trained using my data, can I, for example under GDPR, demand the removal of all influence that my data had on the model? If the answer is yes, what does it mean for data-driven industries? A simple answer is that the company would have to retrain the model from scratch, which can sometimes be costly. This is why we expect more interesting developments, both in technologies that prevent memorization in the first place, such as differentially private training (see the sketch after this list), and in those that enable researchers to remove data from already trained systems (machine unlearning).

  4. People and regulators will demand more algorithmic transparency.

    Complicated algorithms, such as machine learning, are increasingly used to make decisions about us in various situations, from credit scoring to face recognition to advertising. While some might enjoy the personalization, for others it may lead to frustrating experiences and discrimination. Imagine an online store that divides its users into more and less valuable ones based on some obscure LTV (lifetime value) prediction algorithm, and provides its more valued customers with live customer support chats while leaving less lucky shoppers to a far-from-perfect chatbot. If a computer deems you an inferior customer, would you want to know why? Or if you are denied a credit card? A mortgage? A kidney transplant? As more industries are touched by algorithms, we expect more discussion and regulation about explaining, contesting and amending decisions made by automated systems, as well as more research into machine learning explainability techniques.

  5. Thanks to work from home, many people will become more privacy-aware – with the help of their employers.

    If you have been working from home due to the pandemic, odds are you have learned lots of new IT slang: virtual desktop infrastructure, one-time passwords, two-factor security keys and so on – even if you work in banking or online retail. Even when the pandemic is over, the work-from-home culture might persist. With people using the same devices for both work and personal needs, corporate security services will need more security-minded users to protect this bigger perimeter from attacks and leaks. This means more security and privacy training, and more people translating these work skills, such as using 2FA, into their personal lives.
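
For readers curious about the mechanics behind prediction 3, below is a minimal, illustrative sketch of differentially private training (DP-SGD) for a simple logistic regression model. It is not any vendor's implementation: the model, dataset and hyperparameters are made up for demonstration, and only NumPy is used. The key idea is that each record's gradient is clipped and noise is added before the model is updated, which bounds how much any single person's data can be memorized.

```python
# Minimal DP-SGD sketch: per-example gradient clipping + Gaussian noise.
# Everything here (model, data, hyperparameters) is illustrative.
import numpy as np

def dp_sgd(X, y, epochs=5, lr=0.1, clip_norm=1.0, noise_multiplier=1.0,
           batch_size=32, seed=0):
    """Train logistic-regression weights so that no single record
    can dominate (and thus be memorized by) the learned parameters."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for start in range(0, n, batch_size):
            xb, yb = X[start:start + batch_size], y[start:start + batch_size]
            # Per-example gradients of the logistic loss: (sigmoid(x.w) - y) * x
            preds = 1.0 / (1.0 + np.exp(-xb @ w))
            grads = (preds - yb)[:, None] * xb                 # shape: (batch, d)
            # Clip each example's gradient norm to bound its influence.
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads / np.maximum(1.0, norms / clip_norm)
            # Add Gaussian noise calibrated to the clipping bound.
            noise = rng.normal(0.0, noise_multiplier * clip_norm, size=d)
            w -= lr * (grads.sum(axis=0) + noise) / len(xb)
    return w

if __name__ == "__main__":
    # Toy dataset: two Gaussian blobs standing in for real user data.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    print("learned weights:", dp_sgd(X, y))
```

Machine unlearning goes a step further: instead of limiting memorization up front, it aims to remove a specific record's influence after training, ideally without retraining the whole model from scratch.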

To conclude, privacy is no longer a topic just for geeks and cypherpunks; it has become a mainstream topic in the public debate, touching on personal and human rights, safety and security, and business ethics. We hope that this debate, involving society, business and governments, will lead to more transparency, accountability, and fair and balanced use of personal data, and that legal, social and technological solutions to the most pressing privacy issues will be found.
