In our previous privacy predictions piece, we outlined trends for 2023. As expected, there was a notable increase in the adoption of digital IDs to replace paper documents. For example, California expanded a pilot program for digital driver’s licenses, and Russia introduced laws enabling biometrics-based purchases of alcohol and tobacco. This trend is set to continue, with the European Commission finalizing the EU Digital Identity Wallet agreement. Australia has also unveiled a national strategy for digital identity resilience, aiming for mainstream use in 2024.
We expected organizations to try to reduce the impact of the human factor on data security, so as to bring down the number of insider threats and social engineering attacks. The issue intensified with the widespread use of chatbots for work, leading employees to inadvertently share sensitive data. Notably, major companies like Amazon, Apple, and Spotify are taking measures to prevent data leaks by limiting engagement with such tools.
Whereas we expected the metaverse to be the focus of the privacy debate, AI stole the spotlight. Despite this, the European Commission has introduced a new strategy on Virtual Worlds, recognizing their transformative potential for EU citizens. Although no immediate regulations are on the table, metaverse-related issues are emerging, such as the British police investigating a virtual rape. Interestingly, metaverses are gaining traction in social and political spheres, as illustrated by a Colombian court conducting its first trial in the metaverse.
We have not seen any spikes in demand for privacy insurance by individuals in 2023. However, insurers often include data breach risks in personal cyberinsurance policies. According to Statista, this market is expected to grow significantly by 2025. Given that privacy concerns are rising, we suggest that although our prediction was not fulfilled in 2023, this is a long-term trend that we will observe for years to come.
The same can be said about our prediction on the diversification of the web tracker market. In 2023, we did not see any significant changes in tracker distribution. However, the internet continues to split, with certain resources being banned in certain countries, so the tracker landscape will most likely change in the near future.
As we can see, some of our predictions are likely to come true in the long term. The year 2023 marked the emergence of several important trends, which will influence the privacy field in 2024. Below, we look at some of the important developments that, in our opinion, will affect online privacy in the upcoming year.
Expanding the concept of private data
While the conventional understanding of private data in cyberattacks primarily includes personal and identifying information, photo, video, and voice data have not necessarily been part of this concept. This is no longer adequate in 2024. The increasing exploitation of biometric data by scammers who craft voice fakes and deepfakes underscores the urgency of enhanced measures to safeguard such information. In recognition of this evolving landscape, the EU is working on a legislative framework that will specifically address facial processing technologies to strengthen data protection.
AI-enabled wearables might start a new debate on privacy
Most people have accepted ever-present tracking devices in their pockets (smartphones) and in their homes, such as speakers with smart assistants. However, wearables, such as smart glasses, especially those equipped with cameras, tend to raise more suspicion. For example, there was a heated debate on the privacy implications of camera-fitted smart glasses, resulting, among other things, in their being banned in some pubs. With the rapid pace of AI development, some companies, for example, Rabbit and Humane, have been working hard to bring these capabilities to wearable devices. One implication of a device such as an "AI pin" is a camera constantly pointed at the faces of your friends and a microphone that may be listening for your commands. While this is not a huge step beyond a smartphone, the overt character of these devices may seriously concern the people around you, especially those who care about their privacy, assuming, of course, that these devices gain traction.
AR and VR development to call for new privacy standards in 2024
When Apple launches a new product, it usually attracts public attention to both that product and similar ones. Public attention leads to a debate on privacy, especially if the technology is new enough not to be well regulated. With the launch of Apple Vision Pro and the increasing integration of AR/VR into daily life, privacy concerns surrounding these technologies are likely to come to the forefront. Governments and regulatory bodies may respond by tightening privacy regulations specific to AR/VR devices. Given the immersive nature of these technologies, there could be a focus on protecting user data, ensuring secure interactions, and addressing potential risks, such as unauthorized data access or misuse. Transparency and accountability within the AR/VR ecosystem may come into focus as well.
Leaked passwords will give fewer reasons to worry—if there is anything to leak
It seems that all the passwords in the world have already been leaked. According to haveibeenpwned, more than half a billion unique passwords have been compromised in known leaks. Leaked passwords come in different forms: from plaintext on the most poorly run websites to strong salted cryptographic hashes. In the worst case, passwords from a leak can be restored to their original form (for example, if they were improperly hashed) and used to access other accounts belonging to the same user, in an attack called credential stuffing. These attacks can lead to dire consequences: for example, a recent genetic data breach was the result of credential stuffing.

However, we think that the importance of data leaks containing passwords will continue to decline. The first reason is the rising prevalence of two-factor authentication, where an additional code, sent via SMS or generated in a special authenticator application, such as Kaspersky Password Manager, is used to confirm your login. The second reason is that the use of passwords for authentication will itself continue to decline. Some services, most prominently Google, already feature passwordless authentication via passkeys, while others are ditching passwords in favor of biometric authentication. Combined with the continued use of single sign-on options such as Sign in with Apple, we believe these factors will lead to a decline in both the magnitude and the significance of password leaks.
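The difference between the worst-case and best-case leak scenarios above comes down to how a site stores passwords. A minimal sketch of proper salted hashing, using Python's standard-library scrypt function, is shown below; `hash_password` and `verify_password` are illustrative names, not the API of any particular service. A site that stores only the salt and the slow, salted digest leaks far less useful data than one storing plaintext, because each password must be attacked individually and at great computational cost.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted, memory-hard hash of the password.

    A fresh random salt per user defeats precomputed (rainbow-table)
    attacks; scrypt's cost parameters slow down brute-force guessing.
    """
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("hunter2", salt, digest))  # False
```

Note that even a well-hashed leak does not protect users who reuse passwords across sites, which is exactly what credential stuffing exploits; unique passwords and two-factor authentication remain the practical defense.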
Advancing privacy through the rise of assistant bots
As assistant bots utilizing natural language processing (NLP) continue to spread across diverse sectors, there arises a compelling opportunity to harness these technologies to bolster user privacy. Imagine a future where bot assistants play a pivotal role in safeguarding personal data, particularly in call interactions. For instance, a sophisticated bot assistant could seamlessly handle the user's calls to ensure that sensitive information, such as the user's voice, remains protected. This proactive measure acts as a deterrent against fraudsters attempting to record voices for malicious purposes such as deepfake manipulation, an unsettling trend already gaining traction. Bots serving as intermediaries between a caller and the user already exist. With large language models now available, there may be major developments in this area, and more advanced technology may start appearing on the market.
As we anticipate advancements in this field, we expect to witness the integration of these advanced bots into communication systems in the near future. This evolution not only enhances user privacy but also reflects a proactive approach to adapting technology to address emerging threats in the digital landscape. However, it is worth noting that the adoption of new, sophisticated assistant bots will most likely be accompanied by the discovery of new vulnerabilities that enable spammers and scammers to trick these bots into giving out sensitive information or approving an unnecessary purchase.