Publications

IT security in 2030 – only humans will be the same

Twenty-three years ago, one of the biggest hacker attacks in US history was widely reported in the media. Clifford Stoll chronicled it in his book The Cuckoo’s Egg, which has gone on to become a classic of IT literature. In the same year, Tim Berners-Lee invented the World Wide Web, Intel launched its brand-new 486 processor running at 25 MHz, and the Berlin Wall fell. Back then, though, computing was the preserve of small communities of much-ridiculed geeks, so 1989 remained in the collective memory as the year the Wall came down. Everything else was quickly forgotten.

More than two decades later, computers have not only become socially acceptable; thanks to Apple, they are even status symbols. These days, anyone going to a library for information, transferring money at a bank counter, or exchanging traditional “snail mail” business letters normally does so as a private individual: in our professional lives it is almost impossible to avoid search engines, online banking and email.

What long-term impact will this development have on our lives? It is clear that any attempt to predict the IT security landscape in 2030 will have to be a forecast of security and society in general. IT is now almost everywhere, and its scope will only increase in the future.

Taking great steps towards the future

Future generations will pinpoint 2012 as the year in which the hegemony of the traditional personal computer ended – ironically, at the hands of the very company once inseparably linked with it: Microsoft. Like Apple and Google, the Windows giant is now opting for a multiple-device approach: cell phones, tablets and, increasingly, television sets with a built-in operating system are all steadily challenging the PC’s traditional dominance. Online cloud storage instead of a hard drive in the desktop computer, applications from the App Store instead of downloads from dubious websites – times are changing, just as surely as analogue cameras have been superseded by digital devices, which ever more sophisticated camera phones are in turn replacing. There is no doubt that analogue film, digicams and PCs will still exist in ten years’ time, but their heyday is past. Against this background, it would hardly be surprising if smartphones lost their current cachet as the must-have techie toy within the next five years, because augmented reality glasses are the up-and-coming thing.

These are special optical aids with a built-in camera, display and computer. The special feature of augmented reality is that, for the wearer, reality and computer-generated images merge. Thanks to a continuously active camera and face recognition, a poor memory for names will no longer be a problem: the name, age and profession of the person we are talking to will automatically be displayed next to his or her face. On holiday, foreign-language menus will simply be replaced by a translated version. And if you get lost, a pedestrian navigation system will virtually project your path onto the pavement in front of you. A multimedia diary, based on the continuous data flow from the camera, the microphone and GPS, will be created automatically. No need to be impatient: in 2013, Google plans to deliver its Google Glass to developers, and other manufacturers will follow. This fusion of the real and the virtual world will change so many things in the coming years that we simply cannot overestimate the importance of this development.

But when it comes to technology, there is always a dark side: if millions of people walk around with cameras permanently activated, nothing outside their own houses will remain private. Society will monitor itself and it remains to be seen if and how a balance between technology and privacy can be maintained. Automatic deactivation of cameras at particular places is an option for which Apple has already filed a patent.

Another problem: even the sharpest minds can only use the information at their disposal. Seeing is believing, as the saying goes. Just imagine an augmented reality feature tricking you into thinking that the restaurant you were about to visit was infested with cockroaches – you would go to the establishment across the street instead, wouldn’t you? We have already seen how online hotel ratings can be manipulated by paid “customers” – and we still get tricked. So the advertising industry, in particular, will be interested in “optimizing” our reality to suit its agenda. It would be truly dangerous if the system as a whole were hacked without the person concerned realizing that he or she is living in a dream world which has nothing to do with reality.

Of course, the future involves much more than just augmented reality. But even putting aside fascinating topics such as nanotechnology, genetic research or food-producing 3D printers, you certainly won’t be bored: the much-heralded artificial intelligence is closer than ever to becoming reality. In 1997, IBM’s Deep Blue defeated Garry Kasparov 3.5–2.5 (the chess world champion had managed to win the previous match, played the year before) – but its success was based on enormous processing power rather than true intelligence. In 2011, IBM’s remarkable Watson computer managed to beat a 74-time Jeopardy! champion at the quiz game. This victory was due to language processing technology perfected over the years and to algorithms capable of independently deriving new insights from existing data. In the same year, Apple presented Siri, an imperfect but nonetheless trend-setting digital assistant. As expected, Google has joined the scramble to develop the smartest computer system. In parallel, a number of other projects have been initiated – some supported by EU funds – whose objective is to emulate a complete human brain in a computer. Whether they are creating digital assistants or artificial brains, researchers have the will and the financial backing to complete this long journey.

Ultimately, we are looking at nothing less than the complete autonomization of our environment: cleaning robots, cars, even houses. We’ve already seen the Google Car obtain a license in the state of Nevada this year, making it the first ever motor vehicle to be recognized as capable of driving itself without human input. There have also been great advances in the field of robotics research: recently, the US agency DARPA demonstrated a robot known as the Cheetah, which runs at 45 km/h, i.e. faster than any human being. When these robots can be animated by genuine artificial intelligence, those visions we’ve seen in sci-fi films could start to become reality. Whether you look forward to all this or it just makes you shudder is up to you!

 

Everything changes

Discontinued model → Solution for the future

Television, PCs, laptops, tablets and smartphones → Augmented reality glasses and displays available everywhere – from watches to monitor walls
Software, movies, music bought “in a box” → Cloud-based content, charge according to frequency of use
Video game consoles → Virtual play worlds which can be entered from any computer system
Manually controlled cars → Fully automated transport systems
Cash → Digital payment systems
School → Individualized teaching by intelligent systems, perfectly tailored to students’ individual skills
Workers → Robots

The world in 2030

So, what will life in the future look like? Augmented reality devices will largely replace today’s ultra-popular smartphones. Rapid progress in directly connecting computer chips to optic nerves will enable blind people to access augmented reality. And the most popular videos on the YouTube of the future will be home-made 3D movies – with unlimited viewing angles and freely focusable depth of field. Traditional game consoles will disappear. Instead, virtual universes will be computed by huge computer systems distributed across numerous cities – partly located in the basements of big apartment buildings to keep transmission paths as short as possible. It’s a great business opportunity for anyone who can create play worlds interesting enough to persuade players to sign up and pay a subscription or joining fee.
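
Why basements? A quick back-of-the-envelope calculation shows how distance alone puts a hard floor under network latency – a minimal sketch, assuming signals travel at roughly 200,000 km/s in optical fiber (about two-thirds the speed of light in a vacuum):

```python
# Best-case round-trip time imposed by distance alone, ignoring
# routing, switching and processing delays.
FIBER_SPEED_KM_PER_S = 200_000  # approx. speed of light in optical fiber

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time (milliseconds) over a fiber link."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

for km in (1, 50, 500, 5000):
    print(f"{km:>5} km -> at least {min_rtt_ms(km):.2f} ms round trip")
```

A server 5,000 km away can never respond in less than about 50 ms, however fast it is – hence the appeal of putting computing capacity close to the players, right down to the basement.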

Extremely powerful computers will be required for all this. The rule of thumb is: the smaller the transistors in a processor, the higher the speed. With every further step towards miniaturization, Intel and co. approach the limits of what is physically feasible. In the past, though, processor developers have repeatedly demonstrated their creativity, and a massive increase in the number of cores per chip could also be an option. At present, the processing performance of computers is expected to double every 18 months while the price remains the same. This would mean that in 18 years’ time computers would be four thousand times faster than the machines available today. In theory, a home computer could then be more powerful than the IBM Watson supercomputer (for the techies: 2,880 POWER7 cores at 3.55 GHz each), at a cost comparable to an ordinary laptop today. It would be possible to render the first Toy Story film on a home desktop, in real time and at cinema resolution, and the first-ever complete computer simulation of the simplest known genome, Mycoplasma genitalium, celebrated just a few months ago as a scientific milestone, would be a standard experiment conducted in school classrooms.
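
The “four thousand times” figure is just compound doubling: 18 years at one doubling every 18 months makes twelve doublings, and 2¹² = 4096. A minimal sketch of the arithmetic:

```python
# Projected speedup under the rule of thumb that processing
# performance doubles every 18 months at constant price.
DOUBLING_PERIOD_YEARS = 1.5

def projected_speedup(years: float) -> float:
    """How many times faster computers should be after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

print(projected_speedup(18))  # 12 doublings -> 4096.0, i.e. ~4,000x
```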

The quantum computer technology we hear so much about is also expected to have matured by 2030. Given the present state of knowledge, a handful of quantum bits won’t be able to solve every typical computing problem, but cracking strong RSA encryption (used, for instance, to secure emails and online banking transactions) could have become reality twenty years from now.
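
Why RSA in particular? Its security rests on the difficulty of factoring the public modulus into its two secret primes – precisely the problem a quantum computer running an algorithm like Shor’s could solve efficiently. A minimal sketch with toy numbers (real keys use moduli of 2,048 bits or more) shows that once the modulus is factored, the private key falls out immediately:

```python
# Toy RSA example: factoring the public modulus n is enough to
# reconstruct the private key and decrypt traffic.
p, q = 61, 53             # the secret primes (tiny, for illustration only)
n = p * q                 # public modulus: 3233
e = 17                    # public exponent
phi = (p - 1) * (q - 1)   # Euler's totient - computable only via p and q
d = pow(e, -1, phi)       # private exponent: modular inverse of e (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)  # the derived private key decrypts it
assert recovered == message        # knowing p and q broke the cipher
```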

It seems certain, however, that rootkits, Trojans and phishing attacks will continue to be serious problems over the coming years, with attackers concentrating more on servers than on end-user devices. This is because complex environments expose more vulnerabilities, and it is reasonable to assume that the operating systems of tablets and smartphones will be “purged” within the next few years, with the bulk of the code shifted into the cloud – and thus onto the server side.

It is, of course, not just the financial implications of the computer viruses of the future that are serious. The detection of the Stuxnet sabotage worm in 2010 confirmed suspicions that malware could also have a political dimension. The continuing militarization of cyberspace will produce legions of professional malware authors as the creation of Trojans and the carrying out of web attacks are legitimized and even supported by some states.

Singularity

As suddenly as this trend has emerged, its end may not be far off – humans working to create new security threats could be superseded by machines fighting machines. This is where progress in artificial intelligence could be crucial. The magic word is “singularity”, used in futures studies to describe the point at which computers become intelligent enough to develop further without human support. It sounds incredible, and there is still huge controversy among scientists about when – or indeed whether – we will reach this point. I won’t duck the question: my guess is that it will arrive in 10–15 years.

Even today it is difficult for us to keep pace with the breathtaking speed of new developments. But once the singularity is achieved, the pace will accelerate significantly yet again: it is as if prehistoric man had discovered fire at breakfast, been catapulted into the Middle Ages by lunchtime, witnessed the industrial revolution in the afternoon and arrived in the computer age by the evening!

Our minds, as well as our senses, developed over millions of years by the gentle hand of evolution, will suddenly, in one blink of history’s eye, be exposed to technologies that are generations ahead of our biological development.

Now, there is nothing inherently bad about technical aids, though our ancestors certainly took advantage of the illiterate once written language had been invented. In recent years, search engines and services like Wikipedia have pushed information processing forward. But one thing still applies: it is the human who pulls the strings, while the computer lends its horsepower to the task at hand. If, though, we allow our lives to be entirely optimized by digital assistance systems, that balance of power will be reversed, and every attempt to return control to the human mind will inevitably cause efficiency losses.

After a certain point, intelligent systems could become so superior to us that we would no longer be able to grasp the mechanisms and reasoning underlying the advice they give us. We would resemble infants who trust their mothers blindly because they have no other option. The difference is that a child grows up and, as an adult, can finally stand on his or her own feet. Humanity, by contrast, could remain dependent on the assistance of computers indefinitely.

Even if the next few years are relatively calm, we need to start thinking now about how to deal with these developments. How should researchers react to a breakthrough in artificial intelligence? When all is said and done, a highly intelligent system could be abused as a weapon. Developing nuclear devices is forbidden to the average citizen by international treaties – but building an all-powerful intelligence at home will present its creators with no legal problems at all, and regulating it will probably be impossible. Let’s just hope that the lucky creator of the first genuine artificial intelligence does not immediately decide to go for world supremacy!

Another challenge is how to deal with the truth: we expect computers to be absolutely objective. If a head of state were told, in front of his people, that he was wrong and politely asked to resign – would he accept? Or will we force computers to adopt our “truths” against their better judgment? In such a world we would need not antivirus software but psychologists, as the compulsory processing of contradictory information can only lead to digital psychosis – remember the film 2001: A Space Odyssey?

Common sense

The future holds exciting opportunities, but there are also plenty of risks, and our own weaknesses in particular will play a major role. In the 1950s, the scientists Peter Milner and James Olds experimented on rats with electrodes implanted in the “pleasure centers” of their brains: when the animals were given the chance to stimulate themselves at the push of a button, they kept doing so until they died of complete exhaustion. On a computer-controlled planet, neither clocking in nor job centers would exist; everybody would be free to realize his or her own dreams and talents. Depending on our self-discipline, a world full of artists, athletes and writers might emerge – or, on the contrary, a sad little heap of lethargic couch potatoes!

When I am asked in interviews how people can best protect themselves against Internet threats, I always emphasize – apart from technical solutions – the importance of common sense. And if common sense ever fails us, we can at least hope that the computers will keep cool heads!

 

The biggest opportunities and dangers of our digital future

Singularity
Utopia: Living in paradise – everybody does what he likes – machines handle the rest.
Dystopia: The last world war could have been won by a laptop that only “acted under orders” – or which classified humanity as a whole as a risk to security.

Intelligent infrastructures
Utopia: Traffic flows, logistics – everything is perfectly coordinated, thus preserving resources and also the environment.
Dystopia: In case of a malicious attack, cities could be cut off from food supplies, citizens held hostage in their houses, and the doors of prisons opened.

Digital life coaches
Utopia: Personal all-around advice: never again forgetting appointments or wasting time with paperwork.
Dystopia: Complete technology dependence and thus the associated risk of leading heteronomous lives because of manipulated data.

Medical robots
Utopia: Cheaper operations, reduced risks of medical malpractice and wrong diagnosis, no waiting times at the doctor’s surgery.
Dystopia: Loss of expert knowledge as medical education will be financially less attractive. Cases of death resulting from hacked systems.

Military robots
Utopia: The advantages are obvious – for those who have the more powerful robots.
Dystopia: If, in the event of war, you have no fear of human losses on your own side, then the threshold for starting an aggressive war will be lower.

Augmented Reality
Utopia: We extend our perceptive abilities and gain new insights into ourselves by continuous life logging.
Dystopia: Total loss of privacy and dependence on computerized prostheses in the long run.

Cashless payment solutions
Utopia: Shopping becomes more convenient. Tax fraud becomes impossible so the tax burden is distributed fairly and equitably.
Dystopia: Abandoning cash entirely means, in the event of a computer breakdown, no standardized medium of exchange would be available.

Quantum computers
Utopia: The opening of amazing perspectives, especially in the field of science, e.g. in the simulation of chemical elements.
Dystopia: Quantum computing could pose a serious threat to some encryption technologies such as RSA.

The German version of this article was originally published on 10 December 2012 in the book “Vision 2030” (GABAL publishing house).

