27/04/2024


Alexandros Niklan
Sr. Security Consultant

Let me start by saying that this text is not going to analyze the present state of CyberSec & InfoSec in relation to AI. Nor is it going to catalogue the ways AI is currently being woven into CyberSec & InfoSec.

This is a text/article/analysis (call it whatever you think fits) about what is coming in AI and how that prospect will affect the security world as a whole, not just the IT domain.

We have only recently been informed of a breakthrough in robotics that integrates AI and presents the first humanoids, not as a future concept but as a present-day achievement that will be put into production by companies such as BMW and Nvidia. In BMW's case in particular, these humanoids are set to work on the production line of its Spartanburg, South Carolina (US) plant alongside human workers.

Nvidia, on the other hand (along with other companies), has presented humanoid robots that appear able to process complex situations while also responding to human instructions and challenges.

Now, someone may say that this was to be expected, as AI today is becoming more and more advanced by absorbing petabytes of data while, at the same time, CPUs/GPUs are capable of processing millions of options, possibilities, and outcomes in a split second. But all of this still has limitations, some of which are presented in the following lines.

The first limit lies in the “body” of AI. Today’s technology has a hard ceiling when it comes to processing (CPU/GPU limits, even at multi-node scale). This means that at some point AI may hit a wall with no margin to go beyond it. This line has already been mentioned by many scientists, and it is also considered a barrier to any possible evolution of self-awareness in AI. While this is quite a complex issue to analyze, the facts indicate that no matter how much data is combined and how generative AI uses it to produce efficient results, it is still not capable of bringing “life” to the machine.

The second limit also lies in the “body” of AI. Most AI platforms are compartmentalized by companies and organizations in order to protect their source code, patents, and copyrights. Add to that national and international privacy laws that raise further barriers (the NYT lawsuit, for example) and limit several AI platforms’ access to a significant portion of data. This means the data needed cannot be reached, and therefore the outcomes may have flaws, or room to improve if access were ever granted.

The third limit is energy. We have watched videos of robots being put to work or acting on challenges set by humans, but doing so requires an enormous amount of energy, and that is only for the platforms based on cloud services and software. How a robot such as BMW’s will operate on a 24/7 basis has not yet been made clear, especially when we already know that e-cars, for example, offer very limited performance on today’s cell batteries.

These, as said, are only some of the obstacles already in the focus of many scientists. How do they think these will be solved? This is where it gets really interesting.

The solution to this, according at least to some experts and engineers, is the “Hive collective”.

The Hive collective is similar to the IoT we know today, but with significant differences. It would be something without barriers: it would use all available CPU/GPU capacity for its processing (much like the SETI@home program used private terminals some years ago). It would also rely on a “collective AI” that evolves into a huge data repository, offering virtually unlimited options for any single “terminal” (i.e., a robot or whatever platform is connected to it) to draw on.
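To make the pooled-compute idea a bit more concrete, here is a minimal, hypothetical Python sketch of the SETI@home-style pattern described above: a coordinator splits a job into work units, any connected “terminal” claims and processes them, and the partial results are merged back together. The class and function names are illustrative only and do not belong to any real hive or AI platform.

```python
# Minimal, hypothetical sketch of the SETI@home-style "pooled compute" pattern
# described above. Coordinator, volunteer_worker and the payloads are illustrative
# placeholders, not part of any real hive or AI platform.
import queue
import threading

class Coordinator:
    def __init__(self, payloads):
        self.work = queue.Queue()
        self.results = queue.Queue()
        for unit_id, payload in enumerate(payloads):
            self.work.put((unit_id, payload))      # split the job into work units

    def volunteer_worker(self, name):
        """A connected 'terminal' lending its spare CPU/GPU time to the collective."""
        while True:
            try:
                unit_id, payload = self.work.get_nowait()
            except queue.Empty:
                return                             # no work units left to claim
            result = sum(payload)                  # stand-in for the real computation
            self.results.put((unit_id, name, result))

    def run(self, n_workers=4):
        threads = [threading.Thread(target=self.volunteer_worker, args=(f"node-{i}",))
                   for i in range(n_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        merged = []                                # merge partial results back together
        while not self.results.empty():
            merged.append(self.results.get())
        return sorted(merged)

if __name__ == "__main__":
    coordinator = Coordinator(payloads=[[1, 2, 3], [4, 5], [6, 7, 8, 9]])
    for unit_id, node, result in coordinator.run():
        print(f"work unit {unit_id} computed on {node}: {result}")
```

The point of the sketch is only the shape of the architecture: barrier-free pooling of whatever compute happens to be connected, with results flowing back into a shared repository.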

As for energy, while we all await a significant breakthrough in quantum technology (where entropy is still a blocking issue), it was only yesterday that European Commission President Von der Leyen stated that nuclear energy should be the focus now, as the stepping stone in the transition from mineral-based energy to green energy. Nuclear energy, as we know, has practically no limits when it comes to production; the limits are set only by our present technology. So imagine a new modular reactor (already presented in the USA) being used to supply power to central server farms running AI software.

I know that many will now smirk or even laugh, since this is a scenario that screams “Skynet/Matrix”, but that is not the purpose of this text. The issue lies elsewhere, and it is a bit more real than the conclusion some will rush to.

Here is where security comes into play. We already know that today’s AI was initially built and presented with no real consideration for security and privacy. Only a few months ago the EU and the USA were trying to enforce legislation on privacy and security (access control) along with an ethics framework. This is once again a catch-up game: profit may be leading the way but, as usual, no risk analysis was done beforehand.

Security is now called upon to limit and control unauthorized access, unauthorized use, and malicious usage, and even to prevent the use of AI as a strategic weapon for producing attack scenarios or perhaps guiding an arsenal against an opponent. But is this achievable?

In my opinion there are two answers here. One is very disappointing, and unfortunately it is the one that is valid and applies today. Security, as said, is playing catch-up with a product that is aggressively self-evolving, and on top of that there is a human side to control whenever people interact with it. Legislation, frameworks, and codes of ethics are all very good and impressive, but they are all lacking on one very serious point: they are built by humans for humans. Controls are designed and estimated based on previous experience, where technological evolution was linear and slow. AI is nothing like that; generative AI in particular is only a sample of what it can do as a tool.

Controls should be built into the code and be a core part of it. I am not talking only about the process (i.e., the SDLC) but about the code itself. Regardless of whether someone uses, for example, Python or a logic language to build something, it needs to be built with fail-safes in every I/O module. In other words, the controls should be placed within the mind of the AI itself and not only around the process or the human interactions with it.
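As an illustration of what “fail-safes in every I/O module” could mean in practice, here is a minimal, hypothetical Python sketch: a guard that inspects both the input handed to an AI component and the output it produces, and refuses to pass anything that violates a hard-coded control. The blocked terms and the `model` callable are placeholders, not any specific product’s API.

```python
# Minimal, hypothetical sketch of a fail-safe wrapped around the I/O of an AI
# component. The blocked terms and the `model` callable are illustrative
# placeholders, not any specific product's API.
class FailSafeViolation(Exception):
    """Raised when an input or output violates a hard-coded control."""

def guarded_io(model, prompt, blocked_terms=("shutdown grid", "disable safety")):
    # Inbound control: refuse the request before it ever reaches the model.
    if any(term in prompt.lower() for term in blocked_terms):
        raise FailSafeViolation(f"blocked input: {prompt!r}")

    output = model(prompt)                          # the actual AI call (placeholder)

    # Outbound control: inspect what the model produced before releasing it.
    if any(term in output.lower() for term in blocked_terms):
        raise FailSafeViolation("blocked output withheld")
    return output

if __name__ == "__main__":
    fake_model = lambda p: f"echo: {p}"             # stand-in for a real model
    print(guarded_io(fake_model, "summarize the maintenance report"))
    try:
        guarded_io(fake_model, "shutdown grid section 7")
    except FailSafeViolation as err:
        print("fail-safe triggered:", err)
```

The design point is that the control sits inside the I/O path itself, so it cannot be bypassed by the surrounding process or by whoever happens to be interacting with the model.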

Why should it be done like this? Let’s walk through a scenario.

We already know that generative AI, besides producing nice articles, images, and artistic output of every combination and nature, is also being used to refine and correct its own options and results as part of becoming better in quality, speed and, of course, efficiency.

Now let’s not jump to the hive scenario and the “Skynet theory” (hoping not to displease readers here), but instead consider a “Stuxnet-like” piece of work, built by a group of people using a generative AI platform with a specific goal against an opponent of theirs. This work would contain AI-generated code designed to achieve its goals by transforming or evolving against any countermeasure it meets. It might be aimed at a power supply network (the Ukraine-Russia war, for example). Let’s say it successfully accomplishes its goals, but for some reason the coding team has put no “self-destruct” routine in place, or that routine is buggy enough that it never executes. What happens next? Will this “work” evolve into something that spreads across the IoT? Will it be able to bring down other networks of any kind as well?

As said, I know that many will disagree or even smile at this with sarcasm, but I could respond with just two words: “dark net”. If a sample of this source code is leaked and reused by other groups, do you really think it is still not possible? And if it happens, will any kind of policies, SOPs, frameworks, or process controls be able to defend against it? I hardly think so. They may be useful for monitoring, but not for stopping an advanced attack with multiple levels and points of breach.

This is why I am saying that security, when it comes to AI technology, needs to be transformed and to evolve ahead of the product’s progress, and to be integrated into its core with coded fail-safes that can be triggered as a “break glass” rule: a master switch-off that can issue the equivalent of a “kill -9” command and incapacitate an AI platform, a robot, or even an entire hive of AIs and humanoids if something goes wrong.
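To show the shape of such a “break glass” control, here is a minimal, hypothetical Python sketch of a watchdog that hard-kills registered AI worker processes the moment an out-of-band trigger appears. The trigger file path and the worker bookkeeping are assumptions for illustration only.

```python
# Minimal, hypothetical sketch of a "break glass" kill switch for locally running
# AI worker processes. The trigger file path and the worker bookkeeping are
# illustrative assumptions, not part of any real platform.
import os
import signal
import time

BREAK_GLASS_FILE = "/var/run/ai_break_glass"       # assumed out-of-band trigger

def watchdog(worker_pids, poll_seconds=1.0):
    """Poll for the break-glass trigger, then hard-kill every registered worker."""
    while True:
        if os.path.exists(BREAK_GLASS_FILE):
            for pid in worker_pids:
                try:
                    os.kill(pid, signal.SIGKILL)   # the literal "kill -9"
                except ProcessLookupError:
                    pass                           # worker already gone
            print("break glass: all registered AI workers terminated")
            return
        time.sleep(poll_seconds)

if __name__ == "__main__":
    # Hypothetical usage: supervise a dummy "AI worker" that just sleeps.
    import subprocess
    import sys
    worker = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(3600)"])
    print(f"watching pid {worker.pid}; create {BREAK_GLASS_FILE} to trigger the switch")
    watchdog([worker.pid])
```

The essential design choice is that the trigger lives outside the AI’s own reach: the watchdog runs as an independent process, and the equivalent of “kill -9” cannot be negotiated with or overridden by the platform it supervises.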

Again, as said at the beginning, this is a text about something that is not here yet, at least not in its worst form. But things tend to move in that direction, and as security experts we should at least consider how to be proactive, rather than once again being the last gear to move and playing an endless (and ineffective) game of catch-up with an AI that is rapidly growing and evolving.

 
