
The Oxford Dictionary defines artificial intelligence (AI) as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Before we begin our examination of AI and cyber security, we should first look at the complexity issues we face as CISOs in protecting our organizations' systems, networks and data. Our systems and data are housed in data centers and clouds supporting literally hundreds of different software applications, operating systems (OS) and more, all of which need constant attention in the form of patching and upgrades. (Raise your hand if you have applications running on obsolete operating systems because the applications are deemed essential to the business and are so antiquated that they cannot be ported to newer, supported OSs!) To make matters worse, we have a staggering number of security products deployed in an effort to protect our systems. These security products themselves also demand constant attention, not only in monitoring their alerts and alarms but in patching and upgrading them as well. This systemic complexity begs for AI solutions to help secure it. A June 28, 2018 BizTech article states:

“Security is not working. While security as a percentage of IT spend continues to grow at a robust rate, the cost of security breaches is growing even faster. Organizations are spending close to $100 billion on a dizzying array of security products. In fact, it is not uncommon for CISO organizations to have 30 to 40 security products in their environment.”

 

Defensive Aspects of AI and Cyber Security

(Good Guys)

Thirty years ago, one of the most promising aspects of AI was the expert system. The idea was to sit down with a subject matter expert (SME) and encode their knowledge into a computer program, mostly as if-then rules. The concept held promise for applications such as medical diagnosis, stock picking and capturing institutional knowledge from soon-to-retire SMEs. Expert systems relied on rule-based inference engines and special-purpose languages such as LISP. Fast forward to today, and the expert systems of the past have given way to the AI subfields of machine learning and deep learning. The elder, machine learning, is largely built on task-specific algorithms used for prediction, analytics and data mining, while the newer, deep learning, uses methods that learn representations directly from data rather than relying on task-specific algorithms.
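
To make the contrast concrete, here is a minimal, purely illustrative Python sketch; the feature names, thresholds and training data are hypothetical and not drawn from any real product. The expert-system approach hand-encodes an SME's judgment as if-then rules, while the machine-learning approach infers a comparable decision from labeled examples.

```python
# Illustrative contrast between hand-coded rules and a learned model.
# All thresholds, feature names and data below are hypothetical.

from sklearn.tree import DecisionTreeClassifier  # pip install scikit-learn

# Expert-system style: an SME's knowledge encoded as if-then rules.
def expert_rule(failed_logins: int, off_hours: bool) -> str:
    if failed_logins > 10 and off_hours:
        return "suspicious"
    elif failed_logins > 25:
        return "suspicious"
    return "benign"

# Machine-learning style: a comparable decision is learned from labeled
# examples instead of being written down by hand.
# Each row: [failed_logins, off_hours (0/1)]
X = [[2, 0], [3, 1], [12, 1], [30, 0], [1, 0], [40, 1]]
y = ["benign", "benign", "suspicious", "suspicious", "benign", "suspicious"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(expert_rule(12, True))        # rule-based verdict
print(model.predict([[12, 1]])[0])  # learned verdict
```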

To paraphrase Malcolm Gladwell's somewhat controversial assertion, it takes a human roughly 10,000 hours of deliberate practice to become a world-class SME in any given field. An AI system employing machine or deep learning can shorten that practice time dramatically. To build the body of knowledge that let IBM's Watson compete on the popular quiz show Jeopardy!, researchers assembled 200 million pages of structured and unstructured content, including dictionaries and encyclopedias. When asked a question, Watson first analyzed it using more than 100 algorithms to identify names, dates, geographic locations and other entities. It also examined the phrase structure and grammar of the question to better gauge what was being asked. In all, it used millions of logic rules to determine the best answers. Incidentally, Watson easily beat its human opponents.

The obvious application of machine and deep learning is for an AI system to sift through the millions of alerts and alarms captured from the various security systems. Such systems can analyze patterns and correlate attack data to ferret out potential threats and compromises. Will AI systems like Watson replace human security analysts? Potential defensive uses of AI might be to (a simplified sketch follows the list below):

  • Provide a comprehensive, enhanced view into the security of the network and hence better situational awareness by analyzing software vulnerabilities, configuration errors, and threat intelligence to isolate high-risk situations that require immediate attention.
  • Enhance and accelerate incident detection by analyzing and correlating millions of security alarms and alerts across disparate security tools.
  • Enhance and accelerate incident response by prioritizing incidents and automating remediation tasks.
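
To illustrate the correlation and prioritization ideas above, here is a deliberately simplified Python sketch. The tool names, alert fields and scoring weights are hypothetical assumptions; real platforms layer statistical and learned models on top of this kind of logic.

```python
# Hypothetical sketch of correlating alerts from disparate tools and
# prioritizing the resulting incidents. Field names, tool names and
# weights are illustrative, not the API of any real security product.

from collections import defaultdict

alerts = [
    {"tool": "ids",      "src_ip": "10.0.0.5", "type": "port_scan",       "severity": 3},
    {"tool": "endpoint", "src_ip": "10.0.0.5", "type": "malware_hash",    "severity": 8},
    {"tool": "firewall", "src_ip": "10.0.0.9", "type": "blocked_conn",    "severity": 2},
    {"tool": "siem",     "src_ip": "10.0.0.5", "type": "priv_escalation", "severity": 9},
]

# Correlate: group alerts from different tools by the entity they share.
by_entity = defaultdict(list)
for alert in alerts:
    by_entity[alert["src_ip"]].append(alert)

# Prioritize: score each entity by total severity and by how many distinct
# tools fired, then surface the highest-risk incidents first.
def risk_score(entity_alerts):
    tools = {a["tool"] for a in entity_alerts}
    return sum(a["severity"] for a in entity_alerts) * len(tools)

incidents = sorted(by_entity.items(), key=lambda kv: risk_score(kv[1]), reverse=True)

for ip, entity_alerts in incidents:
    print(f"{ip}: score={risk_score(entity_alerts)}, alerts={len(entity_alerts)}")
    # A remediation playbook (e.g., isolate host, reset credentials) could be
    # triggered automatically above a chosen score threshold.
```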

 

Offensive Aspects of AI and Cyber Security

(Bad Guys)

AI is a dual-use technology: it can serve both good and bad purposes. We can make the case that most AI-based defensive techniques and methods can also be used by the bad guys to make attacks more adaptive, faster and harder to detect. A February 2018 report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, identifies three areas of malicious AI: expansion of existing threats, introduction of new threats, and a change in the typical character of threats.

AI attacks are not confined to the digital realm; they bleed over into the physical and political realms as well. Consider self-driving automobiles that can be hacked in various ways to stop, start and speed up. Probably no AI-related technology puts more of a "human face" on the field than robots. The robotics industry is growing rapidly, as illustrated in a May 2018 blog.robotiq.com article that cites several key statistics:

  • According to Loup Ventures research, the industrial robotics market is expected to grow by 175% over the next decade. The primary focus of that growth will be on collaborative, assisting platforms rather than traditional automated machinery.  The same Loup report states that 34% of the industrial robots sold by 2025 will be collaborative – designed to work safely alongside humans in factories and plants.
  • One of the most prominent electronics and component manufacturers in the world, Foxconn, has converted 60,000 jobs into automated ones.
  • As mentioned in IDC’s Worldwide Healthcare IT 2017 Predictions report, there will be a 50% increase in the use of robotics for medical and healthcare delivery services by 2019.

My personal experience with robotic surgical systems was that these complex systems were not designed or implemented with basic security protections, and I suspect the same is true of most, if not all, robotic systems. It is easy to foresee that malicious hacks into these systems are imminent.

In the political realm, examples of malicious AI could include fake news reports with realistic fabricated video and audio; automated, hyper-personalized disinformation campaigns; and the manipulation of information availability.

In summary, AI will offer significant advances in cyber security while also enhancing cyber-attacks. The proliferation of robots, autonomous vehicles, worn or implanted medical devices and the like will benefit society and the overall human condition while at the same time dramatically expanding an already fertile attack surface.

With CISOBox, you'll know you have the best tool to deploy in the aftermath of a cyberattack. As AI expands the areas vulnerable to cybercrime, don't wait. Schedule your demo with CISOBox today. 

 
