Artificial Intelligence – AI –

What is it Really?

By Kenrick Bagnall, Founder of KONCYBER

Artificial Intelligence (AI): we seem to be hearing more about it each day. We hear about banks using it in fraud detection, customer service chatbots, manufacturing automation, and even smart homes and social media. But what is AI, and why should it matter to you and me?

What is Artificial Intelligence? Merriam-Webster defines AI as “software designed to imitate aspects of intelligent human behavior, an individual program or set of programs designed this way, something (such as a robot) that operates by means of AI software.”

I decided that the best way to get a clear definition of what AI is was to simply ask AI itself. Many of you reading this will have heard of the popular AI from the company OpenAI, called ChatGPT. ChatGPT is known as a large language model (LLM) AI; more on LLMs later. I posed the following question to ChatGPT: “What is the overall goal of Artificial Intelligence?”

Answer:

“The overall goal of Artificial Intelligence is to create machines or computer systems that can perform tasks that would typically require human intelligence. This includes the ability to learn from experience, adapt to new situations, understand, and process natural language, recognize patterns, reason, and solve complex problems. The idea is to mimic human cognitive abilities and decision-making processes to some extent, if not surpass them, in certain domains.

Overall, the ultimate goal of AI is to create systems that can effectively and intelligently interact with the world, understand context, make decisions and benefit humanity in various aspects of life, including healthcare, transportation, education, entertainment, and more.

While AI has made significant advances, achieving full artificial general intelligence (AGI) that matches or exceeds human intelligence remains an ambitious and challenging long-term objective.”

Why are we now hearing more and more about AI?

I suggest that four things have changed recently to bring AI more into the spotlight.

Point One. The algorithms, or programming code and logic, used to construct these systems have become far more complex. This complexity has been driven by more advanced and powerful programming languages that feed off large data sets.

Point Two. The source data that these systems (AI) can reference for learning has grown exponentially over the past three decades. Processing all of that data requires tremendous computing power.

Point Three. Microprocessors, including the advanced chips specifically designed for applications like blockchain and cryptocurrency, are now being used to meet the huge processing demands that AI places on these very expensive computer systems.

Point Four. More money than ever before is being put into the development of AI-based systems. The likes of Google, Amazon, Facebook, and Apple have been pouring resources into AI recently. Add to this the similar contributions of Microsoft, IBM, Nvidia, Intel, and more, and it is clear why AI has exploded into the mainstream in recent years.

When Things Intended for Good Are Not Used That Way.

When we look back at AI’s self-described definition and objective, we see an important line: AI is essentially intended to make decisions and benefit humanity in various aspects of life. How often have we witnessed things designed for good ending up being used by the criminal element for nefarious purposes? Can AI be used not only for the betterment of mankind but, in the wrong hands, also be harmful to mankind? The simple answer is yes.

AI, Future Crime & Public Safety.

In this section of my article, I will be referencing a research paper published by University College London (UCL)’s Dawes Centre for Future Crime, titled “AI-Enabled Future Crime.” The paper was published in July 2020, and based on its content, I believe there are some conclusions that can be drawn.

AI Global Risk Impact

Let’s set the stage with a brief review of risk. In traditional property and casualty insurance, risk is generally measured by looking at the likelihood, severity, and impact of an event. At a high level, cybercrime risk considerations look at how susceptible a system is, how accessible it is, and how capable the threat actor (or cybercriminal) is.

The UCL study took the following four factors into consideration when assessing cybercrime enabled with the help of AI (a simple scoring sketch follows the list):

  •  The harm caused;
  •  The profit gained by the criminal group, largely financial and often denominated in digital currency (Bitcoin);
  •  How achievable the crime is, in terms of the readiness and availability of the technology; and
  •  How difficult the crime is to defeat (detect, prevent).
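To make the idea of weighing these factors more concrete, here is a minimal sketch in Python of how ratings for harm, profit, achievability, and difficulty to defeat might be combined into a rough overall ranking. This is purely illustrative: the crime names, the 1-to-5 scores, and the simple averaging are placeholders of my own and not the UCL paper’s actual methodology.

    # Toy illustration only: hypothetical 1-5 ratings for each of the four factors
    # (harm, profit, achievability, difficulty to defeat) -- not UCL's real data.
    crimes = {
        "audio/visual impersonation":       (5, 4, 4, 4),
        "tailored phishing":                (4, 4, 5, 3),
        "disrupting AI-controlled systems": (5, 3, 2, 4),
    }

    def overall_concern(scores):
        """Average the four ratings into a single rough 'concern' score."""
        return sum(scores) / len(scores)

    # Rank the hypothetical crimes from most to least concerning.
    for name, scores in sorted(crimes.items(),
                               key=lambda kv: overall_concern(kv[1]),
                               reverse=True):
        print(f"{name}: {overall_concern(scores):.2f}")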

The study found many interesting things about AI, how it impacts cybercrime, and the resulting impact on the public. To summarize for this article, the four most impacted areas of AI-enabled crime are as follows:

1.      Audio/Visual Impersonation

Impersonation of another person on video or audio. This could be the impersonation of children to relatives over video calls to gain access to funds.

2.      Tailored Phishing

Phishing is a social engineering attack that aims to collect secure information or install malware via a digital message purporting to be from a trusted party, such as a bank.

3.      Disrupting AI-Controlled Systems

As AI systems become ever more essential (in government, commerce, and the home), the opportunities for attack will multiply, leading to many possible criminal and terror scenarios arising from targeted disruption of such systems, from causing widespread power failures to traffic gridlock and breakdown of food logistics.

4.      AI-Authored Fake News

Fake news is propaganda that aims to appear credible by being, or seeming to be, issued from a trusted source.

With the UCL paper published in 2020 and based on a couple of years of research, we can fast-forward to 2024 and see where things stand today. A current look at Analytics Insight, covering attacks against individual victims, public and private sector organizations, and critical infrastructure, shows us the following top AI-enabled cybercrime areas:

a.      Advanced Persistent Threats (APTs)

An advanced persistent threat (APT) is a sophisticated, sustained cyberattack in which an intruder enters a network undetected and stays there for a long time in order to steal sensitive data.

b.      Deepfake Attacks

These employ artificial intelligence-generated synthetic media, such as videos or images, to impersonate real people and carry out fraud or disinformation campaigns.

c.      AI-Powered Malware

Malware that uses AI has been taught to think for itself, adapt its course of action in response to the situation, and specifically target its victims’ systems. Malicious software that uses AI to avoid detection and adapt to changing environments makes defense more difficult.

d.      Phishing

Using natural language processing and machine learning, attackers can create more convincing phishing emails and messages that are designed to trick individuals into revealing sensitive information.

e.      DDoS Attacks

Distributed Denial of Service (DDoS) attacks use AI to identify and exploit vulnerabilities in a network, allowing the attacker to amplify the scale and impact of the attack.

By examining items 1-4 from the 2020 UCL report and cross-referencing the current insights in a-e, we can conclude that AI-enabled cybercrime does not mean “new crime.” It does, however, mean an order-of-magnitude increase in the sophistication and complexity of already existing attacks. Over a span of 4 to 6 years, we have seen a marked increase in the effectiveness of AI-enabled fake news, deepfakes of public figures and celebrities, targeted phishing attacks, and attacks on critical systems and infrastructure.

Add to this the growing use and sophistication of Large Language Models (LLMs). An LLM is a type of AI that can recognize (read, hear, understand) written or spoken words or text and return its output in the same format. Yes, you can talk to the AI, and it can respond to you in a way you can understand. That is ChatGPT in a nutshell.
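For readers curious what “talking to” an LLM looks like from a program rather than a chat window, below is a minimal sketch using OpenAI’s Python client. The model name is a placeholder chosen for illustration, the API key is assumed to be set in the environment, and the exact interface may change over time.

    # Minimal sketch: ask an LLM a question from Python (pip install openai).
    # Assumes the OPENAI_API_KEY environment variable is set; the model name
    # below is a placeholder -- check OpenAI's documentation for current models.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": "What is the overall goal of Artificial Intelligence?"}],
    )

    # The reply comes back as plain text, just like typing into ChatGPT.
    print(response.choices[0].message.content)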

Closing Thoughts and Takeaways.

What the defenders of our data networks were seeing 4 to 6 years ago has not changed. It’s the same crime, only more sophisticated in its approach, execution, frequency, and ferocity.

AI does pose a serious threat in terms of cybercriminals taking technology meant for good and putting it to criminal use. But that’s not what I would call a game changer. The game changer, as I see it, is when AI is powered by this thing called Quantum Computing and managed by the structure and capability of Blockchain Technology. The latter two will be the subject of another article at some point in the future, but my point here is that current AI won’t move the needle in terms of “new crime.” Quantum-powered AI, however, will be a game changer the likes of which we have never seen before.

So, what do we do? This hybrid threat will force the expanded adoption of AI into every aspect of every organization’s cybersecurity defense program. It will force the use of AI in every government’s infrastructure and public safety protection program. And it will force law enforcement to adopt new investigative tools and techniques to function in a landscape of new technologies and new AI-enabled crime.

For the everyday citizen, you and I must continue to secure our personal devices and use long, strong passwords along with password managers. We must secure our homes, as with each passing month we install and attach even more Internet-enabled devices. We must stay up to date on current threats and be aware of our own human vulnerabilities as we browse, check email and text messages, and shop online. We must be hypervigilant, as we are on the front lines of protecting our own personal devices and data.
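As one small, concrete illustration of the “long and strong passwords” advice above, here is a minimal sketch that generates a random password using Python’s standard secrets module (which, unlike the random module, is designed for security-sensitive use). The length and character set are my own choices; store the result in your password manager rather than trying to memorize it.

    # Minimal sketch: generate a long, strong random password with the
    # standard-library 'secrets' module.
    import secrets
    import string

    def strong_password(length: int = 20) -> str:
        """Return a random password drawn from letters, digits, and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(strong_password())  # e.g. paste the result into your password manager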

Artificial Intelligence is here to stay. We have yet to see the best, and the worst of it.

Kenrick Bagnall, Founder of KONCYBER, Senior Cybercrime Investigator, Hackers for Change Board Member
647-988-5782