July 2019:

July: AI Is Both Friend and Foe in Cybersecurity

Sometimes our greatest strengths become our biggest weaknesses.

Artificial intelligence can out-poker-face the best human poker players in the world; it has the potential to revolutionize surgery by allowing surgeons to do their work remotely, and it may help scientists predict the probability of life on other planets. From enabling humanoid robots and improving systems automation to helping enterprises take a proactive stance on cybersecurity, AI and its potential applications are truly exciting. Perhaps the most exciting possibilities are those we can’t predict—those “unknown unknowns” that even today’s brightest minds can’t envision.

The U.S. DoD (Department of Defense), in its 2018 AI Strategy, which looks at ways the nation can harness AI to advance security and prosperity, says the most transformative AI-enabled capabilities will most likely arise from “experiments at the ‘forward edge,’ that is, discovered by the users themselves in contexts far removed from centralized offices and laboratories.”

The DoD points to AI applications on the future battlefield as well as in a wide range of businesses and industries as examples of why the U.S. should prioritize addressing various challenges posed by AI as well as leveraging its myriad opportunities.

One opportunity for AI is in cybersecurity. MarketsandMarkets research projects the AI in cybersecurity market will reach $38.2 billion by 2026, up from $8.8 billion in 2019. By correlating huge amounts of data collected from multiple sources, AI can spot cyber attacks in realtime—something no human could possibly do. Because AI-based systems learn directly from the data, they can react without a human first having to identify the attack pattern and code it into a rule language. AI will make predictive analysis in cybersecurity easier, but isn’t it all too true that our greatest strengths can sometimes become our biggest weaknesses?

Source: MarketsandMarkets
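To make the multi-source correlation idea concrete, here is a minimal, hypothetical sketch in Python: two invented log feeds (failed logins and outbound-traffic spikes) are joined by source IP within a short time window to surface hosts worth a closer look. The log formats, thresholds, and data are assumptions for illustration; real platforms do this continuously and at vastly larger scale.

```python
# Hedged sketch: correlate two hypothetical event feeds by source IP to flag
# hosts showing failed logins followed shortly by an outbound-traffic spike.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)  # assumed correlation window

# Invented sample events: (timestamp, source_ip)
failed_logins = [
    (datetime(2019, 7, 1, 10, 0), "10.0.0.5"),
    (datetime(2019, 7, 1, 10, 2), "10.0.0.5"),
    (datetime(2019, 7, 1, 10, 3), "10.0.0.9"),
]
outbound_spikes = [
    (datetime(2019, 7, 1, 10, 4), "10.0.0.5"),
    (datetime(2019, 7, 1, 11, 30), "10.0.0.9"),
]

def correlate(auth_events, net_events, window=WINDOW):
    """Return IPs whose failed logins are followed by an outbound spike within the window."""
    logins_by_ip = defaultdict(list)
    for ts, ip in auth_events:
        logins_by_ip[ip].append(ts)
    alerts = set()
    for ts, ip in net_events:
        if any(timedelta(0) <= ts - login_ts <= window for login_ts in logins_by_ip.get(ip, [])):
            alerts.add(ip)
    return alerts

print(correlate(failed_logins, outbound_spikes))  # -> {'10.0.0.5'}
```

At production scale, the same kind of join happens over streaming telemetry from many more sources, which is where machine-learned models take over from hand-written rules like this one.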

Live from LiveWorx – Jim Heppelmann

Peggy and Jim Heppelmann, president and CEO, PTC, sat down for a candid conversation about digital transformation at LiveWorx 2019. Heppelmann shared that the world is changing fast and that companies resting on their laurels are going to be in trouble.

AI as an Ally in Cybersecurity

Some of the top cybersecurity threat trends facing enterprises today include targeted phishing attacks, which rose 297% in one year, according to one report, as well as ransomware, the exploitation of old, unpatched systems, and the weaponization of IoT (Internet of Things) devices. Logan Kipp, technical architect at SiteLock, says stealth attacks are also on the rise, and enterprises need to prepare themselves for this trend to continue.

“(SiteLock’s) recent annual report on website security trends found that though there was a significant uptick in the overall number of attacks a website faces per day, ‘noisy’ attacks that are easy to spot, like website defacements, continued to drop in popularity this past year,” Kipp says. “Enterprises need to be aware of how hackers are more commonly leaning toward quieter attacks as ‘stealth’ categories, like backdoor, shell, and filehacker/file modification, were found on more than 50% of all infected websites in the report’s sample.”

AI can help businesses combat these top threats through threat pattern recognition and anomaly detection. “AI helps companies understand what activity is normal and call attention to any malicious activity that is abnormal for a system,” Kipp explains.

“It also makes this process faster than manual monitoring, which speeds up detection and incident response. This can help businesses limit the damage of cyber attacks.”
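As a rough illustration of the normal-versus-abnormal idea Kipp describes, the sketch below trains an off-the-shelf anomaly detector (scikit-learn’s IsolationForest, standing in here for whatever model a commercial product actually uses) on a handful of invented “normal” activity samples, then flags a session that deviates sharply. The features and numbers are hypothetical.

```python
# Hedged sketch: learn a baseline of "normal" activity, then flag deviations.
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented features per session: [requests_per_min, bytes_out_mb, failed_logins]
normal_activity = np.array([
    [12, 0.4, 0],
    [15, 0.6, 1],
    [10, 0.3, 0],
    [14, 0.5, 0],
    [11, 0.4, 1],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)  # baseline learned from normal traffic only

new_sessions = np.array([
    [13, 0.5, 0],     # looks routine
    [400, 55.0, 25],  # heavy outbound transfer plus repeated login failures
])
# predict() returns 1 for "normal" and -1 for "anomalous";
# under this toy data the second session should be flagged.
print(model.predict(new_sessions))
```

The value, as Kipp notes, is less the model itself than the speed: deviations surface in seconds rather than waiting for a human to notice them in a log review.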

Targeted phishing attacks rose 297% in one year

With AI, Kipp adds, short-staffed cybersecurity teams can rely on the technology to identify and address security issues, taking the automation of pattern recognition to the next level through supervised—and perhaps eventually unsupervised—machine learning to develop better algorithms.
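A hedged sketch of that supervised approach: a classifier trained on events an analyst has already labeled as malicious or benign, which can then triage similar events automatically. The features, labels, and model choice below are invented for illustration and are not any vendor’s actual method.

```python
# Hedged sketch: supervised learning on analyst-labeled examples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented features per email: [num_links, has_lookalike_domain, urgency_words]
X_train = np.array([
    [1, 0, 0],
    [2, 0, 1],
    [8, 1, 4],
    [6, 1, 3],
    [0, 0, 0],
    [7, 1, 5],
])
y_train = np.array([0, 0, 1, 1, 0, 1])  # 1 = phishing, 0 = legitimate (analyst labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

incoming = np.array([
    [9, 1, 6],  # many links, lookalike domain, urgent language
    [1, 0, 0],  # mundane message
])
print(clf.predict(incoming))  # expected under this toy data: [1 0]
```

Moving from supervised to unsupervised learning, as Kipp suggests, would mean models that surface suspicious patterns without needing those analyst labels up front.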

Kelvin Coleman, executive director of the NCSA (National Cyber Security Alliance), says AI has had a tremendous and positive impact on cybersecurity. “Both speed and efficiencies have enabled businesses to identify vulnerabilities and stay ahead of the curve,” Coleman says. “By recognizing data patterns and vulnerable human behaviors, businesses are able to adopt a more proactive, protective posture. As part of this, one of the goals of AI is to automate prediction and detection and immediately respond. Coupled with this, the national distribution and availability of 5G will move more data and improve responsiveness.”

Darktrace is one example of an AI cybersecurity company that leverages machine learning to identify and respond to threats. The company’s Darktrace Antigena, an autonomous response technology, helps companies react to cyber attacks while they’re in progress, giving security teams what they need most in times of trouble: time.

“Across the board, AI will make predictive analysis markedly easier,” Coleman adds. “AI greatly enhances the ability to detect and identify threats. It can also filter non-threatening user behavior, basically separating risks from routine practices. This helps security teams stay ahead of attacks. Experts agree that this new era in tech and cybersecurity is driven by prediction, detection, and response.”

Anthony Ferrante, senior managing director and global head of cybersecurity at FTI Consulting, says the use of AI and machine learning to work smarter, advance business objectives, and protect against cyber threats is becoming more and more prevalent. “I have witnessed this firsthand as a cybersecurity advisor for a large cohort of companies across the globe,” he says. “AI can learn from large sets of data to build models and find patterns that will help it make decisions and respond to a stimulus independently—or at least with little to no human guidance. I have seen companies use AI to understand behaviors in a large computer network. This can allow the AI to detect potential unauthorized access or use of data on the system—something that, since the dawn of computers, has largely been a human-oriented task.”

In short, AI can significantly improve corporate cybersecurity programs by creating systematic processes for flagging cybersecurity threats. Unfortunately, attackers can harness that same automation to scale their attacks.

Women are working in and leading the best companies. As tech continues to evolve, women are more focused than ever on demonstrating their strength, passion, and dedication to help innovate and lead technology in the age of digitization and automation across the globe. Today’s progress comes as a result of decades of hard work, although there is still quite a bit of work yet to be done.

Peggy’s Blog

Trusting Zero Trust Cybersecurity?

What is Zero Trust cybersecurity? I mean “Zero Trust.” A Zero Trust network or Zero Trust architecture is a concept based on the idea that organizations shouldn’t trust a device just because it’s inside the enterprise’s network perimeter.

Read More

Ransomware Ain’t Getting Better

For this column, I want to review the latest incident report analyzing last year’s cybercrime. In addition, I want to examine cloud threats and what businesses can do to be prepared.

Read More

The Future of AI and Cybersecurity

Have you ever wondered how AI (artificial intelligence) is both a friend and a foe in the IoT (Internet of Things) industry’s cybersecurity efforts? This column is going to address some of the challenges companies face when implementing AI and machine learning projects in the enterprise.

Read More

Technology Days by the Numbers

The construction, engineering, architectural, and building industry is at a crossroads. The continued economic expansion in the U.S. is beginning to slow, and that slowdown could have significant consequences.

Read More

You Can’t Fix Stupid Drivers

In the state of Illinois, drivers are experiencing a lot of changes this holiday driving week. First, at the gas pump, they are seeing a permanent increase in the gasoline tax to 38 cents a gallon. The increase is expected to be applied to fixing roads, bridges, highways, and other infrastructure.

Read More

Malicious and Adversarial Uses of AI

Adversaries will always do what they can to stay ahead of the game. This could mean designing their own AI and machine-learning techniques to penetrate systems. So, while AI can relieve some of the monotony on the protective side, it can also improve the efficiency of determined attackers. James Sherer, partner at BakerHostetler, says attacks predicated on people and their perceived shortcomings are not going away. Rather, perpetrators have begun to use more creative vectors that deploy those same types of approaches.

“We understand that any advances made in these technologies, even if cordoned off and initially directed toward ‘good,’ have the potential for misuse and weaponization,” Sherer explains. “Further, if AI systems work correctly, they can become part of the scenery within a security profile. (This) puts them at risk for compromise by hackers focused on undermining the AI approach itself and underscores the need for organizations to continue to monitor the operations of those AI systems and not rely upon their operation without question.”

AI holds great potential to support and augment human experts through vulnerability analysis, but its potential is also worrisome, according to Daniel Riek, senior director, AI Center of Excellence, and Mike Bursell, chief security architect, Office of the CTO, at Red Hat.

“There are now sophisticated language-parsing techniques that can forge extremely plausible-looking messages across multiple channels—say email, text, and social media—in realtime, tricking humans into thinking they need to react in particular ways,” Riek and Bursell say. “The ability of AI to take existing video or still shots and animate them to near-perfect accuracy should worry us. It’s all well for an IT security person to turn down the CEO’s request to change a password when it comes in via an email, but when it seems that the CEO is addressing them directly on a video call? That’s difficult.”

Rebecca Herold, founder of SIMBUS and CEO of The Privacy Professor, points out an important reality of AI: It’s only as good as the algorithms humans create. “Poorly engineered AI for cybersecurity could actually cause more cybersecurity problems instead of improving upon cybersecurity,” she says. “To begin with, if the AI is weak or insufficiently tested and flawed in its algorithms, an attacker could simply exploit those vulnerabilities to create chaos within the associated digital environments—to shut down networks, to steal then delete files, etc.”

Additionally, Herold says adversaries could use AI to impersonate valid users, spread malware in insidious ways, automatically trigger DDoS (distributed denial-of-service) attacks as soon as a vulnerability is identified, and continuously adjust attack methods to avoid being detected by anti-malware systems. What’s more, AI could be used to make phishing attacks more effective by learning about targeted victims and tailoring attacks to exploit psychological weaknesses.

“AI is still a comparatively young field,” Herold adds. “Expect not only the benefits of AI to increase in coming years and truly amaze and astound you, but also expect just as much amazement to come from the threats of malicious AI use and the extremely destructive digital (and in many cases physical) destruction that will lead organizations to ask, Why weren’t we warned about these terrible possibilities of AI use?”

Consider yourselves warned. “Keep your eyes wide open and your awareness up for the many impacts, positive and negative, that AI can bring,” Herold says. “Assign someone or some team or department to be responsible for keeping (a) finger on the pulse of your organization’s appetite to start implementing AI solutions, and be sure to do proper risk assessments of such solutions before deploying them.”

SiteLock’s Kipp observes that while advances in AI offer the great promise of automating away human error to support strong cybersecurity practices across the enterprise, automation can create an opportunity for cybersecurity complacency. “Because there could be AI-powered technology primed to mitigate issues around the clock, IT managers may neglect to maintain good cyber hygiene,” Kipp explains. “AI and automation technologies will need to strike a balance eliminating human risks without creating consumer and technical complacency.”

BakerHostetler’s Sherer similarly says good cyber hygiene and preventing complacency are key. “There’s no magic to the initial steps companies can take to upgrade their protection,” he says. “They should work to protect their devices and networks by first applying good hygiene, patching systems, updating out-of-date hardware, especially that hardware that cannot be updated or uses out-of-date software, and then evaluating and testing those applications.”

After basic hygiene measures are met, Sherer says security centers and professionals could deploy AI to determine where certain workflows can be simplified and where AI can alleviate repetitive tasks and related security alert fatigue, improving speed, accuracy, and results. “Part of the protection will come from a lack of complacency on the part of security teams,” he says. “If their focus can be turned away from the routine and monotonous, they can creatively question what the data presents, including logs and their analysis, and look beyond published threat signatures and even network patterns to focus their investigations.”

In today’s connected world, data is an organization’s greatest asset, but it’s also an organization’s greatest liability. This reality, coupled with the rise in AI and machine learning, is changing companies’ approaches to corporate cybersecurity. “Cybercrime is on the rise and shows no signs of slowing down. Combatting adversarial uses of AI will require a combination of technology and policy solutions. This includes public and private sector cooperation and information sharing,” concludes FTI Consulting’s Ferrante. “Companies need to understand the big picture of data privacy and security and invest serious time, energy, and resources to securing data and properly preparing for breaches. No company is immune to a cyber attack.”

Want to tweet about this article? Use hashtags #M2M #IoT #cybersecurity #AI #artificialintelligence #data #security #automation #machinelearning #privacy

Transcription

Episode 617 | 06.04.19

David Van Dorselaer: A 5G Revolution in Manufacturing

Guest Contributors