How to protect your company from disinformation, scams and deepfakes: Thoughts and practical advice from Yuri Dvoinos, Chief Innovation Officer at Aura

In the corporate sector 30 years ago, operating systems were the main targets for attack. Thanks to the emergence of effective anti-virus programs, hacking them has become much more expensive than before. Nowadays, hackers attack people, not systems. They try to create an illusion — text, sound, or video — and make employees believe they are dealing with top-level company executives or someone trustworthy. AI tools, in particular, are actively used for this purpose.

For over 15 years, I have been building technology products focused on cybersecurity, and for the past three years, I have been leading the innovation program at Aura. This position allows me to work closely with a skilled Data Science team.

I must admit that it is still difficult to effectively counteract fraudsters technologically. However, if a company and its employees adhere to basic security standards, they can repel most threats. Here are my recommendations on how to resist different types of attacks.

The first model: Impersonated texting patterns

The evolution of these scams began with faked texting styles. In 2013, Evaldas Rimasauskas and his accomplices registered a company under the name Quanta Computer, a counterfeit of the real Asian computer hardware supplier of the same name. For two years, they approached many companies, including Facebook and Google, with invoices for equipment supposedly owed. Accountants received convincing invoices and paid them, but the money went to accounts controlled by Rimasauskas. The total loss exceeded $120 million.

In the digital world, one thing that distinguishes one conversation from another is communication style. We see a familiar email address, accompanied by a familiar texting pattern, and have little doubt that we are talking to the right person. This is the so-called disposition to trust: our default tendency to believe what we see.

If you know someone’s mannerisms, choice of words, texting patterns, and so on, you can copy their communication style. Attackers only need to ask Large Language Models (LLMs) to imitate texting style from a small sample. The malicious message can look like: “Hi, this is Sundar. We have a problem, and I am writing to you directly as we need to act on it now. Check this link and tell me if you see the same thing I do.” Hackers send these kinds of messages to employees en masse, and there is a high probability that at least one person out of 100 will click the link. That’s all they need. If at least one person is hacked, the entire company’s security can be compromised.

What should an employee do?

A good rule of thumb is never to trust a single communication channel. If you receive an atypical text message, call back. If you receive an atypical email, send an SMS or message via a corporate messenger. It is difficult to compromise all corporate communication channels at once.

What to do at the company level?

Focus on protecting the corporate communication ecosystem and your employees’ private devices. It is impossible to separate corporate communication from personal communication entirely. An employee’s phone always contains a bunch of work-related numbers and emails. This is a common angle of attack for hackers. They can access classified data, read emails, and copy the style of correspondence.

The least controlled devices pose the greatest threat. Therefore, the better the personal protection of employees, the better the company’s overall security posture. The mass adoption of Aura tools among employees and their families helps businesses enhance their protection. Our AI-powered assistants filter and block unwanted spam calls and texts on personal devices. 

The second model: Forged voices

The next stage in the development of disinformation is fake audio messages. With a dataset containing short recordings of a person’s speech, hackers can create voice synthesizers that imitate that voice. Although the technology is imperfect, there are already examples of successful attacks.

In 2020, a bank manager in Hong Kong received what he thought was a call from a company’s CEO. Because the manager had spoken to this person several times and the voice was almost indistinguishable from the original, he had no doubts. The “CEO” asked him to transfer $35 million to a certain account to fund the purchase of another company. The manager obliged. Attackers often add background noise to such calls, as if phoning from a busy street, making it harder for an employee to recognize the signs of a forged voice.

Some sophisticated attacks are multi-channel. For example, attackers send an email with a phishing link. Shortly afterwards, the potential victim gets a call from “the CEO”, asking them to check the link that has just been sent. If the sense of urgency and pressure gets to the victim, it is much more likely the link will be opened without extra precautions, and the scam will succeed. These are targeted attacks against a specific person. The more popular a brand becomes, the more likely it is to be attacked. The most vulnerable are usually companies that have gone through an active growth phase but have not yet integrated enterprise-level security systems.

What should an employee do?

In addition to checking through another communication channel, you should contact the person responsible for the company’s security system. If there is no automated process for checking suspicious communication, there should be clarity on who to report it to.

What to do at the company level?

Security-related rules and processes should be clearly defined. Large corporations, for example, often comply with SOC 2 or ISO 27001 cybersecurity standards and requirements. In short, all employees need to know who is responsible for security in the company and who to write to in case of an attack or suspicious communication.

The third model: Deepfake videos

The pinnacle of corporate scams is deepfakes, i.e., impersonating a person with AI-edited images or video. In one recent case, a finance officer at an international company in Hong Kong transferred $25 million to fraudsters during a video conference. The scammers posed as the company’s financial director: using AI technologies, they created a video illusion that mimicked the executive talking.

One way to get employees to look at a deepfake is by sending a Zoom link with a message: “Hi, this is the CEO. We have an urgent meeting. Go to the Zoom link, and we’ll explain everything there.” A surprisingly large percentage of employees will act on that relatively simple prompt, and once they join the call and see someone who looks like their CEO and speaks with their CEO’s voice, the breach is already well under way.

What should an employee do?

Do not trust by default. All employees should internalize the main rule: “No matter how much psychological pressure you get, don’t respond to suspicious communication and contact a security specialist first.”

What to do at the company level?

Conduct dedicated training for employees. A common challenge is that such trainings are usually boring, and it is difficult for people to absorb the information, especially the technical aspects. Therefore, it is worth investing in more engaging, gamified cybersecurity training programs.

Focus on the security officer

Without a security officer, avoiding these types of scams is hard. The CISO (chief information security officer) or another security professional is responsible for integrating employee device protection systems, two-factor authentication, strong passwords, antiviruses, and scam-recognition tools. 

The CISO must answer the following questions: 

  • What software does the company use?
  • Which employees have access to what corporate resources and information?
  • What corporate information can be shared across different applications?
  • What can and cannot be copied?
  • How should employees report scam attempts or security incidents?

Usually, security officers are long-tenured employees, because it is better to trust one person over time than to grant that level of access to a new candidate every year. This is especially true for large corporations, where data protection practices have been built up over the years. For example, Jerry Geisler joined Walmart back in 1991 and, after more than three decades with the company, now serves as its Global CISO. Similarly, Stephen Schmidt has been building security at Amazon for more than a decade.

AI tools will augment our ability to see risks

Cyberattacks and disinformation have long been part of corporate life. This is a universal international threat. According to a Proofpoint report in 2023, two out of three businesses in the UAE alone fell victim to phishing attacks. The main vector of attack has been and remains human, because of how we process information and believe what we see or hear. Developing critical thinking about a familiar voice or writing style takes time and effort, because these behavioural patterns of trust have been forming for centuries. However, AI-powered tools can monitor corporate communication channels for suspicious content. AI cybersecurity assistants will become our best friends, as they can simultaneously check every incoming message against dozens of attributes.

For example, they might scan a message’s metadata for suspicious markers and alert that “this email looks similar to another, but it is the first time you are receiving an email from this sender.” They might also analyze the context of the message and point out that it contains linguistic constructions similar to known scams. AI assistants can also indicate whether the message carries alarming emotional sentiment, such as a sense of urgency or pressure tactics. By analyzing dozens of data points, such systems can augment our ability to recognize suspicious communication by assigning a “risk level” to each message.
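As an illustration of this approach, here is a minimal, rule-based sketch in Python. It is not Aura’s implementation: the message fields, phrase list, and thresholds are assumptions chosen only to show how a few of the attributes described above (first-time sender, urgency wording, presence of a link) could be combined into a coarse risk level.

```python
# Minimal illustrative sketch of a message "risk level" scorer.
# Not Aura's implementation; all fields, phrases, and thresholds
# here are hypothetical and chosen only for demonstration.
import re
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str
    body: str

URGENCY_PHRASES = ["act now", "urgent", "immediately", "right away", "asap"]
LINK_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def score_message(msg: Message, known_senders: set) -> tuple:
    """Combine a few simple signals into a score, a list of reasons, and a risk level."""
    score, signals = 0, []

    # Signal 1: first message ever received from this sender.
    if msg.sender.lower() not in known_senders:
        score += 2
        signals.append("first message from this sender")

    # Signal 2: urgency or pressure wording in the subject or body.
    text = f"{msg.subject} {msg.body}".lower()
    if any(phrase in text for phrase in URGENCY_PHRASES):
        score += 2
        signals.append("urgency / pressure wording")

    # Signal 3: the message asks the recipient to open a link.
    if LINK_PATTERN.search(msg.body):
        score += 1
        signals.append("contains a link")

    # Map the combined score to a coarse risk level.
    level = "high" if score >= 4 else "medium" if score >= 2 else "low"
    return score, signals, level

# Example: a first-time sender, urgent wording, and a link score as high risk.
msg = Message(
    sender="ceo-office@example.com",
    subject="Urgent: wire approval",
    body="We need to act now. Check this link: https://example.com/invoice",
)
print(score_message(msg, known_senders={"colleague@example.com"}))
```

A production system would draw on far more signals and on learned models rather than a fixed phrase list, but the structure is the same idea described above: many weak signals combined into a single risk level shown to the user.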

I believe that AI technologies that recognize suspicious communication should continue to develop. This is the future of cybersecurity systems protecting against AI-generated scams and deepfakes. After all, spotting a danger signal in time is 90% of successfully repelling a cyberattack. – Yuri Dvoinos, Chief Innovation Officer at Aura.