Understanding ransomware attacks and the people behind them

Ransomware and other cyber attacks against manufacturers are increasing. Some researchers are trying to understand more about the people behind them and their motives, which are not always clear.

By Gregory Hale March 6, 2020

No one can deny ransomware is a problem hitting the manufacturing automation sector, so much so that companies are getting stopped in their tracks after suffering an attack. Just ask Maersk, FedEx and Merck, to name a few.

Over the 2016-2017 timeframe, it appeared more factories were suffering from ransomware attacks, so Stephen Hilt, senior threat researcher at Trend Micro, and his team decided to put together a realistic virtual factory, or honeypot, to understand why these companies were being held for ransom.

Setting the honeypot trap

“We wanted to see what was being held for ransom,” said Hilt during his presentation at the S4x20 conference in Miami. “We learned more about the process.”

The honeypot went live May 6, 2019, and they shut it down right before Christmas. He said at first, they had a little activity, but not any full-fledged attacks.
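In spirit, such a trap is just an exposed service that records who connects to it and when. The following is a minimal sketch of that idea in Python; the function and port are hypothetical illustrations, and Trend Micro's actual honeypot emulated a full factory environment rather than a bare listener.

```python
import datetime
import socket
import threading

def run_honeypot(host="127.0.0.1", port=5020, max_conns=1, log=None):
    """Accept connections on a decoy port and record who knocked.

    Illustrative only -- a realistic ICS honeypot would also emulate
    protocol responses (e.g., Modbus or S7) so the service looks like
    genuine plant equipment to a scanner.
    """
    if log is None:
        log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    for _ in range(max_conns):
        conn, addr = srv.accept()
        # Record the connection attempt: timestamp plus source address.
        log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "source_ip": addr[0],
            "source_port": addr[1],
        })
        conn.close()
    srv.close()
    return log
```

In practice the listener would run continuously in the background (e.g., in a thread) while analysts review the accumulated log, which is how quiet stretches like the honeypot's first weeks become visible in the data.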

“Between May 6 and July 24 we had very little activity,” he said. “But in late June and early July, we opened it up and it showed more activity; then we started to see more action. An actor came in and installed a Python installer. Someone came in and installed a backdoor, and we were pretty excited. We had a ransomware attack.”

The resulting ransomware attack on the virtual factory shut down the facility.

“The ransomware attack had us down for four days,” Hilt said. “We tried to look like a real victim. They were asking for $10,000, but we negotiated and dropped it down to $6,000. We interfaced with the actor to gauge their knowledge.”

The virtual company officials sent an email to the attackers asking them to decrypt a file as an example, to make sure that they did in fact have the decryption key.

“During this part of our exchange, we acted the part of a disgruntled company representative asking why the threat actor was doing this in the first place,” Hilt said. “They answered succinctly and obliged us by decrypting a sample file. We sent them the conveyor belt programmable logic controller (PLC) programming file (Omron CXP file), which they decrypted accordingly, suggesting they were unaware we had in fact sent them an important file.”

That exchange was very telling for the team.

“Their knowledge of control systems was minimal,” Hilt said.

While it was a virtual environment and not a real factory, Hilt said, “The factory was down for four days while we negotiated. If we didn’t have that backed up, it would have been a very costly attack.”

After the initial attack, the virtual company suffered other attacks which varied in terms of severity.

One attack came from what they said was a “good guy” attacker. “The person wrote a note saying we had an open port and should create a password,” Hilt said.

Hilt and his team learned a great deal about attacks and how a solid honeypot should work. One white hat researcher even found the virtual company on the Internet and reported it to the proper authorities. Hilt and his team then reached out to the researcher to let him know it was a honeypot and not to worry about it. Hilt said the researcher called it one of the most realistic fake companies he had ever seen.

The researchers concluded that running a high-functioning honeypot requires daily interaction. In addition, you have to deal with incidents as they happen. Do not wait; otherwise, you will see your honeypot collapse. Also, Hilt said, “don’t put control systems on the Internet, ever.”
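Hilt's last point can be turned into a quick self-audit: probe your own public address space for common ICS/OT service ports that should never be reachable from the Internet. Below is a minimal sketch; the port shortlist and function name are illustrative assumptions, not an authoritative inventory, and a simple TCP check is no substitute for a proper external scan.

```python
import socket

# A hypothetical shortlist of well-known ICS/OT service ports.
ICS_PORTS = {
    102: "Siemens S7",
    502: "Modbus/TCP",
    20000: "DNP3",
    44818: "EtherNet/IP",
}

def find_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`.

    Run against your own externally visible addresses to spot
    control-system services that should not be Internet-facing.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful connection.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Any hit from outside the plant network is exactly the kind of exposure the “good guy” attacker flagged in the honeypot experiment.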

“Our findings from this honeypot experiment should serve as cautionary examples for organizations, particularly those that run industrial control systems (ICSs) and smart factories, to ensure that adequate security measures are in place on their systems,” Hilt said.

Know your attacker

When thinking about a hack on a critical infrastructure entity like a utility, the first thought is that it has to be a nation-state attack. It is easy to jump to that conclusion even before looking at and digesting the facts. It can happen to anyone, from a researcher to an executive to a reporter covering the topic.

Hold on for one minute; just stop and look at all the facts, said Jason Larsen, ICS principal at IOActive, during a session at S4x20 in Miami. He made that point in a presentation explaining that it took him 14 hours to weaponize an attack on the grid, and he is not a nation state.

“In a fairly skilled attack, this is the time it would take,” Larsen said. “All things being equal, it would take about three weeks to create an attack similar to the Ukraine attacks.”

As it turns out, Larsen was hired by an electric power utility to prove how long it could take to pull off an exploit chain and create an attack.

Those 14 hours involved analyzing an Ethernet-to-Serial gateway, finding exploitable bugs, writing exploits for those bugs, and constructing an implant that would manipulate some points during a later part of the engagement, he said.

In terms of the 2015 Ukraine attack, “We look at an advanced persistent threat (APT) as a super adversary with superior skills. We are viewing them with mystical powers,” Larsen said. “They didn’t have any advanced skills.”

Whatever advanced skills the group behind the Ukraine attacks may have had, the actual attacks were not super sophisticated, so anyone could have pulled them off.

“This was totally doable by an individual,” Larsen said.

In incidents like the two separate attacks against the Ukraine power grid, the systems may not have been hardened enough to fend off an attack.

In any event, the types of attacks that show any kind of detail are ones used as political statements, like Ukraine and Stuxnet, in which an Iranian nuclear facility reportedly fell under attack by the U.S. and Israel in an effort to slow down or stop the country’s nuclear enrichment program.

In a non-political incident, when there is a cyber attack, the attackers always look to clean up the system after the attack occurs to erase their presence.

“The clean-up phase is to make people think it was a system hiccup,” Larsen said. So, if there is a slight blip in the system and it appears nothing is there, operators can just “blame it on the process. But if it is a cyber incident, you don’t get a second bite at the apple.”

Larsen added there was no clean-up on Stuxnet and Ukraine. “These were political statement attacks.”

In explaining the Ukraine attacks, Larsen said the bad guys were learning as they went along.

“The first Ukraine attack had a denial of service, but they didn’t know the system,” he said. “The team was skilled in information technology (IT) hacking, but not skilled in ICS. The second Ukraine attack, they were more knowledgeable. They progressed and they were talking a bit better.”

But they were still not literate in the ICS control environment.

Related to not being literate, Larsen said there are two payloads when it comes to an attack, the physics payload and the cyber payload.

“Neither Ukraine attack showed they had any knowledge of the physics payload,” he said. “It was super sexy to report it was an APT, but they didn’t have competence in ICS control.”

That led Larsen to say more security professionals need to spend time working on and developing the basics, because if it only took him 14 hours to create an attack, others can do the same thing.

“Some dude in the basement could have written the exploit,” Larsen said.

Don’t fall for the hype of a reported attack. Understand what it is, and know that if you have the basics down, you can prevent an attack from occurring.

This content originally appeared on ISSSource.com in two parts. ISSSource is a CFE Media content partner. Edited by Chris Vavra, associate editor, CFE Media, cvavra@cfemedia.com.

Author Bio: Gregory Hale is the editor and founder of Industrial Safety and Security Source (ISSSource.com), a news and information website covering safety and security issues in the manufacturing automation sector.