Cyber security experiment reveals threats to industrial systems
Since this was a simulated target, would a skilled hacker be able to realize that he wasn’t in a real control system?
Luallen: I have assessed systems for my courseware where I could either virtualize them or use the real equipment. When I go down the virtual path, I know the simulation doesn’t have the sophistication needed for the types of attack surfaces that I want to represent. If I flip that around and think about how an attacker expects the system to react, I don’t think you have to be too sophisticated to run that same evaluation. If you don’t want to get caught, you have to make sure something is real before you go after it.
So there are hackers and there are hackers. We tend to think of them in a more abstract sense rather than as individuals.
Assante: We use the hacker label in a very general sense. Different individuals and groups bring different skill sets. If the actors involved can actually see how they’re interacting with the target system, and they are highly experienced with the components of that system and how those components behave, then they will notice when they don’t see the things they expect to see, which will help them determine that they are looking at a facsimile and not the real thing.
There are ways to say, “What am I looking at?” You give it a command with the expectation that a particular component will respond in a particular way, and if it doesn’t, you know you aren’t dealing with a real-world situation. The good news is that I don’t think many threat actors are at that level of sophistication and experience with ICS components. Every system is made up of many different things in different layers. Different hackers are good at different parts.
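The kind of probe described above can be sketched in a few lines. This is a minimal, hypothetical example (not anything from the experiment): it hand-builds a Modbus TCP "read holding registers" request using only the standard library, sends it to a target, and checks whether the reply carries the protocol fields a genuine device would return.

```python
import socket
import struct

def build_read_request(unit_id=1, start=0, count=2, txn=1):
    """Build a Modbus TCP 'read holding registers' (function 0x03) frame.

    MBAP header: transaction id, protocol id (always 0),
    remaining byte count, unit id -- then the PDU itself.
    """
    pdu = struct.pack(">BHH", 0x03, start, count)         # function, address, quantity
    mbap = struct.pack(">HHHB", txn, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

def probe(host, port=502, timeout=3.0):
    """Send one read request and report whether the reply looks authentic.

    A real device echoes the transaction id and answers with either the
    request's function code (0x03) or its exception form (0x83); anything
    else is a hint that the endpoint may be an emulation.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(build_read_request())
        reply = sock.recv(260)
    if len(reply) < 8:
        return "no usable reply"
    txn, proto, _, _, func = struct.unpack(">HHHBB", reply[:8])
    if txn != 1 or proto != 0 or func not in (0x03, 0x83):
        return "suspicious: protocol fields do not match a real device"
    return "plausible Modbus endpoint"
```

The check here is only the crudest version of what an experienced actor would do; as Assante notes, real expertise lies in knowing how specific components behave, not just whether the wire protocol is well-formed.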
Conway: The bad-news side to that discussion is that we can say the very good people are very limited in number, and those very good people would have identified that this was a honey net. Those people would not have brought to bear all their tools and capabilities just for someone else to capture them and do some analysis. So if you’re talking about people who are not the best of the best, and you look at what they achieved, that’s the scary piece of information. This system was online and available for a short period of time, and you had a number of people getting in, doing HMI attacks using SQL injection, cross-site request forgery, stealing credentials, exfiltrating the VPN configuration files, and so on. There are a lot of bad things that happened, and we can say these attackers weren’t the best of the best, because the best would have known they were in a honey net. [Honey net and honey pot are similar in concept, but the former suggests a larger-scale system. Ed.]
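To see why the SQL injection attacks Conway mentions succeed against web-based HMIs, compare a query built by string concatenation with a parameterized one. This is a generic illustration with a hypothetical login table, not code from any HMI product:

```python
import sqlite3

# Hypothetical HMI operator table, purely for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE operators (name TEXT, password TEXT)")
db.execute("INSERT INTO operators VALUES ('admin', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation lets attacker-supplied quotes rewrite the query.
    query = ("SELECT COUNT(*) FROM operators WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return db.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameter binding keeps the input as data, never as SQL.
    query = "SELECT COUNT(*) FROM operators WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone()[0] > 0

payload = "' OR '1'='1"                    # classic injection string
print(login_vulnerable("admin", payload))  # True: the check is bypassed
print(login_safe("admin", payload))        # False: treated as a literal
```

The point is that none of this requires elite skill; the injection string above is among the first things any entry-level attacker tries against a login form.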
Assante: Another bad thing that is harder to get our arms around is that all this activity was on a few honey nets. In the defensive communications circle, we know incidents are occurring, we have generalized reporting by the ICS CERT and that kind of thing, but we know that real-world reporting is much more limited. If this experiment is any indicator, we have to believe that attacks against real systems are occurring, or at least intrusions or expressions of interest, and those compromises are very difficult for system owners to detect. Owners have a hard time acknowledging and understanding that their systems have been subjected to reconnaissance or a real live intrusion. Most end users don’t have the capability for detection, and those that do have limited freedom or desire to share that information. Unfortunately, we as defenders have a very limited view of the state of play.
Scary stuff, certainly. So now what?
Conway: When we look at it and say, “What do we do about it?”, I think of things like disabling Internet access, looking at your trusted resources, imposing a USB media lockdown, whitelisting applications, and so on. But then I ask myself, “Did Trend Micro do anything to make these honey nets more visible as targets?” Look at how much time and effort they put in to make sure these systems were indexed and could be queried with Google. They made sure the systems were accessible within SHODAN. They went into all the environments and customized and tailored them so they had the right language settings for the different web browsers. So turn that around and take the approach that asset owners should do that kind of reconnaissance on themselves. Asset owners should ask, “How attractive a target are we? Can someone find our system through Google? Are we available on SHODAN?” If you try it and find that you are easy to locate, how do you make yourself less visible to attackers? We say security by obscurity is a waste of time and irrelevant, and I think that’s true if you’re being specifically targeted, but if people are just looking for a target of opportunity, it definitely makes sense to keep yourself more hidden.
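A small part of that self-reconnaissance can be automated. The sketch below, with a hypothetical function name and an illustrative (far from complete) fingerprint list, scans an HTTP response from one of your own public addresses for the kinds of strings that crafted SHODAN or Google queries key on:

```python
# Illustrative fingerprints only; real ICS products expose many more banners.
ICS_FINGERPRINTS = (
    "niagara",   # Tridium Niagara web framework
    "simatic",   # Siemens SIMATIC HMIs
    "modbus",
    "scada",
)

def exposed_ics_hints(headers, body=""):
    """Return the fingerprint strings found in a host's HTTP response.

    headers: dict of response headers; body: optional page text.
    An asset owner can run this against their own public addresses to see
    what a crafted search-engine query would turn up.
    """
    haystack = " ".join(list(headers.keys()) + list(headers.values()))
    haystack = (haystack + " " + body).lower()
    return [f for f in ICS_FINGERPRINTS if f in haystack]
```

For example, a response with `Server: Niagara Web Server/3.5` would be flagged, while a generic `Server: nginx` banner with a plain corporate page would not. A match doesn’t mean you’re compromised, only that you’re discoverable, which is exactly the observable Conway suggests reducing.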
Luallen: That’s a key point. Think of the open-source intelligence that people can gain from companies promoting themselves, connecting themselves to the Internet, making too much information available through SHODAN or vendor documentation, or even giving presentations at cheer-me-on conferences.
Assante: Reducing the attractiveness of your system for compromise certainly works when attackers are applying a capability or tool to look for it (for example, crafted searches for Internet-facing ICS components). If you reduce the observables that let them find you, that’s a good thing. What it doesn’t do is help if somebody is finding you for a different reason, meaning you’re a target because of the community you serve or for some other reason that draws a directed attack.