Terry Zink's Cyber Security Blog

Discussing Internet security in (mostly) plain English

Live Free or Die Hard


Spoiler alert.

This past weekend, I got a chance to watch the 4th installment in the Die Hard series, Live Free or Die Hard.  I had only seen parts of it before, never the whole thing end-to-end.  It was nice to finally see it in full.

Overall, I like it.  It’s so far over the top that it’s completely unbelievable… but that’s the point.  It’s supposed to be unbelievable.  A jet plane flying around the city at low speeds and hovering like a helicopter in between parts of a freeway?  John McClane getting hit by a car and walking away?  Bad guys falling 20 feet onto concrete below and not even suffering a limp?  Whatever.

But what about the basic premise of the story?  In case you haven’t seen it, at the beginning of the film, various government agencies experience a major shutdown.  Hackers infiltrate the computer systems of the FBI, departments of transportation, nuclear facilities… well, nearly every agency in the United States, and proceed to shut them down.  The villain behind it is a disgruntled former employee of the Department of Homeland Security, a brilliant programmer and security expert.  After the events of September 11, he warned his superiors that the nation’s cyber infrastructure was vulnerable to attack.  Rather than listening to him, his superiors ignored and vilified him, and fired him from his job.  To get revenge, the villain plots a major hacking operation to demonstrate that they should have listened to him and that the nation’s infrastructure really is vulnerable.  In reality, this is all a smokescreen, a diversion to cover the theft of billions, possibly trillions, of dollars of wealth.  In the hacking world, the villain would be classified as a cyber warrior.

Of course, there are some things in the movie that are completely unrealistic, like the physical stunts above.  Furthermore, why would the bad guys hack into a hacker’s computer and rig the Delete key to detonate some C4, rather than triggering the explosion remotely?  That seems a little inefficient to me.

But that’s not the question I want to address.  What I want to ask is whether or not the nation’s cyber infrastructure is really as vulnerable to attack as the movie makes it out to be.

My answer?  Unlikely.  There are a couple of problems with this scenario:

  1. The bad guy’s team was too small. 

    I counted a team of maybe 3 hackers on the bad guy’s team, not including himself.  That is way too small a team to control that many computer systems.  Over here, we have a lot of people running a network that is not nearly as complicated as multiple government departments.  It takes constant monitoring and tons of documentation to keep things running smoothly.  And many times, things don’t run smoothly.  It would take a very long time to code something up, test it, deploy it, and control it while evading detection during the entire time the operation was running.

    Of course, something like that might be possible, but three people is not enough.  All of the steps I mentioned take forever to get done, and they are very resource intensive.  Nobody writes code that executes as perfectly as the villain’s does the first time they try it out.  Maybe they tested things, but the government has a lot of independent systems; the left hand doesn’t know what the right hand is doing.  So you need people who are familiar with each of the government’s various departments’ computer systems, and who know how to control them.  That just isn’t possible with 3 people.

    The computer hackers running the operation would be busy all day trying to evade detection, and the amount of psychological pressure on them would be intense (especially when your boss is holding a gun and waving it around, and his girlfriend could knock your teeth into next week).  Nobody under that type of pressure avoids making mistakes, so you have to build automated mechanisms to control things for you.  And if you do that, it takes time to code it.  And even if you’re a great programmer, it’ll still have bugs.  The flawless execution of their operation was completely unrealistic without backup teams responding to the issues that would inevitably come up.

  2. The nation is vulnerable to attack, but not in the way they made it out.

    The uber-point about the nation’s security being vulnerable is correct, but not in the way the movie makes it out to be.  In my first point, I said the team was too small, and that government departments have all their systems implemented differently.  I don’t know this to be true, of course, but I surmise that each department built its systems independently of the others.  Some may have built their stuff on Linux and MySQL.  Others may have used Ruby.  Others, Perl.  Maybe there is some Java, Exchange, PHP (ugh) and Oracle.

    And when systems are built independently, they don’t talk to each other.  And when they don’t talk to each other, it is very difficult to take them all over simultaneously.

    Furthermore, when computer systems get big, particularly when they were implemented in the 1980s or 1990s, they aren’t documented very well.  If you work at a company whose infrastructure was written long ago, you’ll know how disorganized it is.  The code is poorly written, you will probably have GOTOs going to GOTOs, and there is no written documentation.  If you want to figure out what is happening, you have to “decompile” the code in your head or on paper.  It’s a mess.

    Thus, if an organization as large as the government is attacked, the far more likely outcome is that its systems get shut down, not that control of them is handed to an external attacker.  A hacker can break in and deploy a worm, but that is much more likely to cause systems to crash and not boot than it is to give control to a remote user.  Remember, it is not a single organization with a unified communication system; it is multiple computer networks that must be compromised and controlled.

    Poorly written code doesn’t act like a cohesive unit.  Instead, it deadlocks and becomes unresponsive.  Memory leaks, and resources do not get released.  It’s the equivalent of having large paperweights on your desk (like my IP phone at work) and servers that sit there, spinning their wheels and doing nothing.

    When the governments of Estonia in 2007 and Georgia in 2008 were attacked, and when Twitter suffered a DDoS attack in 2009, the attacks shut down the nation’s, or web site’s, computer systems, but the attackers didn’t control them from the inside to make them do nefarious things.  They “just” rendered them inoperable.  So, we can all take solace in the probability that if a hacker ever takes over, the traffic lights will only go out.  We don’t have to worry about them all turning green.

  3. An emergency data dump wouldn’t go to only one server in one location.

    Or, I certainly hope not.

    As I said in my introduction, the taking over of the nation’s computer systems was only a diversion.  When it happened, all of the nation’s banks, financial institutions, trading accounts, etc., started downloading all of their data into a data center located in Maryland (I think).  This data center was ostensibly run by the Social Security Administration, but in reality it was designed to be a redundant backup in case a real emergency happened.  Of course, this emergency did happen, and the bad guy is the one who designed it that way.  His goal was to create an emergency, trigger the data download into those servers, and then walk away with all of the money (or delete it, sending America back to the Stone Age).

    Okay, I won’t get into all of the problems, but let me say this – if this guy was so brilliant, his design has a glaring flaw.  If you really were going to do this, you wouldn’t download all of the data into one location; you would download it into at least two.  Remember, this is absolutely critical information, and losing it would be disastrous, so you’d have a backup.  That’s so obvious that a designer has to know it.  What you would probably do is download it to two separate (redundant) servers in the same data center, and then do the same thing in another, geographically separate data center.  That way you have double redundancy for a set of data this important.  Clearly, this bad guy can’t be that smart if he designed it with a single location.  What a doofus.  No wonder he got fired.
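The double-redundancy scheme I’m describing can be sketched in a few lines of Python.  This is a toy illustration, not anything from the movie: the directory names stand in for real servers at two geographically separate sites, and the key property is that a backup only counts as successful if every copy was written.

```python
import os
import tempfile

def replicated_write(payload: bytes, replica_dirs):
    """Write the same payload to every replica directory and return the
    paths written.  Any failed copy raises, so a half-finished backup is
    never silently accepted as a good one."""
    written = []
    for d in replica_dirs:
        os.makedirs(d, exist_ok=True)
        path = os.path.join(d, "ledger.bin")
        with open(path, "wb") as f:
            f.write(payload)
        written.append(path)
    return written

# Two redundant servers per site, two geographically separate sites.
# (These names are hypothetical stand-ins for real data centers.)
root = tempfile.mkdtemp()
replicas = [os.path.join(root, site, server)
            for site in ("site-east", "site-west")
            for server in ("server-1", "server-2")]

paths = replicated_write(b"critical financial data", replicas)
print(len(paths))   # four independent copies of the data
```

Lose one server, or even one whole site, and the data survives, which is precisely why a single Maryland data center is such a giveaway of bad design.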


I could probably name more problems, but this will do.  Like I said, though, this movie isn’t real life; it’s entertainment.  It’s not supposed to be realistic.  And for what it’s worth, it was a good ride.  I liked it.

Yippie-ay-yo-kay-yay!


Comments
  • That’s impressive, you watched it from a security pro’s point of view whereas I just watched it for the thrill, although being a security pro myself I should have come up with a few flaws myself.

  • Creating redundant backups of important data is a no brainer, everybody knows this, even a 5 year old.

  • I think the OnStar working after all the networks are down is a big flaw.

  • Outer space computer systems must not be hackable yet
