Computer intrusion inflicts massive damage on German steel factory (itworld.com)
123 points by Evolved on Dec 24, 2014 | hide | past | favorite | 77 comments


I've done some work in steel factories, though only for offline/closed systems that have no interaction with the main control code.

Factories like this one probably operate at around 60%+ capacity, so they'll be operating sometimes all day, sometimes all night. If you ever get the chance to visit, do so, even if you don't really care about how steel is made. The sheer scale of everything is amazing.

Everything is very big, very hot and if you have to hit the big red button, it costs a lot of money. Unscheduled downtime is very expensive. Steel tends to be workable when it's hot/molten and therefore pliable. If you suddenly stop a machine then you're left with solid steel in places you don't want it which takes a lot of time and effort to remove.

One of the common reactions to this story is "Why didn't they hit the emergency stop?" - the answer is because it costs an absolute fortune to do so.


To be more pedantic, the answer is "that was the emergency stop - there's a reason they only use it in emergencies".


The more interesting question is why the industrial control computer was connected to the internet.

Shouldn't these things be on a separate network protected by an air gap all the time?

Having stuff that doesn't need internet access connected to the internet is asking for trouble.


That wouldn't necessarily solve the problem, if you look at what happened with Stuxnet. Getting a USB stick into a domestic steel factory probably isn't hard.

As another commentator said, these plants are designed to be operated by people who don't know the difference between a mouse and a keyboard. The systems must work perfectly 24/7. I've seen the legions of machines running XP because the legacy software runs on it and God forbid they upgrade to Windows 8.

That said everything seemed pretty secure there, loads of IT red tape needed to get anything on the network.


Well, the usual solution to that is to glue up the USB ports, or to forbid any USB stick from entering or leaving the facility.

This doesn't make access impossible, but it makes it a hell of a lot more difficult.

For one thing, the hackers don't have direct access: they need to rely on the malware to do the dirty work, and they can't do much intelligence gathering on what systems are in use either.


> why is the industrial control computer connected to the internet.

My guess is remote monitoring.


So what's wrong with a separate internet connection and network, and a computer with all its USB ports glued up, connecting through a VPN to the facility computers?

You can read all the stats off the screen.

Slightly less secure than complete isolation, but I suspect still better than what they had.


And that VPN would connect to... the office network where everybody runs Outlook? Or does it connect to a set of machines that run ssh with passwords shared on forums on the internet? This configuration, incidentally, is probably exactly what the factory was running.

There are a number of things that are commonly brought up as solutions to security: VPNs, anti-virus, IDS, disabling USB, NAT... none of them are. The solution to security problems is knowing all possible interactions between your software and the outside world. If you don't, there is at least a good chance it can be hacked.

Of course, not having to know all interactions, some would say, is the reason we have computers in the first place.


The VPN could connect to a Qubes-isolated VM.


I've worked for classified organisations and they don't do this, because it's unsafe.

For instance, that VM would have to be an HTTP server to actually see the data, it would have to run on a managed (through ssh?) host, ...

I mean, why not give it an output-only serial line instead (isolating control signals; NO error recovery, windowing, ... allowed)? At that point it doesn't really matter what's on the other side. The point here is that this way you can guarantee information only flows in one direction. Plus it's dead simple (it will malfunction, and at that time there will be many, many voices saying it's too simple, but it's not).

The system on the other end of the serial line can be as convenient and insecure as you want, because it's not trusted to be secure. Needless to say, in practice there are still considerations of redundancy, so there are multiple output systems sending data over different fiber paths to different destinations. But all of them have the RX pin connected to ground.
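The RX-grounded serial line above can be mimicked in software. Here's a minimal sketch in Python, using `os.pipe()` as a stand-in for the one-way link, since a pipe is strictly unidirectional: the monitoring side holds only the read end, so no bytes can ever flow back toward the plant side. All names (`publish_stats`, the tag names) are illustrative, not from the thread.

```python
import json
import os

def publish_stats(write_fd, stats):
    """Plant side: serialize and push stats. It never reads."""
    os.write(write_fd, (json.dumps(stats) + "\n").encode())

def read_stats(read_fd):
    """Untrusted side: consume whatever arrives. It holds no write end."""
    return json.loads(os.read(read_fd, 4096).decode())

# The pipe is the "serial line": data flows from w to r, never back.
r, w = os.pipe()
publish_stats(w, {"furnace_temp_c": 1510, "o2_flow": 42.0})
received = read_stats(r)
print(received["furnace_temp_c"])  # 1510
```

The design point matches the comment: the receiving system needs no trust at all, because the channel itself makes a reply physically impossible.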

There is lots of security hardware that does this.

This is how it works. Network of trust. A trusts B to ... Software trusts hardware to ... Operator trusts hardware to ... Operator trusts software to ... you make an overview of this and then you scratch anything you can. Depending on the level of security required you accept varying levels of inconvenience.


Thanks for the reminder on one-way channels. Each situation is different, e.g. software-based systems can build upon hardware separation (including one-way data diodes), while retaining the option for software defenses to evolve in response to ever-changing threats.

(PDF) Air Gaps: http://www.invisiblethingslab.com/resources/2014/Software_co...

DIY Fiber diode: http://www.synergistscada.com/building-your-own-data-diode-w...


How do workers melt solid steel once it's cooled in places?


I wondered the same and found this informative article:

http://www.steelguru.com/article/details/Mzc%3D/_Blast_Furna...

Fascinating.


Fascinating indeed. Here is a related article about an explosion that happened while trying to restart a chilled furnace: http://www.hse.gov.uk/pubns/web34.pdf


One thing these news stories don't do a good job of getting across is exactly what a blast furnace is. So here's what I got out of that link: a blast furnace is something like a 10-story can, lined with fancy insulating bricks and a ton of embedded pipes for water-cooling so that the insulation doesn't melt. On the outside are a bunch of walkways and vents and ducts and pipes and stuff just like a scene out of Terminator.

You chuck various rocks (iron ore, limestone, coal) into this multi-story brew and they gradually react with each other (the mixture heats itself) as you blow oxygen into openings at the bottom. Over a period of weeks the iron sinks to the bottom (and the waste sinks close to the bottom but not quite) where you extract it. Once or twice a decade you might stop making steel and empty it all out so you can replace the bricks and do maintenance.

If you let it get too cold, the iron and the waste (slag) will freeze and clog the vents, so you drill into it with "a long, consumable, steel tube fed with pressurised oxygen gas" - it's not like you have heating coils built in, you just toss the fuel in with the rest of the stuff. I guess this is also why the "chilled hearth" at the bottom is such a problem (the point of having a blast furnace is that it's mostly-iron that collects down there, not fuel). Of course, the frozen steel and slag is still insanely hot, just solid now. On the other side of problem-land: if the cooling system leaks and too much water gets in, the whole thing might explode and jump about 2 or 3 feet in the air (this "too much water" is on the order of tons, mind you -- this can happen if parts of the cooling system melt).

Yeesh. I'd been thinking they'd lost something an order of magnitude or so smaller -- more like the size of a cement mixer.

And of course each of your top experts (water systems, electrical systems, etc) might want to go home before the 10+ year campaign is over, so you install remote monitoring and if there's a crisis you call the top expert and he can take a look at it and provide advice instead of getting up and spending half an hour to drive there and then give advice (possibly a very useful capability in many other crises)...


Chucking stuff in is close to the truth. In research furnaces they lob bin bags full of various materials they want to add to the cast depending on the grade of steel they're trying to produce. The whole bag goes in, the plastic is negligible.

Often these things are heated (at least initially) with big ole' electrodes or by using induction. Nominal current is in the kiloamperes. No pacemakers, please!

To put it into perspective (again), most steel plants have their own dedicated power stations on-site.


There are two major types of furnace: blast (coal-fired) and induction (electric).


Thanks. Apparently a chilled furnace can take months to "rake out." I bet that's a slow, manual process of heating up some bits with a hand oxygen lance, gradually nibbling until the passages and so forth are cleared.

By chance, does anyone know the name of those giant drill-like things in The Deer Hunter iron works? Is that also an oxy lance of some sort?


By heating it... large torches are probably required. I wouldn't be surprised if it was cheaper to replace a clogged assembly than to try to melt the steel back out of it.


I think typically they use thermic lances, because torches aren't even close to enough.


Oxygen on a stick.


i am working in a steel plant for over 20 years now, and it is easy to bash the security of those people.

but just some facts from my world :-)

first those plants are built for lifespans of over 30 years. general problem is 15 years ago (normal review time) no one was thinking about network security the way we think about it now. most businesses didn't even have a large internal network which included production and was connected to the internet.

second you can't just shut down these things. if you have to shut down a blast furnace we are talking about a minimum standstill of 5-7 days. calculate about 400k to 1m € per day in standstill costs. and that is only for the blast furnace. if the blast furnace is not running, in some steel plants NOTHING will run (e.g. hot rolling plants).

third there is no good solution on the market. if some of you guys would look into the software which is sometimes running those large machines you would get sick to your stomach. As a more security focused person in my plant, just to convince management to change the std admin passwords was a handful (well that changed like a year or two ago). The thing is the market decides what security is gonna be implemented. since there has now been a breach, and a very expensive one, most companies i am talking to are more focused on security now. The thing is they won't just throw away the software stack they worked on for 30 years. and reviewing software is hard and time consuming. so it will be interesting how this develops.

and no i am not working in that plant ... :-)

and sorry for the bad english


You can make your English look a lot better by starting every sentence with a capital letter, and capitalizing the word "i".


Spelling feedback is bike-shedding. The content is fine.


It's an interesting case of misplaced good intentions though. Here's someone with good intentions providing direct, actionable feedback, and getting negative feedback. It's how the system is supposed to work, but I hope a lot of folks realize their well-intentioned comment that gets downvoted might well be getting downvoted for similar reasons: your good intentions are misplaced.


I never understand why people need to connect industrial plants to the Internet. Do they actually need to control them over the Internet instead of on-site?

And, if they need to use the Internet on-site, can't they make an air gap and segregate the computers that can access the Internet from computers that can access the plant machinery?


I remember a rule a controls engineer once told me: never connect the plant to the Internet. Nothing clever, no humorous quip, no deep insight. Just don't do it. If you do need to get data out to the living world, and you will, then you carefully set up individual firewall rules just for the data system - which very much can not drive the process system.

This is where I begin ranting on the topic.

Why? Because plants aren't secure. They're meant to run all the time by people who may or may not know how to properly use a mouse. There will be passwords taped to monitors, systems that automatically log in to prevent start up delays, and authentication of the order that it-better-just-work-by-default. Under no circumstances should that damn system ever, ever be exposed to the outside world. Not by Ethernet, wireless, or flash drive. It's a young innocent facing the cruel, brutal internet; it's going to get hurt.

New plants are fancy and have highly trained workers with brilliant industrial IP wireless systems with state-of-the-art VM servers and oh god did that guy just plug his phone charger into the damn wrapper HMI and now Windows Media Player has popped up and no one can acknowledge alarms (this is partly why PCs tend to be in large metal boxes, that and dirt/water). Now imagine that sort of silliness, but driven by the less savory from outside the intranet.

Just don't risk it. You'll never have a problem, and no one is ever going to care enough to hurt the plant... except for that one time when suddenly all the convenience and hubris won't bring back the machine that just slagged itself from some malicious command from outside.
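The "individual firewall rules just for the data system" rule mentioned above might look roughly like this on a relay host with one leg on the plant network and one on the office network. This is only a sketch: the interface names (`eth_plant`, `eth_office`) and ports (4840, the OPC UA default, and 8080 for the stats service) are placeholders, not details from the thread.

```shell
# Default-deny everything; the relay originates exactly one kind of
# traffic (polling the plant data service) and answers office queries.
iptables -P INPUT   DROP
iptables -P OUTPUT  DROP
iptables -P FORWARD DROP   # the relay must never route between networks

# Plant leg: the relay may open connections TO the data service,
# and accept only the replies to those connections.
iptables -A OUTPUT -o eth_plant -p tcp --dport 4840 -j ACCEPT
iptables -A INPUT  -i eth_plant -p tcp --sport 4840 \
         -m state --state ESTABLISHED -j ACCEPT

# Office leg: office machines may query the relay's read-only stats
# service; nothing arriving on this leg is ever forwarded plant-ward.
iptables -A INPUT  -i eth_office -p tcp --dport 8080 -j ACCEPT
iptables -A OUTPUT -o eth_office -p tcp --sport 8080 \
         -m state --state ESTABLISHED -j ACCEPT
```

The key property is the `FORWARD DROP` default: even a fully compromised office machine can only talk to the relay's stats service, never to anything on the process network.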


The terrifying part of this is that computers with Windows Media Player installed are running critical infrastructure. Shouldn't that be a stripped-down Linux machine with perfectly understood characteristics and close-to-zero attack surface?

Yes, your scenario is bad I can totally see it happening, but the problem is not that an employee plugged in his phone, it's that you are using a desktop OS for office workers for controls that really non-optionally need to always just work.

Maybe it's not a Windows Media Player launch, but what happens if Java shows up in the taskbar wanting an update, or your anti-virus software (yikes) wants to bug you about updating its definitions, or a modal dialog from the OS comes up? The fact that those are even things that can happen is pretty scary.


>The terrifying part of this is that computers with Windows Media Player installed are running critical infrastructure. Shouldn't that be a stripped-down Linux machine with perfectly understood characteristics and close-to-zero attack surface?

Maybe I'm appealing to the wrong sentiment, but most of this control software isn't available for Linux. Large CO2 generation plants, tissue plants, whatchamaycallit, it's just not available. So the problem is with the vendors.

Another problem is that the people who decide to buy the software are completely clueless about what is secure or not. They may ask for 'advice' from their more tech savvy juniors but that is merely a formality to confirm their view point, as anything contrary to their already decided viewpoint is quickly discarded with the mental rationalization of 'He probably doesn't have my experience' or 'I know better because I'm senior and I've been around these longer than him.'


It still begs the question of why they wouldn't provide a stripped-down industrial Windows for these cases, rather than just a layer of enterprise apps on top of the regular thing. Even their "Enterprise for Manufacturing" page is all about apps and Metro tablets.


They do, it's called Windows Embedded, and OEMs customise it to include only the components they need.


> that is merely a formality to confirm their view point

Is there a term for this? I often give out to people when I see they're just going to discard my advice if it's not what they have already decided.


Confirmation bias.


Windows has overwhelming market share in the process control industry. Microsoft has long-standing partnerships with the majority of the process control vendors. The attack surface argument was never relevant when networks were physically isolated. There is a slow shift towards Linux; however, many systems have extremely long lifespans.


>The attack surface argument was never relevant when networks were physically isolated.

If the network is designed according to this philosophy, then it will be trivial for an insider to breach the airgap. That could be someone who hates his boss, someone who's about to be fired, somebody getting paid by a competitor, somebody getting paid by a criminal enterprise planning on shorting the stock, somebody coerced or coopted by a state actor.

If the process control network is soft and chewy for anyone who can put his finger on an ethernet or USB port, you are still far from secure - as Iran learned, by the way.

Windows Embedded is relatively sane, as that's not going to have Java and Windows Media Player and antivirus software hanging out, and it's (in part) designed to let you whittle its size and attack surface down to exactly what you need. But vanilla Windows having market share is just baffling to me.


Seems to me, an insider wouldn't need to "breach the air gap". Quite literally they could just walk over to the controls.

So defending against the disgruntled employee, or impostor employee, armed invading non-employees,...that should be the problem realm for onsite security and management, not software designers.

But yes, you're right. That is baffling. People are fcking terrible with computers, and for most of these roles they shouldn't have to be more competent. The controls should be about as flexible as an ATM's user interface.


>Quite literally they could just walk over to the controls.

Control systems may not be designed for IT security, but they are designed for safety. You would expect:

- Limits that prevent an operator from pushing a parameter to an obviously insane value

- Alarms that sound audibly and visibly on other control panels, in a control room, etc. when a situation is heading out of control or is actively dangerous

- Automated failsafes that take action to correct dangerous situations

- Audit trails that indicate what buttons were pushed, possibly by whom

- Logical access control so that, e.g., line workers cannot change configuration, damaged equipment can be immobilized, a particularly sensitive operation enforces a 2-man rule, etc.

- When an employee is fired (or goes home for the night), he can no longer influence the plant in any way.

All of these would make sabotage by walking up to the controls difficult - at the very least, someone else would know about it in time to evacuate, and at best, the system would automatically correct itself while locking you out and sounding an alarm at your supervisor's desk.
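A toy sketch of a few of the safety layers listed above: hard setpoint limits, an audit trail, and role-based write access. Every name here (`SetpointGuard`, the roles, the limit values) is invented for illustration, not taken from any real DCS.

```python
import time

class SetpointGuard:
    """Gatekeeper in front of a process parameter write."""
    LIMITS = {"furnace_temp_c": (800, 1600)}   # assumed sane envelope
    WRITE_ROLES = {"engineer"}                  # line workers are read-only

    def __init__(self):
        self.audit = []  # (timestamp, user, parameter, attempted value)

    def set(self, user, role, param, value):
        # Every attempt is logged, including rejected ones.
        self.audit.append((time.time(), user, param, value))
        if role not in self.WRITE_ROLES:
            raise PermissionError(f"{user} ({role}) may not change {param}")
        lo, hi = self.LIMITS[param]
        if not lo <= value <= hi:
            raise ValueError(f"{param}={value} outside safe range [{lo}, {hi}]")
        return value

guard = SetpointGuard()
print(guard.set("alice", "engineer", "furnace_temp_c", 1500))  # 1500
```

As the parent comment notes, all of this only helps against someone walking up to the controls; an attacker who owns the control system itself sits underneath these checks.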

If I've pwned the control system, then I can push parameters beyond the engineers' limits while MITMing and falsifying reports from sensors so that everything appears to be normal, no failsafes kick in, and no alarms go off until everybody is dead. Forensic examination of the audit log would not show me doing anything strange.

If it's my last day and I've plugged a tiny, GSM-enabled, PoE attack platform into an ethernet port, then the fact that security has taken my badge won't stop me - I can do all this from home.


Not all of these things can be solved by a control system alone, at least not without a ton of investment in RFID and other auto-id infrastructure. Some human is still going to have to administrate your system, and he or she needs to be educated and trained, and they need to value security.

In the article's case, for example, they made it sound like the "hacker" basically conned someone into giving him access to the remote management interface. The only way you can fix a problem like that in software is to make the interface totally inaccessible.


In a lot of shops, some old crusty box is used until it fails. It could be running xp or win2k.

At least that was how an Ivy League student housing maintenance shop and also a nuclear engineering service shop were run.


They do not see themselves as targets. Their systems are likely bespoke, or at the very least obscure. And, more importantly, they have bigger problems to worry about, like operating their businesses.

As it becomes more clear that yes, someone will go to that trouble and it will have catastrophic consequences, you would hope that these things would get better.

More likely, someone will pay a "security consulting" company $100 million to run a Nessus scan and tell them to turn on Automatic Updates on their Windows infrastructure.


You don't pay them the 100 mil to run the scan. You pay them the big bucks to park themselves at the top of the lawsuit list when things go wrong. It's more like insurance at that level than technology.


A better, more serious answer: always have two networks, preferably physically separated (though I hear virtual networks are pretty OK with the right router equipment). The machines holler on one, and the administrative support on the other. I've seen what happens when someone even just tries to put both on the same layer, and it's inevitably some form of minor disaster. Not just because of security, mind, but because you really don't want your file transfer to a network drive to even slightly lag a sensor yelling back to the PLC about an interlock's state.

If you need to get on the process network, use a VPN, and only open to a machine that can't actually run equipment. A programming terminal may be made available to save costs so an integrator doesn't need to fly in for every support call, but these access points tend to require a VPN through at least two firewalls. (And even then, often you would still insist on them coming in person, for all manner of other reasons.)


See also: https://www.tofinosecurity.com/blog/why-vlan-security-isnt-s... (and its comments, which echo different points)


OK, granted they may want to monitor the plant remotely. Then they could have a plant-connected machine dump UDP monitoring packets to an Internet-connected machine, and have the plant-connected machine block all incoming packets from the Internet-connected machine.
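That scheme is easy to prototype. Here's a sketch in Python with localhost standing in for the two machines; the tag name and port handling are made up for illustration, and the real plant-side host would additionally firewall all inbound traffic as the comment says.

```python
import json
import socket

# Monitoring side (the internet-connected machine): an ordinary UDP
# listener. Binding to port 0 lets the OS pick a free port.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))
monitor_addr = listener.getsockname()

# Plant side: a send-only socket. UDP is connectionless, so the plant
# machine fires stats at the monitor and never waits for -- or even
# notices -- any reply.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(json.dumps({"ladle_temp_c": 1545}).encode(), monitor_addr)

packet, _ = listener.recvfrom(4096)
stats = json.loads(packet.decode())
print(stats["ladle_temp_c"])  # 1545

sender.close()
listener.close()
```

UDP fits the "dump and forget" model well: lost packets just mean a stale dashboard, never a channel back into the plant.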


It seems that there are many ways this could be done right (and does not seem a particularly hard challenge), it's just that people in charge probably were pretty much inept at that task.

You know how it is, no one cared about it until it happened. It just wasn't a priority.


I get the impression from dealing with German companies that they tend to be very good at traditional "engineering", but when it comes to IT/computers they are 10 or 15 years behind.

I also think that in Germany it's considered that the good engineers and associated professionals go and work for firms like Audi.


It's not just Germans. Anyone that isn't primarily in software has this phenomenon. Mobile phone makers, for example, are a disaster, and it's only having a whip wielded by a software company with some power over them that prevents it becoming a complete train wreck.

In my experience the most dangerous are engineers in other domains that learned just enough programming to get the job done but can't understand the giant holes they've created and not run into.


Testify, brother (or sister). Having worked for a big telco, we regarded the mobile side as grade-inflated "amateurs".

I still recall one of my colleagues (working on the core IP network) being amused that one UK mobile provider was still using NT4 in their core network.

I'll be nice and not write what we thought about the US carriers.


Well, use private circuits, or go old school and use modems with dial-back, i.e. you call the modem, it disconnects and calls you back on a hard-coded number.


It is very unlikely that the process control network is connected to the Internet. However it is almost certainly connected to the corporate Intranet. Think about all of the metric data available on the process control network - that is needed by engineers for analysis, ERP systems for financials, asset management systems for maintenance etc. With an air gap, you can't do any of that in real-time.


The solution is to run a historian in the DMZ, only the historian can read data from the DCS, and the corporate systems (ERP, BI etc) read data from the historian. And nothing from outside can update the DCS.
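The data flow that the historian pattern enforces can be sketched like this. The class and method names (and the single tag) are invented for illustration; the essential point from the comment is that only the historian touches the DCS, reads only, and everything corporate talks to the historian.

```python
class DCS:
    """Process control system: exposes read-only tag access."""
    def __init__(self):
        self._tags = {"furnace_temp_c": 1480.0}

    def read_tag(self, name):
        return self._tags[name]
    # Deliberately no write path reachable from outside the control network.

class Historian:
    """Lives in the DMZ: pulls from the DCS, serves corporate queries."""
    def __init__(self, dcs):
        self._dcs = dcs
        self._archive = {}

    def poll(self, tag):
        """The ONLY component allowed to read from the DCS."""
        self._archive.setdefault(tag, []).append(self._dcs.read_tag(tag))

    def query(self, tag):
        """What ERP/BI systems call; returns a copy, never DCS access."""
        return list(self._archive.get(tag, []))

historian = Historian(DCS())
historian.poll("furnace_temp_c")
print(historian.query("furnace_temp_c"))  # [1480.0]
```

Corporate systems get near-real-time data, but even a fully compromised ERP box can at worst read stale history; there is simply no call path that updates the DCS.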


The German document isn't that useful. It's just a general overview of computer security with anecdotes, not a technical analysis of this attack.

Interestingly, there was a cooling water leak and an emergency shutdown at a steel plant in Pakistan in October. That plant is still off line. That's probably unrelated, though.

http://www.newspakistan.pk/2014/10/27/pakistan-steel-mills-r...


More background info about the incident: https://translate.google.com/translate?hl=de?sl=auto&sl=de&t...

Steel plants run for years without a shutdown, so this was a large-scale incident, as they had to shut it down because of major damage.

Not related to the plant in Germany in any way, just to give you an idea how some other steel plants operate: a C# WinForms-based GUI control room app and a Java-based server app on Windows Server. The server controls the various PLCs. Several steel plants around the world were built with that software setup, and it was not designed to be connected to the internet.


The Register has speculation that it was a ThyssenKrupp plant in Brazil. I suspect that if it had actually been in Germany there might have been better security.


Nope. Just last year, Germany's biggest IT magazine ran an article about hundreds of industrial systems having remote control UIs with insufficient security (unencrypted login, default passwords) exposed to the internet.


But were they hacked?



Oddly enough, the Ars article is just a slight rephrasing that adds zero value beyond the original article: http://www.itworld.com/article/2861675/cyberattack-on-german...


Thanks! Changed.


To do external monitoring, couldn't you have the computer for the plant display the information on a screen in a particular font and then an internet-connected computer read the video and OCR it?


I can't help but feel there's a rush to judgement here. If you read the article it clearly states that the Federal Office for Information Security (BSI) said, quoting the article:

"describing the technical skills of the attacker as “very advanced.”"

And

"not only was there evidence of a strong knowledge of IT security but also extended know-how of the industrial control and production process."

And HN rushes to judgement to quickly blame workers who can't use a mouse and Microsoft.

Yes, the average worker in a manufacturing plant is not a CS grad. It is the job of engineers to develop systems that are usable by, well, the target user.

Most Heart Surgeons don't have a CS degree. And based on meeting a number of them during the course of my business I am comfortable saying that quite a few of them are "computer challenged". Yet, most of us would not have a problem being on that operating table, yes, with a room full of computers, a good number of them running MS software and with an OR team that is likely to use the same "123456" password on everything.

In a hospital you have IT staff and engineers who set up an infrastructure medical professionals can use. The same is true of steel plants. Yes, there's probably a lot more older code in your average steel plant. I just don't think characterizing them as IT or security morons is fair.

The BSI characterized the attackers as sophisticated across disciplines. Let's not engage in senseless conjecture.

I've owned and operated a small manufacturing plant consisting mostly of what I call "big iron" CNC equipment. Things are seldom as simple as discussions on various fora on the 'net would like them to be. Yes, in my case I air-gapped the plant and even individual machines and remote monitoring was done through a separate network that had no command-and-control capabilities at all, just sensing and reporting. There was no way to jump from the sensing network to command-and-control of any one machine, much less the plant. Even if you were physically at the factory this was pretty much impossible. Nobody wants a CNC milling machine with a 30HP spindle controllable from the internet. People are not that stupid...even if they can't use a mouse.


This sounds like an inside job, seems too specific (and obscure) an attack. Idle speculation, maybe a disgruntled ex-employee's offspring? Who knows.


Anyone know what's the motivation?

People do not work that hard to destroy something without a reason. Someone was really mad at them - ex employee maybe?


I doubt it's another steel manufacturer but who knows? Maybe someone in the business with connections to black hats had some money to spare and said: Look what you can get going about this cyberwar stuff everyone is talking about...

There is also this: http://www.heise.de/security/meldung/Verwundbare-Industriean... (in German)


I have to admire your cynicism. To consider that this attack might have been nothing more than a sales demonstration...

That's a scary thought, even as it sounds like something out of a James Bond film script.


Why does it have to be someone with a solid motive?

The companies attacked always go on about how skilled and unstoppable their attackers are, but for all we know their software was terrible and a bored 13-year-old shut down their factory because 13-year-olds do terrible things for no reason because the parts of their brains that let them tell good ideas from bad ideas haven't grown in yet.

The guy responsible for the shit software isn't going to tell the CTO his software is shit, the CTO isn't going to tell the CEO his department is incompetent and needs a good house cleaning, starting from the top, and the CEO isn't going to admit culpability to the insurance companies and shareholders who are ultimately on the hook for the damages.


Bored 13 year olds usually try to be flashy or show off, but they rarely spend significant effort really causing damage.

There's just nothing in it for them to actually cause damage, when demonstrating the potential to do so (without actually doing so) provides all the benefits (bragging, etc.) with much less risk.

Yes, you can have one deranged person doing it, but it's just not likely.


But your 13 year old may not understand the possible real-world effects of their hacking. They could just be playing and made a mistake.


Particularly relevant for me since I'm currently reading Countdown to Zero Day.


Stuxnet made it "acceptable" to do this. I hope the US government recognizes that.


Actually the Trans-Siberian pipeline made this acceptable, which was a cyber attack in peacetime responsible for the largest man-made non-nuclear explosion in history. Or the Turkish pipeline attack. Or the Enigma Machine hack.

The crucial parts of warfare systems are C4ISR: Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance.

Computer systems have been a target of covert ops for as long as they have existed. What's happening now is that middle-weight nations (North Korea, Iran) and non-state actors (Anonymous, al-Qassam) are now able to get in on the game, which is disrupting the status quo established by the USA and USSR.


Unambiguous rankings of explosions is basically impossible, so claiming it was the biggest man-made explosion in history is nonsense. This overly-bold claim was promulgated by former Air Force secretary Thomas C. Reed.

http://seanlinnane.blogspot.com/2014/12/largest-man-made-non...

https://en.wikipedia.org/wiki/List_of_the_largest_artificial...


We can agree it was a pretty large fuel-air explosion caused by the failure of SCADA software? The rest is just details.


Damn. Dominating. Tell me more.


In the Trans Siberian pipeline explosion the KGB stole some Canadian software that had been booby trapped.

http://en.wikipedia.org/wiki/Siberian_pipeline_sabotage


Because when I hack to destroy I always wait for the US Government to do it first.


Well, I think what the parent is saying here is that the United States set a precedent for other nation states to engage in destruction of industry when they see national security or state advantages - while the myth of America being a fair and even handed juror of world affairs is not true, it is pervasive, and it (and the "West") is used as a basis for comparison. It also provides an out for a state actor who is caught and there is an attempt at an international judicial (rather than military) response: they can point at Stuxnet and suggest (convincingly IMO) that the United States should face the same standards of judgement and if they stand up a proportional reprimand. This gives an additional sort of 'insurance'.

I would disagree with the parent that Stuxnet is the same type of activity (it's private industrial sabotage rather than state military sabotage). The papers with the most lip service regarding cybermilitarization (inside the US) try to suggest international norms by breaking types of operations down into an ontology that separates national security operations and military operations from activities that interfere with private enterprise, citizens and from infrastructure.



