Shifting cyber landscape presents risks, offers opportunities
Information now tops infrastructure as main target of hackers
- By William Jackson
- Mar 31, 2011
Robert Carey, the Defense Department’s deputy CIO, recently summed up the state of cybersecurity: “It’s the information, stupid.”
The introduction of increasingly powerful mobile devices and adoption of cloud technology have made the enterprise perimeter — already blurry — even less distinct, and that's forcing organizations to focus on protecting information more than infrastructure.
“It creates a dilemma for us,” said Carey, who participated with other government officials in a panel discussion on cybersecurity hosted by FedScoop. Agencies are pressing ahead with the adoption of cloud computing “albeit with caution.” New technologies, such as Internet-enabled handheld devices, are being adopted because of their convenience. Yet “security is the antithesis of convenience,” Carey said, and users will need to accept some restrictions on their access to information.
Rapid innovation forces administrators to adapt quickly with new technology and policies for IT security. But the changes also create opportunities, said retired Rear Adm. Betsy Hight, now vice president of Hewlett-Packard's cybersecurity practice. The network is no longer the primary target for attacks, Hight said; 70 percent of attacks now target applications, a shift in the threat landscape that has not been adequately addressed.
“Once we give someone else responsibility for the infrastructure, that allows us to turn our attention to the applications,” she said. Hight said she foresees the development of risk-level agreements with cloud service providers that specify a maximum level of risk that a customer is willing to accept, much as service-level agreements specify a minimum level of service.
Automation is critical to adapting to a rapidly evolving IT environment. Carey called for a shrink-wrapped security layer, with automated appliances that can make it easy to monitor and control a growing number of devices without hands-on management. A step toward that goal is the development of the Security Content Automation Protocol, a National Institute of Standards and Technology specification for expressing and manipulating security data in standardized ways. SCAP's protocols enumerate hardware and software product names and vulnerabilities, including software flaws and configuration issues. The Office of Management and Budget requires agencies to use products that can use SCAP for checking compliance with Federal Desktop Core Configuration settings.
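SCAP builds on standard enumerations such as CVE for software flaws and CPE for hardware and software product names. As a rough illustration only (a minimal sketch, not official SCAP tooling), a script can recognize and break apart these identifier formats:

```python
import re

# CVE identifiers (software flaws) follow the pattern CVE-YYYY-NNNN.
CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

# CPE 2.2 URIs name platforms: cpe:/{part}:{vendor}:{product}[:{version}...]
# where part is "a" (application), "o" (operating system), or "h" (hardware).
CPE_RE = re.compile(r"^cpe:/([aoh]):([^:]+):([^:]+)(?::(.*))?$")

def parse_cpe(uri):
    """Return (part, vendor, product, rest) for a CPE 2.2 URI, or None."""
    m = CPE_RE.match(uri)
    if not m:
        return None
    return m.groups()

def is_cve(identifier):
    """True if the string is a well-formed CVE identifier."""
    return bool(CVE_RE.match(identifier))
```

A compliance scanner built on these enumerations can report findings in a form any SCAP-aware tool understands, which is what makes automated, hands-off monitoring across many devices feasible.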
Donna Dodson, deputy chief cybersecurity adviser at NIST, said SCAP and other tools need to work with mobile platforms so that administrators can better understand how the devices are being used and control them when they make their way into the enterprise.
“People are bringing them into the work environment,” from the battlefield to the office, Dodson said.
Bobbie Stempfley, director of the Homeland Security Department’s National Cybersecurity Division, agreed that the shift is inevitable. “It’s not about fighting it; it’s about embracing it,” she said.
No one who spoke at the symposium suggested doing away with traditional perimeter defenses and security controls. But even with good perimeter defenses and controls over the access and use of information, organizations need to understand that breaches will happen, said Rodney Joffe, senior technologist at Neustar. Failure is not only an option but also inevitable, he said.
“If you do all of the right things and do all of them right, you have to accept that at some stage your security is going to be breached,” Joffe said in an interview. But breaches continue to surprise organizations, which are not prepared to respond quickly, he said.
Joffe said he advocates a policy of assuming the worst will happen, having tools and processes in place to watch for evidence of that, and preparing ways to mitigate it and patch previously unknown vulnerabilities.
That approach assumes that every defense is flawed. That's not a new idea; it is a truism that there is no such thing as 100 percent security. But organizations seldom act that way, preferring instead to believe that, until a breach proves otherwise, compliance with regulatory requirements and best practices means they are secure.
Assuming fallibility is good sense, Joffe said. “It’s not a defeatist attitude — it’s reality.” By accepting that failures will occur, organizations can more quickly identify them when they do happen, find the source of the breach and correct it.
One reason that good defenses fail is the insider threat, Joffe said. People with legitimate access to resources cannot always be prevented from doing the wrong thing, whether by mistake or intentionally.
“There is no way to defend against stupid,” he said. “People will continue to do stupid things.”
William Jackson is a Maryland-based freelance writer.