
WITH DISTRIBUTED WORKFORCES and mobile technologies, the network perimeter has evolved beyond the physical limits of most corporate campuses. The days when the perimeter was an actual boundary are a fond memory. Back then, firewalls did a decent job of protecting the network from outside threats, and intrusion prevention tools protected against insiders. But, over time, the bad guys have gotten better: Spear phishing has made it easier to infiltrate malware, and poor password controls have made it easier to exfiltrate data. This means that the insiders are getting harder to detect, and IT assets are getting more distributed and harder to defend.

Complicating matters, today's data centers are no longer on premises. As cloud and mobile technologies become the norm, the notion of a network edge no longer makes much sense. New network security models are required to define what the network perimeter is and how it can be defended. CIOs and enterprise security managers are using different strategies to defend these "new" perimeters, as corporate data and applications travel on extended networks that are often fragmented. The borders between trusted internal infrastructure and external networks still exist, but the protection strategies and security policies around network applications, access control, identity and access management, and data security require new security models.

Here we look at four network edge-protection strategies in use today: protecting the application layer, using encryption certificates, integrating single sign-on technologies and building Web front-ends to legacy apps.
1. Provide application-layer protection. While next-generation firewalls have been around for some time, what's new is how important their application awareness has become in defending the network edge. By focusing on the application layer, enterprises can better keep track of potential security abuses because IT and security teams can quickly see who is using sensitive or restricted apps.

One way to do this is to develop your own custom network access software that works with firewalls and intrusion detection systems. This is what Tony Maro did as the CIO for medical records management firm EvriChart Inc., in White Sulphur Springs, W.Va. "We have some custom firewall rules that only allow access to particular networks, based on the originating device. So, an unregistered PC will get an IP address on a guest network with only outside Internet access and nothing else. Or, conversely, a PC with personal health information will get internal access but no Internet connection," Maro says. "This allows for a lot more fine-grained control than simple virtual LANs (VLANs). We also monitor our DHCP leases and notify our help desk whenever a new device shows up on that list."

Another method is to incorporate real-time network traffic analysis. A number of vendors, including McAfee, Norse Corp., FireEye Inc., Cisco, Palo Alto Networks Inc. and Network Box Corp., incorporate this analysis into their firewalls and other protective devices.
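The DHCP-lease monitoring Maro describes can be sketched in a few lines. This is a minimal illustration, not EvriChart's actual tooling: it assumes a dnsmasq-style leases file (expiry, MAC, IP, hostname per line), and the sample data and printed alert are hypothetical stand-ins for reading the real leases file and opening a help-desk ticket.

```python
"""Sketch: flag DHCP leases granted to devices not on a registered list."""

def current_leases(text: str) -> dict:
    """Map MAC address -> IP address for every lease line."""
    leases = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 3:
            leases[fields[1].lower()] = fields[2]
    return leases

def new_devices(leases: dict, registered: set) -> dict:
    """Return leases whose MAC is not on the registered list."""
    return {mac: ip for mac, ip in leases.items() if mac not in registered}

# Hypothetical sample: one registered PC and one unknown device.
SAMPLE_LEASES = (
    "1700000000 aa:bb:cc:dd:ee:ff 10.0.0.5 unknown-laptop *\n"
    "1700000000 11:22:33:44:55:66 10.0.0.6 records-pc *\n"
)
REGISTERED = {"11:22:33:44:55:66"}

leases = current_leases(SAMPLE_LEASES)
alerts = new_devices(leases, REGISTERED)
for mac, ip in alerts.items():
    # In production this would notify the help desk instead of printing.
    print(f"ALERT: unregistered device {mac} holds lease {ip}")
```

A real deployment would run this on a schedule against the live leases file and feed the alerts into the help-desk system, with the firewall rules keyed off the same registered-device list.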
2. Make proper use of encryption and digital certificates. A second strategy is to deploy encryption and digital certificates widely as a means to hide traffic, strengthen access controls and prevent man-in-the-middle attacks. Some enterprises have come up with rather clever and inexpensive homegrown solutions, while others are making use of sophisticated network access control products such as Mobile IAM from Extreme Networks Inc. that combine certificates with RADIUS directory servers to identify network endpoints.

"We use certificates for all of our access control because simple passwords are useless," says Bob Matsuoka, the CTO of New York-based CityMaps.com. The company found it needed more protection than a username and password combination for access to its Web servers, and deploying certificates meant it could encrypt traffic across the Internet as well as strengthen its authentication dialogs. While this approach increases the complexity of Web application security for his developers and other end users, it also has been very solid. "Over the past three years we haven't had any problems," Matsuoka says. One of the tradeoffs is that his company is still operating in startup mode. "You can have too much security when you are part of a startup, because you risk being late to market or impeding your code development."

Several vendors of classic two-factor tokens such as Vasco Data Security Inc. and Authentify are also entering this market by developing better certificate management tools that can secure individual transactions within an application. This could be useful for financial institutions that want to offer better protection without being intrusive to their customers. Instead, these tools make use of native security inside the phone to sign particular encrypted data and create digital signatures of the transaction, all done transparently to the customer.
To some extent, this is adding authentication to the actual application itself, which gets back to an application-layer protection strategy.
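Certificate-based access control of the kind Matsuoka describes is usually implemented as mutual TLS: the server refuses any client that cannot present a certificate signed by a trusted CA. The sketch below shows the core of that configuration using Python's standard `ssl` module; it is a minimal illustration, not CityMaps' setup, and the certificate paths in the comments are hypothetical.

```python
"""Sketch: a server-side TLS context that requires client certificates."""
import ssl

def mutual_tls_context() -> ssl.SSLContext:
    """Build a TLS context that rejects clients without a valid certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # In a real deployment you would load the server's own certificate and
    # the CA that signed the client certificates (hypothetical paths):
    #   ctx.load_cert_chain("server.pem", "server.key")
    #   ctx.load_verify_locations("client_ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a client cert
    return ctx

ctx = mutual_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Because the check happens during the TLS handshake, a stolen password alone is useless to an attacker: without the private key matching an issued certificate, the connection never reaches the application.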
3. Use the cloud with single sign-on (SSO) tools. As the number of passwords and cloud-based applications proliferates, enterprises need better security than reusing the same tired passphrases on all of their connections. One initiative that seems to be gaining traction is the use of a cloud-based SSO tool to automate and protect user identities. Numerous enterprises are deploying these tools to create complex, and in some cases unknown, passwords for their users.
SSO isn't something new: We have had these products for more than a decade. What is new is that several products combine cloud-based software-as-a-service logins with local Windows desktop logins, and add improved two-factor authentication and smoother federated identity integration. Also helping is wider adoption of the open standard Security Assertion Markup Language (SAML), which allows for automated sign-ons by exchanging XML information between websites. As a result, SSO is finding its way into a number of different arenas to help boost security, including BYOD, network access control and mobile device management tools.

Post Foods LLC in St. Louis, Mo., is an adherent of SSO. The cereal maker uses Okta's security identity management and SSO service. Most of its corporate applications are connected through the Okta sign-in portal. Users are automatically provisioned on the service (they don't even have to know their individual passwords), so they are logged in effortlessly, yet still securely. Brian Hofmeister, vice president of architecture and operations for parent company Post Holdings in St. Louis, says that the consumer goods company was able to offer the same collection of enterprise applications more quickly across its entire corporation of diverse offerings through the use of SSO and federated identities, and still keep the network secure.
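The XML exchange behind SAML works roughly like this: the service provider redirects the user's browser to the identity provider with an `AuthnRequest`, and the identity provider answers with a signed assertion. The toy sketch below builds the request side only, using Python's standard library; it is a simplified illustration (real deployments use a hardened SAML library, and every URL here is a hypothetical placeholder).

```python
"""Toy sketch of a SAML 2.0 AuthnRequest and its HTTP-Redirect encoding."""
import base64
import datetime
import uuid
import xml.etree.ElementTree as ET
import zlib

SAMLP_NS = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"
ET.register_namespace("samlp", SAMLP_NS)
ET.register_namespace("saml", SAML_NS)

def build_authn_request(sp_entity_id: str, idp_sso_url: str, acs_url: str) -> str:
    """Service provider's sign-on request, as a SAML 2.0 XML string."""
    req = ET.Element(f"{{{SAMLP_NS}}}AuthnRequest", {
        "ID": "_" + uuid.uuid4().hex,   # must start with a letter or underscore
        "Version": "2.0",
        "IssueInstant": datetime.datetime.now(datetime.timezone.utc)
                        .strftime("%Y-%m-%dT%H:%M:%SZ"),
        "Destination": idp_sso_url,
        "AssertionConsumerServiceURL": acs_url,
    })
    issuer = ET.SubElement(req, f"{{{SAML_NS}}}Issuer")
    issuer.text = sp_entity_id
    return ET.tostring(req, encoding="unicode")

def redirect_param(xml_text: str) -> str:
    """HTTP-Redirect binding: raw-DEFLATE then base64 for the SAMLRequest param."""
    deflated = zlib.compress(xml_text.encode())[2:-4]  # strip zlib header/checksum
    return base64.b64encode(deflated).decode()

# Hypothetical endpoints for illustration only.
xml = build_authn_request(
    "https://sp.example.com/metadata",  # service provider entity ID
    "https://idp.example.com/sso",      # identity provider sign-on URL
    "https://sp.example.com/acs",       # where the assertion is returned
)
print(redirect_param(xml)[:40], "...")
```

The identity provider's signed response travels back the same way, and validating that signature is where a production SAML library earns its keep; this sketch deliberately stops short of that.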
4. Consider making legacy applications Web-based. A few years ago the American Red Cross was one of the more conservative IT shops around. Most of its applications ran on its own mainframes or were installed on specially provisioned PCs that were under the thumb of the central IT organization based in Washington, D.C. But then people started to bring their own devices along to staff the Red Cross' disaster response teams.

The IT department started out trying to manage users' mobile devices and standardize on them. But within two or three months, the IT staff found the mobile vendors had come out with newer versions, making their recommendations obsolete. Like many IT shops, the Red Cross found that the emergency response teams would rather use their own devices, and these devices would always be of more recent vintage, anyway. In the end, the IT staff realized they had to change the way they delivered their applications, migrating them to browser-based versions accessible from the Internet.
