So here's an interesting spin on de/re-perimeterization. If people think we cannot achieve, and cannot afford to wait for, secure operating systems, secure protocols, and self-defending information-centric environments, but need to "secure" their environments today, I have a simple question supported by a simple equation for illustration:
For the majority of mobile and internal users in a typical corporation who use the basic set of applications:
- Assume a company that fits within the 90% of those who still run their own data centers, hasn't completely outsourced or off-shored its IT, supports a remote workforce using a Microsoft OS and the usual suspect applications, and doesn't plan on adopting distributed grid computing or widespread third-party SaaS
- Take the following:
Data Breaches. Lost Laptops. Non-sanitized Corporate Hard Drives on eBay. Malware. Non-compliant Asset Configurations. Patching Woes. Hardware and Device Failures. Remote Backup Issues. Endpoint Security Software Sprawl. Skyrocketing Security/Compliance Costs. Lost Customer Confidence. Fines. Lost Revenue. Reduced Budget.
- Combine With:
Cheap Bandwidth. Lots of types of bandwidth/access modalities. Centralized Applications and Data. Any Web-enabled Computing Platform. SSL VPN. Virtualization. Centralized Encryption at Rest. IAM. DLP/CMP. Lots of choices to provide thin-client/streaming desktop capability. Offline-capable Web Apps.
- Shake Well, Re-allocate Funding, Streamline Operations and "Security"...
- You Get:
Less Risk. Less Cost. Better Control Over Data. More "Secure" Operations. Better Resilience. Assurance of Information. Simplified Operations. Easier Backup. One Version of the Truth (data).
I really just don't get why we continue to deploy and are forced to support remote platforms we can't protect, allow our data to inhabit islands we can't control and at the same time admit the inevitability of disaster while continuing to spend our money on solutions that can't possibly solve the problems.
If we're going to be information-centric, we should take the first rational and reasonable steps toward doing so. Until operating systems are more secure and data can self-describe and cause the compute and network stacks to "self-defend," continuing to focus on the endpoint is a waste of time.
If we can isolate and reduce the number of avenues of access to data and leverage dumb presentation platforms to do it, why aren't we?
...I mean besides the fact that an entire industry has been leeching off this mess for decades...
I'll Gladly Pay You Tuesday For A Secure Solution Today...
The technology exists TODAY to centralize the bulk of our most important assets and allow our workforce to accomplish their goals and the business to function just as well (perhaps better) without the need for data to actually "leave" the data centers in whose security we have already invested so much money.
Many people are doing that with their servers already through the adoption of virtualization. Now they need to do the same with their clients.
The only reason we're now going absolutely stupid and spending money on securing endpoints in their current state is that we're CAUSING (not just allowing) data to leave our enclaves. In fact, with all this blabla2.0 hype, we've convinced ourselves we must.
Hogwash. I've posted on the consumerization of IT where companies are allowing their employees to use their own compute platforms. How do you think many of them do this?
Relax, Dude...Keep Your Firewalls...
In the case of centralized computing and streamed desktops to dumb/thin clients, the "perimeter" still includes our data centers and security castles/moats, but it also encapsulates a streamed, virtualized, encrypted, and authenticated thin-client session bubble. We can stop worrying about the endpoint, because it's nothing more than a flickering display with a keyboard/mouse.
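To make the "flickering display" point concrete, here's a minimal sketch (hypothetical class names, not any real remote-display protocol) of the thin-client model: the application and data live entirely server-side, and the only traffic across the session bubble is input events going up and rendered screen updates coming down.

```python
# Illustrative sketch of the thin-client model described above.
# The names (ThinClientServer, ThinClient, handle_input) are hypothetical;
# real products (RDP, ICA, VNC, etc.) differ in protocol details, but the
# architectural point is the same: data never leaves the server side.

class ThinClientServer:
    """Runs the application and holds the data centrally."""
    def __init__(self, sensitive_data):
        self._data = sensitive_data   # stays here; never shipped to the client
        self._screen = "Desktop"

    def handle_input(self, event):
        # The server interprets the user's input and re-renders the display.
        if event == "open_report":
            self._screen = f"Report: {len(self._data)} records on file"
        elif event == "close":
            self._screen = "Desktop"
        return self._screen           # only the rendering crosses the wire

class ThinClient:
    """A 'flickering display with a keyboard/mouse': no local data or state."""
    def __init__(self, server):
        self._server = server
        self.display = ""

    def send(self, event):
        # Keystrokes/clicks go up; screen updates come back.
        self.display = self._server.handle_input(event)

server = ThinClientServer(sensitive_data=["rec1", "rec2", "rec3"])
client = ThinClient(server)
client.send("open_report")
print(client.display)   # the client sees a rendering, never the records themselves
```

Steal the laptop, image the disk, let the spyware run: the endpoint holds a display buffer, not the data.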
Let your kid use Limewire. Let Uncle Bob surf pr0n. Let wifey download spyware. If my data and applications don't live on the machine and all the clicks/mouseys are just screen updates, what do I care?
Yup, you can still use a screen scraper or a camera phone to use data inappropriately, but this is where balancing risk comes into play. Let's keep the discussion within the 80% of reasonable factored arguments. We'll never eliminate 100% and we don't have to in order to be successful.
Sure, there are exceptions and corner cases where data *does* need to leave our embrace, but we can eliminate an entire class of problem if we take advantage of what we have today and stop this endpoint madness.
This goes for internal corporate users chained to their desks, not just mobile users.
What's preventing you from doing this today?