I was just reading the ComputerWorld article on some upcoming features of Windows Server 2012 R2, and I see some really intriguing improvements:
In Windows Server 2012 R2, the PowerShell command-line scripting language introduces a feature called Desired State Configuration, or DSC. It uses a declarative syntax to define a configuration for a server and then uses PowerShell remoting to apply that desired configuration to a group of servers all at once.
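To give a flavor of what that declarative syntax looks like, here is a minimal sketch based on the early DSC previews. The node name, feature, and output path are all illustrative, not something from the article:

```powershell
Configuration WebRole {
    # Declare the state we want; DSC is responsible for making it so
    # on each target node, rather than us scripting the steps.
    Node "Server01" {
        WindowsFeature IIS {
            Ensure = "Present"   # install the role if it is missing
            Name   = "Web-Server"
        }
    }
}

WebRole                                        # compile the configuration to a MOF file
Start-DscConfiguration -Path .\WebRole -Wait   # push the desired state to the node(s)
```

The appeal is that the same configuration block can be applied to a whole group of servers via PowerShell remoting, which is exactly the "all at once" part the article describes.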
I’ve been waiting to be able to do something like this for years!
The Hyper-V advances are great too, especially since it looks like Live Migrations between 2012 RTM and 2012 R2 will be supported, and that RDP will be available even to non-networked VMs. And, finally, we will be able to resize VHDX drives on the fly! Here’s another link that provides more details about upcoming Hyper-V features (in a dreaded slideshow).
Lastly, for now, are the storage improvements including those related to de-duplication.
Windows Server 2012 R2 continues to build on Microsoft’s recent cloud integration focus, and should offer a unified experience no matter where your servers are (on premise, public cloud, private cloud, etc).
I’m definitely looking forward to this update, and will be checking out the preview early next week.
So… I finally had a chance to finish reading the latest full edition of Microsoft’s Security Intelligence Report.
There’s a lot of really good info in the report. The executive summary also does a good job of highlighting key points. That said, I had a couple of observations of my own that others might find interesting.
Overall, the data led me to conclude that people who keep their operating systems up to date – whether we are talking versions or patches/service-packs – are more likely to pay attention to other aspects of security, such as malware protection.
While this isn’t necessarily an unexpected conclusion, it’s good to see the charts and stats that lend support to it. This might help us convince both consumers and corporations that this is important!
Operating System Anomalies
I found it interesting that the percentage of systems with up-to-date anti-malware solutions was found to be higher on the x64 editions of Windows than on their 32-bit counterparts. Based on anecdotal evidence, I would have expected that more people running x64 Windows would have elected to forgo malware protection software.
Another puzzler for me was why 64-bit Vista was worse than 32-bit Vista for both protected and unprotected systems. I’d love to hear the explanation for that one, as I can’t imagine any reason why that should be.
The final interesting discrepancy (again with Vista) is that the Vista SP2 numbers for both 32-bit and 64-bit editions are better than the corresponding Windows 7 RTM numbers. I can think of no good reason why this should be. The most careful users tend to be the ones who keep up with the latest OS – that has been my experience, and that’s what the overall data seems to suggest for every other operating system reported. Weird.
Here’s an excerpt from the report:
The RTM version of Windows 7, which had the highest percentage of unprotected computers of any platform (shown in Figure 4), also displayed the highest infection rates for unprotected computers, with a CCM of 20.4 for the 32-bit edition and 12.5 for the 64-bit edition. This correlation suggests that a larger population of unprotected users within a platform creates an attractive target for attackers.
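For anyone unfamiliar with the metric: CCM (computers cleaned per mille) is the number of computers cleaned for every 1,000 executions of the Malicious Software Removal Tool. A quick back-of-the-envelope check makes the quoted figure concrete; the raw counts below are illustrative, not from the report:

```python
def ccm(computers_cleaned: int, msrt_executions: int) -> float:
    """Computers cleaned per mille: cleaned machines per 1,000 MSRT runs."""
    return 1000.0 * computers_cleaned / msrt_executions

# A CCM of 20.4 means roughly 20 of every 1,000 scanned machines were cleaned.
print(ccm(2040, 100_000))
```

So the 32-bit Windows 7 RTM figure (20.4) versus the 64-bit figure (12.5) means the unprotected 32-bit population was cleaned at well over one and a half times the rate of the 64-bit one.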
This has been argued for some time, particularly during the epic OS wars of the past few decades. Yes, even though it is not the complete answer, there is definitely some truth to the idea that the size of a market affects the size of the opportunity for infection, and thus has a direct impact on the amount of malware that is created.
Just look at the mobile market, which sports a different market-share ranking than the desktop: there, too, the size of the ecosystem, not the underlying OS, is the most significant indicator of the amount of available malware.
No, it’s not the only factor, of course, but it’s clearly a very significant factor.
Given that unprotected systems/users in Japan fared better than the worldwide average for all protected systems/users, I wonder if there are additional regional, geographic, cultural or socio-economic factors that contribute to how safe or at-risk any particular group of computer users will be?
It would be interesting to determine what the discrepancy was (if any) between the average number of installed applications on infected and unprotected systems vs. that found on protected and uninfected systems. I’m certain that we can learn something from that as well.
In general, it seems to me that people who are security minded will keep up with patch management and employ other good, safe computing practices, including the installation of anti-malware solutions, whereas those who are not so security minded are likely to engage in much riskier behaviors which include going to risky sites, not using malware protection, etc.
That’s my first pass… If anything else stands out over the next few weeks, I’ll follow up with another post.
If there is one lesson that technologists need to understand in order to be successful, it’s that business is ultimately more about people than about process or technology. At the end of the day, how people think, behave and operate will have the greatest influence on the success of any organization.
With that said, it has been my observation that every single business operates in two contexts or modes. For now, we’ll call them Mode 1 and Mode 2, where Mode 1 is the normal or typical mode of operation, and Mode 2 is the mode of operation in or around the timeframe of “an incident.”
From a technology perspective, the nuances of Mode 1 and Mode 2 are somewhat different depending on whether we are discussing general technology management or information security management. For general technology management, the issues tend to be related to the following:
- Redundancy (system, application, network or site)
- Disaster Recovery and Business Continuity
For information security management, the items in question tend to be related to the following:
- Access Control
- Segregation of Duties
- Logging and Monitoring
- Operational Risk of any kind
Remember, the following is true of every business…
In Mode 1, or normal mode, the business moves along as fast and as seamlessly as possible. This is the mode of getting-things-done, and the goals are to increase business, improve revenue, and get things out of the door. Under regular circumstances, this mode is supported, condoned and many times even sponsored by the organization’s senior management team.
Mode 2 is the mode that gets turned on immediately after "an incident." An incident can be anything related to a system crash or failure, a broader outage, a security breach, or anything that requires PR or other customer communication. It can be anything that affects our IT Operations or Information Security considerations listed earlier. The severity of the issue or incident will dictate how long Mode 2 remains in effect.
As soon as it is recognized that something highly undesirable has occurred, the senior management team – or its duly elected representative – leaps into Mode 2, and tries to drag the entire business with it.
Mode 2 is where security and high availability are really and truly taken seriously.
Generally speaking, most IT operations teams try to operate in Mode 2 as often as possible, but over time, they get worn down to the point where they hang out in a sort of Mode 1.5 state. Information Security teams, however, are motivated by more paranoia, and almost always remain in and around Mode 2. (This tends to annoy other people, who are likely to suggest that the InfoSec folks lack people skills or don’t understand how business needs to operate.)
Despite this general grumpiness about the relentless Mode 2 focus of the InfoSec team, as soon as there is a security incident – or even a near miss – the senior team jumps into a Mode 2 context, and starts trying to find out why the business hasn’t been in that mode the whole time!
Nothing is more frustrating than being blamed for something happening that you saw coming but were not allowed to reasonably address, when the blame is coming from the very people who could have facilitated the solutions.
Some points about corporate mode switching:
- The larger the organization, the harder it is for the senior team to get everyone out of their normal mode and into Mode 2 during an emergency.
- After a while, no one is fooled by the switches between Mode 1 and Mode 2 – not the employees and not the customers.
- Systems engineers and administrators who have grown weary of the futility of trying to do the right thing ultimately stop trying, and just do whatever they can get away with. This increases the number of incidents, but they no longer feel the torture of the inevitable blame.
- We should be very, very afraid when Information Security professionals get worn down by the same futility, because the stakes are higher.
While it is generally accepted that people are the weak link in any security model (or any operational technology model, for that matter), it is rarely recognized that the low-level employees are not the biggest problem. The employees at the bottom of the org chart can only pose a risk to the operational or security posture of the typical business if the folks at the top of the org chart allow them to, through the corporate culture that is created, nurtured and enforced.
Still think this is not a regular situation in the corporate world? Then take some time to look at the next set of breach notifications or outage notifications, and see if you can identify organizations that have temporarily escalated to Mode 2.
And, trust me, you’ll have plenty of opportunities to look at some high-profile outage and security breach notifications in 2013…
I finally got a chance to deploy a Meraki MR12 wireless access point.
These are some sweet devices. I wasn’t that happy with the PoE brick that it comes with for power, but getting it set up has been quite pleasant.
As an enterprise device, it supports all sorts of options for authentication, including RADIUS and LDAP.
I still have to get the SNMP monitoring configured, though I was able to get MAC-based authentication via RADIUS configured without much stress.
It is a fully cloud managed solution, and the management is quite robust. For technologists who are interested, Meraki has provided several ways for you to get your hands on one.
I would advise you to take advantage of this offer while it lasts. I’m definitely going to take a look at some of their other offerings over the next couple of months.
A holistic approach to information security needs to address a corporate strategy for buying or building solutions. Such a strategy will have an impact on how a company looks at staffing and technology investments.
There are two basic ways to look at major investments in information technology and information security: you can buy or you can build.
Option A: The BUY model
In this model, an organization selects industry standard tools and technology, and aims to hire above-average to guru-type employees who will integrate the technology into the corporate environment. The staff is reasonably interchangeable in this model, although technology costs are on the higher end for implementation. Ongoing maintenance costs are average.
Option B: The BUILD model
In this model, an organization hires the best and brightest, and uses software and/or hardware components to build custom solutions for the organization. This affords the greatest flexibility of tools, but requires higher staffing costs, and support and maintenance are tied to the staff for the full lifecycle of the solution. Staffing changes are a bit more traumatic to the organization, and knowledge transfer is more important than in the BUY model.
Both approaches have merit, but they impact the company’s cost structure in different ways. Option A puts more emphasis on purchasing the right tools. Option B puts more emphasis on hiring strong technologists who will build the right tools and frameworks for the organization. Either way, you always need good, dedicated people, but the specific skill sets required of staff will differ between the BUY model and the BUILD model.
There is no *best* option, especially when you include factors like availability of skills, time to hire, competition from other companies, etc. Not only is it valid to use either option, but it is valid to use a combination of both options. A major constraint is the existing staff, and this often heavily influences the direction that an organization ultimately takes. Once the company has settled upon the model that will define the general direction of their investment strategy, they need to do two other things:
1. Identify the security risks facing the organization and prioritize their remediation.
2. Start simple with every investment, then evaluate for possible expansion after it has been implemented.
Identify Risks & Prioritize
It is almost impossible to resolve issues that are not known to exist, and it is extremely difficult to set priorities or invest time and money wisely without that knowledge, so risk identification must be performed. It must also be updated regularly. A stale risk profile is in many ways much worse than a non-existent one, as it can lead to a false sense of security.
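One way to keep a risk profile concrete and easy to refresh is a simple scored register that gets re-sorted whenever an assessment changes. This is a minimal sketch, not a substitute for a real risk framework; the field names, scoring scale, and example risks are all illustrative:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        # Classic likelihood-times-impact scoring
        return self.likelihood * self.impact

def prioritize(register: list[Risk]) -> list[Risk]:
    """Highest-scoring risks first: these get remediation attention first."""
    return sorted(register, key=lambda r: r.score, reverse=True)

register = [
    Risk("Unpatched public web server", likelihood=4, impact=5),
    Risk("Stale firewall rules", likelihood=3, impact=3),
    Risk("No malware protection on laptops", likelihood=4, impact=4),
]

for r in prioritize(register):
    print(f"{r.score:2d}  {r.name}")
```

Even something this small forces the conversation about relative likelihood and impact, and re-running it after each review keeps the profile from going stale.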
Round One: Simple Solutions
Once risks have been identified and prioritized, every attempt should be made to implement a solution that balances simplicity and thoroughness. Whether the solution is built or purchased, the goal is to get something useful in place that addresses the key risks identified, with the expectation that it may be replaced or expanded within 12-15 months.
Ideally, the initial deployment should take no more than 4-6 weeks, and should cover a minimum of 80% of the initially understood needs of the organization for the risk it is intended to address. Getting something in place quickly and cheaply will be of immediate benefit to the organization, and it will help expose which features are really needed by the organization vs. those which were only nice-to-haves.
For organizations employing the BUY model, this approach makes it much easier to evaluate the merits of the vendor feature lists that will be vying for the corporate budget.
For organizations employing the BUILD model, they can quickly get to work on another “Round One” project, and start putting the necessary team, budget and executive support in place for any “Round Two” projects which have been identified.
For SMBs that have a limited hierarchy and are not used to any sort of formality in solution procurement, embracing this strategy can be a very effective way to add some needed process maturity without becoming overly bureaucratic. The value of a size-appropriate cost/benefit analysis, and of introducing some process discipline into the planning, procurement and deployment methodology, cannot be overstated for SMBs.
Remember: Enhance the security posture of your organization by assessing the needs and make-up of your business in order to select and implement the appropriate investment strategy. Then, you can begin to mitigate your technology-focused business risks quickly and cost-effectively.
You can think of it as Agile Security, if you really want a buzzword to work with, but it is just as effective even without a cute name. Don’t delay – get started today.