So… I finally had a chance to finish reading the full edition of Microsoft’s latest Security Intelligence Report.
There’s a lot of really good info in the report. The executive summary also does a good job of highlighting key points. That said, I had a couple of observations of my own that others might find interesting.
Overall, the data led me to conclude that people who keep their operating systems up to date – whether we are talking versions or patches/service-packs – are more likely to pay attention to other aspects of security, such as malware protection.
While this isn’t necessarily an unexpected conclusion, it’s good to see the charts and stats which lend support to it. This might help us convince both consumers and corporations that this is important!
Operating System Anomalies
I found it interesting that the percentage of systems with up-to-date anti-malware solutions was found to be higher on the x64 editions of Windows than on their 32-bit counterparts. Based on anecdotal evidence, I would have expected that more people running x64 Windows would have elected to forgo malware protection software.
Another puzzler for me was why 64-bit Vista was worse than 32-bit Vista for both protected and unprotected systems. I’d love to hear the explanation for that one, as I can’t imagine any reason why that should be.
The final interesting discrepancy (again with Vista) is that the Vista SP2 numbers for both 32-bit and 64-bit editions are better than the corresponding Windows 7 RTM numbers. I can’t see any good reason why this should be. The most careful users tend to be the ones who keep up with the latest OS – that has been my experience, and that’s what the overall data seems to suggest for every other operating system reported. Weird.
Here’s an excerpt from the report:
The RTM version of Windows 7, which had the highest percentage of unprotected computers of any platform (shown in Figure 4), also displayed the highest infection rates for unprotected computers, with a CCM of 20.4 for the 32-bit edition and 12.5 for the 64-bit edition. This correlation suggests that a larger population of unprotected users within a platform creates an attractive target for attackers.
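For anyone unfamiliar with the metric: the report defines CCM as “computers cleaned per mille,” i.e., the number of infected computers cleaned for every 1,000 unique computers scanned by the Malicious Software Removal Tool. A quick sketch of the arithmetic (the input numbers below are hypothetical, chosen only to reproduce the scale of the figures quoted):

```python
# CCM ("computers cleaned per mille"): infected computers cleaned per
# 1,000 unique computers scanned by the Malicious Software Removal Tool.
def ccm(computers_cleaned: float, computers_scanned: float) -> float:
    return computers_cleaned / computers_scanned * 1000

# Hypothetical inputs: 20,400 machines cleaned out of 1,000,000 scanned.
print(round(ccm(20_400, 1_000_000), 1))  # prints 20.4
```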
This has been argued for some time, particularly during the epic OS wars of the past few decades. Even though it is not the complete answer, there is definitely some truth to the idea that the size of the market affects the size of the opportunity for infection, and thus directly influences the amount of malware that is created.
Just look at the mobile market, which sports a different market-share ranking than the desktop, and we can see that the size of the ecosystem, not the underlying OS, is the most significant indicator of the amount of available malware.
No, it’s not the only factor, of course, but it’s clearly a very significant factor.
Given that unprotected systems/users in Japan fared better than the worldwide average for all protected systems/users, I wonder if there are additional regional, geographic, cultural or socio-economic factors that contribute to how safe or at-risk any particular group of computer users will be?
It would be interesting to determine what the discrepancy was (if any) between the average number of installed applications on infected and unprotected systems vs. that found on protected and uninfected systems. I’m certain that we can learn something from that as well.
In general, it seems to me that people who are security minded will keep up with patch management and employ other good, safe computing practices, including the installation of anti-malware solutions, whereas those who are not so security minded are likely to engage in much riskier behaviors, such as visiting dubious sites, forgoing malware protection, etc.
That’s my first pass… If anything else stands out over the next few weeks, I’ll follow up with another post.
If there is one lesson that technologists need to understand in order to be successful, it’s that business is ultimately more about people than about process or technology. At the end of the day, how people think, behave and operate will have the greatest influence on the success of any organization.
With that said, it has been my observation that every single business operates in two contexts or modes. For now, we’ll call them Mode 1 and Mode 2, where Mode 1 is the normal or typical mode of operation, and Mode 2 is the mode of operation in or around the timeframe of “an incident.”
From a technology perspective, the nuances of Mode 1 and Mode 2 are somewhat different depending on whether we are discussing general technology management or information security management. For general technology management, the issues tend to be related to the following:
- Redundancy (system, application, network or site)
- Disaster Recovery and Business Continuity
For information security management, the items in question tend to be related to the following:
- Access Control
- Segregation of Duties
- Logging and Monitoring
- Operational Risk of any kind
Remember, the following is true of every business…
In Mode 1, or normal mode, the business moves along as fast and as seamlessly as possible. This is the mode of getting-things-done, and the goals are to increase business, improve revenue, and get things out of the door. Under regular circumstances, this mode is supported, condoned and many times even sponsored by the organization’s senior management team.
Mode 2 is the mode that gets turned on immediately after “an incident.” An incident can be anything related to a system crash or failure, a broader outage, a security breach, or anything that requires PR or other customer communication. It can be anything that affects the IT operations or information security considerations listed earlier. The severity of the issue or incident will dictate how long Mode 2 remains in effect.
As soon as it is recognized that something highly undesirable has occurred, the senior management team – or its duly elected representative – leaps into Mode 2, and tries to drag the entire business with it.
Mode 2 is where security and high availability are really and truly taken seriously.
Generally speaking, most IT operations teams try to operate in Mode 2 as often as possible, but over time, they get worn down to the point where they hang out in a sort of Mode 1.5 state. Information Security teams, however, are motivated by more paranoia, and almost always remain in or around Mode 2. (This tends to annoy other people, who are likely to suggest that the InfoSec folks lack people skills or don’t understand how business needs to operate.)
Despite this general grumpiness about the relentless Mode 2 focus of the InfoSec team, as soon as there is a security incident – or even a near miss – the senior team jumps into a Mode 2 context, and starts trying to find out why the business hasn’t been in that mode the whole time!
Nothing is more frustrating than being blamed for something happening that you saw coming but were not allowed to reasonably address, when the blame is coming from the very people who could have facilitated the solutions.
Some points about corporate mode switching:
- The larger the organization, the harder it is for the senior team to get everyone out of their normal mode and into Mode 2 during an emergency.
- After a while, no one is fooled by the switches between Mode 1 and Mode 2 – not the employees and not the customers.
- Systems engineers and administrators who have grown weary of the futility of trying to do the right thing ultimately stop trying, and just do whatever they can get away with. This increases the number of incidents, but they no longer feel the torture of the inevitable blame.
- We should be very, very afraid when Information Security professionals get worn down by the same futility, because the stakes are higher.
While it is generally accepted that people are the weak link in any security model (or any operational technology model, for that matter), it is rarely recognized that the low-level employees are not the biggest problem. The employees on the bottom part of the org chart can only pose a risk to the operational or security posture of the typical business if the folks in the upper portion of the org chart allow them to, through the corporate culture that is created, nurtured and enforced.
Still think this is not a common situation in the corporate world? Then take some time to look at the next set of breach or outage notifications, and see if you can identify organizations that have temporarily escalated to Mode 2.
And, trust me, you’ll have plenty of opportunities to look at high-profile outage and security breach notifications in 2013…
I finally got a chance to deploy a Meraki MR12 wireless access point.
These are some sweet devices. I wasn’t thrilled with the PoE brick that it comes with for power, but getting it set up has been quite pleasant.
As an enterprise device, it supports all sorts of options for authentication, including RADIUS and LDAP.
I was able to get MAC-based authentication via RADIUS configured without much stress, though I still have to get SNMP monitoring set up.
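For anyone curious what MAC-based RADIUS authentication looks like on the back end: the access point typically submits the client’s MAC address as both the username and the password. A minimal sketch of a FreeRADIUS `users` file entry follows; the MAC address and VLAN ID are placeholders, and the exact MAC formatting (delimiters, case) depends on what the AP sends.

```
# Illustrative FreeRADIUS users-file entry; MAC and VLAN values are placeholders.
001122334455    Cleartext-Password := "001122334455"
                Tunnel-Type = VLAN,
                Tunnel-Medium-Type = IEEE-802,
                Tunnel-Private-Group-Id = "20"
```

The reply attributes are optional; they are only needed if you want the RADIUS server to assign the client to a specific VLAN on successful authentication.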
It is a fully cloud managed solution, and the management is quite robust. For technologists who are interested, Meraki has provided several ways for you to get your hands on one.
I would advise you to take advantage of this offer while it lasts. I’m definitely going to take a look at some of their other offerings over the next couple of months.
A holistic approach to information security needs to address a corporate strategy for buying or building solutions. Such a strategy will have an impact on how a company looks at staffing and technology investments.
There are two basic ways to look at major investments in information technology and information security: you can buy or you can build.
Option A: The BUY model
In this model, an organization selects industry standard tools and technology, and aims to hire above-average to guru-type employees who will integrate the technology into the corporate environment. The staff is reasonably interchangeable in this model, although technology costs are on the higher end for implementation. Ongoing maintenance costs are average.
Option B: The BUILD model
In this model, an organization hires the best and brightest, and uses software and/or hardware components to build custom solutions for the organization. This affords the greatest flexibility of tools, but requires higher staffing costs, and support and maintenance are tied to the staff for the full lifecycle of the solution. Staffing changes are a bit more traumatic to the organization, and knowledge transfer is more important than in the BUY model.
Both approaches have merit, but they impact the company’s cost structure in different ways. Option A puts more emphasis on purchasing the right tools. Option B puts more emphasis on hiring strong technologists who will build the right tools and frameworks for the organization. Either way, you always need to have good, dedicated people, but the specific skill sets required in our staff will differ for the BUY model vs. the BUILD model.
There is no *best* option, especially when you include factors like availability of skills, time to hire, competition from other companies, etc. Not only is it valid to use either option, but it is valid to use a combination of both options. A major constraint is the existing staff, and this often heavily influences the direction that an organization ultimately takes. Once the company has settled upon the model that will define the general direction of their investment strategy, they need to do two other things:
1. Identify the security risks facing the organization and prioritize their remediation.
2. Start simple with every investment, then evaluate for possible expansion after it has been implemented.
Identify Risks & Prioritize
It is almost impossible to resolve issues that are not known to exist, and it is extremely difficult to set priorities and make wise choices about investing time or money without that knowledge, so risk identification must be performed, and it must be updated regularly. A stale risk profile is in many ways much worse than a non-existent one, as it can lead to a false sense of security.
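The identify-and-prioritize step doesn’t have to be elaborate. A minimal sketch of a risk register, using the common likelihood-times-impact scoring convention (the risks and scores below are illustrative placeholders, not recommendations):

```python
# A minimal risk-register sketch: score = likelihood x impact, highest first.
# The risk names, likelihoods, and impacts are illustrative placeholders.
risks = [
    {"name": "Unpatched web server", "likelihood": 4, "impact": 5},
    {"name": "Shared admin passwords", "likelihood": 3, "impact": 4},
    {"name": "Stale firewall rules", "likelihood": 2, "impact": 3},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Remediate in descending order of score; re-run this whenever the
# risk profile is refreshed so priorities never go stale.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>3}  {r["name"]}')
```

Even a spreadsheet version of this gives the organization a defensible ordering for the “Round One” investments described below.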
Round One: Simple Solutions
Once risks have been identified and prioritized, every attempt should be made to implement a solution that balances simplicity and thoroughness. Whether the solution is built or purchased, the goal is to get something useful in place that addresses the key risks identified, with the expectation that it may be replaced or expanded within 12-15 months.
Ideally, the initial deployment should take no more than 4-6 weeks, and should cover a minimum of 80% of the initially understood needs of the organization for the risk it is intended to address. Getting something in place quickly and cheaply will be of immediate benefit to the organization, and it will help expose which features are really needed by the organization vs. those which were only nice-to-haves.
For organizations employing the BUY model, this makes it much easier to evaluate the merits of the vendor feature lists that will be vying for the corporate budget.
Organizations employing the BUILD model can quickly get to work on another “Round One” project, and start putting the necessary team, budget and executive support in place for any “Round Two” projects that have been identified.
For SMBs that have a limited hierarchy and are not used to any formality in solution procurement, embracing this strategy can be a very effective way to add some needed process maturity without becoming overly bureaucratic. The value of a size-appropriate cost/benefit analysis, and of introducing some process discipline into the planning, procurement and deployment methodology, cannot be overstated for SMBs.
Remember: Enhance the security posture of your organization by assessing the needs and make-up of your business in order to select and implement the appropriate investment strategy. Then, you can begin to mitigate your technology-focused business risks quickly and cost-effectively.
You can think of it as Agile Security, if you really want a buzzword to work with, but it is just as effective even without a cute name. Don’t delay – get started today.
In recent years, it has become popular sport to blame information technology (IT) departments and IT leaders for failures – real or imagined – which adversely impact business operations. Even some technology trade journals seem unable to get through a single issue without finding some point upon which to lambast a CTO or CIO for not “stepping up to the plate”, or adding value, or some other business sin.
This trend was clearly seen in two recent articles on InformationWeek (6 Ways IT Still Fails The Business and 5 Ways Business Still Fails IT), the first of which generated a firestorm of responses. Sadly, even with all the worthy rebuttals, there were key points that I felt went unstated.
Of the many “IT is a failure” complaints which are regularly made, I propose to challenge two of the most popular:
- The IT department is too focused on the technology and not on business initiatives or innovation.
- The IT department is just not effective for the business (not skilled, not motivated, too costly, too slow, etc.)
Technology for Technology’s Sake
If you take a good look at where organizations are spending a lot of time and effort today, it is likely on “Big Data,” Social Media, Mobile Computing, and Cloud Computing. Now, take a moment and ask yourself: was it IT that asked for spending on these initiatives, or did the requests originate elsewhere in the organization?
Who was it that decided that it would be a good idea to allow consumer devices onto the corporate network, in the first place? Given the negative feedback from the vast majority of my friends and colleagues, I’ll bet that it wasn’t the IT department.
Who was it that determined that social media was vital to the business? I’m guessing that the answer is not IT.
These are just a few of the most recent examples of embracing technology for technology’s sake. And they make good examples, because very few organizations have taken the time to analyze and document the anticipated benefit these technologies are supposed to bring them. Most of these initiatives were not pursued because of a compelling cost-benefit analysis, but because a senior executive read something on an airplane or heard something on the golf course. Yet it is the IT team that gets blamed for a myopic focus on technology.
There may have been a time where IT was focused on implementing technology for its inherent coolness, but that hasn’t been a problem for at least a decade. Virtualization, VoIP, Blade Servers, Gigabit (and 10Gbit) networking have all been implemented for real benefits, including business flexibility, better cost management, improved security, and streamlined operations. Of course, those investments required brutal cost-benefit analysis and substantial vendor negotiations, not just a “make it so” mandate from one or two executives.
Frankly, the truth is that most IT departments are spending too little time focused on the technologies they are being asked to rapidly evaluate and deploy. Due to the sheer number of projects that must be managed simultaneously, most IT departments have little time to spend in proper planning, much less proper deployment. As I mentioned a little over a year ago, technology is getting more complicated, not less complicated, and this means that more time should be devoted to planning and deploying robust architectures, yet the opposite continues to occur – and IT gets all the blame.
Organizations don’t want to make the necessary investments in security or long-term technology operations, but they just know that giving everyone an iPad is going to add revenue to the business in some magical way.
IT Is Just Not Effective for the Business
This complaint is the one that raises my eyebrow the most. If IT is really worthless, then why do organizations put up with them? Who is really to blame for an organization having a lame and ineffective IT team? Do the people in IT hire themselves? Are they the ones that set their own job descriptions, and then show up and start paying themselves? Does an organization just wake up one day and find that it has an IT staff which descended from the sky and took up residence, but cannot be removed?
It is as ridiculous for an organization to complain about its inadequate and ineffective IT team or IT leader, as it is for a person to complain that his or her arm is not doing what the rest of the body wants or needs. Short-term failure can be blamed on a person or a team, but long-term failure of any department is ultimately a reflection of the senior leadership of the organization. This is why, in sports, coaches and general managers get fired for extended team failure – even to a greater extent than players get traded. (Yes, salary dynamics are a factor, of course, and not every firing is a fair or accurate one, but those leaders are paid big bucks to make things work, and they pay the price when they cannot).
Most organizations end up with the IT department that they deserve. Companies are either unwilling to pay for what they need, or they fail to seek the right skill sets, or they fail to cultivate an environment where growth, training and mentoring are readily available. Many organizations fail to provide sufficient time or resources to accomplish things properly, then they wonder why they can’t hold on to people who are interested in doing things properly.
In my experience, organizations that have a culture of good planning and good communication tend to have good alignment between all their departments. Similarly, companies with very fluid and shifting business “plans,” which make adjustments by the seat of their pants – with little in the way of good communication – tend to have a lot of conflict between departments and department heads.
Lots of literature over the past decade has focused on telling IT just how it should behave to be more successful in the business. Some of that has, admittedly, been useful. Yet if even a third of that literature had sought to teach business leaders about their role and responsibility in building good IT teams, there would have been even more significant gains for organizations. Organizations that don’t want IT to be a cost center should stop treating IT like a cost center. Organizations that want IT to lead innovation should create a culture where IT can lead or contribute significantly to innovation.
In sports, owners have learned to build their teams around the skill sets of the players that they have, or go out and find the players that they want, so they can build the type of team they want. It is quite silly to deliberately hire people who can only build cars, then call them inadequate and delinquent because you really want to build your business around producing airplanes.
Look at how many super hyped IT outsourcing deals fail – because the problem was NOT necessarily with IT, but with the business in general. Outsourcing what is not well managed or understood does not suddenly make it better managed or understood.
Organizations hurt themselves when:
…they are unwilling to take the time to understand any of the implications of the technologies they plan to deploy.
…they are unwilling to listen to the people they have hired to manage the technology they use.
…they don’t take the time to integrate all of their teams and resources into every business initiative in a holistic fashion.
…they think that blaming IT actually solves a problem or makes them look superior in any way.
If an organization’s IT department and IT leader are not up to par, maybe senior management should spend some time determining if they have identified what par is, and if they have communicated that adequately to anyone else, like HR or IT itself.
Business leaders: It’s time for you to step up to the plate and get involved in making the organization you say you want. Blaming others for situations and outcomes which are ultimately yours to manage is nothing less than an acknowledgement of your own poor leadership.
IT leaders: It’s time for you to take control of your career, and provide a better career path for your team members by taking advantage of your unique placement within your organization. From your vantage point, you can see how everything that the business is doing ties together, and you can anticipate ways to add value and reduce risk. Don’t allow this competitive advantage to be wasted. Remember: accountability without authority (or suitable influence) is simply a fool’s errand, and you have no time for that.
Industry Pundits: Yes, it makes you more popular with businesses to bash IT mercilessly, but let’s be real: IT failure is senior executive failure. Maybe you should take the time to tell them that on occasion. It will be better for everyone involved.