One of the cool things about the Android platform is the number of tools available for really extensive automation.
So far, I have used the following apps:
I started out with Tasker, but jumped to Llama, as it was easier to accomplish what I wanted at the time – location-based changes on my phone and tablet. When I upgraded to a MotoDroid, I found that the native SmartActions application handled the functions I was already using, plus some new ones I wanted, even better than Llama did.
Several software updates later, when Motorola totally borked the SmartActions application, I took a look at MacroDroid, and loved it. I’ve used it longer than any of the others at this point.
Tasker is still a more powerful tool, but MacroDroid is easier to implement, and that counts for a lot to me. Unfortunately, it’s not perfect. It doesn’t really handle IF/THEN/ELSE type scenarios, making it necessary to create multiple profiles/tasks in order to manage multiple outcomes.
For example, say you want to change your volume on certain days, at certain times, for certain durations. This is easy enough to do in almost any of the automation applications. The problem is that you need to set up another configuration to put things back to normal *outside* that window. Tasker handles this automatically, for at least some types of configurations, but many of the other tools don’t do this at all.
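To make the difference concrete, here is a minimal Python sketch of the idea (the times, levels, and function names are all hypothetical, just for illustration – none of these apps actually exposes an API like this). With real IF/THEN/ELSE support, one rule covers both sides of the window; without it, you need two separate profiles, each with its own trigger:

```python
from datetime import datetime, time

# Hypothetical overnight quiet window: 22:00 to 07:00
QUIET_START, QUIET_END = time(22, 0), time(7, 0)

def in_quiet_window(now: datetime) -> bool:
    """True if 'now' falls inside the overnight quiet window."""
    t = now.time()
    return t >= QUIET_START or t < QUIET_END

# With IF/THEN/ELSE, a single rule handles both cases:
def volume_rule(now: datetime) -> int:
    if in_quiet_window(now):
        return 1   # quiet volume
    else:
        return 7   # normal volume

# Without it, two separate profiles are needed, one per trigger:
def quiet_profile(now: datetime):      # trigger fires at window start
    return 1 if in_quiet_window(now) else None

def restore_profile(now: datetime):    # trigger fires at window end
    return None if in_quiet_window(now) else 7
```

The single-rule version can never get out of sync; with two profiles, if one trigger is missed (say the phone was off at the boundary), the volume stays wrong until the next cycle.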
That’s a pain as your automation needs grow.
Looking around, I found AutoMagic, and one of its biggest assets is that it is easier to implement than the others. I can agree with that. The flowchart UI makes it easier to see and build your configuration.
Too bad it doesn’t have all the functions that some of the other tools have. If Tasker or MacroDroid or Llama adopted a flow-based interface tomorrow, they would totally own this market, because they have the power that most people need.
There needs to be a focus by the developers on streamlining choices as much as possible when making rules. For example, if I make a rule for setting the volume, why does AutoMagic make me pick each volume type as a separate action? (Ringer, Phone Call, Notification, etc). Really? Same for Tasker: Select Audio Action forces me to pick silent mode, or change all the settings individually. What about making all of them low? (like, say, to level 1). Sigh.
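What I’m asking for is a single “set everything” action. Conceptually it’s trivial – here’s a sketch in Python (the stream names are hypothetical stand-ins, not any tool’s real API):

```python
# Hypothetical stream names; today's tools expose each of these as a
# separate action you have to configure one by one.
STREAMS = ["ringer", "call", "notification", "media", "alarm", "system"]

def set_all_volumes(level: int) -> dict:
    """One action that applies the same level to every stream at once."""
    return {stream: level for stream in STREAMS}
```

One action, one level, every stream – instead of six nearly identical actions per rule.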
Likewise, why can’t I pick more than one trigger for a task in MacroDroid? Or, if I want a trigger to be based upon what network I am connected to, why can’t I pick multiple SSIDs from a list? It’s kind of annoying to have to make more than one rule to define being in a location that is determined by SSID, just because the location has more than one SSID that could be used. (Or, for that matter, I might want to treat two different places the very same way for some reason.)
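The SSID case is the same story: the trigger should match a *set* of networks, not a single one. A quick sketch in Python (the SSIDs here are made up for illustration):

```python
# Hypothetical SSIDs; the point is that one rule should cover every
# network that means "I am at home."
HOME_SSIDS = {"HomeNet", "HomeNet-5G", "GuestNet"}

def at_home(current_ssid: str) -> bool:
    """A single trigger matching any SSID in the 'home' set."""
    return current_ssid in HOME_SSIDS
```

One membership test replaces the two or three duplicate rules the current tools force you to maintain.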
It would be nice if it was easy to clone a task, too.
At the end of the day, I’m going to go back to Tasker. I’ve already paid for it, so that’s better than paying for something else (again), only to have a different limitation.
These Tasker tutorials should help: http://www.pocketables.com/2013/03/overview-of-pocketables-tasker-articles.html
I was just reading the ComputerWorld article on some upcoming features of Windows Server 2012 R2, and I see some really intriguing improvements:
In Windows Server 2012 R2, the PowerShell command-line scripting language introduces a feature called Desired State Configuration, or DSC. It uses a declarative syntax to define a configuration for a server and then uses PowerShell remoting to apply that desired configuration to a group of servers all at once.
I’ve been waiting to be able to do something like this for years!
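I haven’t touched DSC yet, but the declarative model it describes is easy to sketch conceptually. Here’s an illustration in Python – the server and resource names are invented, and real DSC uses PowerShell syntax and PowerShell remoting, not anything like this – showing the core idea: you declare the desired state once, and an engine computes what each server needs to change to converge on it:

```python
# Conceptual sketch of declarative configuration convergence.
# Resource and server names are hypothetical, for illustration only.

desired = {
    "WebServer": "Present",     # a feature that must exist
    "TelnetClient": "Absent",   # a feature that must not exist
}

def converge(actual: dict, desired: dict) -> dict:
    """Return only the changes needed to reach the desired state."""
    return {
        name: state
        for name, state in desired.items()
        if actual.get(name) != state
    }

# The same desired state applied to a group of servers at once:
servers = {
    "srv01": {"WebServer": "Present", "TelnetClient": "Present"},
    "srv02": {"WebServer": "Absent"},
}
changes = {host: converge(state, desired) for host, state in servers.items()}
```

The appeal is exactly what the excerpt describes: you never script the *steps* per server; you state the end state and let the engine work out the per-server delta.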
The Hyper-V advances are great too, especially since it looks like Live Migrations between 2012 RTM and 2012 R2 will be supported, and that RDP will be available to even non-networked VMs. And, finally, we will be able to resize VHDX drives on the fly! Here’s another link that provides more details about upcoming Hyper-V features (in a dreaded slideshow).
Lastly, for now, are the storage improvements including those related to de-duplication.
Windows Server 2012 R2 continues to build on Microsoft’s recent cloud integration focus, and should offer a unified experience no matter where your servers are (on premise, public cloud, private cloud, etc).
I’m definitely looking forward to this update, and will be checking out the preview early next week.
So… I finally had a chance to finish reading the full edition of Microsoft’s latest Security Intelligence Report.
There’s a lot of really good info in the report. The executive summary also does a good job of highlighting key points. That said, I had a couple of observations of my own that others might find interesting.
Overall, the data led me to conclude that people who keep their operating systems up to date – whether we are talking versions or patches/service-packs – are more likely to pay attention to other aspects of security, such as malware protection.
While this isn’t necessarily an unexpected conclusion, it’s good to see the charts and stats which lend support for it. This might help us to convince both consumers and corporations that this is important!
Operating System Anomalies
I found it interesting that the percentage of systems with up-to-date anti-malware solutions was found to be higher on the x64 editions of Windows than on their 32-bit counterparts. Based on anecdotal evidence, I would have expected that more people running x64 Windows would have elected to forgo malware protection software.
Another puzzler for me was why 64-bit Vista was worse than 32-bit Vista for both protected and unprotected systems. I’d love to hear the explanation for that one, as I can’t imagine any reason why that should be.
The final interesting discrepancy (again with Vista) is that the Vista SP2 numbers for both 32-bit and 64-bit editions are better than the corresponding Windows 7 RTM numbers. I can’t think of a good reason why this should be. The most careful users tend to be the ones who keep up with the latest OS – that has been my experience, and that’s what the overall data seems to suggest for every other operating system reported. Weird.
Here’s an excerpt from the report:
The RTM version of Windows 7, which had the highest percentage of unprotected computers of any platform (shown in Figure 4), also displayed the highest infection rates for unprotected computers, with a CCM of 20.4 for the 32-bit edition and 12.5 for the 64-bit edition. This correlation suggests that a larger population of unprotected users within a platform creates an attractive target for attackers.
This has been argued for some time, particularly during the epic OS wars of the past few decades. Even though it is not the complete answer, there is definitely some truth to the idea that the size of the market impacts the size of the opportunity for infection, and thus has a direct impact on the amount of malware that is created.
Just look at the mobile market, where the market-share rankings differ from the desktop: the size of the ecosystem, not the underlying OS, is the most significant indicator of the amount of available malware.
No, it’s not the only factor, of course, but it’s clearly a very significant factor.
Given that unprotected systems/users in Japan fared better than the worldwide average for all protected systems/users, I wonder whether additional regional, geographic, cultural, or socio-economic factors contribute to how safe or at-risk any particular group of computer users will be.
It would be interesting to determine what the discrepancy was (if any) between the average number of installed applications on infected and unprotected systems vs. that found on protected and uninfected systems. I’m certain that we can learn something from that as well.
In general, it seems to me that people who are security minded will keep up with patch management and employ other good, safe computing practices, including the installation of anti-malware solutions, whereas those who are not so security minded are likely to engage in much riskier behaviors which include going to risky sites, not using malware protection, etc.
That’s my first pass… If anything else stands out over the next few weeks, I’ll follow up with another post.
If there is one lesson that technologists need to understand in order to be successful, it’s that business is ultimately more about people than about process or technology. At the end of the day, how people think, behave and operate will have the greatest influence on the success of any organization.
With that said, it has been my observation that every single business operates in two contexts or modes. For now, we’ll call them Mode 1 and Mode 2, where Mode 1 is the normal or typical mode of operation, and Mode 2 is the mode of operation in or around the timeframe of “an incident.”
From a technology perspective, the nuances of Mode 1 and Mode 2 are somewhat different depending on whether we are discussing general technology management or information security management. For general technology management, the issues tend to be related to the following:
- Redundancy (system, application, network or site)
- Disaster Recovery and Business Continuity
For information security management, the items in question tend to be related to the following:
- Access Control
- Segregation of Duties
- Logging and Monitoring
- Operational Risk of any kind
Remember, the following is true of every business…
In Mode 1, or normal mode, the business moves along as fast and as seamlessly as possible. This is the mode of getting-things-done, and the goals are to increase business, improve revenue, and get things out the door. Under regular circumstances, this mode is supported, condoned and many times even sponsored by the organization’s senior management team.
Mode 2 is the mode that gets turned on immediately after "an incident." An incident can be anything related to a system crash or failure, a broader outage, a security breach, or anything that requires PR or other customer communication. It can be anything that affects our IT Operations or Information Security considerations listed earlier. The severity of the issue or incident will dictate how long Mode 2 remains in effect.
As soon as it is recognized that something highly undesirable has occurred, the senior management team – or its duly elected representative – leaps into Mode 2, and tries to drag the entire business with it.
Mode 2 is where security and high availability are really and truly taken seriously.
Generally speaking, most IT operations teams try to operate in Mode 2 as often as possible, but over time they get worn down to the point where they hang out in a sort of Mode 1.5 state. Information Security teams, however, are motivated by more paranoia, and almost always remain in and around Mode 2. (This tends to annoy other people, who are likely to suggest that the InfoSec folks lack people skills or don’t understand how business needs to operate.)
Despite this general grumpiness about the relentless Mode 2 focus of the InfoSec team, as soon as there is a security incident – or even a near miss – the senior team jumps into a Mode 2 context, and starts trying to find out why the business hasn’t been in that mode the whole time!
Nothing is more frustrating than being blamed for something you saw coming but were not allowed to reasonably address, especially when the blame comes from the very people who could have facilitated the solutions.
Some points about corporate mode switching:
- The larger the organization, the harder it is for the senior team to get everyone out of their normal mode and into Mode 2 during an emergency.
- After a while, no one is fooled by the switches between Mode 1 and Mode 2 – not the employees and not the customers.
- Systems engineers and administrators who have grown weary of the futility of trying to do the right thing ultimately stop trying, and just do whatever they can get away with. This increases the number of incidents, but they no longer feel the torture of the inevitable blame.
- We should be very, very afraid when Information Security professionals get worn down by the same futility, because the stakes are higher.
While it is generally accepted that people are the weak link in any security model (or any operational technology model, for that matter), it is rarely recognized that the low-level employees are not the biggest problem. The employees on the bottom of the org chart can only pose a risk to the operational or security posture of the typical business if the folks in the upper portion of the org chart allow them to, through the corporate culture that is created, nurtured and enforced.
Still think this is not a regular situation in the corporate world? Then take some time to look at the next set of breach notifications or outage notifications, and see if you can identify organizations that have temporarily escalated to Mode 2.
And, trust me, you’ll have plenty of opportunities to look at some high-profile outage and security breach notifications in 2013…
I finally got a chance to deploy a Meraki MR12 wireless access point.
These are some sweet devices. I wasn’t that happy with the PoE brick it comes with for power, but getting it set up has been quite pleasant.
As an enterprise device, it supports all sorts of options for authentication, including RADIUS and LDAP.
I still have to get SNMP monitoring configured, though I was able to get MAC-based authentication via RADIUS working without much stress.
It is a fully cloud managed solution, and the management is quite robust. For technologists who are interested, Meraki has provided several ways for you to get your hands on one.
I would advise you to take advantage of this offer while it lasts. I’m definitely going to take a look at some of their other offerings over the next couple of months.