Few today would dispute that data is money, but people might wonder where the money is in SAM data. To answer that, we need to look at the two layers of data present in today’s SAM tools:
The first layer contains units such as: desktops, laptops, servers, users, applications, usage, virtualization, entitlements, product usage rights and much more. This is foundation data. It doesn’t do much on its own, but when you use it the right way you get …
… a second layer of data. This is a combination of two or more units from the first layer: license compliance, security compliance, legal compliance, unused applications, computers under company standard, virtualization of clients and servers and much more. This is where the money is in SAM data. Let’s look at some examples:
Combine applications with users/computers, match the result against your unique entitlements with product-usage-right-specific algorithms, and suddenly you have license compliance data. Almost all of the top 20 software vendors are expanding their audit departments and/or outsourcing audits to third-party audit firms. Being non-compliant in today’s highly virtualized server IT environments can mean a potential initial license fee of up to 100 times your initial investment.
But being under-licensed is not your only problem. Being over-licensed often makes organizations bleed 10–30% more of their software budget than they actually need. So basically, either you are perfectly compliant (0 spare licenses, 0 missing licenses) or you are risking/bleeding money one way or another. This is where unused application data becomes relevant. You can read more about unused applications and the importance of good application usage data in our previous blog post about Active vs Total Software Usage.
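Stripped to its essentials, this combination of first-layer units can be sketched as a join between installation counts and entitlement counts. The product names and numbers below are purely hypothetical, and a real SAM tool applies product-use-right algorithms on top of a join like this:

```python
# Hypothetical sketch: turning first-layer units (installations,
# entitlements) into second-layer license compliance data.
# Product names and counts are illustrative, not real inventory.

installs = {"AcmeCAD": 130, "AcmeOffice": 480}       # installed copies per product
entitlements = {"AcmeCAD": 100, "AcmeOffice": 520}   # licenses owned per product

# Positive delta = spare licenses (wasted spend);
# negative delta = missing licenses (audit risk).
compliance = {
    product: entitlements.get(product, 0) - installs.get(product, 0)
    for product in set(installs) | set(entitlements)
}

for product, delta in sorted(compliance.items()):
    if delta < 0:
        print(f"{product}: under-licensed by {-delta} (audit risk)")
    elif delta > 0:
        print(f"{product}: over-licensed by {delta} (wasted spend)")
    else:
        print(f"{product}: perfectly compliant")
```

Only a delta of exactly zero means you are neither risking an audit finding nor paying for shelfware.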
Combine computers with data about current OS security updates, the anti-virus situation, installed open source products etc., and suddenly you have security compliance data. The cost of cyber-attacks in 2015 alone was estimated at 400 billion USD, with an average cost of 11 million USD per successful attack. It is not hard to see the value in security-related data.
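The same correlation idea applies here: isolated per-computer facts become security compliance data once you join them. This is a minimal sketch with made-up hostnames and statuses, assuming a machine counts as compliant only when every check passes:

```python
# Hypothetical sketch: correlating isolated per-computer security facts
# into security compliance data. Hostnames and statuses are illustrative.

computers = ["srv-01", "srv-02", "lap-17"]
os_patched = {"srv-01": True, "srv-02": False, "lap-17": True}
antivirus_current = {"srv-01": True, "srv-02": True, "lap-17": False}

# A machine is security compliant only when every check passes;
# missing data is treated as a failed check.
non_compliant = [
    c for c in computers
    if not (os_patched.get(c, False) and antivirus_current.get(c, False))
]

print("Machines needing attention:", non_compliant)
```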
To have good data, some key elements are of course necessary. You need:
A) a good mining tool (an agent or similar)
B) a good storage solution (a database of some kind)
C) processing intelligence (algorithms, machine learning etc.) that turns isolated data into correlated data (money data).
Up to this point, much of this blog post might have been obvious. But here is where most organizations fail today: the amount of awesome, money-saving data your organization has is meaningless if you can’t access it.
One of the hardest challenges for employers today is finding the right people for the right job. When you do find them, you want to get the most out of them, which in most cases means letting them analyze and act on the data relevant to their field. And this is the core of the problem: too many SAM experts today are wasting their precious time on accessing data, not on analyzing and acting on it. If your experts are drowning in administrative tasks, you’re not utilizing their full potential and your ROI goes down. Plain and simple.
So, what is the root of the problem? Basically, one simple thing: user interface/experience. The important data gets stuck behind a bad user interface, or – which is considerably worse – mediocre data is presented in a non-user-friendly interface.
If you asked a lot of SAM experts today which tool is best for managing SQL Server licensing, I bet a substantial portion would say Excel, which is a user interface only (all data has to be fed into it manually or exported from other systems). I bet the SAM experts who answer Excel are all tired of feeding data manually into an expensive SAM tool, looking up which servers have SQL Server installed, manually mapping licenses to those servers in a static way, and then in the end looking down at wrong numbers anyway, because the user interface has no intelligence to balance the installation data and entitlement data correctly and/or present it in a user-friendly way.
At Xensam we believe in layers of data that will suit the specific situation and/or user. Let’s take our solution for SQL Server licensing as an example.
At the first layer, you can simply see whether or not your server estate is compliant on a data center/cluster/host/VM level. A simple Yes or No based on the market’s most advanced algorithms.
At the second layer, you can see how it is licensed: which metric is used, and whether SA or non-SA licenses are used.
At the third layer, you can see everything. How many physical cores are on the host? What is the required core count? Is the environment dynamically provisioned, and does it therefore need either Enterprise SA on the host or stacked licenses to cover the virtual machines that exceed the license count on the host… you see the point.
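To make the layers concrete, here is a deliberately simplified, hypothetical sketch of per-core licensing data. The 4-core minimum per VM mirrors Microsoft’s per-core rule, but real SQL Server licensing has many more conditions (editions, SA rights, dynamic provisioning), and all numbers here are illustrative:

```python
# Simplified, hypothetical sketch of layered SQL Server core-licensing
# data. Real licensing rules are far more involved; numbers are made up.

host_cores = 16
vm_cores = [2, 4, 8]           # vCPUs per SQL Server VM on the host
owned_core_licenses = 12

MIN_CORES_PER_VM = 4           # per-core licensing minimum per VM

# Each VM needs at least the minimum, even if it has fewer vCPUs.
required = sum(max(v, MIN_CORES_PER_VM) for v in vm_cores)

# Layer 1: the simple Yes/No answer.
compliant = owned_core_licenses >= required

# Layers 2 and 3: the numbers behind the answer.
print(f"required={required}, owned={owned_core_licenses}, "
      f"compliant={'Yes' if compliant else 'No'}")
```

The first layer shows only the Yes/No; the deeper layers expose `required`, `owned` and the per-VM breakdown for the expert who wants to analyze the “how?”.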
This is how Xensam believes you most efficiently present the data: at the first layer, answer the question with a “Yes” or “No”. At the second layer, give the shortest possible answer to “how?”. At the third layer, let the user analyze the answer to the question “how?”. To sum up:
1) Data today is money. You want your money to work for you.
2) For the money to be able to work for you, you need to be able to work with your data.
3) For you to be able to work with your data, you need to be able to access it.
That’s the key challenge with all big data enterprise software today: accessibility. It doesn’t matter what data you have. It matters what data you can access.