CodeExcellence's Code Quality Governance Blog

Four Myths About HANA — and One Great Truth

Posted on June 26th, 2013 at 4:17am

Truth be told, I have not always been a fan of “list” blog posts. As an Analyst, I found them reductive. As a Consultant, I had trouble considering them focused enough to be billable. As a Senior Executive, however, I came to believe that well-constructed examples could be valuable tools in guiding decision matrices. So, with my CIO/CXO hat firmly tied on, I have decided to set the record straight for SAP’s in-memory initiative, HANA.

This is a simple management sanity checklist to separate fact from hype and distinguish mere ignorance from outright lies. Could someone produce a comprehensive 25,000-word treatise on the state of HANA? Sure, but I don’t have the time to write it, and you certainly don’t have the time to read it! Besides, isn’t that the work for which we typically overpay consultants?

Let’s start with the Myths:

1. There’s nothing really new about HANA
Critics like to point out that in-memory computing is not new, and they’re right. Then again, very few things are ever completely new. We all stand on the shoulders of giants. Marketing folks have struggled mightily with the knowledge that tech is essentially an iterative business. In a profound triumph of hope over reality, they seem to manage to label every product introduced both unique and innovative. The fact is HANA isn’t even SAP’s first in-memory product. Business Warehouse Accelerator was released back in 2008. If the underlying tech in HANA isn’t new, then what is? The answer lies in the architecture and the architectural shifts enabled by HANA. These are honest-to-goodness game-changers.

To date, every other enterprise-class architecture has required significant compromises relative to decision support systems. The HANA framework allocates equal attention to depth, breadth, velocity, simplicity, and temporality. As anyone who has ever borne the wrath of users forced into those compromises can testify, this truly is new. Ultimately, it comes down to decision velocity. The ability to change the direction and speed of decision processes, to add dynamic BI where it was previously impractical or impossible, is the greatest payback an investment in HANA architecture can offer.

2. HANA provides immediate benefits to all SAP installations
For every HANA naysayer, there is an equal number of cheerleaders (quick, somebody codify that as Hecht’s Law of Hype). There are blog posts out there that paint such a positive portrait of HANA’s power that even your garden-variety perennial optimists have to blush. SAP’s determination to drag decades of old infrastructure into a brave new world notwithstanding, many legacy architectures can derive only negligible benefits from a conversion to HANA.

Which brings us to the fine art of managing enterprise IT investments. You know the saying: “If it ain’t broke, don’t fix it”. This laudable, time-honored strategy of preserving the status quo at all cost (or better still, at very little cost) is really a game of IT chicken. Business parks around the globe are littered with the carcasses of companies that held onto obsolete architectures “just a little bit longer”, until they ended up with a truly untenable environment. That is not actually a problem with HANA itself; some IT departments simply expect miracles delivered at regular intervals. HANA is a flexible framework that excels across different disciplines. It is still up to you to paint the big picture.

3. HANA uses proprietary hardware
This one is a straight-up lie. Among the best aspects of HANA is its ability to facilitate the implementation of myriad products and services, from SAP or others, using basic, standard, off-the-shelf hardware. The core components for HANA are Intel CPU-based servers combined with certified SSD storage and RAM. SAP has partnered with most of the leading server manufacturers to deliver certified systems that operate within warranty. Initial instantiation can be delivered pre-configured on an appliance directly from a hardware vendor, with the licenses included. Such systems run the gamut from 2 CPU, 2U rack servers up to the latest, biggest, baddest mainframes. And, yes, Virginia, people still buy mainframes (they just don’t talk about it).

4. HANA has no impact on the Big Data space
“Big Data” has been hyped to just this side of uselessness, making the arbitrary definitions imposed in the early days seem quaint. This is a great example of power jockeys demanding the ability to master exabytes of data in a single stack when, in truth, the aggregate records of their entire enterprise haven’t quite crossed the petabyte threshold. Such folks may be tempted to reject the HANA solution, claiming it addresses only the “lower end of Big Data”, but this isn’t an especially pragmatic approach. With a comfortable 10X compression ratio and the ability to handle 160TB of data in-memory effectively, HANA is actually more than sufficient for the majority of current “Big Data” initiatives, whether you’re in an enterprise or government environment. The greatest hurdle every “Big Data” project has to clear before the process can begin in earnest is defining appropriate data sets. Of course, it’s also critically important to ask the proper questions upfront. Once your project is well-defined, HANA-enabled apps are a smart place to start.
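The sizing arithmetic behind that claim is worth making explicit. A minimal back-of-the-envelope sketch, using only the two figures cited above (the 10X compression ratio and 160TB in-memory capacity; the function name is illustrative, not any SAP tool):

```python
# Back-of-the-envelope sizing check using the figures cited above:
# ~10X compression and ~160TB of effective in-memory capacity.

def raw_capacity_tb(in_memory_tb: float, compression_ratio: float) -> float:
    """Approximate raw (uncompressed) data volume a given in-memory
    footprint could hold at the stated compression ratio."""
    return in_memory_tb * compression_ratio

IN_MEMORY_TB = 160   # effective in-memory capacity cited above
COMPRESSION = 10     # "comfortable 10X compression ratio"

raw_tb = raw_capacity_tb(IN_MEMORY_TB, COMPRESSION)
print(f"~{raw_tb:,.0f} TB of raw data, i.e. roughly {raw_tb / 1000:.1f} PB")
```

In other words, those figures put the effective ceiling around 1.6PB of raw data, comfortably above the sub-petabyte enterprises described above.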

Last but not least, here’s one great TRUTH about HANA: It plays well with others!

Depending on the architecture you choose, integration may be little more than a toothless marketing term. With HANA however, it is elevated past an aspiration, to something achievable. Re-uniting data stored on tape drives with the 21st century becomes possible. Integrating systems without rekeying is no longer outside the realm of reality.

The great unanswered question becomes: how painful will it be to get all of this stuff to work and play together? Despite the temptation to integrate in haste and repent for eternity (remember, we’re still talking about IT), those responsible for results need to pay attention to code and data quality. Garbage in, garbage out. HANA is NOT a garbage disposal. So whether you’re migrating to ABAP for HANA or integrating an obscure custom app from the late ’70s, you’ll need to get rid of any trashy data and code, and develop a governance policy to keep them gone. Start with a simple-to-use, pre-defined model for both code and data governance. Not only will doing so preserve the value of your existing investments, it will ensure a smoother, lower-cost migration to the ultimate goodness that HANA offers.
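To make the “garbage in, garbage out” point concrete, a pre-migration data-quality gate can be sketched in a few lines. This is a hypothetical illustration, not an SAP API: the rules, field names, and sample records are all invented for the example.

```python
# Hypothetical pre-migration data-quality gate: flag records that would
# pollute the target system. Rules and field names are illustrative only.

from datetime import datetime

REQUIRED_FIELDS = ("customer_id", "created_at")  # assumed schema

def is_clean(record: dict) -> bool:
    """Return True if a record passes the basic quality rules."""
    # Rule 1: required fields must be present and non-empty.
    if any(not record.get(field) for field in REQUIRED_FIELDS):
        return False
    # Rule 2: timestamps must parse (ISO 8601 assumed here).
    try:
        datetime.fromisoformat(record["created_at"])
    except (TypeError, ValueError):
        return False
    return True

records = [
    {"customer_id": "C001", "created_at": "2013-06-26T04:17:00"},
    {"customer_id": "", "created_at": "2013-06-26T04:17:00"},  # missing id
    {"customer_id": "C003", "created_at": "not-a-date"},       # bad date
]

clean = [r for r in records if is_clean(r)]
print(f"{len(clean)} of {len(records)} records pass the gate")
```

The point of a gate like this is that it runs before migration, so trashy records are caught and corrected at the source rather than carried into the new environment.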

Finally, keep in mind that it’s still early days for HANA. It is already a highly functional, viable architecture. As with any leap forward though, there are sure to be some growing pains ahead. The biggest challenge facing SAP is the same one confronting your organization: remaining focused on what it can become, rather than getting mired in making minor improvements. There’s no denying that HANA delivers the raw speed, power, and flexibility to make things better in most shops right this minute, but realizing its full potential over the long term will ultimately depend on the availability of effective code and data cleansing tools and methodologies you can use prior to migration. In other words, get ready: the best is yet to come.
