KDnuggets Home » News » 2014 » May » Opinions, Interviews » Interview: Walter Maguire, Chief Field Technologist on HP Big Data Strategy and HAVEn ( 16:n16 )

Interview: Walter Maguire, Chief Field Technologist on HP Big Data Strategy and HAVEn


We discuss how HP views Big Data, capabilities of HP HAVEn, leveraging Big Data for improving customer experience, Analytics challenges, outsourcing criteria and current trends.



Walter Maguire has twenty-seven years of experience in analytics and data technologies. He practiced data science before it had a name, worked with big data when "big" meant a megabyte, and supported the movement which brought data management and analytic technologies from back-office, skunk works operations to core competencies for the largest companies in the world.

In 2010, Walt became the first hire west of Denver for Vertica, makers of the Vertica Analytics software platform for real-time analytics of structured and unstructured data. Since then, he has helped build the HP Vertica customer base and team in the Western USA. He has worked with a wide variety of SQL and NoSQL technologies as well as tools and techniques for analytics. Now as Chief Field Technologist with HP Vertica, Walt has the unique pleasure of addressing customer needs with the continuing evolution of Vertica and HAVEn, the HP Big Data strategy which links hardware, software, services, and business transformation consulting for successful execution.

Here is my interview with him:

Anmol Rajpurohit (AR): Q1. Can you please describe the HP Big Data strategy? What are the focus areas of HP Big Data Infrastructure Transformation?

Walter Maguire (WM): At HP, we see Big Data as transformational. Businesses that master it can fundamentally shift their business model to tremendous competitive advantage – they can disrupt their competition. So we created HAVEn (Hadoop file storage and processing, Autonomy IDOL, Vertica Analytics platform, ArcSight Enterprise Security Manager, and applications) to help organizations be the disruptor instead of the disrupted. It represents the combination of our expertise as the No. 1 IT vendor in the world, the breadth of our offerings, platform openness, and the partner ecosystem we bring with it. Today, HAVEn is a technology portfolio which solves Big Data problems, but it is also a strategic framework for us to continue evolving it as a platform.

AR: Q2. What capabilities of HP HAVEn are your personal favorites? How do you contrast HAVEn with its competition?

WM: The aspect of HAVEn which excites me the most is its openness. It allows organizations worldwide to select from a best-of-breed portfolio of big data technologies – right now – and solve today’s problems. As we continue to build the platform, it will evolve in parallel with Big Data challenges, so it’ll solve tomorrow’s problems as well.

I would contrast HAVEn with the competition first by the way HP is approaching it – we’re not asking organizations to wait two years for us to build the technology. It’s available today. Also, HAVEn is open – companies buy only what they need, and we don’t require a particular distribution of Hadoop – we work with all of them.

AR: Q3. How do you differentiate Big Data and "Big BI"? Which one is preferable and why?

WM: Big BI is doing the same type of thing as today (reports, dashboards, etc.), just with a larger volume of data. It’s important and useful, but it represents an incremental advance in analytics. And sometimes that’s just what a business needs. Big Data, on the other hand, incorporates the idea of bringing together data which has never been available before – Tweets, machine logs, geospatial device data, call center audio, etc. – and deriving meaning from the joined data which would’ve otherwise been impossible. This represents a huge opportunity, but can be an ambitious effort. The choice a company makes ought to be determined by where they’re likely to find the most value for their investment.
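The idea of deriving meaning from joined sources can be sketched in a few lines of Python. The data, field names, and join logic here are invented purely for illustration, not any HP tooling:

```python
# Invented per-customer data from two channels that would usually sit
# in separate silos: Twitter sentiment and call-center activity.
tweet_sentiment = {1: -0.8, 2: 0.2, 3: 0.9}   # customer_id -> sentiment score
call_minutes = {1: 34, 2: 5, 4: 12}           # customer_id -> support minutes

# Joining on customer_id surfaces a signal neither source shows alone:
# customer 1 is both unhappy on Twitter and calling support heavily.
joined = {cid: (tweet_sentiment[cid], call_minutes[cid])
          for cid in tweet_sentiment.keys() & call_minutes.keys()}
print(joined)
```

Only customers present in both channels survive the join, which is exactly where the previously impossible insight – negative sentiment combined with heavy support usage – becomes visible.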

AR: Q4. What role can Big Data play in delivering a highly-engaging customer experience? How do you think it impacts the future of customer engagement?

WM: The mastery of Big Data represents nothing less than a game changing opportunity for organizations to improve customer experience. By bringing together disparate data across an increasing number of input channels with less latency, they can optimize customer contact points – both across their communications and in real-time.
Doing this yields multiplying returns: the cost of each customer interaction decreases while engagement and satisfaction increase, because the right interaction is delivered in the right way at the right moment. We’ve heard this success story repeated time and time again from our customers.

I expect the future to bring increasingly optimized customer interactions, with more consistency across communication channels. This will require technology to continue to optimize what I call “analytic throughput” – or the ability of an organization to digest new data, derive an insight, and put that insight to work.
This is a very active area of research and development for HP.

AR: Q5. Assessing the current state, what do you consider as the biggest technical challenges in harnessing Analytics for business goals?

WM: I see two major challenges today.

First is the challenge of building a technology portfolio which addresses the full spectrum of needs in Big Data analytics: ingesting the data, transforming it, exploring it, building models for explanation and prediction, and putting those models to work effectively for the business. Many organizations have some of the pieces, but not all of them. And once they get them, building the analytics practice takes time, commitment and focus.
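As a toy illustration of those stages – all function names and data are invented for the sketch, not HP products – an end-to-end pipeline might look like this:

```python
def ingest():
    # Stage 1: pull raw records from source systems (hard-coded here).
    return [{"visits": v, "churned": c}
            for v, c in [(1, 1), (2, 1), (8, 0), (9, 0)]]

def transform(records):
    # Stage 2: clean and reshape records into model-ready rows.
    return [(r["visits"], r["churned"]) for r in records]

def explore(rows):
    # Stage 3: summarize the data to guide model building.
    visits = [v for v, _ in rows]
    return {"min_visits": min(visits), "max_visits": max(visits)}

def build_model(rows):
    # Stage 4: a trivial threshold "model" for explanation/prediction:
    # customers below the mean visit count are predicted to churn.
    threshold = sum(v for v, _ in rows) / len(rows)
    return lambda visits: 1 if visits < threshold else 0

def deploy(model, new_visits):
    # Stage 5: put the model to work on fresh data.
    return [model(v) for v in new_visits]

rows = transform(ingest())
summary = explore(rows)
predict = build_model(rows)
print(summary, deploy(predict, [1, 10]))
```

Each function stands in for an entire product category in a real portfolio; the point is simply that an organization needs every stage, not just some of them.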

Second is the human bottleneck. Once we build a technology architecture capable of dealing with Big Data in all its forms, the data scientist becomes a potential bottleneck in the process. This isn’t necessarily a bad thing – after all, we need to exercise rational thought to identify the variables which might predict customer churn and test them. So even if we can perform every other step of the analytic process in minutes or seconds, the time to build a model is constrained by the data scientist. While not strictly a technology issue, we clearly need to keep advancing the state of the art in ways that let data scientists leverage their expertise.
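The churn example above can be made concrete: the human-in-the-loop step is a data scientist proposing candidate variables and testing how well each one tracks churn. The data and variable names below are made up for illustration:

```python
import statistics

# (monthly_logins, support_tickets, churned) per customer - invented data.
customers = [
    (20, 0, 0), (18, 1, 0), (3, 4, 1),
    (2, 5, 1), (15, 1, 0), (4, 3, 1),
]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from first principles.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

churn = [c for _, _, c in customers]
candidates = {
    "monthly_logins": [l for l, _, _ in customers],
    "support_tickets": [t for _, t, _ in customers],
}
# The machine computes the correlations in milliseconds; choosing which
# variables to test in the first place is the data scientist's job.
for name, values in candidates.items():
    print(name, round(pearson(values, churn), 2))
```

Here logins correlate strongly negatively with churn and support tickets strongly positively – but only because a human thought to test those two variables at all, which is exactly the bottleneck being described.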

AR: Q6. What are the key factors a firm must consider before selecting a Big Data outsourcing vendor?

WM: The first factor to consider is whether it should be outsourced at all. I recommend organizations ask themselves whether Big Data is strategic to their business, and factor that into the outsourcing decision. It may not be a good idea to let an outsourcer build (and manage) your competitive advantage. If outsourcing makes sense, the next factor to evaluate is the vendor’s real-world big data experience. Big Data hasn’t been around very long, but the practice of analytics certainly has, so that experience is a plus. Also, Big Data analytics often borrows heavily from the agile development methodology – so I’d recommend looking for a vendor comfortable with that approach.

AR: Q7. You have had a long and successful career in Data Science and Analytics. How do you see the evolution of this field over your past career? Which current trends are of the most interest to you?

WM: The more things change, the more they stay the same! The tasks we perform today in the practice of data science are almost identical to those I first performed as an undergraduate years ago. We just have to do them faster, better and cheaper. Taking six months to answer an analytical question is not acceptable today.

I’m particularly excited to see some of the young Big Data technologies mature. Today, we’re in the same place we were with the web in about 1997, which is to say it’s all very exciting and there’s a lot going on, but there’s no “killer app” yet. Hadoop is a great example. It’s very promising, and it tackles certain specific problems very well. But it isn’t yet clear just where Hadoop is going to end up. Will it commoditize distributed processing, or something else? It’s got tremendous potential, but like most young innovations, it’s still searching for a killer app.

AR: Q8. On a personal note, are there any good books that you’re reading lately, and would like to recommend?

WM: I finally got around to reading “The Singularity Is Near” by Ray Kurzweil. It’s a great read, and a helpful way of looking at the rate of technology change. It’s a useful lens for me to look back at changes in analytics and Big Data over the last twenty-five years or so and distill some of the large-scale patterns. I like his optimism, although for me the jury is still out as to whether a technological singularity based on AI would be a net positive. Either way, I’d definitely recommend it.
