Splunk’s “Hunk” makes dealing with Hadoop big data easy

Ryan is a senior editor at TechForge Media with over a decade of experience covering the latest technology and interviewing leading industry figures. He can often be sighted at tech conferences with a strong coffee in one hand and a laptop in the other. If it's geeky, he’s probably into it. Find him on Twitter (@Gadget_Ry) or Mastodon (@gadgetry@techhub.social).

Earlier this week we spoke with Sanjay Mehta, VP of Product Marketing, and Clint Sharp, Senior Product Manager, Big Data, about the beta launch of a new product, “Hunk”, which aims to make exploring data stored in Hadoop easy.

In fact, one of the taglines randomly generated on the company’s website is “taking the ‘sh’ out of IT”; solutions that work for clients in the real world are clearly a priority.

Sanjay gives an overview of the company’s background: “What we are all about is making machine-generated data accessible, usable and valuable to different kinds of users in an organisation.”

To reaffirm that the company’s current products are a trusted solution already used by many businesses, Sanjay states: “We have a flagship product called Splunk Enterprise, which is deployed to around 5,600 customers around the world.”

So how is Splunk’s current, industry-leading platform being used today? He explains: “It’s used across a whole variety of industries; from telecoms, to financial services, to online retail, public sector, health care and so on…”

He continues: “How organisations tend to use the platform is for IT management, application management, or for security to protect against unknown threats and for website intelligence.”

The premise of the application sounded exactly like a standard analytics platform, so I wanted to know what sets Splunk’s new ‘Hunk’ offering apart, and why readers should choose it over any other solution.

Sanjay clarifies: “What we are announcing this week is a standalone product for enabling users or organisations of Hadoop to easily explore, analyse, and visualise data.”

It was made very clear that the company has the experience to create a reliable, scalable solution: “We’ve had lots of experience with unstructured data, or big data, real-time masses of data. Our flagship product brings in tens of terabytes of data a day, at the multi-petabyte scale at rest.”

Customers had been asking for a solution like Splunk Enterprise that could work with Hadoop; last year the company released a way of copying data from Hadoop into Splunk, but this introduced its own issues and fell short as a solution.

“Those customers said ‘look, we have got to the point where our data is quite simply too big to move from Hadoop, can you offer your technology natively?’ That’s why we are announcing this product.”

But are organisations aware of the potential these products can have? Sanjay talks about findings from research firm Gartner. Analysts have seen big data going through what they call a ‘Trough of Disillusionment’ whereby it’s essentially “make or break”.

Splunk hopes their services will take away the “disillusionment” and prove to businesses the promise of the technology.

Of course developers will want to know how they can integrate the services offered by ‘Hunk’ into their own applications.

Sanjay explains how this works: “All of this technology is exposed through an API, and then we have SDKs in Java, JavaScript, Ruby, Python, PHP and C#; the goal is to offer these pre-packaged capabilities for analysing Hadoop data through our product to developers.”
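As a rough sketch of what driving such an API might look like, here is a hypothetical Python example. The host name is invented, and while the endpoint path and payload shape follow Splunk’s published REST conventions for search jobs, they are assumptions for illustration rather than details taken from the interview; check the API reference for your version before relying on them.

```python
import json

# Hypothetical sketch of driving a Splunk/Hunk search over the REST API.
# The base URL is a placeholder; 8089 is Splunk's conventional management port.
SPLUNK_BASE = "https://splunk.example.com:8089"

def build_search_request(query: str) -> dict:
    """Build the form payload for creating a search job."""
    # Splunk expects raw queries to begin with the 'search' command.
    if not query.lstrip().startswith("search"):
        query = "search " + query
    return {
        "url": f"{SPLUNK_BASE}/services/search/jobs",
        "data": {"search": query, "output_mode": "json"},
    }

def parse_results(raw_json: str) -> list:
    """Extract the result rows from a JSON results payload."""
    return json.loads(raw_json).get("results", [])

# Example: parse a canned response shaped like the results endpoint's output.
sample = '{"results": [{"host": "hdp01", "status": "200"}]}'
rows = parse_results(sample)
```

In practice the official SDKs wrap these HTTP calls, so application code works with job and result objects rather than raw payloads.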

As product manager, Clint Sharp took me on a more technical tour of the software and explained how easy it is to integrate Hunk with Hadoop: “Getting applications up and running on top of Hadoop can be complicated, so we set out with a goal of making this ridiculously easy to set up.”

In the demo, Clint walked me through configuring the product, and for a solution as powerful as Hunk, they really have made installation a breeze, even for those with little experience.

Within seconds (it seemed) Clint was inputting variables and pulling in real-time results en masse.

The data was retrieved in JSON format, but Clint assures me Splunk is capable of handling “the most gritty, nasty data out there.”

One of the most impressive parts of the demonstration was how quickly and clearly data can be visualised in all manner of charts of your choice; you can really explore results in great detail.

Some of the most important innovations are on the backend, however. Clint explains: “Being able to go out and run quick queries against the data, and being able to do that ad-hoc analytics on the raw data, Hadoop is incredibly unique in this space.”

On how this differentiates them from competitors, he says: “All those key-value pairs we’re showing you are all extracted completely at read-time, so we’re not pre-calculating anything; pre-parsing, reading into tables… which is great for productivity.”

He explains: “We can go and look at any data set; change the way we’re looking at it, change the way we extract structure at any time.”
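A toy Python sketch (not Splunk’s actual implementation) can illustrate the read-time extraction idea Clint describes: structure is pulled out of raw events at query time, with no pre-defined schema or table-load step, so the same data can be re-interpreted with a different pattern at any moment.

```python
import re

# Minimal schema-on-read illustration: key=value pairs are extracted from
# raw log lines at query time, not loaded into pre-defined tables.
KV_PATTERN = re.compile(r'(\w+)=("[^"]*"|\S+)')

def extract_fields(raw_event: str) -> dict:
    """Extract key=value pairs from a raw event at read time."""
    return {k: v.strip('"') for k, v in KV_PATTERN.findall(raw_event)}

def search(events, **criteria):
    """Filter raw events by field values, extracting structure on the fly."""
    for event in events:
        fields = extract_fields(event)
        if all(fields.get(k) == v for k, v in criteria.items()):
            yield fields

# Example: query raw access-log lines with no up-front schema definition.
log = [
    'ts=2013-06-27 host=hdp01 status=200 uri="/index.html"',
    'ts=2013-06-27 host=hdp02 status=500 uri="/api/orders"',
]
errors = list(search(log, status="500"))
```

Swapping `KV_PATTERN` for a different regex changes how the same raw data is structured, without reloading anything, which is the productivity point being made above.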

Of course, these kinds of innovations give Splunk a great competitive advantage; others have tried, and for the most part failed, to offer the same level of service.

“What’s unique about Splunk is we’re not forcing you into tables and rows and columns. Even with something like Hive, I have to go define a way to look at my data before I can start to look at it. With Splunk I can just log in and start querying.”

Will you be checking out Splunk’s new “Hunk” software?
