Big data or in-memory computing?


Brian Tinham, WM's technology editor, asks what big data is and how it can help manufacturers

Are you looking at 'big data'? Who isn't? Datasets are all big these days, aren't they? So it's just another label. Well, not quite. Yes, cynics could be forgiven for suggesting that the term is nothing more than hype, coined by an IT industry more interested in recapturing potential buyers' interest than in defining anything new. But there is a little more to it.

So what is big data? Strictly, it is the management of collections of datasets so massive that they have hitherto defied intelligent interrogation at useful speed with conventional database systems. Enthusiasts wax lyrical: "Big data provides companies with a great opportunity to improve their processes, productivity and customer service," writes one. "Data can be readily shared across departments and platforms, and collated to provide information that is both meaningful and valuable," offers another.

Easy to confuse with in-memory computing? Yes. But the latter is about storing very large quantities of data, effectively in ready-to-run RAM, rather than loading it as tables and cubes on a hard drive. Essentially, it entails using data compression techniques (not caching) to transform storage and processing times. Users – even multiple concurrent users – delete, at a stroke, database access latency and performance bottlenecks. Pundits cite up to million-fold improvements in data access times; the first sketch below gives a feel for the difference.

Given that most manufacturers are not concerned with anything quite as complex as decoding the human genome, it is in-memory computing, rather than big data itself, that should be attracting most interest. Indeed, the most frequently cited goal is, perhaps unsurprisingly, enabling more advanced, super-fast and on-demand business intelligence (BI), analytics and reporting – making the inaccessible accessible.

But that's not all. Dan Matthews, CTO of global enterprise software developer IFS, says his company is using in-memory computing to understand, in real time, the traffic between users and servers and discover, for example, the most frequently searched-for products. Similarly, he suggests that being able to 'see' changing tolerance data statistics as they happen in production could be a powerful tool for quickly mitigating problems and directing investigation – an idea the second sketch below shows in miniature.

But one could equally enthuse over correlating quality or design data with customer satisfaction, as expressed through call centres. Or running MRP even faster, and perhaps improving the sophistication of its output. Consider, for example, issuing purchase and works orders against real-time information that affects lead times, particularly where build sequences are bedevilled by variable constraints, dependencies and supply chains... Whatever next?
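To put the latency point in concrete terms, here is a minimal sketch. It uses SQLite's ':memory:' mode as a stand-in for a true in-memory platform (which would add column-store compression, parallelism and much more); the table, row counts and file names are purely illustrative, not a benchmark.

```python
import os
import sqlite3
import tempfile
import time

def build_and_query(db_path: str) -> float:
    """Load sample rows into SQLite, then time a simple aggregate query."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE orders (product TEXT, qty INTEGER)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        ((f"part-{i % 500}", i % 20) for i in range(200_000)),
    )
    conn.commit()
    # Time the kind of grouped roll-up a BI or reporting tool would run
    start = time.perf_counter()
    conn.execute("SELECT product, SUM(qty) FROM orders GROUP BY product").fetchall()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

disk_path = os.path.join(tempfile.mkdtemp(), "orders.db")  # disk-backed database
print(f"on disk:   {build_and_query(disk_path):.4f}s")
print(f"in memory: {build_and_query(':memory:'):.4f}s")    # RAM-resident database
```

The gap on a desktop demo like this is nothing like the million-fold figures pundits cite, of course, but the direction of travel is the point: the same query, with the disk removed from the path.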
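And to make the production-monitoring idea a little more tangible, here is a hypothetical sketch of 'seeing' tolerance statistics as they happen: a rolling window of measurements held in RAM, with a crude drift check. The nominal dimension, window size and limits are all invented for illustration.

```python
import random
import statistics
from collections import deque

NOMINAL, TOLERANCE = 25.00, 0.05   # e.g. a 25 mm dimension, +/-0.05 mm (assumed)
window = deque(maxlen=50)          # last 50 measurements, held in memory

def record(measurement: float) -> None:
    """Add one measurement and flag drift before parts go out of tolerance."""
    window.append(measurement)
    if len(window) < 10:
        return                      # too few samples to judge
    mean = statistics.fmean(window)
    sigma = statistics.stdev(window)
    # Warn if the rolling mean wanders or the spread eats up the tolerance band
    if abs(mean - NOMINAL) > TOLERANCE / 2 or 3 * sigma > TOLERANCE:
        print(f"drift warning: mean={mean:.4f}, 3-sigma={3 * sigma:.4f}")

# Simulated feed: a process that slowly drifts upward
for i in range(200):
    record(random.gauss(NOMINAL + i * 0.0005, 0.01))
```

Trivial as it is, the sketch captures the shift in-memory platforms promise: statistics recomputed on every new reading, rather than in an overnight batch.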