An Overview of Massively Parallel Processing Technology

Timothy Valihora is a specialist in IBM InfoSphere Information Server. Mr. Valihora has been trained in data conversion, massively parallel processing (MPP), and other areas of information technology.

For years, technology-driven businesses have relied on parallel processing to attain faster results. Parallel processing involves running the same task on more than one microprocessor to spread out the workload and speed up computations. For heavy workloads, MPP proves even more efficient.

MPP distributes workloads across a much larger number of computers. While two or three machines can handle average-sized computations, MPP comes into play when users need to crunch large-scale computations. Oftentimes, companies employ banks of dozens or even hundreds of computers for such work.

There are several approaches to MPP. In grid computing, users harness banks, or grids, of computers based in multiple locations to work toward the same goal. The collective grid of computers is referred to as a distributed system. In another approach, known as a computer cluster, banks of computers are closer in proximity, such as in two or three laboratories within the same building or campus.

Given these demands, MPP is usually run on expensive, high-end computers specially tuned for processing, network, and storage performance.

How Violin Flash Arrays Increase Performance

Timothy Valihora works as a management consultant and the president of TVMG Consulting, Inc.

In his role as an IIS Parallel Extender performance tuner, Mr. Valihora has utilized many different storage devices.

Persistent RAM:
Flash Arrays provide a convenient way for businesses to store their customer data. Like any other type of technology, some arrays operate faster, and thus more efficiently, than others. Violin Memory provides storage solutions that strike a balance between affordability and performance, and Violin flash arrays are no exception.

Violin’s flash storage platform simplifies storage by compressing data and eliminating the need for a mechanical spindle writing to physical media. As a result, applications run faster while businesses conserve valuable storage capacity. The company achieves this efficiency by reducing power consumption relative to older storage devices, including solid-state drives, and by optimizing processing throughput.
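As a generic illustration of how inline compression conserves capacity (using Python's standard zlib, not Violin's proprietary algorithm, and a made-up repetitive record for the data), the saving comes from redundancy in typical business data:

```python
import zlib

# Hypothetical customer records: highly repetitive, as business data often is.
records = b"customer_id=1001,region=EMEA,status=active;" * 256

compressed = zlib.compress(records)
ratio = len(records) / len(compressed)

# Compressing before writing means far fewer bytes reach the storage medium.
print(f"{len(records)} bytes -> {len(compressed)} bytes ({ratio:.1f}x smaller)")
```

Real storage arrays perform this transparently in the data path, so applications see the full logical capacity while the physical media holds the compressed form.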

In recent benchmark tests, Mr. Valihora demonstrated that an IBM InfoSphere DataStage PX job designed to sort 1 billion rows of data performed 255% better on Violin flash than on Isilon Fast NFS.