An Overview of Massively Parallel Processing Technology

Timothy Valihora is a specialist in IBM InfoSphere Information Server. Mr. Valihora has been trained in data conversion, massively parallel processing (MPP), and other areas of information technology.

For years, technology-driven businesses have relied on parallel processing to attain faster results. Parallel processing divides a task across more than one microprocessor so that each handles part of the workload at the same time, speeding up computation. For very large workloads, MPP proves even more efficient.
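To make the idea concrete, here is a minimal single-machine sketch in Python, assuming only the standard multiprocessing module: one job is split into chunks, and the same task runs on several worker processes at once. The chunk sizes and worker count are illustrative, not prescriptive.

```python
# A minimal sketch of parallel processing: the same task (summing squares)
# is split into chunks and run on several CPU cores at the same time.
from multiprocessing import Pool

def sum_of_squares(bounds):
    """Worker task: sum the squares of the integers in [start, end)."""
    start, end = bounds
    return sum(n * n for n in range(start, end))

if __name__ == "__main__":
    # Divide one large job (0..4,000,000) into four equal chunks.
    chunks = [(i * 1_000_000, (i + 1) * 1_000_000) for i in range(4)]

    # Run the same task on four worker processes in parallel.
    with Pool(processes=4) as pool:
        partial_sums = pool.map(sum_of_squares, chunks)

    # Combine the partial results into the final answer.
    print("Total:", sum(partial_sums))
```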

MPP distributes the workload across many more computers. While two or three machines can handle average-sized jobs, MPP comes into play when users need to crunch large-scale computations. Companies often employ banks of dozens or even hundreds of computers for such work.

There are several approaches to MPP. In grid computing, users harness banks, or grids, of computers based in multiple locations to work toward the same goal; the collective grid is referred to as a distributed system. In another approach, known as a computer cluster, the computers sit close together, such as in two or three laboratories within the same building or campus.
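The sketch below is a conceptual stand-in for that coordinator-and-nodes arrangement, again in Python. A local ProcessPoolExecutor plays the role of the cluster scheduler, and the word-counting task and sample documents are hypothetical; a real grid or cluster would dispatch the same partial tasks to separate machines over a network and gather their results.

```python
# Conceptual stand-in for a cluster: a coordinator splits one job into
# pieces and hands each piece to a "node" (here just a local worker
# process; real MPP systems dispatch to separate computers).
from concurrent.futures import ProcessPoolExecutor

def count_words(document):
    """Task each node runs on its own slice of the data."""
    return len(document.split())

if __name__ == "__main__":
    # Pretend each string is a large document stored on a different node.
    documents = [
        "the quick brown fox jumps over the lazy dog",
        "massively parallel processing spreads work across many machines",
        "each node computes a partial result independently",
    ]

    # The executor stands in for the cluster or grid scheduler.
    with ProcessPoolExecutor(max_workers=3) as executor:
        partial_counts = list(executor.map(count_words, documents))

    # The coordinator combines the partial results into a final answer.
    print("Total words:", sum(partial_counts))
```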

Given these demands, MPP typically runs on expensive, high-end machines specially tuned for processing, network, and storage performance.
