Memory access is so much faster than disk I/O that many of us expect to gain striking performance advantages merely by deploying a distributed in-memory cluster and reading data from it. However, we sometimes overlook the fact that cluster nodes and our applications are interconnected by a network, which can quickly diminish the benefits of an in-memory cluster if a lot of data is continuously transferred over the wire.
With that being said, using the proper data access patterns provided by distributed in-memory technologies can negate the effect of network latency. In this article, we use the APIs of the Apache Ignite in-memory computing platform to see how our application's performance changes when we put less pressure on the communication channels. The ultimate goal is to deploy horizontally scalable in-memory clusters that tap into the pool of RAM and CPUs spread across all machines with minimal impact from the network.
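To make the idea concrete before we dive in, here is a minimal sketch contrasting the two access patterns. The cache name `accounts`, the sample key, and the class name `ColocatedAccessSketch` are illustrative choices, not anything prescribed by Ignite; the APIs themselves (`Ignition.start`, `IgniteCompute.affinityRun`, `IgniteCache.localPeek`) are standard Ignite Java APIs. A plain `cache.get()` pulls the value across the network from whichever node owns the key, while `affinityRun()` ships a small job to that node so the data is processed where it lives.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;

public class ColocatedAccessSketch {
    public static void main(String[] args) {
        // Start a node; in a real deployment this would join an existing
        // cluster via discovery configuration rather than run standalone.
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("accounts");
            cache.put(1, "alice");

            // Network-heavy pattern: the value travels from the node that
            // owns key 1 to the application over the wire.
            String fetched = cache.get(1);
            System.out.println("Fetched over the wire: " + fetched);

            // Colocated pattern: only the (small) job crosses the network;
            // it executes on the node that holds the primary copy of key 1.
            ignite.compute().affinityRun("accounts", 1, () -> {
                Ignite local = Ignition.localIgnite();
                String v = local.<Integer, String>cache("accounts")
                    .localPeek(1, CachePeekMode.PRIMARY);
                System.out.println("Processed where it lives: " + v);
            });
        }
    }
}
```

On a single test node both calls are local, so the difference only shows up once the cluster spans several machines; that is exactly the scenario the rest of the article measures.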