Load balancing is the process of distributing work among the nodes of a distributed system to improve both resource utilization and job response time, while avoiding situations in which some nodes are heavily loaded while others are idle or doing very little work. Good load balancing makes cloud computing more efficient and improves user satisfaction. This article introduces a better load balancing model for the cloud based on the cloud partitioning concept, with a switch mechanism that chooses different strategies for different situations. The model is aimed at clouds that have numerous nodes with distributed computing resources in many different geographic locations. It therefore divides the cloud into several heterogeneous partitions; when the environment is very large and complex, these divisions simplify the load balancing. The cloud has a main controller that chooses a suitable partition for each arriving job, while the balancer for each partition chooses the best load balancing strategy. Some classical load balancing methods are similar to allocation methods in operating systems, for example the Round Robin algorithm and the First Come First Served (FCFS) rule.
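The two-level design described above (a main controller routing jobs to partitions, and a per-partition balancer switching strategies by load status) can be sketched in Python. This is a minimal illustration, not the article's implementation: the status thresholds, the load bookkeeping, and the choice of Round Robin for lightly loaded partitions versus least-loaded assignment otherwise are all assumptions made for the example.

```python
from itertools import cycle

# Hypothetical load-status labels for a partition.
IDLE, NORMAL, OVERLOADED = "idle", "normal", "overloaded"

class Partition:
    """A cloud partition: a named group of nodes with a local balancer."""

    def __init__(self, name, nodes):
        self.name = name
        self.nodes = nodes            # assumed: node name -> load in [0.0, 1.0]
        self._rr = cycle(nodes)       # Round Robin iterator over node names

    def status(self, low=0.3, high=0.8):
        # Classify the partition by its average node load
        # (thresholds are illustrative assumptions).
        avg = sum(self.nodes.values()) / len(self.nodes)
        if avg < low:
            return IDLE
        if avg > high:
            return OVERLOADED
        return NORMAL

    def assign(self, job):
        # Switch mechanism: pick a strategy based on the partition's status.
        if self.status() == IDLE:
            node = next(self._rr)                       # Round Robin when lightly loaded
        else:
            node = min(self.nodes, key=self.nodes.get)  # least-loaded node otherwise
        self.nodes[node] += 0.1                         # toy load increment per job
        return node

class MainController:
    """Chooses a suitable partition for each arriving job."""

    def __init__(self, partitions):
        self.partitions = partitions

    def dispatch(self, job):
        # Prefer idle partitions, then normal ones; skip overloaded partitions.
        for wanted in (IDLE, NORMAL):
            for p in self.partitions:
                if p.status() == wanted:
                    return p.name, p.assign(job)
        raise RuntimeError("all partitions overloaded")
```

For example, a controller over a lightly loaded partition and a heavily loaded one will route jobs to the lightly loaded partition and cycle through its nodes in Round Robin order, only falling back to the busy partition (with least-loaded assignment) as loads grow.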
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.