In Quest of a Trade-off between Job Parallelism and Throughput in Hadoop: A Stochastic Learning Approach for Parameter Tuning on the Fly
With the emergence of big data, Hadoop MapReduce has become the de facto standard programming model for processing large amounts of data stored across cluster nodes in a distributed manner. It is known that running MapReduce with the default configuration yields a low number of parallel jobs; in fact, the default configuration usually induces poor resource utilization and low overall performance. Although a myriad of works in the literature address optimally configuring Hadoop MapReduce, the vast majority consider only offline, static configuration. Such approaches are clearly ineffective, as the load may change during execution, requiring the configuration parameters to be tuned again. In this work, we instead focus on dynamically and adaptively configuring Hadoop MapReduce by changing the system-level Maximum Application Master Resource in Percent (MARP) parameter on the fly. We show that adaptively tuning the MARP parameter yields a good trade-off between job parallelism and throughput. To achieve this, we devise a scheme which we call Adaptive Parameter Tuning of Hadoop (APTH), based on a novel variant of the Tsetlin Automaton. Comprehensive experimental results show that resources are appropriately utilized, resulting in better job parallelism and throughput. Furthermore, our APTH approach spends 47% less time on job execution compared to the default configuration.
Adaptive, Big Data, Hadoop, Job Parallelism, Throughput, Tsetlin Automata, Resource Utilization, Tuning
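To make the idea behind the abstract concrete, the following is a minimal sketch, not the paper's actual APTH algorithm, of how a classic two-action Tsetlin automaton could drive on-the-fly MARP tuning: the automaton chooses between "decrease MARP" and "increase MARP", is rewarded when observed throughput does not drop, and its memory states let it lock onto the better direction. The `tune_marp` function, the `measure_throughput` callback, the step size `delta`, and the MARP bounds are all illustrative assumptions.

```python
class TsetlinAutomaton:
    """Classic two-action Tsetlin automaton with 2*n memory states.

    States 0..n-1 favor action 0 ("decrease MARP"); states n..2n-1
    favor action 1 ("increase MARP"). A reward moves the state deeper
    into the current action's half; a penalty moves it toward the
    boundary and eventually flips the chosen action.
    """

    def __init__(self, n=4):
        self.n = n
        self.state = n - 1  # start at the boundary, on the action-0 side

    def action(self):
        return 0 if self.state < self.n else 1

    def update(self, reward):
        if self.action() == 0:
            # Reward: move deeper into action 0; penalty: drift toward action 1.
            self.state = max(self.state - 1, 0) if reward else self.state + 1
        else:
            # Reward: move deeper into action 1; penalty: drift toward action 0.
            self.state = min(self.state + 1, 2 * self.n - 1) if reward else self.state - 1


def tune_marp(measure_throughput, steps=50, marp=0.1, delta=0.05):
    """Adjust MARP step by step, keeping changes that help throughput.

    `measure_throughput` is a hypothetical callback returning the
    throughput observed under a candidate MARP value.
    """
    ta = TsetlinAutomaton()
    prev = measure_throughput(marp)
    for _ in range(steps):
        step = -delta if ta.action() == 0 else delta
        candidate = min(max(marp + step, 0.05), 0.95)  # keep MARP in a sane range
        cur = measure_throughput(candidate)
        ta.update(reward=(cur >= prev))  # reward if throughput did not drop
        if cur >= prev:
            marp, prev = candidate, cur  # keep the improving move
    return marp
```

With a synthetic throughput curve peaked at some interior MARP value, the automaton first flips to the "increase" action and then climbs until further increases are penalized, settling near the peak; in a real cluster the callback would instead observe job completion metrics between reconfigurations.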