Toward Edge-Assisted Video Content Intelligent Caching With Long Short-Term Memory Learning
Nowadays, video content accounts for the majority of Internet traffic, which poses a great challenge to the network infrastructure. Fortunately, the emergence of edge computing provides a promising way to reduce the video load on the network by caching content closer to users. However, given the limited cache space under the existing edge-assisted network architecture, the cache replacement algorithm is essential for cache efficiency. To investigate the challenges and opportunities involved, we first measure the performance of five state-of-the-art caching algorithms on three real-world datasets. Our observations show that state-of-the-art cache replacement algorithms suffer from the following weaknesses: 1) rule-based replacement approaches (e.g., LFU, LRU) cannot adapt to different scenarios; 2) data-driven forecasting approaches work efficiently only on specific scenarios or datasets, as features extracted from one dataset may not work on another. Motivated by these observations and the computation capacity available at the edge, we then propose LSTM-C, an edge-assisted intelligent cache replacement framework based on a deep Long Short-Term Memory (LSTM) network, which contains two types of modules: 1) four basic modules that manage the coordination among content requests, content replacement, cache space, and service management; 2) three learning-based modules that enable online deep learning to provide an intelligent caching strategy. Supported by this design, LSTM-C learns the patterns of content popularity at both long and short time scales and determines the cache replacement policy accordingly. Most importantly, LSTM-C represents request patterns with built-in memory cells, and thus requires no data pre-processing, pre-programmed model, or additional information. Our experimental results show that LSTM-C outperforms state-of-the-art methods in cache hit rate on three real traces of video requests.
When the cache size is limited, LSTM-C outperforms the baselines by 20%–32% in cache hit rate. We also show that the training and prediction times of one iteration are 8.6 ms and 300 µs on average, respectively, which is fast enough for online operation.
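The replacement principle described above, evicting whichever cached item a learned model predicts to be least popular, can be sketched as follows. This is a simplified illustration, not the paper's implementation: the class name is hypothetical, and the `predict` callable stands in for the LSTM popularity model, so any scoring function can be plugged in.

```python
class ScoreBasedCache:
    """Cache sketch that evicts the item with the lowest predicted popularity.

    `predict` is a stand-in for a learned popularity model (e.g. an LSTM
    scoring each cached item); here it is any callable key -> score.
    """

    def __init__(self, capacity, predict):
        self.capacity = capacity
        self.predict = predict
        self.store = set()

    def request(self, key):
        """Return True on a cache hit; on a miss, evict the lowest-scored item."""
        if key in self.store:
            return True
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=self.predict)  # least-popular item
            self.store.discard(victim)
        self.store.add(key)
        return False
```

Plugging in a static popularity table already shows the behavior: unpopular items are pushed out first, whereas an online model would update its scores as the request pattern drifts.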
Edge-assisted cache replacement, intelligent content caching, long short-term memory