Machine Learning Algorithms for Efficient Storage Management in Resource-Limited Systems: Techniques and Applications
Keywords: Machine Learning, Resource-Constrained Systems

Abstract
In systems with limited resources, the spread of data across many domains complicates storage management. Because these systems have modest processing power, memory, and energy budgets, they require innovative approaches to storage and performance optimization, and machine learning (ML) can assist. This paper compares ML techniques for storage management under resource constraints. We use support vector machines (SVMs) and k-nearest neighbors (KNN) to classify data and identify frequently accessed items, enabling caching systems to retain commonly used data for faster retrieval and lower resource use. Unsupervised techniques such as principal component analysis (PCA) and K-Means clustering further reduce dimensionality; resource-limited systems need such data-reduction techniques that cut storage requirements without sacrificing data integrity.
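The hot-data classification idea above can be sketched in a few lines. The following is a minimal, self-contained KNN example; the access-frequency and recency features, the labels, and all numeric values are illustrative assumptions, not data from the paper.

```python
import math

def knn_classify(query, examples, k=3):
    """Label a data block 'hot' or 'cold' by majority vote among its
    k nearest labeled neighbors in feature space."""
    # examples: list of ((access_freq, hours_since_access), label) pairs
    nearest = sorted(examples, key=lambda e: math.dist(query, e[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical training history: (accesses/hour, hours since last access)
history = [
    ((120, 0.5), "hot"), ((95, 1.0), "hot"), ((80, 2.0), "hot"),
    ((3, 48.0), "cold"), ((1, 72.0), "cold"), ((5, 36.0), "cold"),
]

print(knn_classify((100, 1.5), history))  # frequently accessed block -> hot
print(knn_classify((2, 60.0), history))   # rarely accessed block -> cold
```

A cache controller could use such a classifier to decide which blocks to pin in fast storage; in practice the features would come from access logs rather than hand-written tuples.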
Reinforcement learning (RL) can facilitate dynamic storage allocation. From historical data and observed system usage, an RL agent can learn allocation policies: by interacting with its environment and collecting feedback, it adjusts its storage-allocation strategy in real time to meet workload demands. Storage management also benefits from predictive analytics, whether supervised or unsupervised, since past access patterns and resource usage indicate future storage needs. By modeling data flow and allocating resources proactively, this approach helps stabilize systems and minimize storage congestion.
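The RL-based allocation described above can be sketched with tabular Q-learning. This is a toy illustration under stated assumptions: the two load states, the two resize actions, and the reward function (hit-rate gain minus memory cost) are all invented for the example, not taken from the paper.

```python
import random

random.seed(0)

# Coarse workload states and cache-resizing actions (illustrative).
STATES = ["low_load", "high_load"]
ACTIONS = ["shrink_cache", "grow_cache"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Made-up stand-in for measured feedback: growing under high load
    # improves hit rate; shrinking under low load saves memory.
    good = (state == "high_load" and action == "grow_cache") or \
           (state == "low_load" and action == "shrink_cache")
    return 1.0 if good else -1.0

alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
state = random.choice(STATES)
for _ in range(500):
    if random.random() < eps:                       # explore
        action = random.choice(ACTIONS)
    else:                                           # exploit current policy
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    nxt = random.choice(STATES)                     # workload shifts randomly
    Q[(state, action)] += alpha * (
        r + gamma * max(Q[(nxt, a)] for a in ACTIONS) - Q[(state, action)]
    )
    state = nxt

best = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(best)  # learned policy: grow under high load, shrink under low load
```

A production agent would observe real hit rates and memory pressure instead of this synthetic reward, but the update rule (the standard Q-learning temporal-difference step) is the same.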