TY - GEN
T1 - Optimized Cache Placement in Edge Computing Using Machine Learning-Based Predictive Models
AU - Mustofa, Yasin Aril
AU - Syarif, Syafruddin
AU - Niswar, Muhammad
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Cache placement in edge computing is crucial for optimizing data retrieval efficiency, particularly in environments characterized by fluctuating content demand and constrained storage capacity. This study proposes a machine learning-driven approach to enhance cache placement decisions, employing Logistic Regression and Support Vector Machines as predictive models. To refine classification accuracy and computational efficiency, the models are optimized through gradient-based and second-order techniques, including Stochastic Gradient Descent, Mini-Batch Gradient Descent, Momentum Gradient Descent, Adam, and Newton's Method. The methodology involves processing real-world access data, where key features such as response time, access frequency, and file size inform caching decisions. Experimental evaluations reveal that Newton's Method delivers the highest classification accuracy (97.5%) with rapid convergence, while Momentum Gradient Descent achieves the highest cache hit rate (13%) at a cache size of 150 MB. Furthermore, the proposed machine learning models significantly surpass traditional caching strategies such as Least Recently Used and Least Frequently Used, achieving up to a 100% improvement in cache hit rate. These findings underscore the effectiveness of adaptive, data-driven caching techniques in enhancing network performance and reducing latency in real-time edge computing environments.
AB - Cache placement in edge computing is crucial for optimizing data retrieval efficiency, particularly in environments characterized by fluctuating content demand and constrained storage capacity. This study proposes a machine learning-driven approach to enhance cache placement decisions, employing Logistic Regression and Support Vector Machines as predictive models. To refine classification accuracy and computational efficiency, the models are optimized through gradient-based and second-order techniques, including Stochastic Gradient Descent, Mini-Batch Gradient Descent, Momentum Gradient Descent, Adam, and Newton's Method. The methodology involves processing real-world access data, where key features such as response time, access frequency, and file size inform caching decisions. Experimental evaluations reveal that Newton's Method delivers the highest classification accuracy (97.5%) with rapid convergence, while Momentum Gradient Descent achieves the highest cache hit rate (13%) at a cache size of 150 MB. Furthermore, the proposed machine learning models significantly surpass traditional caching strategies such as Least Recently Used and Least Frequently Used, achieving up to a 100% improvement in cache hit rate. These findings underscore the effectiveness of adaptive, data-driven caching techniques in enhancing network performance and reducing latency in real-time edge computing environments.
KW - Cache Hit Rate
KW - Cache Placement
KW - Edge Computing
KW - Machine Learning
KW - Optimization Algorithms
UR - https://www.scopus.com/pages/publications/105007728047
U2 - 10.1109/ICOEI65986.2025.11013214
DO - 10.1109/ICOEI65986.2025.11013214
M3 - Conference contribution
AN - SCOPUS:105007728047
T3 - Proceedings of 8th International Conference on Trends in Electronics and Informatics, ICOEI 2025
SP - 1132
EP - 1137
BT - Proceedings of 8th International Conference on Trends in Electronics and Informatics, ICOEI 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 8th International Conference on Trends in Electronics and Informatics, ICOEI 2025
Y2 - 24 April 2025 through 26 April 2025
ER -