METHODS FOR SECURING AND PROCESSING PERSONALIZED DATA IN ADAPTIVE CONTENT GENERATION SYSTEMS
DOI: https://doi.org/10.30857/2786-5371.2026.1.6

Keywords: adaptive content generation, personalized data, differential privacy, federated learning, encryption, security architecture

Abstract
Purpose. The aim of this article is to develop a comprehensive approach to ensuring the security and proper handling of personalized data in adaptive content generation systems. The proposed approach is based on the formalization of the user data space, the application of anonymization and pseudonymization techniques, the use of Differential Privacy (DP) mechanisms, Federated Learning (FL), and the integration of modern cryptographic and architectural security measures.
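The pseudonymization mentioned in the purpose statement can be sketched with a keyed hash over a direct identifier; this is a generic illustration, not the article's implementation, and the key and identifiers below are hypothetical:

```python
import hashlib
import hmac

def pseudonymize(user_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, the keyed construction resists dictionary
    attacks by anyone who does not hold the key, and destroying the
    key later turns pseudonymized records into effectively anonymous ones.
    """
    return hmac.new(secret_key, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical key; in practice it would be stored in a key vault.
key = b"demo-key"
token = pseudonymize("user-42", key)
# Deterministic per key, so records stay linkable for analytics,
# while a different key yields an unlinkable token space.
assert token == pseudonymize("user-42", key)
assert token != pseudonymize("user-42", b"other-key")
```

Determinism is the design point: the same user maps to the same token across datasets held by one controller, yet the mapping cannot be reversed or replayed without the key.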
Methodology. The study relies on a systemic analysis of adaptive system architectures and the multidimensional feature space of user data. The risk of de-anonymization is evaluated using mathematical models of Differential Privacy, while distributed data are protected through Federated Learning protocols with secure gradient aggregation. A comparative analysis of modern data protection methods, threat modeling, and the design of a multi-level security architecture based on Zero Trust principles, Role-Based Access Control (RBAC), encryption, and event auditing were also carried out.
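The ε-Differential Privacy model referenced above can be illustrated with the standard Laplace mechanism (a generic sketch under textbook assumptions, not the article's code): a numeric query of sensitivity Δ is released with additive Laplace(0, Δ/ε) noise.

```python
import random

def dp_release(true_value: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a numeric statistic under epsilon-differential privacy.

    The Laplace mechanism adds Laplace(0, b) noise with scale
    b = sensitivity / epsilon; a smaller epsilon gives stronger privacy
    but a noisier answer. The Laplace variate is sampled as the
    difference of two i.i.d. exponential variates with rate 1 / b.
    """
    rate = epsilon / sensitivity  # = 1 / b
    noise = random.expovariate(rate) - random.expovariate(rate)
    return true_value + noise

# Example: a count query (sensitivity 1) released at epsilon = 0.5,
# i.e. noise scale b = 2.
noisy_count = dp_release(1280.0, epsilon=0.5)
```

The same ε budget quantifies the de-anonymization risk during model training: each released statistic or gradient consumes part of the budget, and composition theorems bound the total leakage.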
Findings. A formalized model of the personalized data space was proposed that enables classification of data by sensitivity level, definition of allowable transformations, and integration with anonymization techniques. The effectiveness of ε-Differential Privacy for controlling de-anonymization risk during model training was demonstrated. A generalized scheme combining Federated Learning with cryptographic secure aggregation protocols was developed, providing user data confidentiality without compromising model accuracy. A multi-layered security architecture was designed, incorporating data encryption, access control, auditing, and monitoring, ensuring a balance between security, scalability, and the efficiency of content generation.
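The combination of Federated Learning with cryptographic secure aggregation can be sketched with pairwise additive masks, in the spirit of Bonawitz et al. (2017) from the reference list. This is a didactic sketch only: key agreement, dropout recovery, and quantization are omitted, and all client identifiers and seeds are illustrative.

```python
import random

def mask_update(update, client_id, peer_seeds):
    """Add pairwise masks that cancel in the server-side sum.

    Each pair of clients shares a seed (in a real protocol, derived via
    key agreement). The lower-id client adds the pseudorandom mask and
    the higher-id client subtracts it, so the server can compute the
    sum of updates without ever seeing an individual client's update.
    """
    masked = list(update)
    for peer_id, seed in peer_seeds.items():
        rng = random.Random(seed)
        sign = 1.0 if client_id < peer_id else -1.0
        for i in range(len(masked)):
            masked[i] += sign * rng.uniform(-1.0, 1.0)
    return masked

# Three clients with local model updates (e.g. gradients), one seed per pair.
updates = {0: [1.0, 2.0], 1: [3.0, 4.0], 2: [5.0, 6.0]}
pair_seed = {frozenset({0, 1}): 11, frozenset({0, 2}): 12, frozenset({1, 2}): 13}
masked = {
    c: mask_update(u, c, {p: pair_seed[frozenset({c, p})] for p in updates if p != c})
    for c, u in updates.items()
}

# Server-side federated averaging over masked updates: the masks cancel
# pairwise, so the average equals the average of the raw updates.
n = len(updates)
avg = [sum(m[i] for m in masked.values()) / n for i in range(len(updates[0]))]
```

Because the server only ever handles masked vectors, model accuracy is preserved (the aggregate is exact up to floating-point rounding) while individual updates remain confidential.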
Originality. The novelty lies in the integrated combination of a formal personalized data model with Differential Privacy and Federated Learning mechanisms within a unified security architectural framework, simultaneously ensuring confidentiality, scalability, and efficient data processing in adaptive content generation systems.
Practical value. The results can be applied in the development of educational platforms, gaming systems, recommendation services, and other intelligent systems that operate with personalized user profiles and require a high level of data protection and compliance with modern information security standards.
References
Zavhorodnii V. V., Zavhorodnia H. A., Valiavska N. O., Adamenko V. S., Dorohovtsev Ye. V., Nesmachnyi P. V. A method for automatic content generation based on procedural algorithms. Scientific Notes of Taurida National V. I. Vernadsky University. Series: Technical Sciences. 2022. Vol. 33 (72), No. 1. P. 91–96. DOI: https://doi.org/10.32838/2663-5941/2022.1/15.
Zavhorodnia H. A., Zavhorodnii V. V. Using machine learning algorithms for dynamic difficulty adaptation in computer games. Taurida Scientific Herald. Series: Technical Sciences. 2025. No. 1(5). P. 156–163. DOI: https://doi.org/10.32782/tnv-tech.2025.5.1.16.
Zavhorodnia H. A., Zavhorodnii V. V. Development of a scalable distributed architecture for massively multiplayer online systems. Bulletin of Kherson National Technical University. 2025. No. 4(95), Part 3. P. 99–106. DOI: https://doi.org/10.35546/kntu2078-4481.2025.4.3.11.
Zavhorodnia H. A., Zavhorodnii V. V. Modeling player behavior with neural network agents. Scientific Notes of Taurida National V. I. Vernadsky University. Series: Technical Sciences. 2025. Vol. 36 (75), No. 5, Part 2. P. 141–145. DOI: https://doi.org/10.32782/2663-5941/2025.6.2/20.
Rocher L., Hendrickx J. M., de Montjoye Y.-A. Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications. 2019. Vol. 10. Article 3069. DOI: https://doi.org/10.1038/s41467-019-10933-3.
Abadi M., Chu A., Goodfellow I. et al. Deep learning with differential privacy. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (ACM CCS). 2016. P. 308–318. DOI: https://doi.org/10.1145/2976749.2978318.
Kairouz P., McMahan H. B., Avent B. et al. Advances and open problems in federated learning. Foundations and Trends in Machine Learning. 2021. Vol. 14, No. 1–2. P. 1–210. DOI: https://doi.org/10.1561/2200000083.
Bonawitz K., Ivanov V., Kreuter B. et al. Practical secure aggregation for privacy-preserving machine learning. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS '17). 2017. P. 1175–1191. DOI: https://doi.org/10.1145/3133956.3133982.
Li T., Sahu A. K., Talwalkar A., Smith V. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine. 2020. Vol. 37, No. 3. P. 50–60. DOI: https://doi.org/10.1109/MSP.2020.2975749.
Geyer R. C., Klein T., Nabi M. Differentially private federated learning: A client level perspective. arXiv:1712.07557. 2017. DOI: https://doi.org/10.48550/arXiv.1712.07557.
Rose S., Borchert O., Mitchell S., Connelly S. Zero Trust Architecture. NIST Special Publication 800-207. Gaithersburg, MD: National Institute of Standards and Technology, 2020. DOI: https://doi.org/10.6028/NIST.SP.800-207.
Gosselin R., Vieu L., Loukil F., Benoit A. Privacy and security in federated learning: A survey. Applied Sciences. 2022. Vol. 12, No. 19. Art. 9901. DOI: https://doi.org/10.3390/app12199901.
Shokri R., Stronati M., Song C., Shmatikov V. Membership inference attacks against machine learning models. 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 2017. P. 3–18. DOI: https://doi.org/10.1109/SP.2017.41.
Veale M., Binns R., Edwards L. Algorithms that remember: Model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2018. Vol. 376, No. 2133. Art. 20180083. DOI: https://doi.org/10.1098/rsta.2018.0083.
Barański S. A Survey on Privacy-Preserving Machine Learning Inference. TASK Quarterly. 2024. Vol. 28, No. 2. DOI: https://doi.org/10.34808/tq2024/28.2/b.
License
Copyright (c) 2026 Hanna ZAVHORODNIA, Valerii ZAVHORODNII, Andrii SAVCHENKO, Andrii LEMESHKO

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.