
Construction of a Multi-party Collaborative Emergency Warning Model Based on Federated Machine Learning and Research on Privacy Protection Mechanisms

Guanchao Peng, Ruiliang Ma

Abstract


Aiming at the problem of data silos in traditional centralized warning models, this paper proposes a method for constructing a multi-party collaborative emergency warning model based on federated machine learning. The method uses federated learning to enable collaborative training across multiple parties' data while protecting each party's data privacy. The research focuses on privacy protection mechanisms suitable for emergency warning scenarios, including differential privacy and homomorphic encryption [4]. Experimental verification shows that the proposed model significantly improves early-warning accuracy while ensuring privacy security.
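
To make the general idea concrete, the sketch below shows one way a federated averaging round (in the spirit of [1]) can be combined with per-party update clipping and Gaussian noise, a common differential-privacy-style safeguard [3]. This is an illustrative Python sketch only, not the paper's implementation: the logistic-regression local model, the clip norm, the noise scale, and all function and variable names are hypothetical choices made for the example.

# Illustrative sketch: one federated averaging round with clipped,
# noised client updates. Not the paper's implementation; model and
# parameters are hypothetical.
import numpy as np

def local_update(global_weights, features, labels, lr=0.1, epochs=5):
    """Plain logistic-regression gradient descent on one party's private data."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-features @ w))
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w - global_weights  # only the model update leaves the party

def privatize(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip the update's L2 norm and add Gaussian noise before upload."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def federated_round(global_weights, parties, rng=None):
    """Server aggregates noisy updates, weighted by local sample counts."""
    updates, sizes = [], []
    for features, labels in parties:
        upd = local_update(global_weights, features, labels)
        updates.append(privatize(upd, rng=rng))
        sizes.append(len(labels))
    weights = np.array(sizes, dtype=float) / sum(sizes)
    return global_weights + sum(w * u for w, u in zip(weights, updates))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 8
    true_w = rng.normal(size=dim)
    # Three hypothetical parties, each holding synthetic local data.
    parties = []
    for _ in range(3):
        X = rng.normal(size=(200, dim))
        y = (X @ true_w > 0).astype(float)
        parties.append((X, y))
    w = np.zeros(dim)
    for _ in range(20):
        w = federated_round(w, parties, rng=rng)
    print("trained weight norm:", np.linalg.norm(w))

In this setup only clipped, noise-perturbed model updates are exchanged with the server, while the raw emergency data never leaves each party; homomorphic encryption [4] would additionally allow the server to aggregate updates without seeing them in the clear.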

Keywords


Federated machine learning; Multi-party collaboration; Emergency warning; Privacy protection; Differential privacy [3]


References


[1] McMahan, H. B., Moore, E., Ramage, D., Hampson, S., & Arcas, B. A. Y. (2017). Communication-efficient learning of deep networks from decentralized data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 1273–1282.

[2] Yang, Q., Liu, Y., Chen, T., & Tong, Y. (2019). Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology, 10(2), 1–19.

[3] Dwork, C. (2006). Differential privacy. Proceedings of the 33rd International Conference on Automata, Languages and Programming (ICALP), 1–12.

[4] Gentry, C. (2009). A fully homomorphic encryption scheme. Ph.D. Thesis, Stanford University.

[5] Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., & Bacon, D. (2016). Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492.

[6] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780.

[7] Gers, F. A., Schmidhuber, J., & Cummins, F. (2000). Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10), 2451–2471.

[8] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1), 1929–1958.




DOI: http://dx.doi.org/10.18686/ahe.v8i9.13876
