An interesting work. What is the computational complexity of your algorithm? Are there any preliminary simulation results?
The amount of computation is relatively small because each device holds only a small amount of local data. However, the algorithm runs a continuous loop of training rounds to improve the accuracy of the model. I am very sorry that I haven't built the full algorithm yet, so I have not run any simulations. Thank you very much for your comment!
ZHANG, Jun
January 19, 2022 3:32 pm
Please elaborate on the proposed solution for problem 2. How do you “calculate the confidence intervals with a 95% confidence level”? And what privacy guarantee does this system provide? Is there any innovation from you on this aspect?
I build on FedAvg by applying a confidence filter to the client models before they are averaged. Traditional federated learning algorithms only provide whatever privacy the algorithm itself has; I surveyed the privacy issues this algorithm would face and propose solutions to those issues.
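A minimal sketch of one way such a confidence filter could work (illustrative only; the per-client summary statistic and the 1.96 cutoff are my own assumptions, not necessarily the exact method on the poster):

```python
# Sketch only: drop client updates whose parameters look like outliers relative
# to a 95% interval (mean +/- 1.96 * std, assuming roughly normal statistics),
# then average the remaining updates as in FedAvg.
import numpy as np

def confidence_filtered_fedavg(client_weights, z=1.96):
    """client_weights: list of flattened parameter vectors, one per device."""
    stacked = np.stack(client_weights)        # shape: (num_clients, num_params)
    scores = stacked.mean(axis=1)             # one summary statistic per client
    mu, sigma = scores.mean(), scores.std(ddof=1)
    keep = np.abs(scores - mu) <= z * sigma   # clients inside the 95% interval
    kept = stacked[keep] if keep.any() else stacked  # fall back to plain FedAvg
    return kept.mean(axis=0)
```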
CAO, Xuanyu
January 19, 2022 3:18 pm
Interesting. I think the title is too broad. You mainly address the privacy aspect of FL.
There are numerous methods for privacy-preserving distributed learning, e.g., noise insertion, cryptographic approaches, etc. Why did you choose the method in the poster? Did you invent it, or is it an existing approach?
Secure Aggregation is an existing approach; the other methods are my own. Secure Aggregation is a kind of cryptographic approach, but I didn't consider more options… Thank you very much for asking this question. I will test and compare more approaches in the near future, thank you!
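For readers unfamiliar with it, here is a toy sketch of the pairwise-masking idea behind Secure Aggregation (heavily simplified: no dropped clients and no key agreement; shared seeds stand in for the pairwise secrets that real protocols derive cryptographically):

```python
# Toy sketch of pairwise masking: each pair of clients shares a mask that one
# adds and the other subtracts, so the masks cancel when the server sums the
# updates and the server never sees any individual update in the clear.
import numpy as np

def masked_update(client_id, update, num_clients):
    masked = update.astype(np.float64).copy()
    for other in range(num_clients):
        if other == client_id:
            continue
        # Both clients of a pair derive the same mask from a shared seed.
        seed = hash((min(client_id, other), max(client_id, other))) % (2**32)
        mask = np.random.default_rng(seed).normal(size=update.shape)
        masked += mask if client_id < other else -mask
    return masked

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]
masked = [masked_update(i, u, 3) for i, u in enumerate(updates)]
assert np.allclose(sum(masked), sum(updates))  # sum preserved, inputs hidden
```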
Could you elaborate on the algorithm?
The second image shows the process of the federated algorithm. The center transmits the initial model to each device, and each device then trains the model on its local data. Since the amount of local data is too small to train a good model alone, and the data is time-sensitive, the trained models are transmitted back to the center and aggregated with the other devices' models using the FedAvg algorithm. The center then distributes the aggregated model to the devices for a new round of federated learning.
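A minimal sketch of that round structure (illustrative only; `local_train` is a placeholder for whatever model and optimizer each device actually runs, and the data-size weighting follows standard FedAvg):

```python
# Sketch of one federated round: broadcast -> local training -> weighted average.
import numpy as np

def local_train(global_weights, local_data):
    """Placeholder for on-device training; returns updated weights and sample count."""
    # In practice this would run a few epochs of SGD on the device's data.
    return global_weights + 0.01 * np.sign(local_data.mean()), len(local_data)

def fedavg_round(global_weights, device_datasets):
    results = [local_train(global_weights, d) for d in device_datasets]
    total = sum(n for _, n in results)
    # FedAvg: weight each device's model by its share of the total data.
    return sum(w * (n / total) for w, n in results)

rng = np.random.default_rng(0)
weights = np.zeros(5)                                    # initial global model
datasets = [rng.normal(loc=1.0, size=20) for _ in range(4)]
for _ in range(3):                                       # repeated rounds of the loop
    weights = fedavg_round(weights, datasets)
```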