SS02a-21 Edge AI

SHEN, Shanpu
January 19, 2022 3:37 pm

An interesting work. What is the computational complexity of your algorithm? Are there any preliminary simulation results?

January 19, 2022 4:15 pm
Reply to  SHEN, Shanpu

The amount of computation is relatively small because the amount of local data is small. However, the algorithm runs in a continuous loop to improve the model's accuracy. I am very sorry that I haven't built the full algorithm yet, so I haven't simulated it. Thank you very much for your comment!

ZHANG, Jun
January 19, 2022 3:32 pm

Please elaborate on the proposed solution for problem 2. How do you "calculate the confidence intervals with a 95% confidence level"? And what privacy guarantee does this system provide — is there any innovation from you on this aspect?

January 19, 2022 4:10 pm
Reply to  ZHANG, Jun

I build on FedAvg by applying a confidence filter to the client models before they are averaged. Traditional federated learning algorithms provide only the privacy inherent in the algorithm itself; I survey the privacy issues this algorithm would have and propose solutions to those issues.
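The thread does not spell out how the confidence filter works, so here is a minimal sketch of the idea, assuming each client model arrives as a flat list of weights together with a scalar confidence score; the function name, threshold, and scoring are all illustrative, not the poster's actual implementation.

```python
def fedavg_with_confidence_filter(client_models, confidences, threshold=0.95):
    """Average only the client models whose confidence passes the threshold.

    client_models: list of models, each a flat list of float weights.
    confidences:   one scalar confidence score per client model.
    """
    # Keep only the models judged trustworthy enough to aggregate.
    kept = [m for m, c in zip(client_models, confidences) if c >= threshold]
    if not kept:
        raise ValueError("no client model passed the confidence filter")
    # Plain FedAvg-style parameter-wise mean over the surviving models.
    n_params = len(kept[0])
    return [sum(m[i] for m in kept) / len(kept) for i in range(n_params)]

# Example: the third (low-confidence) model is excluded from the average.
avg = fedavg_with_confidence_filter(
    [[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]],
    [0.99, 0.97, 0.50],
)
```

A production version would weight the average by local dataset size, as FedAvg does, rather than taking an unweighted mean.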

CAO, Xuanyu
January 19, 2022 3:18 pm

Interesting. I think the title is too broad. You mainly address the privacy aspect of FL.

There are numerous methods for privacy-preserving distributed learning, e.g., noise insertion, cryptographic approaches, etc. Why did you choose the method in the poster? Did you invent it, or is it an existing approach?

January 19, 2022 3:50 pm
Reply to  CAO, Xuanyu

Secure Aggregation is an existing approach; the other methods are my own. Secure Aggregation is a kind of cryptographic approach, but I didn't consider more options… Thank you very much for asking this question. I will test and compare more approaches in the near future, thank you!

SINGH, Dilsher
January 19, 2022 2:25 pm

Could you elaborate on the algorithm?

January 19, 2022 2:58 pm
Reply to  SINGH, Dilsher

The second image shows the process of the federated learning algorithm. The center transmits an initial model to each device, and each device then trains the model on its local data. Since the amount of local data is too small to train a good model on its own, and the data is time-sensitive, each trained model is transmitted back to the center and combined with the other devices' models using the FedAvg algorithm. The center then distributes the aggregated model to the devices for a new round of federated learning.
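The round described above can be sketched in a few lines. This is a toy, assuming a one-parameter least-squares model so the "training" is a single gradient step; the function names and learning rate are illustrative, not the poster's code.

```python
def local_train(model, data, lr=0.1):
    """Toy local update: gradient steps for a 1-D least-squares fit y ~ w*x."""
    w = model
    for x, y in data:
        w -= lr * 2.0 * x * (w * x - y)  # gradient of (w*x - y)^2 w.r.t. w
    return w

def fedavg(models, sizes):
    """FedAvg: average client models weighted by local dataset size."""
    total = sum(sizes)
    return sum(m * n for m, n in zip(models, sizes)) / total

def federated_round(global_model, device_datasets):
    # 1. The center broadcasts the current global model to every device.
    # 2. Each device trains the model on its own local data.
    local_models = [local_train(global_model, d) for d in device_datasets]
    # 3. The center aggregates the returned models with FedAvg and the
    #    result becomes the starting point of the next round.
    return fedavg(local_models, [len(d) for d in device_datasets])

# Two devices, each holding one sample of the relation y = 2*x.
new_model = federated_round(0.0, [[(1.0, 2.0)], [(1.0, 2.0)]])
```

Each call to `federated_round` moves the global model toward the value that fits the devices' combined data, without any raw data leaving a device.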