Current Research Interests in Federated Learning

Trustworthy Federated Learning

Trustworthy federated learning research aims to ensure the reliability, security, fairness, and transparency of decentralized machine learning systems. Key challenges include designing trust mechanisms to evaluate client contributions, establishing secure communication channels, ensuring fairness in model training, enhancing robustness against attacks, creating incentive mechanisms for client cooperation, developing interpretable models, enabling accountability and auditing, and complying with ethical guidelines. 
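
As a concrete illustration of one such trust mechanism, the Python sketch below weights each client's contribution by a score computed on a small server-side validation set. The helper names (score_client, trusted_aggregate) and the simple linear scoring model are hypothetical, chosen only to keep the example self-contained.

import numpy as np

def score_client(update, val_inputs, val_targets, base_model):
    """Hypothetical trust score: accuracy of the client's update on a small
    server-side validation set, using a toy linear model for illustration."""
    candidate = base_model + update                 # apply the client's proposed delta
    logits = val_inputs @ candidate
    preds = (logits > 0).astype(int)
    return (preds == val_targets).mean()            # accuracy in [0, 1]

def trusted_aggregate(base_model, client_updates, val_inputs, val_targets):
    """Aggregate client updates, weighting each by its trust score."""
    scores = np.array([score_client(u, val_inputs, val_targets, base_model)
                       for u in client_updates])
    if scores.sum() > 0:
        weights = scores / scores.sum()
    else:
        weights = np.ones(len(scores)) / len(scores)
    return base_model + sum(w * u for w, u in zip(weights, client_updates))

# Toy usage (assumption: 2 clients, a 5-dimensional linear model, random data).
rng = np.random.default_rng(0)
base = rng.normal(size=5)
updates = [rng.normal(scale=0.1, size=5) for _ in range(2)]
val_x, val_y = rng.normal(size=(20, 5)), rng.integers(0, 2, size=20)
print(trusted_aggregate(base, updates, val_x, val_y))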

Privacy Preservation in Federated Learning

Privacy preservation in federated learning is a crucial research area focused on protecting sensitive user data during the collaborative machine learning process. Key challenges include developing effective privacy-preserving techniques, mitigating data leakage, ensuring scalability, handling heterogeneous data, maintaining robustness against adversarial attacks, balancing performance trade-offs, and complying with privacy regulations. 
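
One widely used building block for such techniques is to clip each client's update and add Gaussian noise before it leaves the device, in the spirit of differential privacy. The sketch below is a minimal illustration; the clip norm and noise multiplier are placeholder values, not a calibrated (epsilon, delta) privacy budget.

import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Clip a client's model update and add Gaussian noise (DP-SGD style).
    The parameters here are illustrative assumptions, not a tuned privacy budget."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound the client's influence
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Toy usage: privatize one client's update before sending it to the server.
rng = np.random.default_rng(0)
raw_update = rng.normal(size=10)
print(privatize_update(raw_update, rng=rng))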

One-shot Federated Learning

One-shot federated learning is a streamlined variant in which clients and the central server engage in a single communication round to create a global model. This method minimizes communication overhead and accelerates the learning process, but it presents unique challenges in maintaining model quality, preserving privacy, selecting clients, and handling heterogeneity.
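
In its simplest form, each client trains a local model on its own data and uploads it once, and the server combines the uploads in a single weighted average. The sketch below illustrates that single round with a toy logistic-regression learner; the helper names (local_train, one_shot_aggregate) are hypothetical.

import numpy as np

def local_train(features, labels, lr=0.1, epochs=50):
    """Toy local training: logistic-regression weights fitted by gradient descent."""
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-features @ w))
        w -= lr * features.T @ (preds - labels) / len(labels)
    return w

def one_shot_aggregate(client_models, client_sizes):
    """Single communication round: average client models weighted by dataset size."""
    weights = np.array(client_sizes) / sum(client_sizes)
    return sum(w * m for w, m in zip(weights, client_models))

# Toy usage: two clients train locally, then one upload and one aggregation.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(30, 4)), rng.integers(0, 2, size=30)) for _ in range(2)]
models = [local_train(x, y) for x, y in clients]
global_model = one_shot_aggregate(models, [len(y) for _, y in clients])
print(global_model)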

Learning Efficiency of Federated Learning

The learning efficiency of federated learning is a crucial research area focused on optimizing performance, resource utilization, and communication in distributed machine learning systems. Key challenges include minimizing communication overhead, applying model compression techniques, incorporating adaptive learning rates, optimizing client selection strategies, managing and allocating resources, handling client heterogeneity, supporting asynchronous model updates, and improving convergence properties. 
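
As one example of reducing communication overhead, the sketch below applies top-k sparsification, transmitting only the k largest-magnitude entries of each update. The helper names and the 10% compression setting are illustrative assumptions, not a specific published scheme.

import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of an update (top-k compression).
    Returns the indices and values that would actually be transmitted."""
    idx = np.argsort(np.abs(update))[-k:]
    return idx, update[idx]

def densify(idx, values, size):
    """Server-side reconstruction of the sparse update into a dense vector."""
    dense = np.zeros(size)
    dense[idx] = values
    return dense

# Toy usage: transmit 10% of a 100-dimensional update.
rng = np.random.default_rng(0)
update = rng.normal(size=100)
idx, vals = top_k_sparsify(update, k=10)
print("compression ratio:", len(vals) / len(update))
print("reconstruction error:", np.linalg.norm(update - densify(idx, vals, 100)))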

Non-IID Data in Federated Learning

Non-independent and identically distributed (non-IID) data in federated learning pose significant challenges to the performance and generalization of distributed machine learning systems. Key research questions focus on understanding the impact of non-IID data on model convergence, designing algorithms to handle such data, quantifying heterogeneity, optimizing client selection, and incorporating personalization and fairness techniques.
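
A common way to simulate and quantify this heterogeneity in experiments is to partition a dataset across clients using a Dirichlet distribution over labels, where a smaller concentration parameter alpha yields more skewed, more non-IID clients. A minimal sketch, assuming synthetic labels:

import numpy as np

def dirichlet_partition(labels, num_clients, alpha, rng=None):
    """Split sample indices across clients with label proportions drawn from
    Dirichlet(alpha); smaller alpha means more skewed, more non-IID clients."""
    rng = rng or np.random.default_rng()
    clients = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cut_points)):
            client.extend(part.tolist())
    return clients

# Toy usage: 1000 samples, 10 classes, 5 clients, strong skew (alpha = 0.1).
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
parts = dirichlet_partition(labels, num_clients=5, alpha=0.1, rng=rng)
print([len(p) for p in parts])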

Federated Learning Applications

Federated learning has numerous applications across various domains, with researchers exploring its potential for collaborative machine learning while preserving data privacy. Examples include healthcare for training diagnostic models, natural language processing for improving language models, smart cities for data-driven optimization, finance for fraud detection and credit risk prediction, and the Internet of Things for local data processing. These applications demonstrate federated learning’s versatility and its ability to enable secure, privacy-preserving machine learning in diverse real-world scenarios.

Distribution Shift in Federated Learning

My research in this area focuses on tackling the pervasive but often overlooked issue of distribution shift across decentralized nodes, where the data distribution seen during training differs from the distribution encountered at deployment or varies across clients.

Multi-Modal Federated Learning

A nascent yet promising area I work on is federated learning with multi-modal data fusion, which aims to leverage heterogeneous data types, such as text, images, and sensor data, for decentralized machine learning. Specifically, I focus on developing robust algorithms capable of aggregating multi-modal local models into a cohesive global model, optimizing performance while preserving privacy.
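
One natural aggregation strategy, sketched below under simplifying assumptions, is to keep per-modality parameters and average each modality's parameters only across the clients that actually hold that modality. The dictionary-based model representation is hypothetical, used only to make the idea concrete.

import numpy as np

def aggregate_multimodal(client_models):
    """Average each modality's parameters over the clients that hold that modality.
    Each client model is a dict mapping modality name -> parameter vector."""
    modalities = {m for model in client_models for m in model}
    global_model = {}
    for m in modalities:
        stacked = [model[m] for model in client_models if m in model]
        global_model[m] = np.mean(stacked, axis=0)
    return global_model

# Toy usage: one client holds text + image encoders, another holds image + sensor encoders.
rng = np.random.default_rng(0)
client_a = {"text": rng.normal(size=4), "image": rng.normal(size=4)}
client_b = {"image": rng.normal(size=4), "sensor": rng.normal(size=4)}
print(aggregate_multimodal([client_a, client_b]))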

Long-term Research Interests in Multidisciplinary AI

I am interested in applying AI technologies (computer vision, natural language processing, data analytics, etc.) to help researchers from other disciplines, who may have no AI background, find the best solutions for their research topics. I also pursue joint grant applications and joint supervision opportunities to cultivate Ph.D. students with the comprehensive abilities to solve research problems using state-of-the-art techniques. Relevant areas include health, agriculture, astronomy, and engineering. I am always open to research questions from other disciplines, and I try to stay hungry and stay foolish toward the knowledge I do not yet have.

One Research Question

Do deep neural network models have a memory that allows them to remember things the way humans do, rather than merely comparing features?
