
Federated Fine-Tuning of Large Language Models for Intelligent Automotive Systems with Low-Rank Adaptation
Large Language Models (LLMs) offer significant benefits for intelligent automotive systems, such as enhanced natural language understanding, improved user interaction, and more intelligent decision-making. However, their integration faces important challenges, including data heterogeneity, limited on-board computational resources, and the critical need to safeguard user privacy. Federated Learning (FL) offers a promising solution by enabling decentralized training across distributed data sources without exposing raw user data. This paper proposes a novel FL framework for in-vehicle systems that addresses these challenges. The method introduces a robust aggregation algorithm based on the L2 norm between LLM increments, which mitigates data inconsistencies across clients and improves model generalization. Moreover, by adopting Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning, the framework reduces computational and communication overhead while preserving privacy. Comprehensive experiments show that the proposed method outperforms state-of-the-art FL methods, achieving a Vicuna score of 8.17, a harmless answer rate of 68.65% on AdvBench, and an average MT-Bench score of 3.74. These results highlight the potential of LoRA-based federated fine-tuning to advance intelligent automotive systems through enhanced adaptability and privacy preservation.
Jinhua Chen, Franck Junior Aboya Messou, Shilong Zhang, Tong Liu, and Keping Yu, Hosei University, Japan; Dusit Niyato, Nanyang Technological University, Singapore
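
The abstract only states that aggregation is based on the L2 norm between LLM increments, without giving the exact rule. The sketch below shows one plausible instantiation: each client's LoRA increment is weighted inversely to its L2 distance from a robust reference update, so outlying increments from heterogeneous clients contribute less to the global update. The function name `robust_aggregate`, the median reference, and the inverse-distance weighting are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def robust_aggregate(client_deltas, eps=1e-8):
    """Aggregate flattened client LoRA increments, down-weighting L2 outliers."""
    deltas = np.stack(client_deltas)                    # (num_clients, num_params)
    reference = np.median(deltas, axis=0)               # robust reference increment
    dists = np.linalg.norm(deltas - reference, axis=1)  # L2 deviation of each client
    weights = 1.0 / (dists + eps)                       # closer increments weigh more
    weights /= weights.sum()
    return (weights[:, None] * deltas).sum(axis=0)      # weighted global increment

# Toy usage: two well-behaved clients and one outlier (e.g., highly heterogeneous data).
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.01, 1000) for _ in range(2)] + [rng.normal(0.5, 0.01, 1000)]
global_delta = robust_aggregate(updates)
```

Because only LoRA increments (not full model weights) would be exchanged and aggregated this way, the per-round communication cost stays proportional to the low-rank adapter size rather than the full LLM parameter count.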