Awesome-Federated-LLM-Related-Works

Table of Contents

  • Black-Box Prompting
  • Hallucination/Knowledge Enhanced/Chain of Thought
  • In-Context Learning Related
  • Privacy
  • Distillation
  • Prompting/Fine-tuning/Instruction Tuning
  • Multi-Modal

Black-Box Prompting

  • [2023/10] Efficient Federated Prompt Tuning for Black-Box Large Pre-trained Models Zihao Lin et al. arXiv. [paper]

    • This work proposes Fed-BBPT, which trains prompt generators in a federated manner so that users can adapt pre-trained models (PTMs) without access to model architectures or parameters (a minimal sketch of the underlying black-box tuning loop follows this list).
  • [2023/10] FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models Jingwei Sun et al. arXiv. [paper]

  • [2023/09] Language Models as Black-Box Optimizers for Vision-Language Models Shihong Liu et al. arXiv. [paper]

  • [2023/08] Gradient-Free Textual Inversion Zhengcong Fei et al. arXiv. [paper]

  • [2023/06] Learning to Learn from APIs: Black-Box Data-Free Meta-Learning Zixuan Hu et al. IJCAI 2023. [paper] [code]

  • [2023/03] BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning Changdae Oh et al. CVPR 2023. [paper] [code]

    • This work proposes BlackVIP, which efficiently adapts PTMs without access to their architectures or parameters.
  • [2022/01] Black-Box Prompt Learning for Pre-trained Language Models Shizhe Diao et al. TMLR. [paper] [code]

  • [2022/01] Black-Box Tuning for Language-Model-as-a-Service Tianxiang Sun et al. ICML 2022. [paper]

  • [2022/01] Black-box Prompt Tuning for Vision-Language Model as a Service Lang Yu et al. IJCAI 2023. [paper] [code]
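
The common loop behind several of the works above (e.g., Black-Box Tuning, FedBPT, Fed-BBPT): keep a small latent prompt vector on the client, query the model only through an opaque scoring API, and update the vector with a gradient-free estimator. Below is a minimal Python sketch, assuming a hypothetical query_model scoring endpoint and using SPSA as the gradient-free optimizer (individual papers use CMA-ES or other estimators instead):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # latent prompt dimension; the papers project z up to the token-embedding space

def query_model(z: np.ndarray) -> float:
    """Hypothetical black-box API: returns a task score (higher is better)
    for the prompt decoded from z. A toy quadratic stands in for a real
    LLM inference call here."""
    target = np.linspace(-1.0, 1.0, D)
    return -float(np.sum((z - target) ** 2))

def spsa_step(z: np.ndarray, c: float = 0.1, lr: float = 0.05) -> np.ndarray:
    """One SPSA update: estimates the gradient from only two API queries.
    With +/-1 perturbations, dividing by delta equals multiplying by it."""
    delta = rng.choice([-1.0, 1.0], size=D)
    g_hat = (query_model(z + c * delta) - query_model(z - c * delta)) / (2 * c) * delta
    return z + lr * g_hat  # ascend the estimated gradient

z = rng.normal(scale=0.1, size=D)  # client-local latent prompt
for _ in range(200):
    z = spsa_step(z)
print(f"score after tuning: {query_model(z):.4f}")
```

In the federated variants (Fed-BBPT, FedBPT), each client runs this loop locally and the server aggregates only the latent prompt vectors (e.g., by averaging), so neither gradients nor model weights ever leave the provider.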

Hallucination/Knowledge Enhanced/Chain of Thought

  • [2023/10] Zero-Resource Hallucination Prevention for Large Language Models Junyu Luo et al. arXiv. [paper] [code]

  • [2023/09] CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing Zhibin Gou et al. arXiv. [paper] [code]

  • [2023/09] Making Large Language Models Better Reasoners with Alignment Peiyi Wang et al. arXiv. [paper]

  • [2023/08] Sci-CoT: Leveraging Large Language Models for Enhanced Knowledge Distillation in Small Models for Scientific QA Yuhan Ma et al. arXiv. [paper]

  • [2023/06] Unifying Large Language Models and Knowledge Graphs: A Roadmap Shirui Pan et al. arXiv. [paper]

  • [2023/05] Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback Baolin Peng et al. arXiv. [paper] [code]

  • [2023/05] COOK: Empowering General-Purpose Language Models with Modular and Collaborative Knowledge Shangbin Feng et al. arXiv. [paper]

  • [2023/05] Augmented Large Language Models with Parametric Knowledge Guiding Ziyang Luo et al. arXiv. [paper]

  • [2023/05] Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph Jiashuo Sun et al. arXiv. [paper] [code]

  • [2023/05] Federated Prompting and Chain-of-Thought Reasoning for Improving LLMs Answering Xiangyang Liu et al. arXiv. [paper]

In-Context Learning Related

  • [2023/05] Can We Edit Factual Knowledge by In-Context Learning? Ce Zheng et al. IJCAI 2023. [paper] [code]

Privacy

  • [2023/05] Can Public Large Language Models Help Private Cross-device Federated Learning? Boxin Wang et al. arXiv. [paper]

Distillation

  • [2023/08] Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes Cheng-Yu Hsieh et al. arXiv. [paper] [code]

  • [2023/08] UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition Wenxuan Zhou et al. arXiv. [paper] [page] [code]

  • [2023/05] Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks Minki Kang et al. arXiv. [paper]

Prompting/Fine-tuning/Instruction Tuning

  • [2023/10] Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation Chen Dun et al. arXiv. [paper]

  • [2023/08] Efficient Model Personalization in Federated Learning via Client-Specific Prompt Generation Fu-En Yang et al. ICCV 2023. [paper]

  • [2023/08] FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning Haokun Chen et al. arXiv. [paper]

  • [2023/08] FedLogic: Interpretable Federated Multi-Domain Chain-of-Thought Prompt Selection for Large Language Models Pengwei Xing et al. arXiv. [paper]

  • [2023/08] SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models Sara Babakniya et al. arXiv. [paper]

  • [2023/07] Low-Parameter Federated Learning with Large Language Models Jingang Jiang et al. arXiv. [paper]

  • [2023/05] Towards Building the Federated GPT: Federated Instruction Tuning Jianyi Zhang et al. arXiv. [paper]

  • [2023/05] Instruction Tuned Models are Quick Learners Himanshu Gupta et al. arXiv. [paper]

  • [2023/02] Offsite-Tuning: Transfer Learning without Full Model Guangxuan Xiao et al. arXiv. [paper]

Multi-Modal

  • [2023/10] MMICL: Empowering Vision-Language Model with Multi-Modal In-Context Learning Haozhe Zhao et al. arXiv. [paper] [code]

  • [2023/05] DC-CCL: Device-Cloud Collaborative Controlled Learning for Large Vision Models Yucheng Ding et al. arXiv. [paper]
