This project is an implementation of the paper: Parameter-Efficient Transfer Learning for NLP, Houlsby et al. (Google), ICML 2019.
A LLaMA-2 model fine-tuned to generate Docker commands.
Factuality checking of SemRep predications.
Generative AI Nanodegree program.
For enjoyable brain activity during the winter '23 holiday season.
A causal language model for generating Python docstrings.
An implementation of low-rank adaptation (LoRA), one of the parameter-efficient fine-tuning (PEFT) methods.
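For the from-scratch LoRA implementations above, a minimal PyTorch sketch of the idea may help. This is illustrative only, not code from any listed repository, and the rank/alpha values are arbitrary: the pretrained weight stays frozen while two small factors learn a low-rank update.

```python
# Minimal LoRA sketch (illustrative): effective weight is W + (alpha / r) * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():  # freeze the pretrained layer
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: update starts at 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 2 * r * 768 instead of 768 * 768
```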
Multilingual spelling correction and question-answering large language models.
Parameter- and Energy-Efficient Fine-Tuning
Fine-tuning Google's Vision Transformer with the LoRA technique. Two different LoRA adapters are tuned for separate classification tasks (food and human actions). A simple Gradio interface is implemented to run inference.
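Serving such a classifier through Gradio typically takes only a few lines. A hedged sketch follows; it loads the base ViT checkpoint for the sake of being runnable, where the project above would instead load its tuned LoRA adapters:

```python
# Sketch of a Gradio demo around an image-classification pipeline.
import gradio as gr
from transformers import pipeline

clf = pipeline("image-classification", model="google/vit-base-patch16-224")

def classify(image):
    # Return {label: score} so Gradio's Label component can render top classes.
    return {p["label"]: p["score"] for p in clf(image)}

demo = gr.Interface(
    fn=classify,
    inputs=gr.Image(type="pil"),
    outputs=gr.Label(num_top_classes=3),
    title="ViT classifier demo",
)
demo.launch()
```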
Fine-tuned FLAN-T5 using full instruction fine-tuning, LoRA-based PEFT, and RLHF with PPO.
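The LoRA-based PEFT step in projects like this one usually goes through Hugging Face's peft library. A sketch under assumed hyperparameters (not the repo's actual settings):

```python
# Wrapping FLAN-T5 with a LoRA adapter via the peft library.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                       # rank of the low-rank update (illustrative)
    lora_alpha=32,              # scaling factor (illustrative)
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5's attention query/value projections
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # typically well under 1% of the base model
```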
Open research on natural language processing, dedicated to taxation 🔬
🚂 Fine-tuning large language models
This is the repo for prompt-tuning a language model to improve a given (vague) prompt.
This project leverages FLAN-T5 from Hugging Face to perform dialogue summarization, evaluates the fine-tuning with ROUGE, and detoxifies summaries using PPO and PEFT.
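The ROUGE evaluation mentioned here is commonly done with Hugging Face's `evaluate` library. A small sketch with made-up example strings, not data from the listed project:

```python
# Scoring generated summaries against references with ROUGE.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["The customer asked to cancel the subscription."],
    references=["The customer wants their subscription cancelled."],
)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum scores
```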
Tutorials on how to use language models
This repository is dedicated to small projects and some theoretical material that I used to get into NLP and LLMs in a practical and efficient way.