SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning)
Introduction
SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning) is an extensible lightweight tool designed for model fine-tuning. Its core focus lies in implementing various efficient fine-tuning methods, including parameter-efficient, memory-efficient, and time-efficient approaches. SWIFT is capable of supporting efficient fine-tuning of models from diverse domains within the ModelScope community, with primary emphasis on vision-based task models and large language models in NLP. Additionally, SWIFT is fully compatible with Peft, enabling users to directly utilize Peft’s interface for fine-tuning models stored in the ModelScope hub.
Supported methods:
- Prompt Tuning: Visual Prompt Tuning
- All the tuners supported by Peft can be used here.
Supported features:
- All model-ids passed into SWIFT will be downloaded from the ModelScope model hub to increase download speed.
- Tuners provided by SWIFT can be used together, which means you can use multiple tuners at a time.
Check the usable LLM examples here: SFT and Inference examples
git clone https://github.com/modelscope/swift.git
cd swift/examples/pytorch/llm
# sft
bash run_sft.sh
# inference
bash run_infer.sh
- Supported models: baichuan-7b, baichuan-13b, chatglm2-6b, llama2-7b, llama2-13b, openbuddy-llama2-13b, ...
- Supported datasets: alpaca-en, alpaca-zh, ...
- Supported sft methods: lora, full, ...
- Continuously updated...
Getting started
SWIFT supports multiple tuners, as well as the tuners provided by Peft. To use these tuners, please call:
from swift import Swift
model = Swift.prepare_model(model, config, extra_state_keys=['...'])
The code above gives you a model with a randomly initialized tuner. The input model is an instance of torch.nn.Module, config is a subclass instance of SwiftConfig or PeftConfig, and extra_state_keys lists extra module weights (such as a linear head) to be trained and stored in the output dir.
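For example, here is a minimal sketch of combining a LoRA tuner with an extra linear head; the classifier attribute names are hypothetical and depend on your model:
from swift import Swift, LoRAConfig
# 'classifier' is a hypothetical head on your model; use the real attribute names of your module
model = Swift.prepare_model(
    model,
    LoRAConfig(target_modules=['q_proj', 'k_proj', 'v_proj']),
    extra_state_keys=['classifier.weight', 'classifier.bias'],
)
# the weights matching extra_state_keys are trained along with the tuner and saved by save_pretrained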
If you want to use multiple tuners simultaneously, please call:
from swift import Swift, LoRAConfig, PromptConfig
model = Swift.prepare_model(model, {'lora': LoRAConfig(...), 'prompt': PromptConfig(...)})
You can call save_pretrained and push_to_hub after finetuning:
from swift import push_to_hub
model.save_pretrained('some-output-folder')
push_to_hub('my-group/some-repo-id-modelscope', 'some-output-folder', token='some-ms-token')
Assume my-group/some-repo-id-modelscope is the model-id in the hub, and some-ms-token is the token for uploading.
You can then use the model-id for later inference:
from swift import Swift
model = Swift.from_pretrained(model, 'my-group/some-repo-id-modelscope')
Here is a runnable example:
import os
import tempfile
# Please install modelscope by `pip install modelscope`
from modelscope import Model
from swift import LoRAConfig, SwiftModel, Swift, push_to_hub
tmp_dir = tempfile.TemporaryDirectory().name
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
model = Model.from_pretrained('modelscope/Llama-2-7b-ms', device_map='auto')
lora_config = LoRAConfig(target_modules=['q_proj', 'k_proj', 'v_proj'])
model: SwiftModel = Swift.prepare_model(model, lora_config)
# Do some finetuning here
model.save_pretrained(tmp_dir)
push_to_hub('my-group/swift_llama2', output_dir=tmp_dir)
model = Model.from_pretrained('modelscope/Llama-2-7b-ms', device_map='auto')
model = SwiftModel.from_pretrained(model, 'my-group/swift_llama2', device_map='auto')
This is an example that uses transformers for model creation and SWIFT for efficient tuning.
from swift import Swift, LoRAConfig, AdapterConfig, PromptConfig
from transformers import AutoModelForImageClassification
# init vit model
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")
# init lora tuner config
lora_config = LoRAConfig(
    r=10,  # the rank of the LoRA module
    target_modules=['query', 'key', 'value'],  # the modules to be replaced, matched by the end of the module name
    merge_weights=False  # whether to merge the LoRA weights into the original module
)
# init adapter tuner config
adapter_config = AdapterConfig(
    dim=768,  # the dimension of the hidden states
    hidden_pos=0,  # the position of the hidden state to be passed into the adapter
    target_modules=r'.*attention.output.dense$',  # the modules to be replaced, matched by regular expression
    adapter_length=10  # the length of the adapter
)
# init prompt tuner config
prompt_config = PromptConfig(
    dim=768,  # the dimension of the hidden states
    target_modules=r'.*layer\.\d+$',  # the modules to be replaced, matched by regular expression
    embedding_pos=0,  # the position of the embedding tensor
    prompt_length=10,  # the length of the prompt tokens
    attach_front=False  # whether the prompt is attached in front of the embedding
)
# create model with swift. In practice, you can use any of these tuners or a combination of them.
model = Swift.prepare_model(model, {"lora_tuner": lora_config, "adapter_tuner": adapter_config, "prompt_tuner": prompt_config})
# get the trainable parameters of model
model.get_trainable_parameters()
# 'trainable params: 838,776 || all params: 87,406,432 || trainable%: 0.9596273189597764'
You can use the features offered by Peft in SWIFT:
from swift import LoraConfig, Swift
from peft import TaskType
lora_config = LoraConfig(target_modules=['query', 'key', 'value'], task_type=TaskType.CAUSAL_LM)
model_wrapped = Swift.prepare_model(model, lora_config)
# or call from_pretrained to load weights in the modelhub
model_wrapped = Swift.from_pretrained(model, 'some-id-in-the-modelscope-modelhub')
or:
from swift import LoraConfig, get_peft_model, PeftModel
from peft import TaskType
lora_config = LoraConfig(target_modules=['query', 'key', 'value'], task_type=TaskType.CAUSAL_LM)
model_wrapped = get_peft_model(model, lora_config)
# or call from_pretrained to load weights in the modelhub
model_wrapped = PeftModel.from_pretrained(model, 'some-id-in-the-modelscope-modelhub')
The saving strategies of SWIFT tuners and Peft tuners are slightly different. You can name a SWIFT tuner like this:
model = Swift.prepare_model(model, {'default': LoRAConfig(...)})
model.save_pretrained('./output')
In the output dir, you will have a dir structure like this:
output
|-- default
|   |-- adapter_config.json
|   |-- adapter_model.bin
|-- adapter_config.json
|-- adapter_model.bin
The config and weights stored at the top level of the output dir are those of extra_state_keys, while each named sub-dir (here default) holds the config and weights of that tuner. This is different from Peft, which stores the weights and config of the default tuner at the top level.
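To load such an output dir back onto a base model, here is a minimal sketch, assuming from_pretrained accepts a local directory as well as a hub model-id:
from swift import Swift
# restores the 'default' tuner and the extra_state_keys weights saved above
model = Swift.from_pretrained(model, './output')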
Installation
SWIFT runs in a Python environment. Please make sure your Python version is higher than 3.8.
Please install SWIFT with the pip command:
pip install swift -U
If you want to install SWIFT from source, please run:
git clone https://github.com/modelscope/swift.git
cd swift
pip install -e .
If you are installing from source, please remember to install the requirements:
pip install -r requirements/framework.txt
SWIFT requires torch>=1.13.
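To quickly verify the installation, you can import the interfaces used throughout this README (a minimal check, nothing model-specific):
# if this import succeeds, SWIFT is installed correctly
from swift import Swift, LoRAConfig, SwiftModel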
We also recommend using SWIFT with our Docker image:
docker pull registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0
Learn More
- ModelScope Library is the model library of the ModelScope project, which contains a large number of popular models.
License
This project is licensed under the Apache License (Version 2.0).