Large language models (LLMs) have transformed natural language processing. As these models grow in size and complexity, the computational demands of inference grow with them, and using multiple GPUs becomes essential to keep up.

This article therefore looks at running inference across multiple GPUs in parallel. It covers an introduction to the Accelerate library, a simple approach with working code examples, and performance benchmarks at different GPU counts.

We will use multiple RTX 3090s to scale Llama2-7b inference across several GPUs.
Basic example

We start with a simple example that demonstrates multi-GPU "message passing" with Accelerate. The script is started with accelerate launch, which spawns one process per GPU:
from accelerate import Accelerator
from accelerate.utils import gather_object

accelerator = Accelerator()

# each GPU creates a string
message = [f"Hello this is GPU {accelerator.process_index}"]

# collect the messages from all GPUs
messages = gather_object(message)

# output the messages only on the main process with accelerator.print()
accelerator.print(messages)
Running on 5 GPUs, the output is:
['Hello this is GPU 0', 'Hello this is GPU 1', 'Hello this is GPU 2', 'Hello this is GPU 3', 'Hello this is GPU 4']
Multi-GPU inference

Below is a simple, non-batched approach to inference. The code is short because the Accelerate library already does most of the heavy lifting for us, so we can use it directly:
from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForCausalLM, AutoTokenizer
from statistics import mean
import torch, time, json

accelerator = Accelerator()

# 10*10 prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
prompts_all = [
    "The King is dead. Long live the Queen.",
    "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
    "The story so far: in the beginning, the universe was created.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
    "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
    "The sweat wis lashing oafay Sick Boy; he wis trembling.",
    "124 was spiteful. Full of Baby's venom.",
    "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
    "I write this sitting in the kitchen sink.",
    "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
] * 10

# load a base model and tokenizer
model_path = "models/llama2-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": accelerator.process_index},
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# sync GPUs and start the timer
accelerator.wait_for_everyone()
start = time.time()

# divide the prompt list onto the available GPUs
with accelerator.split_between_processes(prompts_all) as prompts:
    # store output of generations in dict
    results = dict(outputs=[], num_tokens=0)

    # have each GPU do inference, prompt by prompt
    for prompt in prompts:
        prompt_tokenized = tokenizer(prompt, return_tensors="pt").to("cuda")
        output_tokenized = model.generate(**prompt_tokenized, max_new_tokens=100)[0]

        # remove prompt from output
        output_tokenized = output_tokenized[len(prompt_tokenized["input_ids"][0]):]

        # store outputs and number of tokens in results{}
        results["outputs"].append(tokenizer.decode(output_tokenized))
        results["num_tokens"] += len(output_tokenized)

    results = [results]  # transform to list, otherwise gather_object() will not collect correctly

# collect results from all the GPUs
results_gathered = gather_object(results)

if accelerator.is_main_process:
    timediff = time.time() - start
    num_tokens = sum([r["num_tokens"] for r in results_gathered])

    print(f"tokens/sec: {num_tokens//timediff}, time {timediff}, total tokens {num_tokens}, total prompts {len(prompts_all)}")
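The division of work here comes entirely from accelerator.split_between_processes, which hands each process a roughly equal, contiguous slice of the list. Below is a minimal standalone sketch of just that mechanism, using dummy data rather than the benchmark prompts:

from accelerate import Accelerator
from accelerate.utils import gather_object

accelerator = Accelerator()

# dummy workload: 100 items, split across however many processes were launched
with accelerator.split_between_processes(list(range(100))) as chunk:
    counts = gather_object([len(chunk)])

# e.g. with 5 processes, each one receives 20 consecutive items: [20, 20, 20, 20, 20]
accelerator.print(counts)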
Using multiple GPUs introduces some communication overhead: in this particular setup, throughput scales roughly linearly up to 4 GPUs and then levels off. Of course, the absolute numbers depend on many parameters, such as model size and quantization, prompt length, number of generated tokens, and sampling strategy, so we only discuss the general trend. The non-batched results:
1 GPU: 44 tokens/sec, 225.5s
2 GPUs: 88 tokens/sec, 112.9s
3 GPUs: 128 tokens/sec, 77.6s
4 GPUs: 137 tokens/sec, 72.7s
5 GPUs: 119 tokens/sec, 83.8s
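To put these numbers in perspective, here is a quick back-of-the-envelope calculation of speedup and parallel efficiency relative to the single-GPU run (a hypothetical helper, not part of the benchmark script):

# speedup and parallel efficiency of the non-batched runs, relative to 1 GPU
# (tokens/sec values copied from the results above)
tokens_per_sec = {1: 44, 2: 88, 3: 128, 4: 137, 5: 119}

for n, tps in tokens_per_sec.items():
    speedup = tps / tokens_per_sec[1]
    efficiency = speedup / n
    print(f"{n} GPU(s): speedup {speedup:.2f}x, efficiency {efficiency:.0%}")

# 2 GPUs scale almost perfectly (~100%), 4 GPUs reach ~78%, and 5 GPUs drop to ~54%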
Batching on multiple GPUs

In the real world we can use batched inference to speed things up further. This reduces per-prompt overhead and accelerates inference considerably. All we need to add is a prepare_prompts function that feeds the model a batch of prompts instead of one prompt at a time:
from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForCausalLM, AutoTokenizer
from statistics import mean
import torch, time, json

accelerator = Accelerator()

def write_pretty_json(file_path, data):
    import json
    with open(file_path, "w") as write_file:
        json.dump(data, write_file, indent=4)

# 10*10 prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
prompts_all = [
    "The King is dead. Long live the Queen.",
    "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
    "The story so far: in the beginning, the universe was created.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
    "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
    "The sweat wis lashing oafay Sick Boy; he wis trembling.",
    "124 was spiteful. Full of Baby's venom.",
    "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
    "I write this sitting in the kitchen sink.",
    "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
] * 10

# load a base model and tokenizer
model_path = "models/llama2-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": accelerator.process_index},
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token

# batch, left pad (for inference), and tokenize
def prepare_prompts(prompts, tokenizer, batch_size=16):
    batches = [prompts[i:i + batch_size] for i in range(0, len(prompts), batch_size)]
    batches_tok = []
    tokenizer.padding_side = "left"
    for prompt_batch in batches:
        batches_tok.append(
            tokenizer(
                prompt_batch,
                return_tensors="pt",
                padding='longest',
                truncation=False,
                pad_to_multiple_of=8,
                add_special_tokens=False).to("cuda")
        )
    tokenizer.padding_side = "right"
    return batches_tok

# sync GPUs and start the timer
accelerator.wait_for_everyone()
start = time.time()

# divide the prompt list onto the available GPUs
with accelerator.split_between_processes(prompts_all) as prompts:
    results = dict(outputs=[], num_tokens=0)

    # have each GPU do inference in batches
    prompt_batches = prepare_prompts(prompts, tokenizer, batch_size=16)

    for prompts_tokenized in prompt_batches:
        outputs_tokenized = model.generate(**prompts_tokenized, max_new_tokens=100)

        # remove prompt from gen. tokens
        outputs_tokenized = [tok_out[len(tok_in):]
            for tok_in, tok_out in zip(prompts_tokenized["input_ids"], outputs_tokenized)]

        # count and decode gen. tokens
        num_tokens = sum([len(t) for t in outputs_tokenized])
        outputs = tokenizer.batch_decode(outputs_tokenized)

        # store in results{} to be gathered by accelerate
        results["outputs"].extend(outputs)
        results["num_tokens"] += num_tokens

    results = [results]  # transform to list, otherwise gather_object() will not collect correctly

# collect results from all the GPUs
results_gathered = gather_object(results)

if accelerator.is_main_process:
    timediff = time.time() - start
    num_tokens = sum([r["num_tokens"] for r in results_gathered])

    print(f"tokens/sec: {num_tokens//timediff}, time elapsed: {timediff}, num_tokens {num_tokens}")
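As a side note, the script defines write_pretty_json but never calls it. If you want to persist the generations, a minimal sketch that could be appended to the main-process branch looks like this (the filename outputs.json is an arbitrary choice, not part of the original code):

# sketch: flatten the gathered per-GPU results and write them to disk
# ("outputs.json" is an arbitrary filename; write_pretty_json is defined in the script above)
if accelerator.is_main_process:
    all_outputs = [out for r in results_gathered for out in r["outputs"]]
    write_pretty_json("outputs.json", dict(outputs=all_outputs, num_tokens=num_tokens))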
As you can see, batching speeds things up considerably:
1 GPU: 520 tokens/sec, 19.2s
2 GPUs: 900 tokens/sec, 11.1s
3 GPUs: 1205 tokens/sec, 8.2s
4 GPUs: 1655 tokens/sec, 6.0s
5 GPUs: 1658 tokens/sec, 6.0s
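Comparing the batched and non-batched results at the same GPU count shows how much of the gain comes from batching itself rather than from adding GPUs; a quick back-of-the-envelope check (a hypothetical snippet, with the tokens/sec values copied from the two result lists above):

# batched vs. non-batched throughput at the same GPU count
unbatched = {1: 44, 2: 88, 3: 128, 4: 137, 5: 119}
batched = {1: 520, 2: 900, 3: 1205, 4: 1655, 5: 1658}

for n in unbatched:
    print(f"{n} GPU(s): batching is {batched[n] / unbatched[n]:.1f}x faster")

# batching alone is worth roughly a 9-14x speedup at every GPU count tested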
Summary

As of this writing, llama.cpp and ctransformers still do not support multi-GPU inference. llama.cpp appears to have merged multi-GPU support back in June, but I have not seen it in an official release, so for now I will treat multi-GPU as unsupported there. If anyone can confirm that it works, please leave a comment.
Hugging Face's Accelerate package, on the other hand, gives us a very convenient way to use multiple GPUs. Inference on multiple GPUs can improve performance significantly, but the communication overhead between GPUs grows noticeably as more GPUs are added.
That concludes this detailed look at LLM inference on multiple GPUs with the Accelerate library.