Complex mathematical reasoning is an important benchmark for evaluating the reasoning ability of large language models. The math reasoning datasets in common use are limited in sample size and problem diversity, which leaves large language models suffering from the "reversal curse": a model trained on "A is B" fails to generalize to "B is A" [1]. In mathematical reasoning this phenomenon takes a concrete form: given a math problem, a language model is good at solving it with forward reasoning but lacks the ability to solve it with backward reasoning. Backward reasoning is very common in math problems, as the following two examples show.
1. Classic problem: chickens and rabbits in a cage

Forward reasoning: A cage contains 23 chickens and 12 rabbits. How many heads and how many legs are in the cage?

Backward reasoning: Some chickens and rabbits share a cage. Counting from the top there are 35 heads, and counting from the bottom there are 94 legs. How many chickens and how many rabbits are in the cage?

2. GSM8K problem

Forward reasoning: James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?

Backward reasoning: James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay? If we know the answer to the above question is 110, what is the value of unknown variable x?

To improve the forward and backward reasoning ability of models, researchers from Cambridge, HKUST, and Huawei built MetaMathQA on top of two widely used math datasets (GSM8K and MATH): a broad-coverage, high-quality mathematical reasoning dataset. MetaMathQA consists of 395K forward and backward math question-answer pairs generated by large language models. Fine-tuning LLaMA-2 on MetaMathQA, they obtained MetaMath, a large language model specialized in (forward and backward) mathematical reasoning, which achieves state-of-the-art results on math reasoning benchmarks. The MetaMathQA dataset and MetaMath models at multiple scales have been open-sourced for the research community.
Project page: https://meta-math.github.io/
Paper: https://arxiv.org/abs/2309.12284
Dataset: https://huggingface.co/datasets/meta-math/metamathqa
Models: https://huggingface.co/meta-math
Code: https://github.com/meta-math/metamath

On the GSM8K-Backward dataset, we constructed a backward-reasoning experiment. The results show that existing methods such as SFT, RFT, and WizardMath perform poorly on backward-reasoning problems, whereas the MetaMath models achieve strong performance on both forward and backward reasoning.
Method
MetaMathQA comprises four data augmentation methods:
1. Answer Augmentation: given a question, use a large language model to generate chains of thought that reach the correct answer and keep them as augmented data (see the sketch after this list).
Question: James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?
Answer: James buys 5 packs of beef that are 4 pounds each, so he buys a total of 5 * 4 = 20 pounds of beef. The price of beef is $5.50 per pound, so he pays 20 * $5.50 = $110. The answer is: 110.

2. Rephrasing Question: given an original question, use a large language model to rewrite the question and generate a chain of thought that reaches the correct answer as augmented data.
Question: What is the total amount that James paid when he purchased 5 packs of beef, each weighing 4 pounds, at a price of $5.50 per pound?
Answer: Each pack of beef weighs 4 pounds, so 5 packs weigh 4 * 5 = 20 pounds in total. The price per pound of beef is $5.50, so the total cost for 20 pounds is 20 * $5.50 = $110. Therefore, James paid a total of $110. The answer is: 110.

3. FOBAR Question: given an original question, mask a number in its conditions as x, keep the original answer, and ask for x, producing a backward question; then generate a correct chain of thought for that backward question as augmented data (backward template: "If we know the answer to the above question is 110, what is the value of unknown variable x?").
Question: James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay? If we know the answer to the above question is 110, what is the value of unknown variable x?
Answer: James buys x packs of beef that are 4 pounds each, so he buys a total of 4x pounds of beef. The price of beef is $5.50 per pound, so the total cost of the beef is 5.50 * 4x = 22x. We are given that the total cost is $110, so we can write: 22x = 110. Dividing both sides by 22, we get: x = 5. The value of x is 5.

4. Self-Verification Question: building on FOBAR, use a large language model to rewrite the interrogative part of the backward question as a declarative statement (rewriting example: "How much did he pay?" (with the answer 110) becomes "He paid 110").
Question: James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. He paid 110. What is the value of unknown variable x?
Answer: To solve this problem, we need to determine the value of x, which represents the number of packs of beef that James bought. Each pack of beef weighs 4 pounds and costs $5.50 per pound. The total amount James paid is $110. We can set up the equation as follows: number of packs of beef * weight per pack * price per pound = total amount paid; x * 4 * $5.50 = $110. Let's simplify and solve for x: 4x * $5.50 = $110; 22x = $110. To isolate x, we divide both sides of the equation by 22: 22x / 22 = $110 / 22; x = 5. The value of x is 5.
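Because these strategies are largely template-driven, they are easy to prototype. Below is a minimal Python sketch, our own illustration rather than the authors' released code, of the FOBAR construction and of answer augmentation framed as rejection sampling; `generate_cots` is a hypothetical placeholder for whatever LLM you call.

```python
import re

# Hypothetical helper -- plug in your own LLM or API here; this stub is
# not part of the MetaMath codebase.
def generate_cots(prompt: str, n_samples: int) -> list[str]:
    raise NotImplementedError("call your LLM of choice here")

FOBAR_SUFFIX = (" If we know the answer to the above question is {ans},"
                " what is the value of unknown variable x?")

def fobar_question(question: str, masked_number: str, answer: str) -> str:
    """FOBAR: mask one numeric condition as 'x' (first occurrence only,
    mirroring the paper's example) and append the backward template."""
    masked = re.sub(rf"\b{re.escape(masked_number)}\b", "x", question, count=1)
    return masked + FOBAR_SUFFIX.format(ans=answer)

def answer_augmentation(question: str, gold: str, k: int = 8) -> list[str]:
    """Answer augmentation as rejection sampling: sample k chains of
    thought, keep those whose 'The answer is:' line matches the gold."""
    kept = []
    for cot in generate_cots(f"Question: {question}\nAnswer:", n_samples=k):
        m = re.search(r"The answer is:\s*\$?([\d,.]+)", cot)
        if m and m.group(1).rstrip(".") == gold:
            kept.append(cot)
    return kept

q = ("James buys 5 packs of beef that are 4 pounds each. "
     "The price of beef is $5.50 per pound. How much did he pay?")
print(fobar_question(q, "5", "110"))
# -> James buys x packs of beef ... How much did he pay? If we know the
#    answer to the above question is 110, what is the value of unknown
#    variable x?
```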
Experimental Results

Experiments on two widely used mathematical reasoning benchmarks (GSM8K and MATH) show that MetaMath significantly outperforms existing open-source LLMs, without relying on external tools such as code interpreters. Our MetaMath-7B model reaches 66.5% accuracy on GSM8K and 19.8% on MATH, exceeding state-of-the-art models of the same size by 11.6% and 9.1%, respectively. Notably, MetaMath-70B reaches 82.3% accuracy on GSM8K, surpassing GPT-3.5-Turbo.
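Accuracy on GSM8K-style benchmarks is typically scored by exact match on the final answer extracted from a "The answer is: X" completion. The sketch below illustrates this common convention; it is an assumption on our part, not necessarily the authors' exact evaluation script.

```python
import re

def extract_answer(completion: str) -> str | None:
    """Pull the final numeric answer from a 'The answer is: X' completion."""
    m = re.search(r"The answer is:?\s*\$?(-?[\d,]*\.?\d+)", completion)
    return m.group(1).replace(",", "") if m else None

def accuracy(completions: list[str], golds: list[str]) -> float:
    """Exact-match accuracy over paired model outputs and gold answers."""
    hits = sum(extract_answer(c) == g for c, g in zip(completions, golds))
    return hits / len(golds)

print(accuracy(["He pays 20 * $5.50 = $110. The answer is: 110"], ["110"]))  # 1.0
```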
According to the Superficial Alignment Hypothesis [2], the capability of large language models comes from pretraining, while data from downstream tasks activates the latent capabilities learned during pretraining. This raises two important questions: (i) what type of data activates latent knowledge most effectively, and (ii) why is one dataset better than another at such activation?
Why is MetaMathQA useful? It improves the quality (perplexity) of chain-of-thought data
As the figure above shows, the researchers computed the perplexity of LLaMA-2-7B on answer-only data, GSM8K CoT, and each part of the MetaMathQA dataset. The perplexity of MetaMathQA is markedly lower than that of the other two datasets, suggesting that it is inherently easier to learn and may better help elicit the model's latent knowledge.
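As a rough illustration of how such a perplexity number can be obtained, here is a minimal sketch using Hugging Face Transformers; the checkpoint name and the one-item text list are illustrative, not the paper's exact evaluation setup.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative setup; swap in the checkpoint and corpus you actually evaluate.
name = "meta-llama/Llama-2-7b-hf"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto"
).eval()

@torch.no_grad()
def perplexity(texts: list[str]) -> float:
    """exp of the token-averaged negative log-likelihood over all texts."""
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        ids = tok(text, return_tensors="pt").input_ids.to(model.device)
        # With labels=input_ids, the model returns the mean cross-entropy
        # over the len(ids) - 1 next-token predictions.
        loss = model(ids, labels=ids).loss
        n = ids.shape[1] - 1
        total_nll += loss.item() * n
        total_tokens += n
    return math.exp(total_nll / total_tokens)

print(perplexity(["Question: ... Answer: ... The answer is: 110."]))
```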
Why is MetaMathQA useful? It increases the diversity of chain-of-thought data
Comparing the diversity gain of the data against the accuracy gain of the model, the researchers found that adding the same amount of augmented data from rephrasing, FOBAR, and SV each yields a clear diversity gain and significantly improves model accuracy. In contrast, using answer augmentation alone quickly saturates accuracy; once saturation is reached, adding more AnsAug data brings only marginal improvement.
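One simple way to quantify such a diversity gain is to embed each newly added sample and measure its distance to the nearest example already in the base set. The sketch below uses sentence-transformers as an off-the-shelf encoder; both the encoder choice and the toy data are our own illustration, not necessarily the paper's exact feature space.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# Off-the-shelf encoder, chosen here for illustration only.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def diversity_gain(base_texts: list[str], new_texts: list[str]) -> float:
    """Mean distance from each new sample to its nearest base sample:
    higher values mean the augmented data covers more new ground."""
    base = np.asarray(encoder.encode(base_texts))  # (n_base, dim)
    new = np.asarray(encoder.encode(new_texts))    # (n_new, dim)
    # Pairwise squared Euclidean distances, shape (n_new, n_base).
    d2 = ((new[:, None, :] - base[None, :, :]) ** 2).sum(axis=-1)
    return float(np.sqrt(d2.min(axis=1)).mean())

base = ["James buys 5 packs of beef that are 4 pounds each. How much did he pay?"]
aug = ["James buys x packs of beef... he paid 110. What is the value of x?"]
print(diversity_gain(base, aug))
```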