Event Registration | UCLA Artificial General Intelligence Lab: Letting Large Language Models Rephrase on Their Own, Breaking the Barrier in Dialogue with Humans


Yihe Deng (邓依荷)

Yihe is a third-year PhD student at the UCLA Artificial General Intelligence Lab, advised by Prof. Quanquan Gu. Before that, she received her MS in Computer Science and BS in Mathematics of Computation from UCLA. Her research interests primarily center on Large Language Models, multi-modal learning, and enhancing the robustness of foundation models. She has published papers at top-tier machine learning venues such as NeurIPS, ACL, and AAAI, with a focus on developing theory-guided methods in the vision and language domains.

Letting Large Language Models Rephrase on Their Own, Breaking the Barrier in Dialogue with Humans


Misunderstandings arise not only in interpersonal communication but also between humans and Large Language Models (LLMs). Such discrepancies can make LLMs interpret seemingly unambiguous questions in unexpected ways, yielding incorrect responses. While it is widely acknowledged that the quality of a prompt, such as a question, significantly impacts the quality of the response provided by LLMs, a systematic method for crafting questions that LLMs can better comprehend is still underdeveloped.


In this talk, I will present our new prompting method named “Rephrase and Respond” (RaR), which allows LLMs to rephrase and expand questions posed by humans and provide responses in a single prompt. Across a wide range of benchmark tasks, RaR significantly improves the performance of different LLMs. I will also discuss the comparison between RaR and the popular Chain-of-Thought (CoT) methods, both theoretically and empirically. RaR is shown to be complementary to CoT and can be combined with CoT to achieve even better performance.


This is joint work with Weitong Zhang, Zixiang Chen and Quanquan Gu.
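To make the setup concrete, the sketch below shows what such a one-step prompt could look like in code. It is only an illustration based on the abstract above: the helper name `build_rar_prompt`, the instruction wording, and the example question are assumptions rather than the exact prompts used in the paper (see the links below for those).

```python
# A minimal sketch of a one-step RaR-style prompt, written from the talk
# description above. The exact instruction wording used in the paper may
# differ; see the paper and project page linked below for the actual prompts.

def build_rar_prompt(question: str, with_cot: bool = False) -> str:
    """Wrap a question so the model rephrases/expands it and answers in one turn."""
    instruction = "Rephrase and expand the question, and respond."
    if with_cot:
        # The talk describes RaR as complementary to Chain-of-Thought,
        # so a zero-shot CoT cue can simply be appended.
        instruction += " Let's think step by step."
    return f'"{question}"\n{instruction}'


if __name__ == "__main__":
    question = "Was Abraham Lincoln born on an even day?"  # illustrative question
    print(build_rar_prompt(question))                 # plain RaR prompt
    print(build_rar_prompt(question, with_cot=True))  # RaR combined with zero-shot CoT
```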


Paper

https://arxiv.org/abs/2311.04205 

Project page

https://uclaml.github.io/Rephrase-and-Respond/

Event time

November 22 (Wednesday), 14:30-15:30

Event format

Online livestream. Scan the QR code below to register, or register via the link in the original post.


[Registration QR code]

👇 Livestream discussion group 👇

[Livestream group QR code]

 
