Research from Zoom Communications introduces the Chain of Draft (CoD) technique, a variant of Chain of Thought (CoT), the pre-answer reasoning process that often improves results across LLM benchmarks. The study found that CoD yields results similar to or better than CoT, but at a much lower token cost.
The principle of CoD is simple: a system prompt instructs the model to think step by step before responding, as in CoT, but with an emphasis on keeping each thinking step as concise as possible. The intriguing result is that across several benchmark sets, CoD outperformed direct-answer baselines by a significant margin and performed on par with CoT, while using only about 7.6% of the tokens.
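A minimal sketch of what the two prompting styles look like in practice. The prompt wording here is paraphrased from the approach described above, and `build_messages` is a hypothetical helper, not an API from the paper; the exact prompts in the original work may differ.

```python
# Sketch: Chain-of-Draft (CoD) vs Chain-of-Thought (CoT) system prompts.
# Wording is paraphrased; adjust to taste for your own model and task.

COT_SYSTEM = (
    "Think step by step to answer the question. "
    "Return the final answer at the end after the separator ####."
)

COD_SYSTEM = (
    "Think step by step, but keep only a minimum draft for each thinking "
    "step, with at most five words per step. "
    "Return the final answer at the end after the separator ####."
)

def build_messages(question: str, concise: bool = True) -> list[dict]:
    """Assemble a chat-completion style message list.

    concise=True selects the CoD prompt; False falls back to plain CoT.
    """
    system = COD_SYSTEM if concise else COT_SYSTEM
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# The resulting list can be passed to any chat-completion style API.
cod_msgs = build_messages("Q: A store sold 23 apples and 18 pears. How many fruits in total?")
cot_msgs = build_messages("Q: A store sold 23 apples and 18 pears. How many fruits in total?",
                          concise=False)
```

The only difference between the two runs is the system prompt, which is what makes the token-cost comparison in the study a clean one.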
Pre-answer reasoning models are costly to run, since many models tend to overthink, driving up inference costs. Responsiveness also suffers in tasks that demand immediate answers, such as call-center support or coding assistance. CoD opens up the possibility of high-efficiency reasoning models whose running costs approach those of regular models.
Source: arXiv
TL;DR: The research introduces the Chain of Draft technique as a cost-effective, efficient alternative to the traditional Chain of Thought method for pre-answer reasoning in LLM tasks.