Inspired by Nezhurina et al. (2024), I use similar questions to evaluate several leading language models and probe their reasoning capabilities, so this post reads like a test report. The test is highly subjective, so if the results don't match your expectations, take them with a grain of salt.
This month has been emotionally intense, marked by a string of intriguing and unfortunate events. Some sparked curiosity and inspiration; others, sadly, brought sorrow and anger. It has truly been a month of diverse experiences.
TextGrad is an innovative autograd engine tailored to textual gradients. As a framework, it automatically implements backpropagation using feedback from large language models (LLMs), staying firmly anchored in the gradient metaphor.
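To make the gradient metaphor concrete, here is a minimal sketch of the textual-gradient idea: a forward pass produces text, an LLM critic produces textual feedback (the "gradient"), and the update step rewrites the text to address that feedback. The `llm` function below is a hypothetical stub standing in for a real model call; this is not TextGrad's actual API.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; returns canned text."""
    if "Critique" in prompt:
        return "The answer is too vague; add a concrete number."
    return "Water boils at 100 C at sea level."

class TextVariable:
    """A piece of text treated as an optimizable variable."""

    def __init__(self, value: str):
        self.value = value
        self.grad = ""  # textual "gradient": feedback on how to improve

    def backward(self, loss_feedback: str) -> None:
        # Ask the critic how this variable should change, given the loss.
        self.grad = llm(f"Critique: {loss_feedback}\nVariable: {self.value}")

    def step(self) -> None:
        # "Gradient descent": rewrite the variable to address the feedback.
        self.value = llm(f"Rewrite '{self.value}' to address: {self.grad}")

answer = TextVariable("Water boils when it gets hot.")
answer.backward("The answer lacks a precise boiling point.")
answer.step()
print(answer.value)  # a revised, more specific answer
```

With a real LLM behind `llm`, the same loop iteratively refines prompts, answers, or any other text variable.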
Many studies have shown that large language models can improve their ability to follow instructions and generalize to more tasks during the fine-tuning stage. However, relying solely on manually written instruction data consumes substantial human effort, and the quantity is limited. It is therefore worth exploring automatic methods for generating instruction data.
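The bootstrapping idea behind such automatic methods (in the spirit of Self-Instruct) can be sketched as follows: sample a few instructions from a small human-written seed pool, prompt a model to produce a new one, filter out duplicates, and grow the pool. The `llm` function is again a hypothetical stub, not any library's real API.

```python
import random

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM; returns one new instruction."""
    return "Summarize the following article in three sentences."

seed_instructions = [
    "Translate the sentence into French.",
    "List three pros and cons of remote work.",
]

def generate_instructions(seeds, n_rounds=3, k_examples=2):
    """Bootstrap new instructions from a small human-written seed pool."""
    pool = list(seeds)
    for _ in range(n_rounds):
        # Few-shot prompt built from randomly sampled existing instructions.
        examples = random.sample(pool, k=min(k_examples, len(pool)))
        prompt = ("Write one new task instruction, different from:\n"
                  + "\n".join(f"- {e}" for e in examples))
        candidate = llm(prompt).strip()
        if candidate and candidate not in pool:  # simple dedup filter
            pool.append(candidate)
    return pool

pool = generate_instructions(seed_instructions)
```

A real pipeline would add stronger filtering (length, similarity, safety) before using the generated instructions for fine-tuning.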
This month at work I mainly focused on a few tasks: I explored more possibilities of the Prompt Chain approach, using a Prompt Chain to write stories, which can produce quite good results.
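The story-writing chain can be sketched as a sequence of prompts where each step's output feeds the next. The three steps below (outline, draft, polish) and the `llm` stub are illustrative assumptions, not the exact chain used at work.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; echoes the step name."""
    step = prompt.split(":", 1)[0]
    return f"[{step} output]"

def prompt_chain(topic: str) -> str:
    """Three-step chain: outline -> draft -> polish, each fed the prior output."""
    outline = llm(f"Outline: write a story outline about {topic}")
    draft = llm(f"Draft: expand this outline into a full story:\n{outline}")
    story = llm(f"Polish: improve the pacing and dialogue:\n{draft}")
    return story

result = prompt_chain("a lighthouse keeper")
print(result)  # [Polish output]
```

Splitting the task this way lets each prompt stay short and focused, which tends to yield more coherent stories than one monolithic prompt.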
Prompting has always been a topic of controversy. Some consider it insignificant and lacking in technical substance, while others regard it as the key to using large language models effectively. Learning how to write prompts can unlock much of these models' potential.