Nov 6, Notes on Contextual Retrieval

Oct 30, LLMs cannot Play the Snake Game

The blog introduces a novel method for evaluating LLM performance by having them play the Snake game, assessing their decision-making, planning, and strategy skills. The experiment tested several models, revealing that o1-mini performed best with a score of 11, while Claude models outperformed GPT models. The findings suggest that reinforcement learning significantly enhances LLMs' capabilities in dynamic decision-making tasks. Although preliminary, this approach highlights the potential of game-based assessments for deeper insights into LLM competencies, with recommendations for further testing across more models and scenarios.
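To make the setup concrete, here is a minimal, hypothetical sketch of how such a game-based evaluation loop could be wired up: the board state is serialized into text, the model is asked for one move per turn, and the reply is parsed back into a plain Snake simulation. The helper names (`board_to_prompt`, `ask_for_move`) and the prompt wording are my own illustrations, not the blog's actual harness.

```python
# Hypothetical sketch of a game-based LLM evaluation step (not the blog's actual harness).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def board_to_prompt(snake, food, width=10, height=10):
    """Render the board as ASCII text so the model can 'see' the state."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    fx, fy = food
    grid[fy][fx] = "F"
    for i, (x, y) in enumerate(snake):
        grid[y][x] = "H" if i == 0 else "S"   # H = head, S = body
    board = "\n".join("".join(row) for row in grid)
    return (
        "You are playing Snake on the grid below (H=head, S=body, F=food).\n"
        f"{board}\n"
        "Reply with exactly one move: UP, DOWN, LEFT, or RIGHT."
    )

def ask_for_move(snake, food, model="gpt-4o-mini"):
    """Query the model for a single move and normalize the answer."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": board_to_prompt(snake, food)}],
    )
    text = resp.choices[0].message.content.upper()
    for move in ("UP", "DOWN", "LEFT", "RIGHT"):
        if move in text:
            return move
    return "UP"  # fall back to a default if the reply is unparseable
```

A full evaluation would call this inside a game loop, apply each move in the simulator, and record the score when the snake dies.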

Oct 18, Notes on LIGHTRAG

The blog discusses LightRAG, an innovative framework for Retrieval-Augmented Generation (RAG) that improves performance by incorporating graph structures and a dual-level retrieval process. It outlines the challenges faced by traditional RAG systems, such as limitations in retrieval speed, answer quality, and contextual understanding, and explains how LightRAG addresses them through efficient text indexing and retrieval. The framework supports both specific and abstract queries, improving its ability to handle complex questions and produce tailored responses with a general-purpose LLM.
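As a rough illustration of the dual-level idea, the sketch below is my own simplification rather than LightRAG's actual code: low-level keywords look up concrete entities in a graph index, high-level keywords look up theme-level relation summaries, and the two result sets are merged into one retrieval context. The names (`GraphIndex`, `dual_level_retrieve`) are hypothetical.

```python
# Illustrative sketch of dual-level retrieval (a simplification, not LightRAG's implementation).
from dataclasses import dataclass, field

@dataclass
class GraphIndex:
    entities: dict = field(default_factory=dict)    # entity name -> description / source chunk
    relations: dict = field(default_factory=dict)   # theme keyword -> summarized relation text

    def local_retrieve(self, low_level_keywords):
        """Specific queries: look up concrete entities."""
        return [self.entities[k] for k in low_level_keywords if k in self.entities]

    def global_retrieve(self, high_level_keywords):
        """Abstract queries: look up theme-level relation summaries."""
        return [self.relations[k] for k in high_level_keywords if k in self.relations]

def dual_level_retrieve(index, low_level_keywords, high_level_keywords):
    """Combine both levels so one query can serve specific and abstract questions."""
    context = index.local_retrieve(low_level_keywords) + index.global_retrieve(high_level_keywords)
    return "\n".join(dict.fromkeys(context))  # deduplicate while keeping order

# Example: in practice the two keyword sets would be extracted from the user query by an LLM.
index = GraphIndex(
    entities={"LightRAG": "LightRAG indexes documents into an entity-relation graph."},
    relations={"retrieval efficiency": "Graph-based indexing reduces redundant chunk lookups."},
)
print(dual_level_retrieve(index, ["LightRAG"], ["retrieval efficiency"]))
```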

Oct 12, Notes on Re-Reading & GSM-Symbolic

The blog discusses two contrasting papers on large language models (LLMs): one proposes a "Re-Reading" method to enhance reasoning capabilities, showing consistent improvements in performance, while the other, GSM-Symbolic, critiques LLMs' reasoning abilities, revealing significant performance variance and limitations in mathematical reasoning. The author concludes that it's too early to declare LLMs incapable of reasoning, suggesting that current limitations may evolve.
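For reference, Re-Reading is essentially a prompting trick: the question is stated once and then repeated before the model answers. The sketch below shows the general shape of such a prompt; the cue wording and the chain-of-thought trigger are my approximation, not a verbatim quote from the paper.

```python
# Minimal sketch of a Re-Reading (RE2) style prompt: the question is repeated before answering.
def re_reading_prompt(question: str) -> str:
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A: Let's think step by step."
    )

print(re_reading_prompt("A train travels 60 km in 1.5 hours. What is its average speed?"))
```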
Sep 25, Notes on Gemini models
Sep 19, Notes on Qwen2.5
Sep 13, Notes on OpenAI o1 series models
Sep 9, test DeepSeek-V2.5 and Reflection-70b
Sep 3, Notes on Anthropic Prompt Tutorial

Aug 21, GPT-4o-mini with DSPy MIPRO on MMLU-Pro

This post builds on my previous blog about GPT-4o-mini's performance on MMLU-Pro using BootstrapFewShotWithRandomSearch and BootstrapFewShotWithOptuna. In this continuation, I will examine the newly introduced optimizers, MIPRO and MIPROv2, to assess their optimization capabilities and determine the potential performance gains they may bring to GPT-4o-mini.
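As context for what that optimization looks like, here is a minimal sketch of compiling a DSPy program with MIPROv2 against MMLU-Pro-style examples. The signature fields, metric, and optimizer arguments are illustrative assumptions rather than the blog's exact setup, and parameter names (e.g., `auto`, `num_trials`) vary across DSPy versions.

```python
# Rough sketch: optimizing a GPT-4o-mini program with DSPy's MIPROv2 (illustrative, not exhaustive).
import dspy
from dspy.teleprompt import MIPROv2

# Recent DSPy exposes dspy.LM; older versions used dspy.OpenAI(...) instead.
dspy.settings.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class AnswerMCQ(dspy.Signature):
    """Answer a multiple-choice question with a single option letter."""
    question = dspy.InputField()
    options = dspy.InputField()
    answer = dspy.OutputField(desc="one letter, e.g. 'C'")

program = dspy.ChainOfThought(AnswerMCQ)

def exact_match(example, pred, trace=None):
    # Simple accuracy metric: compare the predicted letter with the gold answer.
    return example.answer.strip().upper() == pred.answer.strip().upper()

trainset = [
    dspy.Example(
        question="Which gas makes up most of Earth's atmosphere?",
        options="A) Oxygen  B) Nitrogen  C) Carbon dioxide  D) Argon",
        answer="B",
    ).with_inputs("question", "options"),
    # ... more labeled examples from the MMLU-Pro split would go here
]

optimizer = MIPROv2(metric=exact_match)
optimized_program = optimizer.compile(program, trainset=trainset)
```

MIPROv2 searches over candidate instructions and few-shot demonstrations, scoring each candidate program with the metric above, which is why a reasonably sized labeled trainset matters.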
August 19, Summarize Web Page Content with Claude3

August 17, Instruction Data Generation

More researchers are recognizing the significance of instruction data during the Supervised Fine-Tuning (SFT) stage. In June, I wrote a blog about data generation, but I believe it was somewhat superficial and insufficient, and many new methods have emerged since then. Therefore, I aim to cover more of the papers I've read on instruction data generation and selection.