Oct 18, Notes on LightRAG

The blog discusses LightRAG, a framework for Retrieval-Augmented Generation (RAG) that improves performance by incorporating graph structures and a dual-level retrieval process. It outlines the challenges faced by traditional RAG systems, such as limited retrieval speed, answer quality, and contextual understanding, and explains how LightRAG addresses them through efficient graph-based text indexing and retrieval. The framework handles both specific and abstract queries, improving its ability to answer complex questions and produce tailored responses with a general-purpose LLM.
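The dual-level idea is concrete enough to sketch. Below is a toy illustration, not LightRAG's actual implementation: specific ("low-level") keywords pull entity-level facts, abstract ("high-level") keywords pull theme-level summaries, and both are merged into the context handed to the LLM. All names and data here are hypothetical.

```python
# Toy sketch of dual-level retrieval (hypothetical data and names, not
# LightRAG's real API): low-level keywords hit an entity index, high-level
# keywords hit a theme index, and the results are merged into LLM context.

entity_index = {  # low-level: concrete entities -> specific facts
    "bellman equation": "A recursive relation defining the value of a state.",
    "qwen2.5": "An open-weight LLM family released by Alibaba.",
}

theme_index = {  # high-level: abstract themes -> broad summaries
    "reinforcement learning": "Covers MDPs, value functions, and Bellman updates.",
    "llm evaluation": "Benchmarks comparing reasoning, coding, and common sense.",
}

def dual_level_retrieve(low_keywords, high_keywords):
    """Gather context from both index levels and merge it for the LLM."""
    local_facts = [entity_index[k] for k in low_keywords if k in entity_index]
    global_summaries = [theme_index[k] for k in high_keywords if k in theme_index]
    return "\n".join(local_facts + global_summaries)

# A specific question draws on the entity index; an abstract one on themes.
print(dual_level_retrieve(["bellman equation"], ["reinforcement learning"]))
```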

Oct 12, Notes on Re-Reading & GSM-Symbolic

The blog discusses two contrasting papers on large language models (LLMs): one proposes a "Re-Reading" method, which repeats the question within the prompt to enhance reasoning and shows consistent performance improvements, while the other, GSM-Symbolic, critiques LLMs' reasoning abilities, revealing large performance variance and limitations in mathematical reasoning. The author concludes that it is too early to declare LLMs incapable of reasoning, suggesting that these limitations may recede as models evolve.
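The Re-Reading trick itself is mostly a prompt template: the question is stated a second time before the answer cue. A minimal sketch follows, assuming a hypothetical build_re2_prompt helper; the template wording is my approximation of the paper's setup, not its released code.

```python
# Minimal sketch of Re-Reading (RE2) prompting: the question is simply
# repeated before asking for the answer. build_re2_prompt is a hypothetical
# helper and the template wording is an approximation.

def build_re2_prompt(question: str) -> str:
    """Construct a prompt that makes the model read the question twice."""
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A: Let's think step by step."
    )

if __name__ == "__main__":
    print(build_re2_prompt(
        "A farmer has 17 sheep. All but 9 run away. How many are left?"
    ))
```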
Oct 11, Recap for September
Sep 25, Notes on Gemini models
Sep 23, Markov Decision Process
Sep 19, Bellman Equation
Sep 19, Notes on Qwen2.5
Sep 18, Bayes’ Theorem
Sep 13, Notes on OpenAI o1 series models
Sep 9, Testing DeepSeek-V2.5 and Reflection-70b
Sep 3, Notes on Anthropic Prompt Tutorial

Sep 1, Recap for August

In August, I focused on fine-tuning the Qwen2-7B model and evaluating its performance on our private benchmark of over 200 question-answer pairs. I also evaluated various large language models (LLMs), such as GPT-4, Gemini 1.5 Pro, and Llama 3.1-405B, on this benchmark to compare their capabilities in areas such as reasoning, coding, and common sense.