This post is a bilingual (Chinese-English) rendition of the official LangChain blog post "The rise of context engineering". Why it is worth reading: with the rapid development of agents, the concept of "context" is receiving more and more attention. Whether you are doing Vibe Coding or Deep Research, the more specific and complete the context you provide, the better the final result. LangGraph is one of the best-known agent frameworks, and understanding how its team thinks about the importance of context is a great help both for our everyday personal use of AI and for putting AI into production in business settings.

Header image from Dex Horthy on Twitter
Context engineering is building dynamic systems to provide the right information and tools in the right format such that the LLM can plausibly accomplish the task.
Most of the time when an agent is not performing reliably, the underlying cause is that the appropriate context, instructions, and tools have not been communicated to the model.
LLM applications are evolving from single prompts to more complex, dynamic agentic systems. As such, context engineering is becoming the most important skill an AI engineer can develop.
What is context engineering?
Context engineering is building dynamic systems to provide the right information and tools in the right format such that the LLM can plausibly accomplish the task.
This is the definition that I like, which builds upon recent takes on this from Tobi Lutke, Ankur Goyal, and Walden Yan. Let’s break it down.
Context engineering is a system
Complex agents likely get context from many sources. Context can come from the developer of the application, the user, previous interactions, tool calls, or other external data. Pulling these all together involves a complex system.
This system is dynamic
Many of these pieces of context can come in dynamically. As such, the logic for constructing the final prompt needs to be dynamic as well. It is not just a static prompt.
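To make this concrete, here is a minimal sketch in plain Python; the helper functions are hypothetical stand-ins for whatever sources an application actually has:

```python
# Hypothetical context sources; each stands in for a real lookup.
def fetch_user_preferences(conversation_id: str) -> str:
    return ""  # e.g. read from a long-term memory store

def summarize_history(conversation_id: str) -> str:
    return ""  # e.g. summarize prior turns (short-term memory)

def run_retrieval(query: str) -> str:
    return ""  # e.g. search external documents

def build_prompt(user_input: str, conversation_id: str) -> str:
    """Assemble the final prompt from whatever context is available."""
    sections = ["You are a helpful assistant."]
    if prefs := fetch_user_preferences(conversation_id):
        sections.append(f"User preferences:\n{prefs}")
    if history := summarize_history(conversation_id):
        sections.append(f"Conversation so far (summarized):\n{history}")
    if docs := run_retrieval(user_input):
        sections.append(f"Relevant documents:\n{docs}")
    sections.append(f"User question: {user_input}")
    # The prompt is constructed dynamically, not from a static string.
    return "\n\n".join(sections)
```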
You need the right information
A common reason agentic systems don’t perform is they just don’t have the right context. LLMs cannot read minds - you need to give them the right information. Garbage in, garbage out.
You need the right tools
It may not always be the case that the LLM will be able to solve the task based solely on the inputs. In these situations, if you want to empower the LLM to do so, you will want to make sure that it has the right tools. These could be tools to look up more information, take actions, or anything in between. Giving the LLM the right tools is just as important as giving it the right information.
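As a hedged sketch, using LangChain's `tool` decorator with a made-up weather lookup standing in for a real API, this is what giving the model a tool can look like:

```python
from langchain_core.tools import tool

@tool
def search_weather(city: str) -> str:
    """Look up the current weather for a city."""
    # A real implementation would call a weather service here.
    return f"Sunny, 22°C in {city}"

# Assuming `llm` is a chat model that supports tool calling:
# llm_with_tools = llm.bind_tools([search_weather])
# The model can now choose to call search_weather when the inputs
# alone are not enough to answer the question.
```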
The format matters
Just like communicating with humans, how you communicate with LLMs matters. A short but descriptive error message will go a lot further than a large JSON blob. This also applies to tools. What the input parameters to your tools are matters a lot when making sure that LLMs can use them.
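A small illustration of the difference, with an invented error payload:

```python
import json

# The same tool failure, communicated two ways.
response = {
    "status": 404,
    "error": {"code": "USER_NOT_FOUND", "details": {"user_id": "u_123"}},
    "meta": {"request_id": "req_9f8e", "elapsed_ms": 41},
}

# Option 1: dump the raw blob into the context; the model must dig
# for the signal.
tool_output = json.dumps(response)

# Option 2: a short, descriptive message that states the problem and
# a next step; this tends to steer the model much better.
tool_output = "Error: user u_123 not found. Try searching by email instead."
```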
Can it plausibly accomplish the task?
This is a great question to be asking as you think about context engineering. It reinforces that LLMs are not mind readers - you need to set them up for success. It also helps separate the failure modes. Is it failing because you haven’t given it the right information or tools? Or does it have all the right information and it just messed up? These failure modes have very different ways to fix them.
Why is context engineering important?
When agentic systems mess up, it's largely because an LLM messes up. Thinking from first principles, LLMs can mess up for two reasons:
- The underlying model just messed up; it isn't good enough
- The underlying model was not passed the appropriate context to make a good output
More often than not (especially as the models get better) model mistakes are caused more by the second reason. The context passed to the model may be bad for a few reasons:
- There is just missing context that the model would need to make the right decision. Models are not mind readers. If you do not give them the right context, they won’t know it exists.
- The context is formatted poorly. Just like with humans, communication is important! How you format data when passing it into a model absolutely affects how it responds.
How is context engineering different from prompt engineering?
Why the shift from "prompts" to "context"? Early on, developers focused on phrasing prompts cleverly to coax better answers. But as applications grow more complex, it's becoming clear that providing complete and structured context to the AI is far more important than any magic wording.
I would also argue that prompt engineering is a subset of context engineering. Even if you have all the context, how you assemble it in the prompt still absolutely matters. The difference is that you are not architecting your prompt to work well with a single set of input data, but rather to take a set of dynamic data and format it properly.
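For example, with LangChain's `ChatPromptTemplate` (the variable names here are our own), the template is fixed while the data flowing through it is dynamic:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a support agent. Known user preferences:\n{preferences}"),
    ("human", "{question}"),
])

# At runtime, dynamic context is formatted into the same structure.
messages = prompt.invoke({
    "preferences": "Prefers concise answers; time zone is UTC+8.",
    "question": "When is my invoice due?",
})
```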
I would also highlight that a key part of context is often core instructions for how the LLM should behave. This is often a key part of prompt engineering. Would you say that providing clear and detailed instructions for how the agent should behave is context engineering or prompt engineering? I think it’s a bit of both.
Examples of context engineering
Some basic examples of good context engineering include:
- Tool use: Making sure that if an agent needs access to external information, it has tools that can access it. When tools return information, they are formatted in a way that is maximally digestible for LLMs.
- Short term memory: If a conversation is going on for a while, creating a summary of the conversation and using that in the future.
- Long term memory: If a user has expressed preferences in a previous conversation, being able to fetch that information.
- Prompt Engineering: Instructions for how an agent should behave are clearly enumerated in the prompt.
- Retrieval: Fetching information dynamically and inserting it into the prompt before calling the LLM (sketched below).
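As a concrete sketch of that last bullet, assuming a LangChain-style retriever and chat model configured elsewhere:

```python
def answer_with_retrieval(question: str, retriever, llm) -> str:
    """Fetch documents, format them, and only then call the LLM."""
    docs = retriever.invoke(question)  # fetch information dynamically
    context = "\n\n".join(doc.page_content for doc in docs)
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm.invoke(prompt).content
```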
How LangGraph enables context engineering
When we built LangGraph, we built it with the goal of making it the most controllable agent framework. This also allows it to perfectly enable context engineering.
With LangGraph, you can control everything. You decide what steps are run. You decide exactly what goes into your LLM. You decide where you store the outputs. You control everything.
This allows you to do all the context engineering you desire. One of the downsides of agent abstractions (which most other agent frameworks emphasize) is that they restrict context engineering. There may be places where you cannot change exactly what goes into the LLM, or exactly what steps are run beforehand.
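As a minimal sketch of that control in LangGraph (the node bodies are illustrative stubs, not a real pipeline):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    context: str
    answer: str

def gather_context(state: State) -> dict:
    # You control exactly which steps run before the model is called.
    return {"context": f"(docs relevant to: {state['question']})"}

def call_model(state: State) -> dict:
    # You control exactly what goes into the LLM; a real node would
    # call a chat model here with the assembled prompt.
    return {"answer": f"Answer based on {state['context']}"}

builder = StateGraph(State)
builder.add_node("gather_context", gather_context)
builder.add_node("call_model", call_model)
builder.add_edge(START, "gather_context")
builder.add_edge("gather_context", "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()

result = graph.invoke({"question": "What is context engineering?"})
```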
Side note: a very good read is Dex Horthy's "12 Factor Agents". A lot of the points there relate to context engineering ("own your prompts", "own your context building", etc). The header image for this blog is also taken from Dex. We really enjoy the way he communicates about what is important in the space.
How LangSmith helps with context engineering
LangSmith is our LLM application observability and evals solution. One of the key features in LangSmith is the ability to trace your agent calls. Although the term "context engineering" didn't exist when we built LangSmith, it aptly describes what this tracing helps with.
LangSmith lets you see all the steps that happen in your agent. This lets you see what steps were run to gather the data that was sent into the LLM.
LangSmith lets you see the exact inputs and outputs to the LLM. This lets you see exactly what went into the LLM - the data it had and how it was formatted. You can then debug whether that contains all the relevant information that is needed for the task. This includes what tools the LLM has access to - so you can debug whether it's been given the appropriate tools to help with the task at hand.
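For reference, here is one common way to turn tracing on. This is a hedged sketch: the environment-variable names reflect current LangSmith documentation and may evolve, and the traced function is a made-up example.

```python
import os

# Point the SDK at LangSmith; values here are placeholders.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-api-key>"
os.environ["LANGSMITH_PROJECT"] = "context-engineering-demo"

# LangChain/LangGraph calls are traced automatically once tracing is
# on. Plain functions can be added to the trace with @traceable, so
# the context-gathering steps show up alongside the LLM calls.
from langsmith import traceable

@traceable
def assemble_context(question: str) -> str:
    return f"(context gathered for: {question})"
```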
Communication is all you need
A few months ago I wrote a blog called "Communication is all you need". The main point was that communicating to the LLM is hard, and not appreciated enough, and often the root cause of a lot of agent errors. Many of these points have to do with context engineering!
Context engineering isn't a new idea - agent builders have been doing it for the past year or two. It's a new term that aptly describes an increasingly important skill. We'll be writing and sharing more on this topic. We think a lot of the tools we've built (LangGraph, LangSmith) are perfectly built to enable context engineering, and so we're excited to see the emphasis on this take off.
- Author: BubbleBrain
- URL: https://chengshengddeng.com/article/rise-of-context-engineering
- Copyright: All articles in this blog, unless otherwise stated, are licensed under the BY-NC-SA agreement. Please credit the source!