LangChain Core Modules: Data Connection - Document Loaders

Document loaders load data from a source as Documents. A Document is a piece of text together with its associated metadata.

For example, there are document loaders for loading simple .txt files, for loading ArXiv papers, and for loading the text content of any web page.

The Document Class

The following code defines a class named Document that lets users interact with a document's content: it exposes the document's paragraphs and summary, and provides a lookup function to search for a specific string in the document.

from typing import List

from pydantic import BaseModel, Field


# Document class built on pydantic's BaseModel.
class Document(BaseModel):
    """Interface for interacting with a document."""

    # The main content of the document.
    page_content: str
    # The string currently being looked up.
    lookup_str: str = ""
    # The lookup index, starting at 0.
    lookup_index: int = 0
    # Stores any metadata associated with the document.
    metadata: dict = Field(default_factory=dict)

    @property
    def paragraphs(self) -> List[str]:
        """Paragraphs of the page."""
        # Split the content into paragraphs on "\n\n".
        return self.page_content.split("\n\n")

    @property
    def summary(self) -> str:
        """Summary of the page (the first paragraph)."""
        # Return the first paragraph as the summary.
        return self.paragraphs[0]

    # This method mimics a command-line style search.
    def lookup(self, string: str) -> str:
        """Look up a term in the page, imitating cmd-F functionality."""
        # If the input differs from the current lookup string, reset the string and index.
        if string.lower() != self.lookup_str:
            self.lookup_str = string.lower()
            self.lookup_index = 0
        else:
            # If the input matches the current lookup string, advance the index by one.
            self.lookup_index += 1
        # Collect all paragraphs that contain the lookup string.
        lookups = [p for p in self.paragraphs if self.lookup_str in p.lower()]
        # Return the appropriate message based on the results.
        if len(lookups) == 0:
            return "No Results"
        elif self.lookup_index >= len(lookups):
            return "No More Results"
        else:
            result_prefix = f"(Result {self.lookup_index + 1}/{len(lookups)})"
            return f"{result_prefix} {lookups[self.lookup_index]}"

BaseLoader Class Definition

The BaseLoader class defines how documents are loaded from different data sources and provides an optional method for splitting the loaded documents. Using this class as a base, developers can create custom loaders for specific data sources while ensuring that every loader offers a load method. The load_and_split method adds an extra capability: splitting the loaded documents into smaller chunks when needed.

from abc import ABC, abstractmethod
from typing import List, Optional

from langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter


# Base loader class.
class BaseLoader(ABC):
    """Base loader class definition."""

    # Abstract method; every subclass must implement it.
    @abstractmethod
    def load(self) -> List[Document]:
        """Load data and turn it into Document objects."""

    # This method loads the documents and splits them into smaller chunks.
    def load_and_split(
        self, text_splitter: Optional[TextSplitter] = None
    ) -> List[Document]:
        """Load documents and split them into chunks."""
        # If no text splitter is provided, fall back to the RecursiveCharacterTextSplitter.
        if text_splitter is None:
            _text_splitter: TextSplitter = RecursiveCharacterTextSplitter()
        else:
            _text_splitter = text_splitter
        # Load the documents first.
        docs = self.load()
        # Then split each document with _text_splitter.
        return _text_splitter.split_documents(docs)
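
To make the contract concrete, here is a minimal sketch of a custom loader; the class name and the file-reading details are illustrative and not part of LangChain:

class SimpleTxtLoader(BaseLoader):
    """Illustrative loader that reads one text file into a single Document."""

    def __init__(self, file_path: str, encoding: str = "utf-8"):
        self.file_path = file_path
        self.encoding = encoding

    def load(self) -> List[Document]:
        # Read the whole file and wrap it in one Document, recording the source path in metadata.
        with open(self.file_path, encoding=self.encoding) as f:
            text = f.read()
        return [Document(page_content=text, metadata={"source": self.file_path})]

Because load_and_split is inherited from BaseLoader, chunking comes for free: SimpleTxtLoader("notes.txt").load_and_split() would load the file and return the split chunks (the file name here is hypothetical).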

Loading a Txt File with TextLoader

from langchain.document_loaders import TextLoader

docs = TextLoader('../tests/state_of_the_union.txt', encoding='utf-8').load()
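
The call returns a list with a single Document; a quick way to inspect it (the exact output depends on your local copy of the file):

print(len(docs))                   # typically 1: the whole file as one Document
print(docs[0].metadata)            # e.g. {'source': '../tests/state_of_the_union.txt'}
print(docs[0].page_content[:200])  # first 200 characters of the text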

Loading ArXiv Papers with ArxivLoader

ArxivLoader Class Definition

The ArxivLoader class is designed specifically for fetching documents from the Arxiv platform. The user supplies a search query, and the loader interacts with the Arxiv API to retrieve a list of documents matching that query. These documents are then returned in the standard Document format.

# Loader class for the Arxiv platform.
class ArxivLoader(BaseLoader):
    """Load documents from `Arxiv` based on a search query.

    This loader converts Arxiv's raw PDF documents into plain text for easier processing.
    """

    # Initializer.
    def __init__(
        self,
        query: str,
        load_max_docs: Optional[int] = 100,
        load_all_available_meta: Optional[bool] = False,
    ):
        self.query = query
        """The query or keywords passed to the Arxiv API for the search."""
        self.load_max_docs = load_max_docs
        """Upper limit on the number of documents retrieved by the search."""
        self.load_all_available_meta = load_all_available_meta
        """Flag that decides whether to load all metadata associated with the documents."""

    # Load method that fetches documents based on the query.
    def load(self) -> List[Document]:
        arxiv_client = ArxivAPIWrapper(
            load_max_docs=self.load_max_docs,
            load_all_available_meta=self.load_all_available_meta,
        )
        docs = arxiv_client.search(self.query)
        return docs

ArxivLoader takes the following parameters:

  • query: the text used to find documents on ArXiv
  • load_max_docs: defaults to 100. Use it to limit the number of downloaded documents. Downloading all 100 documents takes time, so use a small number when experimenting.
  • load_all_available_meta: defaults to False. By default only the most important fields are downloaded: published date (when the document was published or last updated), title, authors, and summary. If set to True, the remaining fields are downloaded as well (see the sketch below).
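
A minimal sketch of requesting the full metadata set; the query string and the example field names in the comment are illustrative assumptions, not fixed outputs:

from langchain.document_loaders import ArxivLoader

# Keep load_max_docs small so the experiment stays fast.
docs = ArxivLoader(
    query="few-shot learning",
    load_max_docs=2,
    load_all_available_meta=True,
).load()

# With load_all_available_meta=True the metadata dict carries extra fields
# beyond Published/Title/Authors/Summary (the exact keys depend on the paper
# and the arxiv client version).
print(docs[0].metadata.keys())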

Using the GPT-3 paper (Language Models are Few-Shot Learners) as an example, here is how to use ArxivLoader.

Arxiv link for the GPT-3 paper: https://arxiv.org/abs/2005.14165

from langchain.document_loaders import ArxivLoader
query = "2005.14165"
docs = ArxivLoader(query=query, load_max_docs=5).load()
len(docs)
docs[0].metadata # meta-information of the Document
{'Published': '2020-07-22',
'Title': 'Language Models are Few-Shot Learners',
'Authors': 'Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei',
'Summary': "Recent work has demonstrated substantial gains on many NLP tasks and\nbenchmarks by pre-training on a large corpus of text followed by fine-tuning on\na specific task. While typically task-agnostic in architecture, this method\nstill requires task-specific fine-tuning datasets of thousands or tens of\nthousands of examples. By contrast, humans can generally perform a new language\ntask from only a few examples or from simple instructions - something which\ncurrent NLP systems still largely struggle to do. Here we show that scaling up\nlanguage models greatly improves task-agnostic, few-shot performance, sometimes\neven reaching competitiveness with prior state-of-the-art fine-tuning\napproaches. Specifically, we train GPT-3, an autoregressive language model with\n175 billion parameters, 10x more than any previous non-sparse language model,\nand test its performance in the few-shot setting. For all tasks, GPT-3 is\napplied without any gradient updates or fine-tuning, with tasks and few-shot\ndemonstrations specified purely via text interaction with the model. GPT-3\nachieves strong performance on many NLP datasets, including translation,\nquestion-answering, and cloze tasks, as well as several tasks that require\non-the-fly reasoning or domain adaptation, such as unscrambling words, using a\nnovel word in a sentence, or performing 3-digit arithmetic. At the same time,\nwe also identify some datasets where GPT-3's few-shot learning still struggles,\nas well as some datasets where GPT-3 faces methodological issues related to\ntraining on large web corpora. Finally, we find that GPT-3 can generate samples\nof news articles which human evaluators have difficulty distinguishing from\narticles written by humans. We discuss broader societal impacts of this finding\nand of GPT-3 in general."}

Loading Web Page Content with UnstructuredURLLoader

It uses the unstructured library's partition function to detect the MIME type and route the file to the appropriate partitioner.

The loader can run in two modes: "single" and "elements". In "single" mode, the document is returned as a single langchain Document object. In "elements" mode, the unstructured library splits the document into elements such as titles and narrative text. You can pass additional unstructured kwargs after mode to apply different unstructured settings.

Main parameters of UnstructuredURLLoader:

  • urls: the list of web page URLs to load
  • continue_on_failure: defaults to True; whether to continue after a URL fails to load
  • mode: defaults to single

Using the ReAct page (https://react-lm.github.io/) as an example:

from langchain.document_loaders import UnstructuredURLLoader
urls = [
"https://react-lm.github.io/",
]
loader = UnstructuredURLLoader(urls=urls)
data = loader.load()
data[0].metadata
print(data[0].page_content)

Output:

{'source': 'https://react-lm.github.io/'}
ReAct: Synergizing Reasoning and Acting in Language Models Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao [Paper] [Code] [Blogpost] [BibTex] Language models are getting better at reasoning (e.g. chain-of-thought prompting) and acting (e.g. WebGPT, SayCan, ACT-1), but these two directions have remained separate.
ReAct asks, what if these two fundamental capabilities are combined? Abstract While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics. In this paper, we explore the use of LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with external sources, such as knowledge bases or environments, to gather additional information. We apply our approach, named ReAct, to a diverse set of language and decision making tasks and demonstrate its effectiveness over state-of-the-art baselines, as well as improved human interpretability and trustworthiness over methods without reasoning or acting components. Concretely, on question answering (HotpotQA) and fact verification (Fever), ReAct overcomes issues of hallucination and error propagation prevalent in chain-of-thought reasoning by interacting with a simple Wikipedia API, and generates human-like task-solving trajectories that are more interpretable than baselines without reasoning traces. On two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted with only one or two in-context examples. ReAct Prompting A ReAct prompt consists of few-shot task-solving trajectories, with human-written text reasoning traces and actions, as well as environment observations in response to actions (see examples in paper appendix!)
ReAct prompting is intuitive and flexible to design, and achieves state-of-the-art few-shot performances across a variety of tasks, from question answering to online shopping! HotpotQA Example The reason-only baseline (i.e. chain-of-thought) suffers from misinformation (in red) as it is not grounded to external environments to obtain and update knowledge, and has to rely on limited internal knowledge.
The act-only baseline suffers from the lack of reasoning, unable to synthesize the final answer despite having the same actions and observation as ReAct in this case.
In contrast, ReAct solves the task with a interpretable and factual trajectory. ALFWorld Example For decision making tasks, we design human trajectories with sparse reasoning traces, letting the LM decide when to think vs. act.
ReAct isn't perfect --- below is a failure example on ALFWorld. However, ReAct format allows easy human inspection and behavior correction by changing a couple of model thoughts, an exciting novel approach to human alignment! ReAct Finetuning: Initial Results Prompting has limited context window and learning support.
Initial finetuning results on HotpotQA using ReAct prompting trajectories suggest:
(1) ReAct is the best fintuning format across model sizes;
(2) ReAct finetuned smaller models outperform prompted larger models!
loader = UnstructuredURLLoader(urls=urls, mode="elements")
new_data = loader.load()
new_data[0].page_content

Output:

'ReAct: Synergizing Reasoning and Acting in Language Models'
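
Finally, since every loader inherits load_and_split from BaseLoader, the page content can be chunked directly. A minimal sketch; the chunk_size and chunk_overlap values are arbitrary choices for illustration:

from langchain.document_loaders import UnstructuredURLLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = UnstructuredURLLoader(urls=["https://react-lm.github.io/"])
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)

# load_and_split() loads the page, then splits it into roughly 500-character chunks.
chunks = loader.load_and_split(text_splitter=splitter)
print(len(chunks))
print(chunks[0].page_content[:200])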
