Trying GraphRAG with OpenAI Models: Border Town (《边城》) as an Example

Before moving on to SiliconCloud, let's first see how GraphRAG performs with OpenAI's models.

GraphRAG is an AI-based approach to content understanding and search: it uses LLMs to parse data into a knowledge graph and then answers questions over the user's private dataset.

GitHub: https://github.com/microsoft/graphrag

Official docs: https://microsoft.github.io/graphrag

Now let's get started with GraphRAG.

Tip

GraphRAG consumes a lot of tokens. When you're just getting started, you don't have to follow the official walkthrough exactly: use a shorter text and switch to the gpt-4o-mini model to keep costs down.

Here we use Shen Congwen's Border Town (《边城》) as the example text.

Create a Python virtual environment and install GraphRAG:

pip install graphrag
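
If you have not created the virtual environment yet, a minimal sketch (assuming Python 3.10+ is installed; the environment name venv is arbitrary):

python -m venv venv
source venv/bin/activate    # on Windows: venv\Scripts\activate
pip install graphrag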

Once it's installed:

mkdir biancheng
mkdir biancheng/input

This just creates the project root and an input folder inside it (you can also create them by hand). Then put the Border Town txt file into the input folder, as shown below:
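
The resulting layout should look roughly like this (biancheng.txt is just an illustrative file name; use whatever your copy of the novel is called):

biancheng/
    input/
        biancheng.txt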

Now initialize the project:

python -m graphrag.index --init --root ./biancheng

When initialization finishes, a few files appear under biancheng:

Put your OpenAI API key into the .env file:
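
The .env file generated by the init step contains a single variable; a minimal sketch (the value is a placeholder, not a real key):

GRAPHRAG_API_KEY=<your-openai-api-key>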

Then make a few adjustments in settings.yaml. My configuration is as follows:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: gpt-4o-mini
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: https://<instance>.openai.azure.com
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: text-embedding-3-small
    # api_base: https://<instance>.openai.azure.com
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 1200
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: true
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

To save cost, the chat model is switched to gpt-4o-mini:

To be able to open the graphml file later in Gephi or similar tools, snapshots.graphml is set to true:

That completes the configuration. Now start indexing:

python -m graphrag.index --root ./biancheng

Indexing completed:

You can now take a look at the generated nodes and edges:

And now you can start querying.

First, a global query:

python -m graphrag.query --root ./biancheng --method global "这篇小说讲了什么主题?"

Then a local query:

python -m graphrag.query --root ./biancheng --method local "翠翠在白鸡关发生了什么?"

Border Town is roughly 50,000-60,000 Chinese characters. Checking the cost:

The whole run cost only $0.18, so gpt-4o-mini offers very good value for money.

Trying GraphRAG with SiliconCloud: Romance of the Three Kingdoms (《三国演义》) as an Example

OpenAI's models work well, but using OpenAI from inside China comes with restrictions, and many people don't have an OpenAI API key and may not be able to get one for now. SiliconCloud is a good alternative: it provides chat and embedding models behind an OpenAI-compatible API, offers a range of strong open-source models, and gives new accounts some free credit, so you can try it right away.

To get the pipeline running quickly on SiliconCloud, it helps to start with a much shorter text, so let's first walk through the story of Chang'e Flying to the Moon (《嫦娥奔月》).

The steps are the same as before; only a few configuration values need to change.

First, replace the API key in .env with your SiliconCloud API key:

Then update the following parts of settings.yaml.

First, the chat model section:

I used the meta-llama/Meta-Llama-3.1-70B-Instruct model here. For the exact model names to use, see the SiliconCloud documentation: https://docs.siliconflow.cn/docs/getting-started
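
A minimal sketch of the llm section for SiliconCloud, showing only the fields that differ from the OpenAI setup (the api_base is SiliconCloud's OpenAI-compatible endpoint, the same one used in the mixed configuration later in this post):

llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat
  model: meta-llama/Meta-Llama-3.1-70B-Instruct
  api_base: https://api.siliconflow.cn/v1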

Next, the embedding model section:

The embedding model used here is BAAI/bge-large-en-v1.5. BAAI/bge-large-zh-v1.5 kept failing for me (you can try it yourself); I don't know why yet.

The exact embedding model names are also listed in the documentation:
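
And a matching sketch of the embeddings section (same endpoint assumption as above):

embeddings:
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding
    model: BAAI/bge-large-en-v1.5
    api_base: https://api.siliconflow.cn/v1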

Start indexing:
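
The command is the same as before, just pointed at this run's project root (change1, matching the query commands below):

python -m graphrag.index --root ./change1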

Viewing the nodes:

Viewing the edges:

A global query:

python -m graphrag.query --root ./change1 --method global "这篇故事讲了什么主题?"

A local query:

python -m graphrag.query --root ./change1 --method local "嫦娥送了什么礼物给天帝?"

Now that the pipeline runs end to end, it's time to try Romance of the Three Kingdoms!

With the same settings. The novel is much longer, so indexing is slow; be patient:

Pipeline finished:

Viewing the nodes:

Viewing the edges:

A global query:

python -m graphrag.query --root ./sanguo --method global "三国讲了什么故事?"

A local query:

python -m graphrag.query --root ./sanguo --method local "赤壁之战是怎么打败曹操的?"

Trying GraphRAG with Local Models

For a fully local run you can use an Ollama chat model. Ollama's embedding endpoint is not OpenAI-compatible, so the embedding model can be served with LM Studio instead.
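
Before indexing, the chat model has to be available locally. A minimal sketch, assuming a standard Ollama installation (the tag matches the model name in the configuration below):

ollama pull llama3.1:70b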

Configuration:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: llama3.1:70b
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://localhost:11434/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: nomic-ai/nomic-embed-text-v1.5-GGUF/nomic-embed-text-v1.5.Q2_K.gguf
    api_base: http://localhost:1234/v1
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

In theory this works, but my machine isn't powerful enough to run a reasonably large model, so I couldn't test it properly.

Mixing Providers

You can also pair an online chat model API with a local embedding model. And since SiliconCloud's embedding models are currently free to use, you can simply use SiliconCloud for embeddings.

I wanted to test which models can get the GraphRAG pipeline through end to end, but some vendors only offer chat models, or their embedding models are not OpenAI-compatible. What then?

Use two keys: a SiliconCloud key for the embedding model, and the other vendor's key for the chat model.

For example, the environment variables can be set like this:
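
A minimal sketch of the .env file for this setup; the variable names match the ${Other_API_KEY} and ${GRAPHRAG_API_KEY} placeholders in the configuration below, and both values are placeholders:

GRAPHRAG_API_KEY=<your-siliconcloud-api-key>
Other_API_KEY=<your-other-vendor-api-key>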

And the configuration file can be written like this:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${Other_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: glm-4-air
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: https://open.bigmodel.cn/api/paas/v4
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: BAAI/bge-large-en-v1.5
    api_base: https://api.siliconflow.cn/v1
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: true
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

I tried a number of large models. In this simple test, the ones that could get the GraphRAG pipeline through end to end (just through; answer quality is not guaranteed) were the following:

Tip

GraphRAG consumes a huge number of tokens, so keep an eye on your quota!

For a text of just over two thousand characters, a single GraphRAG run typically burns well over 100,000 tokens:

References

1. https://siliconflow.cn/zh-cn/siliconcloud

2. https://github.com/microsoft/graphrag/discussions/321

3. https://github.com/microsoft/graphrag/issues/374

4. https://www.youtube.com/watch?v=BLyGDTNdad0
