Preface

I recently came across a new idea: the distinction between scientific discovery and technological invention, where discovery ranks above invention. I find it fairly convincing. We often say China's science and technology lag behind Europe and America, yet our everyday impression, whether in construction, hardware, software, or theory, is that we are already ahead. So why do we still say we are behind?

The idea that scientific discovery sits above technological invention explains this well: things like online payment and our construction industry are technological inventions, not scientific discoveries. Discovery is what leads invention, and Europe and America are far ahead of us in scientific discovery. In technological invention their lead is mostly small, though in some areas it is still large, ChatGPT being one example.

The point of saying all this is that software development is technological invention, so however skilled an expert in this field may be, it is nothing extraordinary.

In other words, even a Tsinghua or Peking University graduate, once they join the ranks of technological invention, is nothing special either.

Neural networks are not hard. This series of articles proves it: with no Python and no background in algorithms, you can still learn them in a short time; in my estimate, anywhere from a week to a month.

So those who know this material have no reason to look down on those who don't, and those who don't needn't put the experts on a pedestal.

What This Article Covers

This article walks through building a chatbot with a neural network.

Preparation

Before running the code, we need to set up NLTK.

First, install the nltk package:

pip install nltk

Then download the NLTK data. Create a .py file with the following code:

import nltk
nltk.download()

Then open cmd as administrator and run that file:

C:\Project\python_test\github\PythonTest\venv\Scripts\python.exe C:\Project\python_test\github\PythonTest\robot_nltk\dlnltk.py

A download window then pops up, where you can change the save location.



PS: Some sources say you can run nltk.download('punkt') directly to download just the specific resource we need, but that didn't work for me, so I downloaded everything.

# nltk.download('punkt')    # downloads the 'punkt' resource of NLTK (Natural Language Toolkit), typically used for tokenization
# nltk.download('popular')  # downloads most of the commonly used NLTK resources, which covers far more than 'punkt' alone
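
If you do want a targeted download, the snippet below is a minimal sketch of how it is usually done; the target directory here is a made-up example, and nltk.data.path is how NLTK is told where to look at runtime:

import nltk

# Hypothetical save location; use whatever directory you picked in the GUI.
target_dir = r"C:\nltk_data"

# Fetch only the 'punkt' tokenizer models into that directory.
nltk.download('punkt', download_dir=target_dir)

# Make NLTK search this directory when loading resources.
nltk.data.path.append(target_dir)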

Writing the Code

Writing the Model

First, write a NeuralNet (model.py) as follows:

import torch.nn as nn

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.l2 = nn.Linear(hidden_size, hidden_size)
        self.l3 = nn.Linear(hidden_size, num_classes)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.l1(x)
        out = self.relu(out)
        out = self.l2(out)
        out = self.relu(out)
        out = self.l3(out)
        # no activation and no softmax at the end
        return out
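
As a quick sanity check (the sizes here are made up, not from the project), you can feed the network a random vector and confirm that it outputs one raw score per class:

import torch
from model import NeuralNet

# Made-up sizes: a 54-word vocabulary and 7 tags.
net = NeuralNet(input_size=54, hidden_size=8, num_classes=7)
x = torch.rand(1, 54)   # one fake bag-of-words vector
print(net(x).shape)     # torch.Size([1, 7]): raw logits, no softmax applied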

Then write a utility module, nltk_utils.py, as follows:

import numpy as np
import nltk
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def tokenize(sentence):
    # split a sentence into words/tokens
    return nltk.word_tokenize(sentence)

def stem(word):
    # reduce a word to its lowercase stem
    return stemmer.stem(word.lower())

def bag_of_words(tokenized_sentence, words):
    # mark 1 for each vocabulary word that occurs in the sentence, 0 otherwise
    sentence_words = [stem(word) for word in tokenized_sentence]
    bag = np.zeros(len(words), dtype=np.float32)
    for idx, w in enumerate(words):
        if w in sentence_words:
            bag[idx] = 1
    return bag

a = "How long does shipping take?"
print(a)
a = tokenize(a)
print(a)

This file can be run directly to test the utility functions.
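
For example, with a made-up vocabulary, bag_of_words marks a 1 for every vocabulary word whose stem appears in the tokenized sentence:

from nltk_utils import bag_of_words

sentence = ["hello", "how", "are", "you"]
words = ["hi", "hello", "I", "you", "bye", "thank", "cool"]
print(bag_of_words(sentence, words))   # [0. 1. 0. 1. 0. 0. 0.]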

Stemming and Tokenization

Stemming reduces a word to its stem. The logic looks like this:

words =["0rganize","organizes", "organizing"]
stemmed_words =[stem(w) for w in words]
print(stemmed_words)

All three forms collapse to the same stem; the print statement outputs ['organ', 'organ', 'organ'].

Tokenization splits a sentence into tokens.

The following code tests tokenization:

a="How long does shipping take?"
print(a)
a = tokenize(a)
print(a)

Tokenization splits the sentence into words and punctuation; the code above prints ['How', 'long', 'does', 'shipping', 'take', '?'].

Writing the Sample Data

First, create the JSON file intents.json (English version):

{
  "intents": [
    {
      "tag": "greeting",
      "patterns": [
        "Hi",
        "Hey",
        "How are you",
        "Is anyone there?",
        "Hello",
        "Good day"
      ],
      "responses": [
        "Hey :-)",
        "Hello, thanks for visiting",
        "Hi there, what can I do for you?",
        "Hi there, how can I help?"
      ]
    },
    {
      "tag": "goodbye",
      "patterns": ["Bye", "See you later", "Goodbye"],
      "responses": [
        "See you later, thanks for visiting",
        "Have a nice day",
        "Bye! Come back again soon."
      ]
    },
    {
      "tag": "thanks",
      "patterns": ["Thanks", "Thank you", "That's helpful", "Thank's a lot!"],
      "responses": ["Happy to help!", "Any time!", "My pleasure"]
    },
    {
      "tag": "items",
      "patterns": [
        "Which items do you have?",
        "What kinds of items are there?",
        "What do you sell?"
      ],
      "responses": [
        "We sell coffee and tea",
        "We have coffee and tea"
      ]
    },
    {
      "tag": "payments",
      "patterns": [
        "Do you take credit cards?",
        "Do you accept Mastercard?",
        "Can I pay with Paypal?",
        "Are you cash only?"
      ],
      "responses": [
        "We accept VISA, Mastercard and Paypal",
        "We accept most major credit cards, and Paypal"
      ]
    },
    {
      "tag": "delivery",
      "patterns": [
        "How long does delivery take?",
        "How long does shipping take?",
        "When do I get my delivery?"
      ],
      "responses": [
        "Delivery takes 2-4 days",
        "Shipping takes 2-4 days"
      ]
    },
    {
      "tag": "funny",
      "patterns": [
        "Tell me a joke!",
        "Tell me something funny!",
        "Do you know a joke?"
      ],
      "responses": [
        "Why did the hipster burn his mouth? He drank the coffee before it was cool.",
        "What did the buffalo say when his son left for college? Bison."
      ]
    }
  ]
}
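
Before training, it is worth a quick check that the file parses and the tags look right; a minimal sketch:

import json

with open('intents.json', 'r', encoding='utf-8') as f:
    intents = json.load(f)

# ['greeting', 'goodbye', 'thanks', 'items', 'payments', 'delivery', 'funny']
print([intent['tag'] for intent in intents['intents']])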

And intents_cn.json, the Chinese version of the data:

{
  "intents": [
    {
      "tag": "greeting",
      "patterns": [
        "你好",
        "嗨",
        "您好",
        "有谁在吗?",
        "你好呀",
        "早上好",
        "下午好",
        "晚上好"
      ],
      "responses": [
        "你好!有什么我可以帮忙的吗?",
        "您好!感谢您的光临。",
        "嗨!有什么我可以为您效劳的吗?",
        "早上好!今天怎么样?"
      ]
    },
    {
      "tag": "goodbye",
      "patterns": [
        "再见",
        "拜拜",
        "下次见",
        "保重",
        "晚安"
      ],
      "responses": [
        "再见!希望很快能再次见到你。",
        "拜拜!祝你有个愉快的一天。",
        "保重!下次见。",
        "晚安,祝你做个好梦!"
      ]
    },
    {
      "tag": "thanks",
      "patterns": [
        "谢谢",
        "感谢",
        "多谢",
        "非常感谢"
      ],
      "responses": [
        "不客气!很高兴能帮到你。",
        "没问题!随时为您服务。",
        "别客气!希望能帮到您。",
        "很高兴能帮忙!"
      ]
    },
    {
      "tag": "help",
      "patterns": [
        "你能帮我做什么?",
        "你能做什么?",
        "你能帮助我吗?",
        "我需要帮助",
        "能帮我一下吗?"
      ],
      "responses": [
        "我可以帮您回答问题、提供信息,或者进行简单的任务。",
        "我能帮助您查询信息、安排任务等。",
        "您可以问我问题,或者让我做一些简单的事情。",
        "请告诉我您需要的帮助!"
      ]
    },
    {
      "tag": "weather",
      "patterns": [
        "今天天气怎么样?",
        "今天的天气如何?",
        "天气预报是什么?",
        "外面冷吗?",
        "天气好不好?"
      ],
      "responses": [
        "今天的天气很好,适合外出!",
        "今天天气有点冷,记得穿暖和点。",
        "今天天气晴朗,适合去散步。",
        "天气晴,温度适宜,非常适合外出。"
      ]
    },
    {
      "tag": "about",
      "patterns": [
        "你是什么?",
        "你是谁?",
        "你是做什么的?",
        "你能做些什么?"
      ],
      "responses": [
        "我是一个聊天机器人,可以回答您的问题和帮助您解决问题。",
        "我是一个智能助手,帮助您完成各种任务。",
        "我是一个虚拟助手,可以处理简单的任务和查询。",
        "我可以帮助您获取信息,或者做一些简单的任务。"
      ]
    }
  ]
}
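
One caveat about the Chinese data: nltk's word_tokenize is designed for English and does not segment Chinese, so a Chinese pattern with no spaces or ASCII punctuation mostly comes through as a single token, and the bot ends up matching whole phrases rather than individual words. You can see this directly (assuming the punkt data is installed):

import nltk

print(nltk.word_tokenize("How are you"))   # ['How', 'are', 'you']
print(nltk.word_tokenize("有谁在吗"))        # ['有谁在吗']: the whole phrase stays one token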

Training the Model

The training logic is as follows:

import numpy as np
import random
import json

import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

from nltk_utils import bag_of_words, tokenize, stem
from model import NeuralNet

with open('intents_cn.json', 'r', encoding='utf-8') as f:
    intents = json.load(f)

all_words = []
tags = []
xy = []
# loop through each sentence in our intents patterns
for intent in intents['intents']:
    tag = intent['tag']
    # add to tag list
    tags.append(tag)
    for pattern in intent['patterns']:
        # tokenize each word in the sentence
        w = tokenize(pattern)
        # add to our words list
        all_words.extend(w)
        # add to xy pair
        xy.append((w, tag))

# stem and lower each word
ignore_words = ['?', '.', '!']
all_words = [stem(w) for w in all_words if w not in ignore_words]
# remove duplicates and sort
all_words = sorted(set(all_words))
tags = sorted(set(tags))

print(len(xy), "patterns")
print(len(tags), "tags:", tags)
print(len(all_words), "unique stemmed words:", all_words)

# create training data
X_train = []
y_train = []
for (pattern_sentence, tag) in xy:
    # X: bag of words for each pattern_sentence
    bag = bag_of_words(pattern_sentence, all_words)
    X_train.append(bag)
    # y: PyTorch CrossEntropyLoss needs only class labels, not one-hot
    label = tags.index(tag)
    y_train.append(label)

X_train = np.array(X_train)
y_train = np.array(y_train)

# Hyper-parameters
num_epochs = 1000
batch_size = 8
learning_rate = 0.001
input_size = len(X_train[0])
hidden_size = 8
output_size = len(tags)
print(input_size, output_size)

class ChatDataset(Dataset):
    def __init__(self):
        self.n_samples = len(X_train)
        self.x_data = X_train
        self.y_data = y_train

    # support indexing such that dataset[i] can be used to get i-th sample
    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]

    # we can call len(dataset) to return the size
    def __len__(self):
        return self.n_samples

dataset = ChatDataset()
train_loader = DataLoader(dataset=dataset,
                          batch_size=batch_size,
                          shuffle=True,
                          num_workers=0)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = NeuralNet(input_size, hidden_size, output_size).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
for epoch in range(num_epochs):
    for (words, labels) in train_loader:
        words = words.to(device)
        labels = labels.to(dtype=torch.long).to(device)

        # Forward pass
        outputs = model(words)
        # if y would be one-hot, we must apply
        # labels = torch.max(labels, 1)[1]
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if (epoch + 1) % 100 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

print(f'final loss: {loss.item():.4f}')

data = {
    "model_state": model.state_dict(),
    "input_size": input_size,
    "hidden_size": hidden_size,
    "output_size": output_size,
    "all_words": all_words,
    "tags": tags
}

FILE = "data.pth"
torch.save(data, FILE)

print(f'training complete. file saved to {FILE}')
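
Two details in the comments above are worth spelling out: the model returns raw logits (no softmax), and CrossEntropyLoss takes class indices rather than one-hot vectors, applying log-softmax internally. A tiny illustration with made-up numbers:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 0.5, 0.1]])   # raw scores for 3 tags, batch of 1
label = torch.tensor([0])                  # the correct tag's index, not one-hot
print(criterion(logits, label).item())     # about 0.32, the negative log-probability of tag 0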

Writing the Chat Loop

The chat code is as follows:

import random
import json

import torch

from model import NeuralNet
from nltk_utils import bag_of_words, tokenize

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

with open('intents_cn.json', 'r', encoding='utf-8') as json_data:
    intents = json.load(json_data)

FILE = "data.pth"
data = torch.load(FILE)

input_size = data["input_size"]
hidden_size = data["hidden_size"]
output_size = data["output_size"]
all_words = data['all_words']
tags = data['tags']
model_state = data["model_state"]

model = NeuralNet(input_size, hidden_size, output_size).to(device)
model.load_state_dict(model_state)
model.eval()

bot_name = "电脑"
print("Let's chat! (type 'quit' to exit)")
while True:
    # sentence = "do you use credit cards?"
    sentence = input("我:")
    if sentence == "quit":
        break

    sentence = tokenize(sentence)
    X = bag_of_words(sentence, all_words)
    X = X.reshape(1, X.shape[0])
    X = torch.from_numpy(X).to(device)

    output = model(X)
    _, predicted = torch.max(output, dim=1)

    tag = tags[predicted.item()]

    probs = torch.softmax(output, dim=1)
    prob = probs[0][predicted.item()]
    if prob.item() > 0.75:
        for intent in intents['intents']:
            if tag == intent["tag"]:
                print(f"{bot_name}: {random.choice(intent['responses'])}")
    else:
        print(f"{bot_name}: 我不知道")

Run the chat script and you can talk to the bot in the console.


Series link:

Learning AI from Scratch with Python and PyTorch (the complete series)


Note: This article is original content. Please contact the author for authorization before reproducing it in any form, and credit the source!



Original post: https://www.cnblogs.com/kiba/p/18610399
