TensorFlow seq2seq.py interface examples
Using a simple English question-answering task, this post tests several of the seq2seq interfaces in tf.contrib.legacy_seq2seq (the seq2seq.py file) from TensorFlow 1.4.
GitHub: https://github.com/buyizhiyou/tf_seq2seq
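All four interfaces below follow the same convention: encoder_inputs and decoder_inputs are Python lists of time-major tensors (one entry per time step), and every call returns an (outputs, state) pair. basic_rnn_seq2seq and tied_rnn_seq2seq expect already-embedded inputs, while the embedding_* variants take raw token ids and build the embedding matrices internally. If you want to double-check the exact argument lists in your own install, a small standalone snippet (Python 3; not part of the repo) will print them:

# Print the signatures of the four tested interfaces straight from
# tf.contrib.legacy_seq2seq, instead of trusting documentation from memory.
import inspect
from tensorflow.contrib.legacy_seq2seq import (
    basic_rnn_seq2seq, tied_rnn_seq2seq,
    embedding_rnn_seq2seq, embedding_attention_seq2seq)

for fn in (basic_rnn_seq2seq, tied_rnn_seq2seq,
           embedding_rnn_seq2seq, embedding_attention_seq2seq):
    print(fn.__name__, inspect.signature(fn))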
Testing basic_rnn_seq2seq
#-*-coding:utf8-*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

import os
import pdb
import re
from collections import Counter

import matplotlib.pyplot as plt
import tensorflow as tf

from seq2seq import basic_rnn_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # run on CPU; set to e.g. "1" to pick a GPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 21
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length+1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])

'''
embedding
'''
enc_Wemb = tf.get_variable('enc_word_emb', initializer=tf.random_uniform([enc_vocab_size+1, enc_emb_size]))
dec_Wemb = tf.get_variable('dec_word_emb', initializer=tf.random_uniform([dec_vocab_size+2, dec_emb_size]))
enc_emb_inputs = tf.nn.embedding_lookup(enc_Wemb, enc_inputs_t)
dec_emb_inputs = tf.nn.embedding_lookup(dec_Wemb, dec_inputs_t)
# enc_emb_inputs: list(enc_sent_len) of tensor [batch_size x embedding_size]
# Because `static_rnn` takes list inputs
enc_emb_inputs = tf.unstack(enc_emb_inputs)
dec_emb_inputs = tf.unstack(dec_emb_inputs)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = basic_rnn_seq2seq(enc_emb_inputs, dec_emb_inputs, cell)
dec_outputs = tf.stack(dec_outputs)
logits = tf.layers.dense(dec_outputs, units=dec_vocab_size+2, activation=tf.nn.relu)  # fully connected output layer
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]
labels = tf.one_hot(dec_inputs_t, dec_vocab_size+2)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))

# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []
            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)
            for target_sent in target_batch:
                target_token_indices.append(sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)
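All four scripts import build_vocab, sent2idx, idx2sent and show_loss via `from utils import *` but never show them. The repo's actual utils.py is not reproduced here; the following is only a minimal sketch of helpers with the same calling convention, reconstructed from how they are used above. The id scheme (PAD = 0, word ids starting at 1, an extra start symbol for decoder targets) is an assumption, chosen so that the enc_vocab_size+1 and dec_vocab_size+2 embedding sizes in the scripts make sense.

# Hypothetical reimplementation of the helpers imported from utils.
# Assumptions: PAD id 0, word ids 1..vocab_size, and for targets an extra
# start symbol with id vocab_size+1 (hence dec_vocab_size+2 decoder symbols).
import re
from collections import Counter
import matplotlib.pyplot as plt

def tokenizer(sentence):
    # lower-case and split into words and punctuation (assumed tokenization)
    return re.findall(r"\w+|[^\s\w]", sentence.lower())

def build_vocab(sentences, is_target=False):
    counter = Counter()
    for sent in sentences:
        counter.update(tokenizer(sent))
    vocab = {word: idx + 1 for idx, (word, _) in enumerate(counter.most_common())}
    reverse_vocab = {idx: word for word, idx in vocab.items()}
    return vocab, reverse_vocab, len(vocab)

def sent2idx(sent, vocab, max_sentence_length, is_target=False):
    ids = [vocab[token] for token in tokenizer(sent)]
    pad = [0] * (max_sentence_length - len(ids))
    if is_target:
        start_id = len(vocab) + 1          # decoder start symbol (assumed)
        return [start_id] + ids + pad      # length max_sentence_length + 1
    return ids + pad, len(ids)             # padded ids plus true length

def idx2sent(ids, reverse_vocab):
    return ' '.join(reverse_vocab.get(int(i), '') for i in ids)

def show_loss(loss_history):
    plt.plot(loss_history)
    plt.xlabel('step')
    plt.ylabel('loss')
    plt.show()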
Testing tied_rnn_seq2seq (this interface shares parameters between the encoder and decoder)
#-*-coding:utf8-*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

import os
import pdb
import re
from collections import Counter

import matplotlib.pyplot as plt
import tensorflow as tf

from seq2seq import tied_rnn_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # run on CPU; set to e.g. "1" to pick a GPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 20  # must be consistent with enc_emb_size for parameter sharing
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length+1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])

'''
embedding
'''
enc_Wemb = tf.get_variable('enc_word_emb', initializer=tf.random_uniform([enc_vocab_size+1, enc_emb_size]))
dec_Wemb = tf.get_variable('dec_word_emb', initializer=tf.random_uniform([dec_vocab_size+2, dec_emb_size]))
enc_emb_inputs = tf.nn.embedding_lookup(enc_Wemb, enc_inputs_t)
dec_emb_inputs = tf.nn.embedding_lookup(dec_Wemb, dec_inputs_t)
# enc_emb_inputs: list(enc_sent_len) of tensor [batch_size x embedding_size]
# Because `static_rnn` takes list inputs
enc_emb_inputs = tf.unstack(enc_emb_inputs)
dec_emb_inputs = tf.unstack(dec_emb_inputs)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = tied_rnn_seq2seq(enc_emb_inputs, dec_emb_inputs, cell)
dec_outputs = tf.stack(dec_outputs)
logits = tf.layers.dense(dec_outputs, units=dec_vocab_size+2, activation=tf.nn.relu)  # fully connected output layer
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]
labels = tf.one_hot(dec_inputs_t, dec_vocab_size+2)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))

# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []
            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)
            for target_sent in target_batch:
                target_token_indices.append(sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)
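Compared with the basic_rnn_seq2seq script, the only changes are the tied_rnn_seq2seq call and dec_emb_size being equal to enc_emb_size: the encoder and decoder run the same cell under a shared variable scope, so the per-step inputs they feed it must have the same dimensionality. A quick way to confirm the sharing is to list the trainable variables once the graph is built (a debugging snippet, not part of the original script; the exact variable names depend on the TensorFlow version):

# Run after the graph above has been constructed. With tied_rnn_seq2seq there
# should be a single RNN-cell kernel/bias pair reused by encoder and decoder,
# plus the two embedding matrices and the dense output layer.
for v in tf.trainable_variables():
    print(v.name, v.shape.as_list())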
Testing embedding_attention_seq2seq
#-*-coding:utf8-*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

import os
import pdb
import re
from collections import Counter

import matplotlib.pyplot as plt
import tensorflow as tf

from seq2seq import embedding_attention_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # run on CPU; set to e.g. "1" to pick a GPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 21
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length+1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])
labels = tf.one_hot(dec_inputs_t, dec_vocab_size+2)
# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]

# enc_inputs_t: list(enc_sent_len) of tensor [batch_size] of token ids
# Because `static_rnn` takes list inputs
enc_inputs_t = tf.unstack(enc_inputs_t)
dec_inputs_t = tf.unstack(dec_inputs_t)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = embedding_attention_seq2seq(
    encoder_inputs=enc_inputs_t,
    decoder_inputs=dec_inputs_t,
    cell=cell,
    num_encoder_symbols=enc_vocab_size+1,
    num_decoder_symbols=dec_vocab_size+2,
    embedding_size=enc_emb_size,
    output_projection=None,
    feed_previous=True
)
logits = tf.stack(dec_outputs)
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))
# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []
            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)
            for target_sent in target_batch:
                target_token_indices.append(sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)
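Note that this script passes feed_previous=True, so after the first start symbol the decoder feeds its own previous prediction back in rather than the ground-truth decoder inputs; training still works, but it is effectively free-running decoding. A common variation (shown as a sketch, not as what the repo does) is to keep feed_previous=False during training for teacher forcing and reserve feed_previous=True for inference. It would replace the call above, with everything else unchanged:

# Teacher forcing during training: feed ground-truth decoder inputs back in.
# Same arguments as above; only feed_previous changes.
dec_outputs, state = embedding_attention_seq2seq(
    encoder_inputs=enc_inputs_t,
    decoder_inputs=dec_inputs_t,
    cell=cell,
    num_encoder_symbols=enc_vocab_size + 1,
    num_decoder_symbols=dec_vocab_size + 2,
    embedding_size=enc_emb_size,
    output_projection=None,
    feed_previous=False)  # False: teacher forcing; True: feed back own predictions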
Testing embedding_rnn_seq2seq
#-*-coding:utf8-*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

'''
Test the embedding_rnn_seq2seq function
'''

import os
import pdb
import re
from collections import Counter

import matplotlib.pyplot as plt
import tensorflow as tf

from seq2seq import embedding_rnn_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # run on CPU; set to e.g. "1" to pick a GPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 21
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length+1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])
labels = tf.one_hot(dec_inputs_t, dec_vocab_size+2)
# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]

# enc_inputs_t: list(enc_sent_len) of tensor [batch_size] of token ids
# Because `static_rnn` takes list inputs
enc_inputs_t = tf.unstack(enc_inputs_t)
dec_inputs_t = tf.unstack(dec_inputs_t)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = embedding_rnn_seq2seq(
    encoder_inputs=enc_inputs_t,
    decoder_inputs=dec_inputs_t,
    cell=cell,
    num_encoder_symbols=enc_vocab_size+1,
    num_decoder_symbols=dec_vocab_size+2,
    embedding_size=enc_emb_size,
    output_projection=None,
    feed_previous=True
)
logits = tf.stack(dec_outputs)
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))
# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []
            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)
            for target_sent in target_batch:
                target_token_indices.append(sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)
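One more variation worth noting: all four scripts average softmax cross-entropy over every decoder position, padding included. The legacy seq2seq module also ships a sequence_loss helper that takes per-timestep weights, so padded positions can be masked out. A sketch, assuming PAD has id 0 (an assumption, matching the helper sketch earlier) and using the same targets as the labels above; it would replace the one-hot plus softmax_cross_entropy_with_logits block in the last script:

# Masked loss with the legacy sequence_loss helper.
from tensorflow.contrib.legacy_seq2seq import sequence_loss

targets_t = tf.unstack(tf.transpose(dec_inputs, perm=[1, 0]))  # list of [batch] int32 tensors
weights = [tf.cast(tf.not_equal(t, 0), tf.float32) for t in targets_t]  # 0 assumed to be PAD
loss = sequence_loss(logits=dec_outputs,  # the list returned by embedding_rnn_seq2seq, before tf.stack
                     targets=targets_t,
                     weights=weights)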