Examples of the TensorFlow seq2seq.py interfaces
Using a simple English question-answering task as an example, this post tests several of the seq2seq interfaces defined in the seq2seq.py file of tf.contrib.legacy_seq2seq in TensorFlow 1.4.
GitHub: https://github.com/buyizhiyou/tf_seq2seq
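For reference, all four interfaces exercised below take Python lists of per-time-step tensors as encoder_inputs and decoder_inputs and return a pair (outputs, state), where outputs holds one tensor per decoder step. The signatures sketched in the comments below are paraphrased from the TF 1.4 legacy_seq2seq source, so exact argument defaults may differ slightly; the scripts in this post import the functions from a local copy of seq2seq.py rather than from tf.contrib.

import tensorflow as tf

# In TF 1.4 the functions tested below live in tf.contrib.legacy_seq2seq
# (the scripts in this post import them from a local copy of seq2seq.py).
# Rough signatures -- paraphrased, exact defaults may differ:
#   basic_rnn_seq2seq(encoder_inputs, decoder_inputs, cell, ...)
#   tied_rnn_seq2seq(encoder_inputs, decoder_inputs, cell, loop_function=None, ...)
#   embedding_rnn_seq2seq(encoder_inputs, decoder_inputs, cell,
#                         num_encoder_symbols, num_decoder_symbols, embedding_size,
#                         output_projection=None, feed_previous=False, ...)
#   embedding_attention_seq2seq(...)  # embedding_rnn_seq2seq plus attention (num_heads=1)
# All take Python lists of per-time-step tensors and return (outputs, state),
# where outputs is a list with one tensor per decoder time step.
seq2seq_lib = tf.contrib.legacy_seq2seq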
Testing the use of basic_rnn_seq2seq
# -*- coding: utf-8 -*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

import os
import pdb
import re
from collections import Counter
import matplotlib.pyplot as plt

import tensorflow as tf
from seq2seq import basic_rnn_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs, run on CPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 21
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length+1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])

'''
embedding
'''
enc_Wemb = tf.get_variable('enc_word_emb', initializer=tf.random_uniform([enc_vocab_size+1, enc_emb_size]))
dec_Wemb = tf.get_variable('dec_word_emb', initializer=tf.random_uniform([dec_vocab_size+2, dec_emb_size]))
enc_emb_inputs = tf.nn.embedding_lookup(enc_Wemb, enc_inputs_t)
dec_emb_inputs = tf.nn.embedding_lookup(dec_Wemb, dec_inputs_t)
# enc_emb_inputs: list(enc_sent_len) of tensors [batch_size x embedding_size],
# because `static_rnn` takes list inputs
enc_emb_inputs = tf.unstack(enc_emb_inputs)
dec_emb_inputs = tf.unstack(dec_emb_inputs)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = basic_rnn_seq2seq(enc_emb_inputs, dec_emb_inputs, cell)
dec_outputs = tf.stack(dec_outputs)
logits = tf.layers.dense(dec_outputs, units=dec_vocab_size+2, activation=tf.nn.relu)  # fully connected layer
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]
labels = tf.one_hot(dec_inputs_t, dec_vocab_size+2)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))

# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []

            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)

            for target_sent in target_batch:
                target_token_indices.append(sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)
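All four scripts also rely on build_vocab, sent2idx, idx2sent and show_loss, pulled in via `from utils import *`. Their actual implementations live in the linked GitHub repository; the sketch below is a hypothetical reconstruction that is merely consistent with how they are called above (index 0 reserved for padding, index 1 reserved for a start symbol in the target vocabulary).

import re
import matplotlib.pyplot as plt

def tokenizer(sentence):
    # crude word/punctuation split (hypothetical)
    return re.findall(r"[\w]+|[^\s\w]", sentence)

def build_vocab(sentences, is_target=False):
    # word -> index and index -> word maps; the target vocabulary reserves an
    # extra start symbol besides padding, hence the +1 / +2 embedding sizes above
    words = set()
    for sent in sentences:
        words.update(tokenizer(sent))
    start = 2 if is_target else 1          # 0: padding, (1: start symbol for targets)
    vocab = {w: i + start for i, w in enumerate(sorted(words))}
    reverse_vocab = {i: w for w, i in vocab.items()}
    return vocab, reverse_vocab, len(vocab)

def sent2idx(sent, vocab, max_sentence_length, is_target=False):
    tokens = tokenizer(sent)
    pad = max_sentence_length - len(tokens)
    idx = [vocab[t] for t in tokens]
    if is_target:
        # targets get the start symbol prepended, then zero padding
        return [1] + idx + [0] * pad
    return idx + [0] * pad, len(tokens)

def idx2sent(indices, reverse_vocab):
    return ' '.join(reverse_vocab.get(int(i), '') for i in indices)

def show_loss(loss_history):
    plt.plot(loss_history)
    plt.xlabel('step')
    plt.ylabel('loss')
    plt.show()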
Testing the use of tied_rnn_seq2seq (this interface shares parameters between the encoder and the decoder)
# -*- coding: utf-8 -*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

import os
import pdb
import re
from collections import Counter
import matplotlib.pyplot as plt

import tensorflow as tf
from seq2seq import tied_rnn_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs, run on CPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 20  # must be consistent with enc_emb_size for parameter sharing
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length+1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])

'''
embedding
'''
enc_Wemb = tf.get_variable('enc_word_emb', initializer=tf.random_uniform([enc_vocab_size+1, enc_emb_size]))
dec_Wemb = tf.get_variable('dec_word_emb', initializer=tf.random_uniform([dec_vocab_size+2, dec_emb_size]))
enc_emb_inputs = tf.nn.embedding_lookup(enc_Wemb, enc_inputs_t)
dec_emb_inputs = tf.nn.embedding_lookup(dec_Wemb, dec_inputs_t)
# enc_emb_inputs: list(enc_sent_len) of tensors [batch_size x embedding_size],
# because `static_rnn` takes list inputs
enc_emb_inputs = tf.unstack(enc_emb_inputs)
dec_emb_inputs = tf.unstack(dec_emb_inputs)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = tied_rnn_seq2seq(enc_emb_inputs, dec_emb_inputs, cell)
dec_outputs = tf.stack(dec_outputs)
logits = tf.layers.dense(dec_outputs, units=dec_vocab_size+2, activation=tf.nn.relu)  # fully connected layer
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]
labels = tf.one_hot(dec_inputs_t, dec_vocab_size+2)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))

# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []

            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)

            for target_sent in target_batch:
                target_token_indices.append(sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)
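A quick way to see the parameter sharing mentioned in the heading is to list the trainable variables after the graph above has been built: with tied_rnn_seq2seq the encoder and decoder reuse one set of RNN cell weights, whereas basic_rnn_seq2seq builds separate ones. A small check, not part of the original script:

# Not in the original script: inspect trainable variables to confirm that the
# encoder and decoder share the same RNN cell kernel/bias under
# tied_rnn_seq2seq (only one cell weight set should be listed, besides the
# two embedding matrices and the dense output layer).
for v in tf.trainable_variables():
    print(v.name, v.shape)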
Testing the use of embedding_attention_seq2seq
# -*- coding: utf-8 -*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

import os
import pdb
import re
from collections import Counter
import matplotlib.pyplot as plt

import tensorflow as tf
from seq2seq import embedding_attention_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs, run on CPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 21
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length+1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])
labels = tf.one_hot(dec_inputs_t, dec_vocab_size+2)
# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]

# enc_inputs_t / dec_inputs_t: lists (one [batch_size] tensor per time step),
# because the `static_rnn`-based seq2seq functions take list inputs
enc_inputs_t = tf.unstack(enc_inputs_t)
dec_inputs_t = tf.unstack(dec_inputs_t)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = embedding_attention_seq2seq(
    encoder_inputs=enc_inputs_t,
    decoder_inputs=dec_inputs_t,
    cell=cell,
    num_encoder_symbols=enc_vocab_size+1,
    num_decoder_symbols=dec_vocab_size+2,
    embedding_size=enc_emb_size,
    output_projection=None,
    feed_previous=True
)
logits = tf.stack(dec_outputs)
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))
# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []

            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)

            for target_sent in target_batch:
                target_token_indices.append(sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)
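Note that feed_previous=True above makes the decoder feed back its own previous prediction at every step (greedy decoding) instead of the ground-truth token from dec_inputs; with feed_previous=False the decoder is teacher-forced. The legacy interface also accepts a boolean tensor here, so one graph can switch between the two modes. A hypothetical variant of the call above (replacing it, not added alongside it) could look like this:

# Hypothetical variant, not in the original script: feed_previous driven by a
# placeholder so the same graph can do teacher forcing (False) during training
# and greedy decoding (True) at inference time.
use_previous = tf.placeholder_with_default(False, shape=[], name='use_previous')
dec_outputs, state = embedding_attention_seq2seq(
    encoder_inputs=enc_inputs_t,
    decoder_inputs=dec_inputs_t,
    cell=cell,
    num_encoder_symbols=enc_vocab_size + 1,
    num_decoder_symbols=dec_vocab_size + 2,
    embedding_size=enc_emb_size,
    output_projection=None,
    feed_previous=use_previous)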
Testing the use of embedding_rnn_seq2seq
# -*- coding: utf-8 -*-
__author__ = "buyizhiyou"
__date__ = "2018-7-30"

'''
Test of the embedding_rnn_seq2seq function
'''

import os
import pdb
import re
from collections import Counter
import matplotlib.pyplot as plt

import tensorflow as tf
from seq2seq import embedding_rnn_seq2seq
from utils import *

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs, run on CPU

input_batches = [
    ['Hi What is your name?', 'Nice to meet you!'],
    ['Which programming language do you use?', 'See you later.'],
    ['Where do you live?', 'What is your major?'],
    ['What do you want to drink?', 'What is your favorite beer?']]
target_batches = [
    ['Hi this is Jaemin.', 'Nice to meet you too!'],
    ['I like Python.', 'Bye Bye.'],
    ['I live in Seoul, South Korea.', 'I study industrial engineering.'],
    ['Beer please!', 'Leffe brown!']]

all_input_sentences = []
for input_batch in input_batches:
    all_input_sentences.extend(input_batch)
all_target_sentences = []
for target_batch in target_batches:
    all_target_sentences.extend(target_batch)

# enc_vocab: word2idx, enc_reverse_vocab: idx2word, enc_vocab_size: 26
enc_vocab, enc_reverse_vocab, enc_vocab_size = build_vocab(all_input_sentences)
# dec_vocab: word2idx, dec_reverse_vocab: idx2word, dec_vocab_size: 28
dec_vocab, dec_reverse_vocab, dec_vocab_size = build_vocab(all_target_sentences, is_target=True)

# hyperparameters
n_epoch = 2000
hidden_size = 50
enc_emb_size = 20
dec_emb_size = 21
enc_sentence_length = 10
dec_sentence_length = 11

enc_inputs = tf.placeholder(tf.int32, shape=[None, enc_sentence_length], name='input_sentences')
sequence_lengths = tf.placeholder(tf.int32, shape=[None], name='sentences_length')
dec_inputs = tf.placeholder(tf.int32, shape=[None, dec_sentence_length+1], name='output_sentences')

enc_inputs_t = tf.transpose(enc_inputs, perm=[1, 0])
dec_inputs_t = tf.transpose(dec_inputs, perm=[1, 0])
labels = tf.one_hot(dec_inputs_t, dec_vocab_size+2)
# labels & logits: [dec_sentence_length+1 x batch_size x dec_vocab_size+2]

# enc_inputs_t / dec_inputs_t: lists (one [batch_size] tensor per time step),
# because the `static_rnn`-based seq2seq functions take list inputs
enc_inputs_t = tf.unstack(enc_inputs_t)
dec_inputs_t = tf.unstack(dec_inputs_t)

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
dec_outputs, state = embedding_rnn_seq2seq(
    encoder_inputs=enc_inputs_t,
    decoder_inputs=dec_inputs_t,
    cell=cell,
    num_encoder_symbols=enc_vocab_size+1,
    num_decoder_symbols=dec_vocab_size+2,
    embedding_size=enc_emb_size,
    output_projection=None,
    feed_previous=True
)
logits = tf.stack(dec_outputs)
predictions = tf.argmax(logits, axis=2)
predictions = tf.transpose(predictions, [1, 0])

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))
# training_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
training_op = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_history = []
    for epoch in range(n_epoch):
        all_preds = []
        epoch_loss = 0
        for input_batch, target_batch in zip(input_batches, target_batches):
            input_token_indices = []
            target_token_indices = []
            sentence_lengths = []

            for input_sent in input_batch:
                input_sent, sent_len = sent2idx(input_sent, vocab=enc_vocab, max_sentence_length=enc_sentence_length)
                input_token_indices.append(input_sent)
                sentence_lengths.append(sent_len)

            for target_sent in target_batch:
                target_token_indices.append(sent2idx(target_sent, vocab=dec_vocab, max_sentence_length=dec_sentence_length, is_target=True))

            batch_preds, batch_loss, _ = sess.run(
                [predictions, loss, training_op],
                feed_dict={
                    enc_inputs: input_token_indices,
                    sequence_lengths: sentence_lengths,
                    dec_inputs: target_token_indices
                })
            loss_history.append(batch_loss)
            epoch_loss += batch_loss
            all_preds.append(batch_preds)

        # Logging every 400 epochs
        if epoch % 400 == 0:
            print('Epoch', epoch)
            for input_batch, target_batch, batch_preds in zip(input_batches, target_batches, all_preds):
                for input_sent, target_sent, pred in zip(input_batch, target_batch, batch_preds):
                    print('\t', input_sent)
                    print('\t => ', idx2sent(pred, reverse_vocab=dec_reverse_vocab))
                    print('\tCorrect answer:', target_sent)
            print('\tepoch loss: {:.2f}\n'.format(epoch_loss))

show_loss(loss_history)