DNNClassifier: a deep-neural-network classifier
An Example of a DNNClassifier for the Iris dataset.
Source: https://github.com/tensorflow/models/blob/master/samples/core/get_started/premade_estimator.py
import pandas as pd
import tensorflow as tf

TRAIN_URL = "http://download.tensorflow.org/data/iris_training.csv"
TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"

CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth',
                    'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']

def maybe_download():
    # train_path = tf.keras.utils.get_file(TRAIN_URL.split('/')[-1], TRAIN_URL)
    # test_path = tf.keras.utils.get_file(TEST_URL.split('/')[-1], TEST_URL)
    #
    # return train_path, test_path
    return 'iris_training.csv', 'iris_test.csv'

def load_data(y_name='Species'):
    """Returns the iris dataset as (train_x, train_y), (test_x, test_y)."""
    train_path, test_path = maybe_download()

    train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
    train_x, train_y = train, train.pop(y_name)

    test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
    test_x, test_y = test, test.pop(y_name)

    return (train_x, train_y), (test_x, test_y)

def train_input_fn(features, labels, batch_size):
    """An input function for training"""
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))

    # Shuffle, repeat, and batch the examples.
    dataset = dataset.shuffle(1000).repeat().batch(batch_size)

    # Return the dataset.
    return dataset

def eval_input_fn(features, labels, batch_size):
    """An input function for evaluation or prediction"""
    features = dict(features)
    if labels is None:
        # No labels, use only features.
        inputs = features
    else:
        inputs = (features, labels)

    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices(inputs)

    # Batch the examples
    assert batch_size is not None, "batch_size must not be None"
    dataset = dataset.batch(batch_size)

    # Return the dataset.
    return dataset

# The remainder of this file contains a simple example of a csv parser,
# implemented using the `Dataset` class.

# `tf.decode_csv` sets the types of the outputs to match the examples given in
# the `record_defaults` argument.
CSV_TYPES = [[0.0], [0.0], [0.0], [0.0], [0]]

def _parse_line(line):
    # Decode the line into its fields
    fields = tf.decode_csv(line, record_defaults=CSV_TYPES)

    # Pack the result into a dictionary
    features = dict(zip(CSV_COLUMN_NAMES, fields))

    # Separate the label from the features
    label = features.pop('Species')

    return features, label

def csv_input_fn(csv_path, batch_size):
    # Create a dataset containing the text lines.
    dataset = tf.data.TextLineDataset(csv_path).skip(1)

    # Parse each line.
    dataset = dataset.map(_parse_line)

    # Shuffle, repeat, and batch the examples.
    dataset = dataset.shuffle(1000).repeat().batch(batch_size)

    # Return the dataset.
    return dataset
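A quick way to sanity-check these input functions (not part of the original sample; the batch size of 32 and the local module name iris_data.py are assumptions) is to pull one batch through a one-shot iterator:

import tensorflow as tf

import iris_data

(train_x, train_y), _ = iris_data.load_data()

# train_input_fn yields ({column_name: values}, labels) batches.
dataset = iris_data.train_input_fn(train_x, train_y, batch_size=32)
features, labels = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    batch_features, batch_labels = sess.run((features, labels))
    print(batch_features['SepalLength'][:3])  # first three sepal lengths in the batch
    print(batch_labels[:3])                   # their integer class labels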
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An Example of a DNNClassifier for the Iris dataset."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function import argparse
import tensorflow as tf import iris_data parser = argparse.ArgumentParser()
parser.add_argument('--batch_size', default=100, type=int, help='batch size')
parser.add_argument('--train_steps', default=1000, type=int,
help='number of training steps') def main(argv):
args = parser.parse_args(argv[1:]) # Fetch the data
(train_x, train_y), (test_x, test_y) = iris_data.load_data() # Feature columns describe how to use the input.
my_feature_columns = []
for key in train_x.keys():
my_feature_columns.append(tf.feature_column.numeric_column(key=key)) # Build 2 hidden layer DNN with 10, 10 units respectively.
classifier = tf.estimator.DNNClassifier(
feature_columns=my_feature_columns,
# Two hidden layers of 10 nodes each.
hidden_units=[10, 10],
# The model must choose between 3 classes.
n_classes=3) # Train the Model.
classifier.train(
input_fn=lambda:iris_data.train_input_fn(train_x, train_y,
args.batch_size),
steps=args.train_steps) # Evaluate the model.
eval_result = classifier.evaluate(
input_fn=lambda:iris_data.eval_input_fn(test_x, test_y,
args.batch_size)) print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result)) # Generate predictions from the model
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
'SepalLength': [5.1, 5.9, 6.9],
'SepalWidth': [3.3, 3.0, 3.1],
'PetalLength': [1.7, 4.2, 5.4],
'PetalWidth': [0.5, 1.5, 2.1],
} predictions = classifier.predict(
input_fn=lambda:iris_data.eval_input_fn(predict_x,
labels=None,
batch_size=args.batch_size)) template = ('\nPrediction is "{}" ({:.1f}%), expected "{}"') for pred_dict, expec in zip(predictions, expected):
class_id = pred_dict['class_ids'][0]
probability = pred_dict['probabilities'][class_id] print(template.format(iris_data.SPECIES[class_id],
100 * probability, expec)) if __name__ == '__main__':
tf.logging.set_verbosity(tf.logging.INFO)
tf.app.run(main)
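One aside before the run log: the lambdas above exist only to turn the input functions into zero-argument callables, as the Estimator API requires. A hedged, equivalent sketch using functools.partial (the name train_inpf is introduced here for illustration):

import functools

import iris_data

(train_x, train_y), _ = iris_data.load_data()

# Bind the arguments once; the resulting callable can be passed as input_fn
# exactly like the lambda in the script above.
train_inpf = functools.partial(iris_data.train_input_fn,
                               train_x, train_y, batch_size=100)
# classifier.train(input_fn=train_inpf, steps=1000)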
C:\Users\Public\py36\python.exe C:/Users/sas/PycharmProjects/py_win_to_unix/sci/iris/premade_estimator.py
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: D:\MYTMPH~1\tmpsp673n0v
INFO:tensorflow:Using config: {'_model_dir': 'D:\\MYTMPH~1\\tmpsp673n0v', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x000001A3C68216D8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
2018-04-27 19:57:52.516828: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 1 into D:\MYTMPH~1\tmpsp673n0v\model.ckpt.
INFO:tensorflow:loss = 276.79517, step = 1
INFO:tensorflow:global_step/sec: 631.226
INFO:tensorflow:loss = 33.67822, step = 101 (0.158 sec)
INFO:tensorflow:global_step/sec: 923.465
INFO:tensorflow:loss = 17.75303, step = 201 (0.107 sec)
INFO:tensorflow:global_step/sec: 1072.41
INFO:tensorflow:loss = 10.760817, step = 301 (0.094 sec)
INFO:tensorflow:global_step/sec: 1262.46
INFO:tensorflow:loss = 10.723449, step = 401 (0.079 sec)
INFO:tensorflow:global_step/sec: 852.425
INFO:tensorflow:loss = 7.739768, step = 501 (0.117 sec)
INFO:tensorflow:global_step/sec: 1017.69
INFO:tensorflow:loss = 6.8775907, step = 601 (0.098 sec)
INFO:tensorflow:global_step/sec: 1216.27
INFO:tensorflow:loss = 8.007765, step = 701 (0.082 sec)
INFO:tensorflow:global_step/sec: 898.502
INFO:tensorflow:loss = 4.028232, step = 801 (0.111 sec)
INFO:tensorflow:global_step/sec: 1108.16
INFO:tensorflow:loss = 4.0325384, step = 901 (0.090 sec)
INFO:tensorflow:Saving checkpoints for 1000 into D:\MYTMPH~1\tmpsp673n0v\model.ckpt.
INFO:tensorflow:Loss for final step: 7.3920045.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2018-04-27-11:57:54
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from D:\MYTMPH~1\tmpsp673n0v\model.ckpt-1000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at 2018-04-27-11:57:54
INFO:tensorflow:Saving dict for global step 1000: accuracy = 0.96666664, average_loss = 0.060932837, global_step = 1000, loss = 1.8279852

Test set accuracy: 0.967

INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from D:\MYTMPH~1\tmpsp673n0v\model.ckpt-1000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.

Prediction is "Setosa" (100.0%), expected "Setosa"

Prediction is "Versicolor" (98.8%), expected "Versicolor"

Prediction is "Virginica" (97.5%), expected "Virginica"

Process finished with exit code 0
The local iris_training.csv that maybe_download() points at (the header row encodes 120 examples, 4 features, and the class names):

120,4,setosa,versicolor,virginica
6.4,2.8,5.6,2.2,2
5.0,2.3,3.3,1.0,1
4.9,2.5,4.5,1.7,2
4.9,3.1,1.5,0.1,0
5.7,3.8,1.7,0.3,0
4.4,3.2,1.3,0.2,0
5.4,3.4,1.5,0.4,0
6.9,3.1,5.1,2.3,2
6.7,3.1,4.4,1.4,1
5.1,3.7,1.5,0.4,0
5.2,2.7,3.9,1.4,1
6.9,3.1,4.9,1.5,1
5.8,4.0,1.2,0.2,0
5.4,3.9,1.7,0.4,0
7.7,3.8,6.7,2.2,2
6.3,3.3,4.7,1.6,1
6.8,3.2,5.9,2.3,2
7.6,3.0,6.6,2.1,2
6.4,3.2,5.3,2.3,2
5.7,4.4,1.5,0.4,0
6.7,3.3,5.7,2.1,2
6.4,2.8,5.6,2.1,2
5.4,3.9,1.3,0.4,0
6.1,2.6,5.6,1.4,2
7.2,3.0,5.8,1.6,2
5.2,3.5,1.5,0.2,0
5.8,2.6,4.0,1.2,1
5.9,3.0,5.1,1.8,2
5.4,3.0,4.5,1.5,1
6.7,3.0,5.0,1.7,1
6.3,2.3,4.4,1.3,1
5.1,2.5,3.0,1.1,1
6.4,3.2,4.5,1.5,1
6.8,3.0,5.5,2.1,2
6.2,2.8,4.8,1.8,2
6.9,3.2,5.7,2.3,2
6.5,3.2,5.1,2.0,2
5.8,2.8,5.1,2.4,2
5.1,3.8,1.5,0.3,0
4.8,3.0,1.4,0.3,0
7.9,3.8,6.4,2.0,2
5.8,2.7,5.1,1.9,2
6.7,3.0,5.2,2.3,2
5.1,3.8,1.9,0.4,0
4.7,3.2,1.6,0.2,0
6.0,2.2,5.0,1.5,2
4.8,3.4,1.6,0.2,0
7.7,2.6,6.9,2.3,2
4.6,3.6,1.0,0.2,0
7.2,3.2,6.0,1.8,2
5.0,3.3,1.4,0.2,0
6.6,3.0,4.4,1.4,1
6.1,2.8,4.0,1.3,1
5.0,3.2,1.2,0.2,0
7.0,3.2,4.7,1.4,1
6.0,3.0,4.8,1.8,2
7.4,2.8,6.1,1.9,2
5.8,2.7,5.1,1.9,2
6.2,3.4,5.4,2.3,2
5.0,2.0,3.5,1.0,1
5.6,2.5,3.9,1.1,1
6.7,3.1,5.6,2.4,2
6.3,2.5,5.0,1.9,2
6.4,3.1,5.5,1.8,2
6.2,2.2,4.5,1.5,1
7.3,2.9,6.3,1.8,2
4.4,3.0,1.3,0.2,0
7.2,3.6,6.1,2.5,2
6.5,3.0,5.5,1.8,2
5.0,3.4,1.5,0.2,0
4.7,3.2,1.3,0.2,0
6.6,2.9,4.6,1.3,1
5.5,3.5,1.3,0.2,0
7.7,3.0,6.1,2.3,2
6.1,3.0,4.9,1.8,2
4.9,3.1,1.5,0.1,0
5.5,2.4,3.8,1.1,1
5.7,2.9,4.2,1.3,1
6.0,2.9,4.5,1.5,1
6.4,2.7,5.3,1.9,2
5.4,3.7,1.5,0.2,0
6.1,2.9,4.7,1.4,1
6.5,2.8,4.6,1.5,1
5.6,2.7,4.2,1.3,1
6.3,3.4,5.6,2.4,2
4.9,3.1,1.5,0.1,0
6.8,2.8,4.8,1.4,1
5.7,2.8,4.5,1.3,1
6.0,2.7,5.1,1.6,1
5.0,3.5,1.3,0.3,0
6.5,3.0,5.2,2.0,2
6.1,2.8,4.7,1.2,1
5.1,3.5,1.4,0.3,0
4.6,3.1,1.5,0.2,0
6.5,3.0,5.8,2.2,2
4.6,3.4,1.4,0.3,0
4.6,3.2,1.4,0.2,0
7.7,2.8,6.7,2.0,2
5.9,3.2,4.8,1.8,1
5.1,3.8,1.6,0.2,0
4.9,3.0,1.4,0.2,0
4.9,2.4,3.3,1.0,1
4.5,2.3,1.3,0.3,0
5.8,2.7,4.1,1.0,1
5.0,3.4,1.6,0.4,0
5.2,3.4,1.4,0.2,0
5.3,3.7,1.5,0.2,0
5.0,3.6,1.4,0.2,0
5.6,2.9,3.6,1.3,1
4.8,3.1,1.6,0.2,0
6.3,2.7,4.9,1.8,2
5.7,2.8,4.1,1.3,1
5.0,3.0,1.6,0.2,0
6.3,3.3,6.0,2.5,2
5.0,3.5,1.6,0.6,0
5.5,2.6,4.4,1.2,1
5.7,3.0,4.2,1.2,1
4.4,2.9,1.4,0.2,0
4.8,3.0,1.4,0.1,0
5.5,2.4,3.7,1.0,1
And iris_test.csv (30 examples, 4 features):

30,4,setosa,versicolor,virginica
5.9,3.0,4.2,1.5,1
6.9,3.1,5.4,2.1,2
5.1,3.3,1.7,0.5,0
6.0,3.4,4.5,1.6,1
5.5,2.5,4.0,1.3,1
6.2,2.9,4.3,1.3,1
5.5,4.2,1.4,0.2,0
6.3,2.8,5.1,1.5,2
5.6,3.0,4.1,1.3,1
6.7,2.5,5.8,1.8,2
7.1,3.0,5.9,2.1,2
4.3,3.0,1.1,0.1,0
5.6,2.8,4.9,2.0,2
5.5,2.3,4.0,1.3,1
6.0,2.2,4.0,1.0,1
5.1,3.5,1.4,0.2,0
5.7,2.6,3.5,1.0,1
4.8,3.4,1.9,0.2,0
5.1,3.4,1.5,0.2,0
5.7,2.5,5.0,2.0,2
5.4,3.4,1.7,0.2,0
5.6,3.0,4.5,1.5,1
6.3,2.9,5.6,1.8,2
6.3,2.5,4.9,1.5,1
5.8,2.7,3.9,1.2,1
6.1,3.0,4.6,1.4,1
5.2,4.1,1.5,0.1,0
6.7,3.1,4.7,1.5,1
6.7,3.3,5.7,2.5,2
6.4,2.9,4.3,1.3,1
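These files can also be fed straight through the csv parser at the bottom of iris_data.py. A minimal sketch, assuming iris_test.csv sits in the working directory, with an arbitrary batch size of 5:

import tensorflow as tf

import iris_data

# csv_input_fn skips the header row, parses each line with tf.decode_csv,
# then shuffles, repeats, and batches.
dataset = iris_data.csv_input_fn('iris_test.csv', batch_size=5)
features, labels = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    f, l = sess.run((features, labels))
    print(f['SepalLength'])  # five sepal lengths
    print(l)                 # their integer labels (0, 1, or 2)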

import pandas as pd
import tensorflow as tf

TRAIN_URL = "http://download.tensorflow.org/data/iris_training.csv"
TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"

CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth',
                    'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']

def maybe_download():
    # train_path = tf.keras.utils.get_file(TRAIN_URL.split('/')[-1], TRAIN_URL)
    # test_path = tf.keras.utils.get_file(TEST_URL.split('/')[-1], TEST_URL)
    #
    # return train_path, test_path
    return 'iris_training.csv', 'iris_test.csv'

def load_data(label_name='Species'):
    """Parses the csv files behind TRAIN_URL and TEST_URL."""
    train_path, test_path = maybe_download()

    # To create a local copy of the training set instead:
    # train_path = tf.keras.utils.get_file(fname=TRAIN_URL.split('/')[-1],
    #                                      origin=TRAIN_URL)
    # train_path would then hold the pathname: ~/.keras/datasets/iris_training.csv

    # Parse the local CSV file.
    train = pd.read_csv(filepath_or_buffer=train_path,
                        names=CSV_COLUMN_NAMES,  # list of column names
                        header=0)  # ignore the first row of the CSV file.
    # train now holds a pandas DataFrame, which is a data structure
    # analogous to a table.

    # 1. Assign the DataFrame's labels (the right-most column) to train_label.
    # 2. Delete (pop) the labels from the DataFrame.
    # 3. Assign the remainder of the DataFrame to train_features.
    # label_name = y_name
    train_features, train_label = train, train.pop(label_name)

    # Apply the preceding logic to the test set.
    # test_path = tf.keras.utils.get_file(TEST_URL.split('/')[-1], TEST_URL)
    test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
    test_features, test_label = test, test.pop(label_name)

    # Return four DataFrames.
    return (train_features, train_label), (test_features, test_label)

def train_input_fn(features, labels, batch_size):
    """An input function for training"""
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))

    # Shuffle, repeat, and batch the examples.
    dataset = dataset.shuffle(1000).repeat().batch(batch_size)

    # Return the dataset.
    return dataset

def eval_input_fn(features, labels, batch_size):
    """An input function for evaluation or prediction"""
    features = dict(features)
    if labels is None:
        # No labels, use only features.
        inputs = features
    else:
        inputs = (features, labels)

    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices(inputs)

    # Batch the examples
    assert batch_size is not None, "batch_size must not be None"
    dataset = dataset.batch(batch_size)

    # Return the dataset.
    return dataset

# The remainder of this file contains a simple example of a csv parser,
# implemented using the `Dataset` class.

# `tf.decode_csv` sets the types of the outputs to match the examples given in
# the `record_defaults` argument.
CSV_TYPES = [[0.0], [0.0], [0.0], [0.0], [0]]

def _parse_line(line):
    # Decode the line into its fields
    fields = tf.decode_csv(line, record_defaults=CSV_TYPES)

    # Pack the result into a dictionary
    features = dict(zip(CSV_COLUMN_NAMES, fields))

    # Separate the label from the features
    label = features.pop('Species')

    return features, label

def csv_input_fn(csv_path, batch_size):
    # Create a dataset containing the text lines.
    dataset = tf.data.TextLineDataset(csv_path).skip(1)

    # Parse each line.
    dataset = dataset.map(_parse_line)

    # Shuffle, repeat, and batch the examples.
    dataset = dataset.shuffle(1000).repeat().batch(batch_size)

    # Return the dataset.
    return dataset
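A quick check one might run against this module (not in the original post): the shapes handed back by load_data should match the CSV headers shown earlier.

import iris_data_mystudy

(train_x, train_y), (test_x, test_y) = iris_data_mystudy.load_data()
print(train_x.shape, train_y.shape)  # expected: (120, 4) (120,)
print(test_x.shape, test_y.shape)    # expected: (30, 4) (30,)
print(train_x.columns.tolist())      # the four feature columns; 'Species' was popped off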
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An Example of a DNNClassifier for the Iris dataset."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function import argparse
import tensorflow as tf import iris_data_mystudy parser = argparse.ArgumentParser()
parser.add_argument('--batch_size', default=100, type=int, help='batch size')
parser.add_argument('--train_steps', default=1000, type=int,
help='number of training steps') (train_x, train_y), (test_x, test_y) = iris_data_mystudy.load_data() import os def main(argv):
args = parser.parse_args(argv[1:]) # Fetch the data
(train_x, train_y), (test_x, test_y) = iris_data_mystudy.load_data() my_feature_columns, predict_x = [], {}
for key in train_x.keys():
my_feature_columns.append(tf.feature_column.numeric_column(key=key))
predict_x[key] = [float(i) for i in test_x[key].values]
expected = ['' for i in predict_x[key]] # Build 2 hidden layer DNN with 10, 10 units respectively.
classifier = tf.estimator.DNNClassifier(
feature_columns=my_feature_columns,
# Two hidden layers of 10 nodes each.
hidden_units=[10, 10],
# The model must choose between 3 classes.
n_classes=3) # Train the Model.
classifier.train(
input_fn=lambda: iris_data_mystudy.train_input_fn(train_x, train_y,
args.batch_size),
steps=args.train_steps) # Evaluate the model.
eval_result = classifier.evaluate(
input_fn=lambda: iris_data_mystudy.eval_input_fn(test_x, test_y,
args.batch_size)) print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result)) predictions = classifier.predict(
input_fn=lambda: iris_data_mystudy.eval_input_fn(predict_x,
labels=None,
batch_size=args.batch_size)) template = ('\nmyProgress{}/{}ORI{}||RESULT{}|| Prediction is "{}" ({:.1f}%), expected "{}"') c, c_all_ = 0, len(expected)
for pred_dict, expec in zip(predictions, expected):
class_id = pred_dict['class_ids'][0]
probability = pred_dict['probabilities'][class_id]
ori = ','.join([str(predict_x[k][c]) for k in predict_x])
print(template.format(c, c_all_, ori, str(pred_dict), iris_data_mystudy.SPECIES[class_id],
100 * probability, expec))
c += 1 if __name__ == '__main__':
tf.logging.set_verbosity(tf.logging.INFO)
tf.app.run(main)
C:\Users\Public\py36\python.exe C:/Users/sas/PycharmProjects/py_win_to_unix/sci/iris/premade_estimator_mywholedata.py
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: D:\MYTMPH~1\tmpx25o9607
INFO:tensorflow:Using config: {'_model_dir': 'D:\\MYTMPH~1\\tmpx25o9607', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x000002765C0B2A20>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
-- ::00.872812: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 1 into D:\MYTMPH~1\tmpx25o9607\model.ckpt.
INFO:tensorflow:loss = 234.66115, step = 1
INFO:tensorflow:global_step/sec: 660.215
INFO:tensorflow:loss = 17.675238, step = 101 (0.151 sec)
INFO:tensorflow:global_step/sec: 942.801
INFO:tensorflow:loss = 11.180588, step = 201 (0.106 sec)
INFO:tensorflow:global_step/sec: 1299.09
INFO:tensorflow:loss = 7.819012, step = 301 (0.076 sec)
INFO:tensorflow:global_step/sec: 1279.31
INFO:tensorflow:loss = 8.395781, step = 401 (0.079 sec)
INFO:tensorflow:global_step/sec: 1120.52
INFO:tensorflow:loss = 12.372395, step = 501 (0.089 sec)
INFO:tensorflow:global_step/sec: 1178.67
INFO:tensorflow:loss = 7.282875, step = 601 (0.084 sec)
INFO:tensorflow:global_step/sec: 1218.92
INFO:tensorflow:loss = 8.7485, step = 701 (0.082 sec)
INFO:tensorflow:global_step/sec: 968.145
INFO:tensorflow:loss = 3.7724056, step = 801 (0.104 sec)
INFO:tensorflow:global_step/sec: 934.229
INFO:tensorflow:loss = 3.3475294, step = 901 (0.107 sec)
INFO:tensorflow:Saving checkpoints for 1000 into D:\MYTMPH~1\tmpx25o9607\model.ckpt.
INFO:tensorflow:Loss for final step: 5.2043657.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at ---::
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from D:\MYTMPH~1\tmpx25o9607\model.ckpt-1000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at ---::
INFO:tensorflow:Saving dict for global step 1000: accuracy = 1.0, average_loss = 0.04594822, global_step = 1000, loss = 1.3784466

Test set accuracy: 1.000

INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from D:\MYTMPH~1\tmpx25o9607\model.ckpt-1000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.

myProgress0/30ORI5.9,3.0,4.2,1.5||RESULT{'logits': array([-4.073111, 3.3400419, -3.450334], dtype=float32), 'probabilities': array([6.0222525e-04, 9.9827516e-01, 1.1226065e-03], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.8%), expected ""
myProgress1/30ORI6.9,3.1,5.4,2.1||RESULT{'logits': array([-8.557374, 0.5901505, 3.692759], dtype=float32), 'probabilities': array([4.5787260e-06, 4.2999577e-02, 9.5699579e-01], dtype=float32), 'class_ids': array([2], dtype=int64), 'classes': array([b'2'], dtype=object)}|| Prediction is "Virginica" (95.7%), expected ""
myProgress2/30ORI5.1,3.3,1.7,0.5||RESULT{'logits': array([15.67865, 9.518664, -17.122147], dtype=float32), 'probabilities': array([9.9789220e-01, 2.1078316e-03, 5.6738612e-15], dtype=float32), 'class_ids': array([0], dtype=int64), 'classes': array([b'0'], dtype=object)}|| Prediction is "Setosa" (99.8%), expected ""
myProgress3/30ORI6.0,3.4,4.5,1.6||RESULT{'logits': array([-4.488565, 2.8848784, -2.4938211], dtype=float32), 'probabilities': array([6.244299e-04, 9.947857e-01, 4.589761e-03], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.5%), expected ""
myProgress4/30ORI5.5,2.5,4.0,1.3||RESULT{'logits': array([-4.125968, 2.9445832, -2.7388015], dtype=float32), 'probabilities': array([8.4616721e-04, 9.9576628e-01, 3.3876204e-03], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.6%), expected ""
myProgress5/30ORI6.2,2.9,4.3,1.3||RESULT{'logits': array([-3.5961967, 4.0570755, -4.9506564], dtype=float32), 'probabilities': array([4.7420594e-04, 9.9940348e-01, 1.2238618e-04], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.9%), expected ""
myProgress6/30ORI5.5,4.2,1.4,0.2||RESULT{'logits': array([21.595142, 11.861579, -21.650354], dtype=float32), 'probabilities': array([9.9994075e-01, 5.9257236e-05, 1.6545992e-19], dtype=float32), 'class_ids': array([0], dtype=int64), 'classes': array([b'0'], dtype=object)}|| Prediction is "Setosa" (100.0%), expected ""
myProgress7/30ORI6.3,2.8,5.1,1.5||RESULT{'logits': array([-6.8899775, 1.2537876, 1.5890163], dtype=float32), 'probabilities': array([1.2113204e-04, 4.1691846e-01, 5.8296043e-01], dtype=float32), 'class_ids': array([2], dtype=int64), 'classes': array([b'2'], dtype=object)}|| Prediction is "Virginica" (58.3%), expected ""
myProgress8/30ORI5.6,3.0,4.1,1.3||RESULT{'logits': array([-3.3489664, 3.5279539, -4.189754], dtype=float32), 'probabilities': array([1.0297953e-03, 9.9852604e-01, 4.4422352e-04], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.9%), expected ""
myProgress9/30ORI6.7,2.5,5.8,1.8||RESULT{'logits': array([-9.557738, -0.5458323, 6.196618], dtype=float32), 'probabilities': array([1.4370033e-07, 1.1783625e-03, 9.9882144e-01], dtype=float32), 'class_ids': array([2], dtype=int64), 'classes': array([b'2'], dtype=object)}|| Prediction is "Virginica" (99.9%), expected ""
myProgress10/30ORI7.1,3.0,5.9,2.1||RESULT{'logits': array([-9.772497, -0.28590763, 5.876704], dtype=float32), 'probabilities': array([1.5948658e-07, 2.1023140e-03, 9.9789751e-01], dtype=float32), 'class_ids': array([2], dtype=int64), 'classes': array([b'2'], dtype=object)}|| Prediction is "Virginica" (99.8%), expected ""
myProgress11/30ORI4.3,3.0,1.1,0.1||RESULT{'logits': array([17.55983, 9.681561, -17.754019], dtype=float32), 'probabilities': array([9.9962127e-01, 3.7874514e-04, 4.6049518e-16], dtype=float32), 'class_ids': array([0], dtype=int64), 'classes': array([b'0'], dtype=object)}|| Prediction is "Setosa" (100.0%), expected ""
myProgress12/30ORI5.6,2.8,4.9,2.0||RESULT{'logits': array([-7.803207, -0.3124646, 4.896084], dtype=float32), 'probabilities': array([3.0366703e-06, 5.4398365e-03, 9.9455714e-01], dtype=float32), 'class_ids': array([2], dtype=int64), 'classes': array([b'2'], dtype=object)}|| Prediction is "Virginica" (99.5%), expected ""
myProgress13/30ORI5.5,2.3,4.0,1.3||RESULT{'logits': array([-4.5208964, 2.6824176, -2.0642245], dtype=float32), 'probabilities': array([7.3716807e-04, 9.9066305e-01, 8.5997432e-03], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.1%), expected ""
myProgress14/30ORI6.0,2.2,4.0,1.0||RESULT{'logits': array([-3.103953, 4.2947545, -5.656597], dtype=float32), 'probabilities': array([6.1163987e-04, 9.9934071e-01, 4.7631765e-05], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.9%), expected ""
myProgress15/30ORI5.1,3.5,1.4,0.2||RESULT{'logits': array([19.246971, 10.753842, -19.625887], dtype=float32), 'probabilities': array([9.9979514e-01, 2.0482930e-04, 1.3111250e-17], dtype=float32), 'class_ids': array([0], dtype=int64), 'classes': array([b'0'], dtype=object)}|| Prediction is "Setosa" (100.0%), expected ""
myProgress16/30ORI5.7,2.6,3.5,1.0||RESULT{'logits': array([0.12415126, 5.1074505, -7.748658], dtype=float32), 'probabilities': array([6.8047806e-03, 9.9319261e-01, 2.5923666e-06], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.3%), expected ""
myProgress17/30ORI4.8,3.4,1.9,0.2||RESULT{'logits': array([14.914921, 9.332862, -16.685436], dtype=float32), 'probabilities': array([9.9624938e-01, 3.7506856e-03, 1.8815136e-14], dtype=float32), 'class_ids': array([0], dtype=int64), 'classes': array([b'0'], dtype=object)}|| Prediction is "Setosa" (99.6%), expected ""
myProgress18/30ORI5.1,3.4,1.5,0.2||RESULT{'logits': array([18.556929, 10.536166, -19.18138], dtype=float32), 'probabilities': array([9.996716e-01, 3.284615e-04, 4.076791e-17], dtype=float32), 'class_ids': array([0], dtype=int64), 'classes': array([b'0'], dtype=object)}|| Prediction is "Setosa" (100.0%), expected ""
myProgress19/30ORI5.7,2.5,5.0,2.0||RESULT{'logits': array([-8.281928, -0.5296105, 5.5087314], dtype=float32), 'probabilities': array([1.0227221e-06, 2.3798312e-03, 9.9761909e-01], dtype=float32), 'class_ids': array([2], dtype=int64), 'classes': array([b'2'], dtype=object)}|| Prediction is "Virginica" (99.8%), expected ""
myProgress20/30ORI5.4,3.4,1.7,0.2||RESULT{'logits': array([18.629036, 10.756583, -19.529491], dtype=float32), 'probabilities': array([9.9961901e-01, 3.8095328e-04, 2.6779140e-17], dtype=float32), 'class_ids': array([0], dtype=int64), 'classes': array([b'0'], dtype=object)}|| Prediction is "Setosa" (100.0%), expected ""
myProgress21/30ORI5.6,3.0,4.5,1.5||RESULT{'logits': array([-5.327266, 1.7238306, -0.07224458], dtype=float32), 'probabilities': array([7.4258365e-04, 8.5703361e-01, 1.4222382e-01], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (85.7%), expected ""
myProgress22/30ORI6.3,2.9,5.6,1.8||RESULT{'logits': array([-8.589258, -0.3179294, 5.2680035], dtype=float32), 'probabilities': array([9.5552411e-07, 3.7362350e-03, 9.9626285e-01], dtype=float32), 'class_ids': array([2], dtype=int64), 'classes': array([b'2'], dtype=object)}|| Prediction is "Virginica" (99.6%), expected ""
myProgress23/30ORI6.3,2.5,4.9,1.5||RESULT{'logits': array([-6.850107, 1.4749087, 1.2317538], dtype=float32), 'probabilities': array([1.3583169e-04, 5.6041485e-01, 4.3944934e-01], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (56.0%), expected ""
myProgress24/30ORI5.8,2.7,3.9,1.2||RESULT{'logits': array([-2.8687124, 4.1638584, -5.565254], dtype=float32), 'probabilities': array([8.818289e-04, 9.990588e-01, 5.946907e-05], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.9%), expected ""
myProgress25/30ORI6.1,3.0,4.6,1.4||RESULT{'logits': array([-4.7632866, 2.8746686, -2.311274], dtype=float32), 'probabilities': array([4.7890263e-04, 9.9396026e-01, 5.5608703e-03], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.4%), expected ""
myProgress26/30ORI5.2,4.1,1.5,0.1||RESULT{'logits': array([20.011753, 11.262881, -20.466146], dtype=float32), 'probabilities': array([9.9984133e-01, 1.5861503e-04, 2.6339257e-18], dtype=float32), 'class_ids': array([0], dtype=int64), 'classes': array([b'0'], dtype=object)}|| Prediction is "Setosa" (100.0%), expected ""
myProgress27/30ORI6.7,3.1,4.7,1.5||RESULT{'logits': array([-4.609805, 3.8163486, -3.9574132], dtype=float32), 'probabilities': array([2.1892253e-04, 9.9936074e-01, 4.2035911e-04], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (99.9%), expected ""
myProgress28/30ORI6.7,3.3,5.7,2.5||RESULT{'logits': array([-9.505449, -0.5826268, 6.30414], dtype=float32), 'probabilities': array([1.3600010e-07, 1.0201682e-03, 9.9897963e-01], dtype=float32), 'class_ids': array([2], dtype=int64), 'classes': array([b'2'], dtype=object)}|| Prediction is "Virginica" (99.9%), expected ""
myProgress29/30ORI6.4,2.9,4.3,1.3||RESULT{'logits': array([-3.4441397, 4.3723693, -5.5904927], dtype=float32), 'probabilities': array([4.0284474e-04, 9.9955004e-01, 4.7096488e-05], dtype=float32), 'class_ids': array([1], dtype=int64), 'classes': array([b'1'], dtype=object)}|| Prediction is "Versicolor" (100.0%), expected ""

Process finished with exit code 0
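Both runs warn "Using temporary folder as model directory", so the trained checkpoints vanish with the temp folder. A small hedged sketch of the fix; model_dir is a standard Estimator argument, and the path here is only an example:

import tensorflow as tf

# Feature columns as in the scripts above.
my_feature_columns = [
    tf.feature_column.numeric_column(key=key)
    for key in ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']]

# Pinning model_dir makes checkpoints persist, and a later run resumes
# from the last checkpoint instead of retraining from scratch.
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    hidden_units=[10, 10],
    n_classes=3,
    model_dir='./iris_model')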