1 - The Two Magic Tools in Python, and Loading Data
1. The Two Magic Tools in Python
① Python 3.6.3 can be thought of as a package: the package contains different areas, and each area holds different tools.
② Python offers two magic tools for exploring them: the dir() and help() functions.
- dir(): opens things up, showing how many partitions and tools are inside.
- help(): the instruction manual for a single tool.
import torch
print(torch.cuda.is_available())
help(torch.cuda.is_available) # look up how torch.cuda.is_available is used
dir(torch) # see which partitions and tools the torch package contains
True
Help on function is_available in module torch.cuda:
is_available() -> bool
Returns a bool indicating if CUDA is currently available.
['AVG',
'AggregationType',
'AliasDb',
'AnyType',
'Argument',
'ArgumentSpec',
'BFloat16Storage',
'BFloat16Tensor',
'BenchmarkConfig',
'BenchmarkExecutionStats',
'Block',
'BoolStorage',
'BoolTensor',
'BoolType',
'BufferDict',
'ByteStorage',
'ByteTensor',
'CONV_BN_FUSION',
'CallStack',
'Capsule',
'CharStorage',
'CharTensor',
'ClassType',
'Code',
'CompilationUnit',
'CompleteArgumentSpec',
'ComplexDoubleStorage',
'ComplexFloatStorage',
'ComplexType',
'ConcreteModuleType',
'ConcreteModuleTypeBuilder',
'CudaBFloat16StorageBase',
'CudaBoolStorageBase',
'CudaByteStorageBase',
'CudaCharStorageBase',
'CudaComplexDoubleStorageBase',
'CudaComplexFloatStorageBase',
'CudaDoubleStorageBase',
'CudaFloatStorageBase',
'CudaHalfStorageBase',
'CudaIntStorageBase',
'CudaLongStorageBase',
'CudaShortStorageBase',
'DeepCopyMemoTable',
'DeserializationStorageContext',
'DeviceObjType',
'DictType',
'DisableTorchFunction',
'DoubleStorage',
'DoubleTensor',
'EnumType',
'ErrorReport',
'ExecutionPlan',
'FUSE_ADD_RELU',
'FatalError',
'FileCheck',
'FloatStorage',
'FloatTensor',
'FloatType',
'FunctionSchema',
'Future',
'FutureType',
'Generator',
'Gradient',
'Graph',
'GraphExecutorState',
'HOIST_CONV_PACKED_PARAMS',
'HalfStorage',
'HalfStorageBase',
'HalfTensor',
'INSERT_FOLD_PREPACK_OPS',
'IODescriptor',
'InferredType',
'IntStorage',
'IntTensor',
'IntType',
'InterfaceType',
'JITException',
'ListType',
'LiteScriptModule',
'LockingLogger',
'LoggerBase',
'LongStorage',
'LongTensor',
'MobileOptimizerType',
'ModuleDict',
'Node',
'NoneType',
'NoopLogger',
'NumberType',
'OperatorInfo',
'OptionalType',
'PRIVATE_OPS',
'ParameterDict',
'PyObjectType',
'PyTorchFileReader',
'PyTorchFileWriter',
'QInt32Storage',
'QInt32StorageBase',
'QInt8Storage',
'QInt8StorageBase',
'QUInt4x2Storage',
'QUInt8Storage',
'REMOVE_DROPOUT',
'RRefType',
'SUM',
'ScriptClass',
'ScriptClassFunction',
'ScriptDict',
'ScriptDictIterator',
'ScriptDictKeyIterator',
'ScriptFunction',
'ScriptList',
'ScriptListIterator',
'ScriptMethod',
'ScriptModule',
'ScriptModuleSerializer',
'ScriptObject',
'ScriptObjectProperty',
'SerializationStorageContext',
'Set',
'ShortStorage',
'ShortTensor',
'Size',
'StaticModule',
'Storage',
'Stream',
'StreamObjType',
'StringType',
'TYPE_CHECKING',
'Tensor',
'TensorType',
'ThroughputBenchmark',
'TracingState',
'TupleType',
'Type',
'USE_GLOBAL_DEPS',
'USE_RTLD_GLOBAL_WITH_LIBTORCH',
'UnionType',
'Use',
'Value',
'_C',
'_StorageBase',
'_VF',
'__all__',
'__annotations__',
'__builtins__',
'__cached__',
'__config__',
'__doc__',
'__file__',
'__future__',
'__loader__',
'__name__',
'__package__',
'__path__',
'__spec__',
'__version__',
'_adaptive_avg_pool2d',
'_adaptive_avg_pool3d',
'_add_batch_dim',
'_add_relu',
'_add_relu_',
'_aminmax',
'_amp_foreach_non_finite_check_and_unscale_',
'_amp_update_scale_',
'_assert',
'_assert_async',
'_baddbmm_mkl_',
'_batch_norm_impl_index',
'_cast_Byte',
'_cast_Char',
'_cast_Double',
'_cast_Float',
'_cast_Half',
'_cast_Int',
'_cast_Long',
'_cast_Short',
'_cat',
'_choose_qparams_per_tensor',
'_classes',
'_coalesce',
'_compute_linear_combination',
'_conj',
'_conj_physical',
'_convert_indices_from_coo_to_csr',
'_convolution',
'_convolution_mode',
'_convolution_nogroup',
'_copy_from',
'_copy_from_and_resize',
'_ctc_loss',
'_cudnn_ctc_loss',
'_cudnn_init_dropout_state',
'_cudnn_rnn',
'_cudnn_rnn_flatten_weight',
'_cufft_clear_plan_cache',
'_cufft_get_plan_cache_max_size',
'_cufft_get_plan_cache_size',
'_cufft_set_plan_cache_max_size',
'_cummax_helper',
'_cummin_helper',
'_debug_has_internal_overlap',
'_det_lu_based_helper',
'_det_lu_based_helper_backward_helper',
'_dim_arange',
'_dirichlet_grad',
'_embedding_bag',
'_embedding_bag_forward_only',
'_empty_affine_quantized',
'_empty_per_channel_affine_quantized',
'_euclidean_dist',
'_fake_quantize_learnable_per_channel_affine',
'_fake_quantize_learnable_per_tensor_affine',
'_fake_quantize_per_tensor_affine_cachemask_tensor_qparams',
'_fft_c2c',
'_fft_c2r',
'_fft_r2c',
'_foreach_abs',
'_foreach_abs_',
'_foreach_acos',
'_foreach_acos_',
'_foreach_add',
'_foreach_add_',
'_foreach_addcdiv',
'_foreach_addcdiv_',
'_foreach_addcmul',
'_foreach_addcmul_',
'_foreach_asin',
'_foreach_asin_',
'_foreach_atan',
'_foreach_atan_',
'_foreach_ceil',
'_foreach_ceil_',
'_foreach_cos',
'_foreach_cos_',
'_foreach_cosh',
'_foreach_cosh_',
'_foreach_div',
'_foreach_div_',
'_foreach_erf',
'_foreach_erf_',
'_foreach_erfc',
'_foreach_erfc_',
'_foreach_exp',
'_foreach_exp_',
'_foreach_expm1',
'_foreach_expm1_',
'_foreach_floor',
'_foreach_floor_',
'_foreach_frac',
'_foreach_frac_',
'_foreach_lgamma',
'_foreach_lgamma_',
'_foreach_log',
'_foreach_log10',
'_foreach_log10_',
'_foreach_log1p',
'_foreach_log1p_',
'_foreach_log2',
'_foreach_log2_',
'_foreach_log_',
'_foreach_maximum',
'_foreach_minimum',
'_foreach_mul',
'_foreach_mul_',
'_foreach_neg',
'_foreach_neg_',
'_foreach_reciprocal',
'_foreach_reciprocal_',
'_foreach_round',
'_foreach_round_',
'_foreach_sigmoid',
'_foreach_sigmoid_',
'_foreach_sin',
'_foreach_sin_',
'_foreach_sinh',
'_foreach_sinh_',
'_foreach_sqrt',
'_foreach_sqrt_',
'_foreach_sub',
'_foreach_sub_',
'_foreach_tan',
'_foreach_tan_',
'_foreach_tanh',
'_foreach_tanh_',
'_foreach_trunc',
'_foreach_trunc_',
'_foreach_zero_',
'_fused_dropout',
'_fused_moving_avg_obs_fq_helper',
'_grid_sampler_2d_cpu_fallback',
'_has_compatible_shallow_copy_type',
'_import_dotted_name',
'_index_copy_',
'_index_put_impl_',
'_initExtension',
'_jit_internal',
'_linalg_inv_out_helper_',
'_linalg_qr_helper',
'_linalg_utils',
'_load_global_deps',
'_lobpcg',
'_log_softmax',
'_log_softmax_backward_data',
'_logcumsumexp',
'_lowrank',
'_lu_with_info',
'_make_dual',
'_make_per_channel_quantized_tensor',
'_make_per_tensor_quantized_tensor',
'_masked_scale',
'_mkldnn',
'_mkldnn_reshape',
'_mkldnn_transpose',
'_mkldnn_transpose_',
'_namedtensor_internals',
'_neg_view',
'_nnpack_available',
'_nnpack_spatial_convolution',
'_ops',
'_pack_padded_sequence',
'_pad_packed_sequence',
'_pin_memory',
'_register_device_module',
'_remove_batch_dim',
'_reshape_from_tensor',
'_rowwise_prune',
'_s_where',
'_sample_dirichlet',
'_saturate_weight_to_fp16',
'_shape_as_tensor',
'_six',
'_sobol_engine_draw',
'_sobol_engine_ff_',
'_sobol_engine_initialize_state_',
'_sobol_engine_scramble_',
'_softmax',
'_softmax_backward_data',
'_sources',
'_sparse_addmm',
'_sparse_coo_tensor_unsafe',
'_sparse_csr_tensor_unsafe',
'_sparse_log_softmax',
'_sparse_log_softmax_backward_data',
'_sparse_mask_helper',
'_sparse_mm',
'_sparse_softmax',
'_sparse_softmax_backward_data',
'_sparse_sparse_matmul',
'_sparse_sum',
'_stack',
'_standard_gamma',
'_standard_gamma_grad',
'_storage_classes',
'_string_classes',
'_tensor',
'_tensor_classes',
'_tensor_str',
'_test_serialization_subcmul',
'_to_cpu',
'_trilinear',
'_unique',
'_unique2',
'_unpack_dual',
'_use_cudnn_ctc_loss',
'_use_cudnn_rnn_flatten_weight',
'_utils',
'_utils_internal',
'_validate_sparse_coo_tensor_args',
'_validate_sparse_csr_tensor_args',
'_vmap_internals',
'_weight_norm',
'_weight_norm_cuda_interface',
'abs',
'abs_',
'absolute',
'acos',
'acos_',
'acosh',
'acosh_',
'adaptive_avg_pool1d',
'adaptive_max_pool1d',
'add',
'addbmm',
'addcdiv',
'addcmul',
'addmm',
'addmv',
'addmv_',
'addr',
'affine_grid_generator',
'align_tensors',
'all',
'allclose',
'alpha_dropout',
'alpha_dropout_',
'amax',
'amin',
'aminmax',
'angle',
'any',
'ao',
'arange',
'arccos',
'arccos_',
'arccosh',
'arccosh_',
'arcsin',
'arcsin_',
'arcsinh',
'arcsinh_',
'arctan',
'arctan_',
'arctanh',
'arctanh_',
'are_deterministic_algorithms_enabled',
'argmax',
'argmin',
'argsort',
'as_strided',
'as_strided_',
'as_tensor',
'asin',
'asin_',
'asinh',
'asinh_',
'atan',
'atan2',
'atan_',
'atanh',
'atanh_',
'atleast_1d',
'atleast_2d',
'atleast_3d',
'attr',
'autocast',
'autocast_decrement_nesting',
'autocast_increment_nesting',
'autocast_mode',
'autograd',
'avg_pool1d',
'backends',
'baddbmm',
'bartlett_window',
'base_py_dll_path',
'batch_norm',
'batch_norm_backward_elemt',
'batch_norm_backward_reduce',
'batch_norm_elemt',
'batch_norm_gather_stats',
'batch_norm_gather_stats_with_counts',
'batch_norm_stats',
'batch_norm_update_stats',
'bernoulli',
'bfloat16',
'bilinear',
'binary_cross_entropy_with_logits',
'bincount',
'binomial',
'bitwise_and',
'bitwise_left_shift',
'bitwise_not',
'bitwise_or',
'bitwise_right_shift',
'bitwise_xor',
'blackman_window',
'block_diag',
'bmm',
'bool',
'broadcast_shapes',
'broadcast_tensors',
'broadcast_to',
'bucketize',
'can_cast',
'candidate',
'cartesian_prod',
'cat',
'cdist',
'cdouble',
'ceil',
'ceil_',
'celu',
'celu_',
'cfloat',
'chain_matmul',
'channel_shuffle',
'channels_last',
'channels_last_3d',
'cholesky',
'cholesky_inverse',
'cholesky_solve',
'choose_qparams_optimized',
'chunk',
'clamp',
'clamp_',
'clamp_max',
'clamp_max_',
'clamp_min',
'clamp_min_',
'classes',
'clear_autocast_cache',
'clip',
'clip_',
'clone',
'column_stack',
'combinations',
'compiled_with_cxx11_abi',
'complex',
'complex128',
'complex32',
'complex64',
'concat',
'conj',
'conj_physical',
'conj_physical_',
'constant_pad_nd',
'contiguous_format',
'conv1d',
'conv2d',
'conv3d',
'conv_tbc',
'conv_transpose1d',
'conv_transpose2d',
'conv_transpose3d',
'convolution',
'copysign',
'corrcoef',
'cos',
'cos_',
'cosh',
'cosh_',
'cosine_embedding_loss',
'cosine_similarity',
'count_nonzero',
'cov',
'cpp',
'cpu',
'cross',
'ctc_loss',
'ctypes',
'cuda',
'cuda_path',
'cuda_version',
'cudnn_affine_grid_generator',
'cudnn_batch_norm',
'cudnn_convolution',
'cudnn_convolution_add_relu',
'cudnn_convolution_relu',
'cudnn_convolution_transpose',
'cudnn_grid_sampler',
'cudnn_is_acceptable',
'cummax',
'cummin',
'cumprod',
'cumsum',
'cumulative_trapezoid',
'default_generator',
'deg2rad',
'deg2rad_',
'dequantize',
'det',
'detach',
'detach_',
'device',
'diag',
'diag_embed',
'diagflat',
'diagonal',
'diff',
'digamma',
'dist',
'distributed',
'distributions',
'div',
'divide',
'dll',
'dll_path',
'dll_paths',
'dlls',
'dot',
'double',
'dropout',
'dropout_',
'dsmm',
'dsplit',
'dstack',
'dtype',
'e',
'eig',
'einsum',
'embedding',
'embedding_bag',
'embedding_renorm_',
'empty',
'empty_like',
'empty_quantized',
'empty_strided',
'enable_grad',
'eq',
'equal',
'erf',
'erf_',
'erfc',
'erfc_',
'erfinv',
'exp',
'exp2',
'exp2_',
'exp_',
'expm1',
'expm1_',
'eye',
'fake_quantize_per_channel_affine',
'fake_quantize_per_tensor_affine',
'fbgemm_linear_fp16_weight',
'fbgemm_linear_fp16_weight_fp32_activation',
'fbgemm_linear_int8_weight',
'fbgemm_linear_int8_weight_fp32_activation',
'fbgemm_linear_quantize_weight',
'fbgemm_pack_gemm_matrix_fp16',
'fbgemm_pack_quantized_matrix',
'feature_alpha_dropout',
'feature_alpha_dropout_',
'feature_dropout',
'feature_dropout_',
'fft',
'fill_',
'finfo',
'fix',
'fix_',
'flatten',
'flip',
'fliplr',
'flipud',
'float',
'float16',
'float32',
'float64',
'float_power',
'floor',
'floor_',
'floor_divide',
'fmax',
'fmin',
'fmod',
'fork',
'frac',
'frac_',
'frexp',
'frobenius_norm',
'from_dlpack',
'from_file',
'from_numpy',
'frombuffer',
'full',
'full_like',
'functional',
'fused_moving_avg_obs_fake_quant',
'futures',
'gather',
'gcd',
'gcd_',
'ge',
'geqrf',
'ger',
'get_autocast_cpu_dtype',
'get_autocast_gpu_dtype',
'get_default_dtype',
'get_device',
'get_file_path',
'get_num_interop_threads',
'get_num_threads',
'get_rng_state',
'glob',
'gradient',
'greater',
'greater_equal',
'grid_sampler',
'grid_sampler_2d',
'grid_sampler_3d',
'group_norm',
'gru',
'gru_cell',
'gt',
'half',
'hamming_window',
'hann_window',
'hardshrink',
'has_cuda',
'has_cudnn',
'has_lapack',
'has_mkl',
'has_mkldnn',
'has_mlc',
'has_openmp',
'has_spectral',
'heaviside',
'hinge_embedding_loss',
'histc',
'histogram',
'hsmm',
'hsplit',
'hspmm',
'hstack',
'hub',
'hypot',
'i0',
'i0_',
'igamma',
'igammac',
'iinfo',
'imag',
'import_ir_module',
'import_ir_module_from_buffer',
'index_add',
'index_copy',
'index_fill',
'index_put',
'index_put_',
'index_select',
'inf',
'inference_mode',
'init_num_threads',
'initial_seed',
'inner',
'instance_norm',
'int',
'int16',
'int32',
'int64',
'int8',
'int_repr',
'inverse',
'is_anomaly_enabled',
'is_autocast_cache_enabled',
'is_autocast_cpu_enabled',
'is_autocast_enabled',
'is_complex',
'is_conj',
'is_distributed',
'is_floating_point',
'is_grad_enabled',
'is_inference',
'is_inference_mode_enabled',
'is_loaded',
'is_neg',
'is_nonzero',
'is_same_size',
'is_signed',
'is_storage',
'is_tensor',
'is_vulkan_available',
'is_warn_always_enabled',
'isclose',
'isfinite',
'isin',
'isinf',
'isnan',
'isneginf',
'isposinf',
'isreal',
'istft',
'jit',
'kaiser_window',
'kernel32',
'kl_div',
'kron',
'kthvalue',
'last_error',
'layer_norm',
'layout',
'lcm',
'lcm_',
'ldexp',
'ldexp_',
'le',
'legacy_contiguous_format',
'lerp',
'less',
'less_equal',
'lgamma',
'linalg',
'linspace',
'load',
'lobpcg',
'log',
'log10',
'log10_',
'log1p',
'log1p_',
'log2',
'log2_',
'log_',
'log_softmax',
'logaddexp',
'logaddexp2',
'logcumsumexp',
'logdet',
'logical_and',
'logical_not',
'logical_or',
'logical_xor',
'logit',
'logit_',
'logspace',
'logsumexp',
'long',
'lstm',
'lstm_cell',
'lstsq',
'lt',
'lu',
'lu_solve',
'lu_unpack',
'manual_seed',
'margin_ranking_loss',
'masked_fill',
'masked_scatter',
'masked_select',
'matmul',
'matrix_exp',
'matrix_power',
'matrix_rank',
'max',
'max_pool1d',
'max_pool1d_with_indices',
'max_pool2d',
'max_pool3d',
'maximum',
'mean',
'median',
'memory_format',
'merge_type_from_type_comment',
'meshgrid',
'min',
'minimum',
'miopen_batch_norm',
'miopen_convolution',
'miopen_convolution_transpose',
'miopen_depthwise_convolution',
'miopen_rnn',
'mkldnn_adaptive_avg_pool2d',
'mkldnn_convolution',
'mkldnn_convolution_backward_weights',
'mkldnn_linear_backward_weights',
'mkldnn_max_pool2d',
'mkldnn_max_pool3d',
'mm',
'mode',
'moveaxis',
'movedim',
'msort',
'mul',
'multinomial',
'multiply',
'multiprocessing',
'mv',
'mvlgamma',
'name',
'nan',
'nan_to_num',
'nan_to_num_',
'nanmean',
'nanmedian',
'nanquantile',
'nansum',
'narrow',
'narrow_copy',
'native_batch_norm',
'native_group_norm',
'native_layer_norm',
'native_norm',
'ne',
'neg',
'neg_',
'negative',
'negative_',
'nextafter',
'nn',
'no_grad',
'nonzero',
'norm',
'norm_except_dim',
'normal',
'not_equal',
'nuclear_norm',
'numel',
'nvtoolsext_dll_path',
'ones',
'ones_like',
'onnx',
'ops',
'optim',
'orgqr',
'ormqr',
'os',
'outer',
'overrides',
'package',
'pairwise_distance',
'parse_ir',
'parse_schema',
'parse_type_comment',
'path_patched',
'pca_lowrank',
'pdist',
'per_channel_affine',
'per_channel_affine_float_qparams',
'per_channel_symmetric',
'per_tensor_affine',
'per_tensor_symmetric',
'permute',
'pfiles_path',
'pi',
'pinverse',
'pixel_shuffle',
'pixel_unshuffle',
'platform',
'poisson',
'poisson_nll_loss',
'polar',
'polygamma',
'positive',
'pow',
'prelu',
'prepare_multiprocessing_environment',
'preserve_format',
'prev_error_mode',
'prod',
'profiler',
'promote_types',
'put',
'py_dll_path',
'q_per_channel_axis',
'q_per_channel_scales',
'q_per_channel_zero_points',
'q_scale',
'q_zero_point',
'qint32',
'qint8',
'qr',
'qscheme',
'quantile',
'quantization',
'quantize_per_channel',
'quantize_per_tensor',
'quantized_batch_norm',
'quantized_gru',
'quantized_gru_cell',
'quantized_lstm',
'quantized_lstm_cell',
'quantized_max_pool1d',
'quantized_max_pool2d',
'quantized_rnn_relu_cell',
'quantized_rnn_tanh_cell',
'quasirandom',
'quint4x2',
'quint8',
'rad2deg',
'rad2deg_',
'rand',
'rand_like',
'randint',
'randint_like',
'randn',
'randn_like',
'random',
'randperm',
'range',
'ravel',
'read_vitals',
'real',
'reciprocal',
'reciprocal_',
'relu',
'relu_',
'remainder',
'renorm',
'repeat_interleave',
'res',
'reshape',
'resize_as_',
'resize_as_sparse_',
'resolve_conj',
'resolve_neg',
'result_type',
'rnn_relu',
'rnn_relu_cell',
'rnn_tanh',
'rnn_tanh_cell',
'roll',
'rot90',
'round',
'round_',
'row_stack',
'rrelu',
'rrelu_',
'rsqrt',
'rsqrt_',
...]
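The same two tools can be applied one level down: dir() on a sub-package such as torch.cuda lists the tools inside that partition, and help() prints the manual for any single tool. A minimal illustrative sketch (not part of the output above; torch.cuda.device_count is just one example of a tool to look up):
import torch

print(dir(torch.cuda))          # the tools inside the torch.cuda "partition"
help(torch.cuda.device_count)   # the "manual" for one of those tools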
2. Loading Data in PyTorch
① Loading data in PyTorch relies on Dataset and DataLoader.
- Dataset provides a way to fetch each sample together with its label, and tells us how many samples there are in total.
- DataLoader packs the samples into batches and hands them to the downstream network in the form it needs (see the short sketch below).
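A minimal sketch of the batching idea, using a synthetic TensorDataset so it stays self-contained (the shapes and batch size here are arbitrary choices, not from the original post):
import torch
from torch.utils.data import TensorDataset, DataLoader

features = torch.randn(10, 3)              # 10 samples, 3 features each
labels = torch.arange(10)                  # one integer label per sample
dataset = TensorDataset(features, labels)  # a ready-made map-style Dataset

loader = DataLoader(dataset, batch_size=4, shuffle=True)  # packs samples into batches
for batch_features, batch_labels in loader:
    print(batch_features.shape, batch_labels.shape)  # e.g. torch.Size([4, 3]) torch.Size([4])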
2.1 Two Common Dataset Layouts
① In the first common layout, the name of the folder containing the images is the label.
② In the second common layout, the label is stored as a text file: the text file has the same name as its image, and the file's content is the corresponding label (see the reading sketch below).
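A minimal sketch of reading a label in the second layout. The directory names below (ants_image / ants_label under Data/SecondTypeData/train) are hypothetical, chosen only to mirror the first layout used later in this post:
import os

img_name = "0013035.jpg"
label_path = os.path.join("Data/SecondTypeData/train/ants_label",
                          img_name.replace(".jpg", ".txt"))
with open(label_path) as f:
    label = f.read().strip()   # the file's content is the label, e.g. "ants"
print(label)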
from torch.utils.data import Dataset
help(Dataset)
Help on class Dataset in module torch.utils.data.dataset:
class Dataset(typing.Generic)
| An abstract class representing a :class:`Dataset`.
|
| All datasets that represent a map from keys to data samples should subclass
| it. All subclasses should overwrite :meth:`__getitem__`, supporting fetching a
| data sample for a given key. Subclasses could also optionally overwrite
| :meth:`__len__`, which is expected to return the size of the dataset by many
| :class:`~torch.utils.data.Sampler` implementations and the default options
| of :class:`~torch.utils.data.DataLoader`.
|
| .. note::
| :class:`~torch.utils.data.DataLoader` by default constructs a index
| sampler that yields integral indices. To make it work with a map-style
| dataset with non-integral indices/keys, a custom sampler must be provided.
|
| Method resolution order:
| Dataset
| typing.Generic
| builtins.object
|
| Methods defined here:
|
| __add__(self, other:'Dataset[T_co]') -> 'ConcatDataset[T_co]'
|
| __getattr__(self, attribute_name)
|
| __getitem__(self, index) -> +T_co
|
| ----------------------------------------------------------------------
| Class methods defined here:
|
| register_datapipe_as_function(function_name, cls_to_register, enable_df_api_tracing=False) from typing.GenericMeta
|
| register_function(function_name, function) from typing.GenericMeta
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __abstractmethods__ = frozenset()
|
| __annotations__ = {'functions': typing.Dict[str, typing.Callable]}
|
| __args__ = None
|
| __extra__ = None
|
| __next_in_mro__ = <class 'object'>
| The most base type
|
| __orig_bases__ = (typing.Generic[+T_co],)
|
| __origin__ = None
|
| __parameters__ = (+T_co,)
|
| __tree_hash__ = -9223371886060913604
|
| functions = {'concat': functools.partial(<function Dataset.register_da...
|
| ----------------------------------------------------------------------
| Static methods inherited from typing.Generic:
|
| __new__(cls, *args, **kwds)
| Create and return a new object. See help(type) for accurate signature.
2.2 Loading Data Directly from a Path
from PIL import Image
img_path = "Data/FirstTypeData/train/ants/0013035.jpg"
img = Image.open(img_path)
img.show()
2.3 Loading Data with a Dataset
from torch.utils.data import Dataset
from PIL import Image
import os
class MyData(Dataset):
    def __init__(self, root_dir, label_dir):  # this magic method is called automatically when an instance is created
        self.root_dir = root_dir  # self.root_dir acts as a variable shared across the class's methods
        self.label_dir = label_dir
        self.path = os.path.join(self.root_dir, self.label_dir)  # joins the path pieces with the separator of the current OS (Windows or Linux)
        self.img_path = os.listdir(self.path)  # names of all image files under that path
    def __getitem__(self, idx):
        img_name = self.img_path[idx]
        img_item_path = os.path.join(self.root_dir, self.label_dir, img_name)
        img = Image.open(img_item_path)
        label = self.label_dir
        return img, label
    def __len__(self):
        return len(self.img_path)
root_dir = "Data/FirstTypeData/train"
ants_label_dir = "ants"
bees_label_dir = "bees"
ants_dataset = MyData(root_dir, ants_label_dir)
bees_dataset = MyData(root_dir, bees_label_dir)
print(len(ants_dataset))
print(len(bees_dataset))
train_dataset = ants_dataset + bees_dataset # train_dataset is the concatenation of the two datasets
print(len(train_dataset))
img,label = train_dataset[200]
print("label:",label)
img.show()
124
121
245
label: bees
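To actually batch this image dataset with a DataLoader, the PIL images first have to be turned into tensors of a common size. A minimal sketch under that assumption (the transform, the subclass name, and the batch size are illustrative, not from the original post; it assumes every image loads as 3-channel RGB):
from torch.utils.data import DataLoader
from torchvision import transforms

transform = transforms.Compose([transforms.Resize((224, 224)),
                                transforms.ToTensor()])

class MyTransformedData(MyData):                  # reuses the MyData class above
    def __getitem__(self, idx):
        img, label = super().__getitem__(idx)
        return transform(img), label              # PIL image -> tensor of shape [3, 224, 224]

transformed_train_dataset = (MyTransformedData(root_dir, ants_label_dir)
                             + MyTransformedData(root_dir, bees_label_dir))
train_loader = DataLoader(transformed_train_dataset, batch_size=4, shuffle=True)
imgs, labels = next(iter(train_loader))
print(imgs.shape)                                 # torch.Size([4, 3, 224, 224])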