[Spark][Python][DataFrame][Write] An example of writing out a DataFrame

$ hdfs dfs -cat people.json

{"name":"Alice","pcode":"94304"}
{"name":"Brayden","age":30,"pcode":"94304"}
{"name":"Carla","age":19,"pcoe":"10036"}
{"name":"Diana","age":46}
{"name":"Etienne","pcode":"94104"}

$ pyspark

from pyspark.sql import HiveContext   # already imported inside the pyspark shell; listed here to keep the snippet self-contained

sqlContext = HiveContext(sc)

peopleDF = sqlContext.read.json("people.json")
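
Before writing, it can be useful to check the schema Spark inferred from the JSON. The call below is an optional check that was not part of the original session; because one record spells the field as "pcoe" instead of "pcode", the inferred schema carries both columns.

peopleDF.printSchema()
# expected: age (long), name, pcode and pcoe (all string), every field nullable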

peopleDF.write.format("parquet").mode("append").partitionBy("age").saveAsTable("people")

17/10/07 00:58:18 INFO storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 65.5 KB, free 338.2 KB)
17/10/07 00:58:18 INFO storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 21.4 KB, free 359.6 KB)
17/10/07 00:58:18 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:59616 (size: 21.4 KB, free: 208.8 MB)
17/10/07 00:58:18 INFO spark.SparkContext: Created broadcast 2 from saveAsTable at NativeMethodAccessorImpl.java:-2
17/10/07 00:58:18 INFO storage.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 251.1 KB, free 610.7 KB)
17/10/07 00:58:18 INFO storage.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 21.6 KB, free 632.4 KB)
17/10/07 00:58:18 INFO storage.BlockManagerInfo: Added broadcast_3_piece0 in memory on localhost:59616 (size: 21.6 KB, free: 208.7 MB)
17/10/07 00:58:18 INFO spark.SparkContext: Created broadcast 3 from saveAsTable at NativeMethodAccessorImpl.java:-2
17/10/07 00:58:19 INFO parquet.ParquetRelation: Using default output committer for Parquet: parquet.hadoop.ParquetOutputCommitter
17/10/07 00:58:19 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/10/07 00:58:19 INFO datasources.DynamicPartitionWriterContainer: Using user defined output committer class parquet.hadoop.ParquetOutputCommitter
17/10/07 00:58:19 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/10/07 00:58:19 INFO mapred.FileInputFormat: Total input paths to process : 1
17/10/07 00:58:19 INFO spark.SparkContext: Starting job: saveAsTable at NativeMethodAccessorImpl.java:-2
17/10/07 00:58:19 INFO scheduler.DAGScheduler: Got job 1 (saveAsTable at NativeMethodAccessorImpl.java:-2) with 1 output partitions
17/10/07 00:58:19 INFO scheduler.DAGScheduler: Final stage: ResultStage 1 (saveAsTable at NativeMethodAccessorImpl.java:-2)
17/10/07 00:58:19 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/10/07 00:58:19 INFO scheduler.DAGScheduler: Missing parents: List()
17/10/07 00:58:19 INFO scheduler.DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[7] at saveAsTable at NativeMethodAccessorImpl.java:-2), which has no missing parents
17/10/07 00:58:19 INFO storage.MemoryStore: Block broadcast_4 stored as values in memory (estimated size 72.7 KB, free 705.0 KB)
17/10/07 00:58:20 INFO storage.MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 26.4 KB, free 731.4 KB)
17/10/07 00:58:20 INFO storage.BlockManagerInfo: Added broadcast_4_piece0 in memory on localhost:59616 (size: 26.4 KB, free: 208.7 MB)
17/10/07 00:58:20 INFO spark.SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1006
17/10/07 00:58:20 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[7] at saveAsTable at NativeMethodAccessorImpl.java:-2)
17/10/07 00:58:20 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/10/07 00:58:20 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, partition 0,PROCESS_LOCAL, 2149 bytes)
17/10/07 00:58:20 INFO executor.Executor: Running task 0.0 in stage 1.0 (TID 1)
17/10/07 00:58:20 INFO rdd.HadoopRDD: Input split: hdfs://localhost:8020/user/training/people.json:0+179
17/10/07 00:58:20 INFO codegen.GenerateUnsafeProjection: Code generated in 314.888218 ms
17/10/07 00:58:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/10/07 00:58:20 INFO datasources.DynamicPartitionWriterContainer: Using user defined output committer class parquet.hadoop.ParquetOutputCommitter
17/10/07 00:58:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/10/07 00:58:20 INFO codegen.GenerateUnsafeProjection: Code generated in 46.978197 ms
17/10/07 00:58:20 INFO codegen.GenerateUnsafeProjection: Code generated in 64.665839 ms
17/10/07 00:58:21 INFO codegen.GenerateUnsafeProjection: Code generated in 94.259071 ms
17/10/07 00:58:21 INFO codec.CodecConfig: Compression: GZIP
17/10/07 00:58:21 INFO hadoop.ParquetOutputFormat: Parquet block size to 134217728
17/10/07 00:58:21 INFO hadoop.ParquetOutputFormat: Parquet page size to 1048576
17/10/07 00:58:21 INFO hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
17/10/07 00:58:21 INFO hadoop.ParquetOutputFormat: Dictionary is on
17/10/07 00:58:21 INFO hadoop.ParquetOutputFormat: Validation is off
17/10/07 00:58:21 INFO hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
17/10/07 00:58:21 INFO hadoop.ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
17/10/07 00:58:21 INFO parquet.CatalystWriteSupport: Initialized Parquet WriteSupport with Catalyst schema:
{
"type" : "struct",
"fields" : [ {
"name" : "name",
"type" : "string",
"nullable" : true,
"metadata" : { }
}, {
"name" : "pcode",
"type" : "string",
"nullable" : true,
"metadata" : { }
}, {
"name" : "pcoe",
"type" : "string",
"nullable" : true,
"metadata" : { }
} ]
}
and corresponding Parquet message type:
message spark_schema {
optional binary name (UTF8);
optional binary pcode (UTF8);
optional binary pcoe (UTF8);
}

17/10/07 00:58:21 INFO compress.CodecPool: Got brand-new compressor [.gz]
17/10/07 00:58:21 INFO datasources.DynamicPartitionWriterContainer: Maximum partitions reached, falling back on sorting.
17/10/07 00:58:21 INFO codegen.GenerateUnsafeProjection: Code generated in 34.281133 ms
17/10/07 00:58:21 INFO codegen.GenerateOrdering: Code generated in 85.573905 ms
17/10/07 00:58:21 INFO datasources.DynamicPartitionWriterContainer: Sorting complete. Writing out partition files one at a time.
17/10/07 00:58:21 INFO hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 54
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/parquet/lib/parquet-hadoop-bundle-1.5.0-cdh5.7.0.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/parquet/lib/parquet-pig-bundle-1.5.0-cdh5.7.0.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/parquet/lib/parquet-format-2.1.0-cdh5.7.0.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/hive-jdbc-1.1.0-cdh5.7.0-standalone.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/hive-exec-1.1.0-cdh5.7.0.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [shaded.parquet.org.slf4j.helpers.NOPLoggerFactory]
17/10/07 00:58:21 INFO hadoop.ColumnChunkPageWriteStore: written 80B for [name] BINARY: 2 values, 26B raw, 43B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:21 INFO hadoop.ColumnChunkPageWriteStore: written 73B for [pcode] BINARY: 2 values, 24B raw, 38B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:21 INFO hadoop.ColumnChunkPageWriteStore: written 47B for [pcoe] BINARY: 2 values, 6B raw, 26B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:22 INFO codec.CodecConfig: Compression: GZIP
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Parquet block size to 134217728
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Parquet page size to 1048576
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Dictionary is on
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Validation is off
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
17/10/07 00:58:22 INFO parquet.CatalystWriteSupport: Initialized Parquet WriteSupport with Catalyst schema:
{
"type" : "struct",
"fields" : [ {
"name" : "name",
"type" : "string",
"nullable" : true,
"metadata" : { }
}, {
"name" : "pcode",
"type" : "string",
"nullable" : true,
"metadata" : { }
}, {
"name" : "pcoe",
"type" : "string",
"nullable" : true,
"metadata" : { }
} ]
}
and corresponding Parquet message type:
message spark_schema {
optional binary name (UTF8);
optional binary pcode (UTF8);
optional binary pcoe (UTF8);
}

17/10/07 00:58:22 INFO compress.CodecPool: Got brand-new compressor [.gz]
17/10/07 00:58:22 INFO hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 26
17/10/07 00:58:22 INFO hadoop.ColumnChunkPageWriteStore: written 68B for [name] BINARY: 1 values, 15B raw, 33B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:22 INFO hadoop.ColumnChunkPageWriteStore: written 47B for [pcode] BINARY: 1 values, 6B raw, 26B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:22 INFO hadoop.ColumnChunkPageWriteStore: written 68B for [pcoe] BINARY: 1 values, 15B raw, 33B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:22 INFO codec.CodecConfig: Compression: GZIP
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Parquet block size to 134217728
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Parquet page size to 1048576
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Dictionary is on
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Validation is off
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
17/10/07 00:58:22 INFO parquet.CatalystWriteSupport: Initialized Parquet WriteSupport with Catalyst schema:
{
"type" : "struct",
"fields" : [ {
"name" : "name",
"type" : "string",
"nullable" : true,
"metadata" : { }
}, {
"name" : "pcode",
"type" : "string",
"nullable" : true,
"metadata" : { }
}, {
"name" : "pcoe",
"type" : "string",
"nullable" : true,
"metadata" : { }
} ]
}
and corresponding Parquet message type:
message spark_schema {
optional binary name (UTF8);
optional binary pcode (UTF8);
optional binary pcoe (UTF8);
}

17/10/07 00:58:22 INFO compress.CodecPool: Got brand-new compressor [.gz]
17/10/07 00:58:22 INFO hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 28
17/10/07 00:58:22 INFO hadoop.ColumnChunkPageWriteStore: written 74B for [name] BINARY: 1 values, 17B raw, 35B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:22 INFO hadoop.ColumnChunkPageWriteStore: written 68B for [pcode] BINARY: 1 values, 15B raw, 33B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:22 INFO hadoop.ColumnChunkPageWriteStore: written 47B for [pcoe] BINARY: 1 values, 6B raw, 26B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:22 INFO storage.BlockManagerInfo: Removed broadcast_2_piece0 on localhost:59616 in memory (size: 21.4 KB, free: 208.7 MB)
17/10/07 00:58:22 INFO codec.CodecConfig: Compression: GZIP
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Parquet block size to 134217728
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Parquet page size to 1048576
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Dictionary is on
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Validation is off
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
17/10/07 00:58:22 INFO hadoop.ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
17/10/07 00:58:22 INFO parquet.CatalystWriteSupport: Initialized Parquet WriteSupport with Catalyst schema:
{
"type" : "struct",
"fields" : [ {
"name" : "name",
"type" : "string",
"nullable" : true,
"metadata" : { }
}, {
"name" : "pcode",
"type" : "string",
"nullable" : true,
"metadata" : { }
}, {
"name" : "pcoe",
"type" : "string",
"nullable" : true,
"metadata" : { }
} ]
}
and corresponding Parquet message type:
message spark_schema {
optional binary name (UTF8);
optional binary pcode (UTF8);
optional binary pcoe (UTF8);
}

17/10/07 00:58:22 INFO compress.CodecPool: Got brand-new compressor [.gz]
17/10/07 00:58:22 INFO hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 13
17/10/07 00:58:22 INFO hadoop.ColumnChunkPageWriteStore: written 68B for [name] BINARY: 1 values, 15B raw, 33B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:22 INFO hadoop.ColumnChunkPageWriteStore: written 47B for [pcode] BINARY: 1 values, 6B raw, 26B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:22 INFO hadoop.ColumnChunkPageWriteStore: written 47B for [pcoe] BINARY: 1 values, 6B raw, 26B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
17/10/07 00:58:22 INFO output.FileOutputCommitter: Saved output of task 'attempt_201710070058_0001_m_000000_0' to hdfs://localhost:8020/user/hive/warehouse/people/_temporary/0/task_201710070058_0001_m_000000
17/10/07 00:58:22 INFO mapred.SparkHadoopMapRedUtil: attempt_201710070058_0001_m_000000_0: Committed
17/10/07 00:58:22 INFO executor.Executor: Finished task 0.0 in stage 1.0 (TID 1). 2057 bytes result sent to driver
17/10/07 00:58:22 INFO scheduler.DAGScheduler: ResultStage 1 (saveAsTable at NativeMethodAccessorImpl.java:-2) finished in 2.797 s
17/10/07 00:58:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 2797 ms on localhost (1/1)
17/10/07 00:58:22 INFO scheduler.DAGScheduler: Job 1 finished: saveAsTable at NativeMethodAccessorImpl.java:-2, took 3.236619 s
17/10/07 00:58:22 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
17/10/07 00:58:23 INFO hadoop.ParquetFileReader: Initiating action with parallelism: 5
17/10/07 00:58:23 INFO datasources.DynamicPartitionWriterContainer: Job job_201710070058_0000 committed.
17/10/07 00:58:23 INFO parquet.ParquetRelation: Listing hdfs://localhost:8020/user/hive/warehouse/people on driver
17/10/07 00:58:23 INFO parquet.ParquetRelation: Listing hdfs://localhost:8020/user/hive/warehouse/people/age=19 on driver
17/10/07 00:58:23 INFO parquet.ParquetRelation: Listing hdfs://localhost:8020/user/hive/warehouse/people/age=30 on driver
17/10/07 00:58:23 INFO parquet.ParquetRelation: Listing hdfs://localhost:8020/user/hive/warehouse/people/age=46 on driver
17/10/07 00:58:23 INFO parquet.ParquetRelation: Listing hdfs://localhost:8020/user/hive/warehouse/people/age=__HIVE_DEFAULT_PARTITION__ on driver
17/10/07 00:58:23 INFO parquet.ParquetRelation: Listing hdfs://localhost:8020/user/hive/warehouse/people on driver
17/10/07 00:58:23 INFO parquet.ParquetRelation: Listing hdfs://localhost:8020/user/hive/warehouse/people/age=19 on driver
17/10/07 00:58:23 INFO parquet.ParquetRelation: Listing hdfs://localhost:8020/user/hive/warehouse/people/age=30 on driver
17/10/07 00:58:23 INFO parquet.ParquetRelation: Listing hdfs://localhost:8020/user/hive/warehouse/people/age=46 on driver
17/10/07 00:58:23 INFO parquet.ParquetRelation: Listing hdfs://localhost:8020/user/hive/warehouse/people/age=__HIVE_DEFAULT_PARTITION__ on driver
17/10/07 00:58:24 WARN hive.HiveContext$$anon$2: Persisting partitioned data source relation `people` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive. Input path(s): 
hdfs://localhost:8020/user/hive/warehouse/people
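
The final WARN means the metadata of this partitioned data source table is stored in a Spark SQL specific format, so Hive itself may not be able to query the data even though the table is visible in the metastore (as the Hive session below confirms). From Spark, the underlying Parquet files can also be read straight from the warehouse path, and partition discovery turns the age=... directories back into an age column. This is an optional check that was not part of the original session; the path is taken from the log output above:

directDF = sqlContext.read.parquet("hdfs://localhost:8020/user/hive/warehouse/people")
directDF.printSchema()   # name, pcode, pcoe as strings, plus the discovered partition column age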

[training@localhost ~]$ hive

hive> show tables like 'people';
OK
people
Time taken: 5.046 seconds, Fetched: 1 row(s)
hive>

Back in the pyspark shell, read the table back and display a few rows:

sqlContext = HiveContext(sc)
newPeopleDF = sqlContext.read.table("people")

newPeopleDF.limit(5).show()

+-------+-----+-----+----+
| name|pcode| pcoe| age|
+-------+-----+-----+----+
|Brayden|94304| null| 30|
| Diana| null| null| 46|
| Carla| null|10036| 19|
| Alice|94304| null|null|
|Etienne|94104| null|null|
+-------+-----+-----+----+
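
Because the table is partitioned by age, a filter on that column lets Spark prune the scan down to the matching age=... directory. A minimal sketch that was not run in the original session; based on the data above it should return only Brayden's row:

sqlContext.sql("SELECT name, pcode FROM people WHERE age = 30").show()
# equivalently, with the DataFrame API:
newPeopleDF.where(newPeopleDF.age == 30).show()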

As shown above, the DataFrame read from the JSON file was indeed written out as a Parquet-format table named people, and reading the table back through the HiveContext retrieved and displayed its data.

Note that the misspelled pcoe field in Carla's record becomes its own column, because JSON schema inference unions all field names it encounters, and the rows with no age value were written into the age=__HIVE_DEFAULT_PARTITION__ partition, which is why their age shows as null.
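
For reference, on Spark 2.x the same round trip is usually written with SparkSession instead of HiveContext. A minimal sketch under that assumption; it is not part of the original CDH 5.7 / Spark 1.6 session:

from pyspark.sql import SparkSession

# enableHiveSupport() is needed so saveAsTable/read.table go through the Hive metastore
spark = SparkSession.builder.appName("people-write").enableHiveSupport().getOrCreate()

peopleDF = spark.read.json("people.json")
peopleDF.write.format("parquet").mode("append").partitionBy("age").saveAsTable("people")

spark.read.table("people").limit(5).show()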
