2.2 RDD: computation (transform -> action)

  • 2.2.1 aggregate
x = sc.parallelize([2,3,4], 2)  # 2 partitions, so 2 tasks (a task cannot span partitions)
neutral_zero_value = (0,1) # sum: x+0 = x, product: 1*x = x
seqOp = (lambda aggregated, el: (aggregated[0] + el, aggregated[1] * el))
combOp = (lambda aggregated, el: (aggregated[0] + el[0], aggregated[1] * el[1]))
y = x.aggregate(neutral_zero_value, seqOp, combOp)  # computes (cumulative sum, cumulative product)
# seqOp: aggregates the elements within each partition
# combOp: merges the per-partition results
# neutral_zero_value: the initial value; it participates in every aggregation
print(x.collect())
print(y)
[2, 3, 4]
(9, 24)
  • 2.2.2 aggregateByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
zeroValue = [] # empty list is 'zero value' for append operation
mergeVal = (lambda aggregated, el: aggregated + [(el,el**2)])
mergeComb = (lambda agg1,agg2: agg1 + agg2 )
y = x.aggregateByKey(zeroValue,mergeVal,mergeComb)
print(x.collect())
print(y.collect())
[('B', 1), ('B', 2), ('A', 3), ('A', 4), ('A', 5)]
[('A', [(3, 9), (4, 16), (5, 25)]), ('B', [(1, 1), (2, 4)])]
  • 2.2.3 cache (persist)
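No example is given for cache; a minimal sketch, assuming an existing SparkContext `sc`:
# cache (sketch)
x = sc.parallelize(range(1000)).map(lambda i: i * i)
x.cache()           # persist with the default storage level (MEMORY_ONLY)
print(x.count())    # the first action materializes and caches the partitions
print(x.count())    # later actions reuse the cached data instead of recomputing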

  • 2.2.4 cartesian (Cartesian product)

# cartesian
x = sc.parallelize(['A','B'])
y = sc.parallelize(['C','D'])
z = x.cartesian(y)
print(x.collect())
print(y.collect())
print(z.collect())
['A', 'B']
['C', 'D']
[('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D')]
  • 2.2.5 checkpoint
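No example is given for checkpoint; a sketch, assuming `sc` exists and that `./checkpoint_dir` (a hypothetical path) is writable:
# checkpoint (sketch)
sc.setCheckpointDir("./checkpoint_dir")   # hypothetical directory for checkpoint files
x = sc.parallelize([1, 2, 3])
x.checkpoint()              # mark the RDD for checkpointing (truncates its lineage)
x.count()                   # checkpointing actually happens on the first action
print(x.isCheckpointed())   # True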
  • 2.2.6 coalesce (merge partitions)
# coalesce
x = sc.parallelize([1,2,3,4,5],2)
y = x.coalesce(numPartitions=1)
print(x.glom().collect())
print(y.glom().collect())
[[1, 2], [3, 4, 5]]
[[1, 2, 3, 4, 5]]
  • 2.2.7 cogroup (group values by key across RDDs)
# cogroup
x = sc.parallelize([('C',4),('B',(3,3)),('A',2),('A',(1,1))])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',(5,5))])
z = x.cogroup(y)
print(x.collect())
print(y.collect())
for key,val in list(z.collect()):
    print(key, [list(i) for i in val])
[('C', 4), ('B', (3, 3)), ('A', 2), ('A', (1, 1))]
[('A', 8), ('B', 7), ('A', 6), ('D', (5, 5))]
A [[2, (1, 1)], [8, 6]]
C [[4], []]
B [[(3, 3)], [7]]
D [[], [(5, 5)]]
  • 2.2.8 combineByKey
# combineByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
createCombiner = (lambda el: [(el,el**2)])
mergeVal = (lambda aggregated, el: aggregated + [(el,el**2)]) # append to aggregated
mergeComb = (lambda agg1,agg2: agg1 + agg2 ) # append agg1 with agg2
y = x.combineByKey(createCombiner,mergeVal,mergeComb)
print(x.collect())
print(y.collect())
[('B', 1), ('B', 2), ('A', 3), ('A', 4), ('A', 5)]
[('A', [(3, 9), (4, 16), (5, 25)]), ('B', [(1, 1), (2, 4)])]
  • 2.2.9 context
  • 2.2.10 countByKey
  • 2.2.11 countByValue
  • 2.2.12 distinct
  • 2.2.13 filter
# filter
x = sc.parallelize([1,2,3])
y = x.filter(lambda x: x%2 == 1) # filters out even elements
print(x.collect())
print(y.collect())
[1, 2, 3]
[1, 3]
  • 2.2.14 first
Note: returns the first element of this RDD.
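A minimal sketch, assuming `sc`:
# first (sketch)
x = sc.parallelize([5, 1, 3])
print(x.first())   # 5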
  • 2.2.15 flatMap (map then flatten)
# flatMap
x = sc.parallelize([1,2,3])
y = x.flatMap(lambda x: (x, 100*x, x**2))
print(x.collect())
print(y.collect())
[1, 2, 3]
[1, 100, 1, 2, 200, 4, 3, 300, 9]
  • 2.2.16 flatMapValues
x = sc.parallelize([("a", ["x", "y", "z"]), ("b", ["p", "r"])])
def f(x): return x
x.flatMapValues(f).collect()
[('a', 'x'), ('a', 'y'), ('a', 'z'), ('b', 'p'), ('b', 'r')]
  • 2.2.17 fold
# fold
x = sc.parallelize([1,2,3])
neutral_zero_value = 0 # 0 for sum, 1 for multiplication
y = x.fold(neutral_zero_value,lambda obj, accumulated: accumulated + obj) # computes cumulative sum
print(x.collect())
print(y)
[1, 2, 3]
6
  • 2.2.18 foldByKey
>>> rdd = sc.parallelize([("a", 1), ("b", 1), ("a", 1)])
>>> from operator import add  # operator: standard-library module of common arithmetic/operator functions
>>> sorted(rdd.foldByKey(0, add).collect())
[('a', 2), ('b', 1)]
  • 2.2.19 foreach
>>> def f(x): print(x)
>>> sc.parallelize([1, 2, 3, 4, 5]).foreach(f)  # applies f to every element of the RDD; returns nothing
  • 2.2.20 foreachPartition (apply a function to each partition of the RDD)
# foreachPartition
from __future__ import print_function
x = sc.parallelize([1,2,3], 5)

def f(partition):
    '''side effect: append the current RDD partition contents to a file'''
    f1 = open("./foreachPartitionExample.txt", 'a+')
    print([el for el in partition], file=f1)

open('./foreachPartitionExample.txt', 'w').close()  # first clear the file contents
y = x.foreachPartition(f)  # writes into foreachPartitionExample.txt
print(x.glom().collect())
print(y)  # foreachPartition returns 'None'
# print the contents of foreachPartitionExample.txt
with open("./foreachPartitionExample.txt", "r") as foreachExample:
    print(foreachExample.read())
[[], [1], [], [2], [3]]
None
[]
[]
[1]
[2]
[3]
  • 2.2.21 fullOuterJoin

    Note: performs a full outer join of self and other; a key present in only one RDD is paired with None.
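A sketch of fullOuterJoin, assuming `sc`:
# fullOuterJoin (sketch)
x = sc.parallelize([('C',4),('B',3),('A',2)])
y = sc.parallelize([('A',8),('B',7),('D',5)])
z = x.fullOuterJoin(y)
print(sorted(z.collect()))
# [('A', (2, 8)), ('B', (3, 7)), ('C', (4, None)), ('D', (None, 5))]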

  • 2.2.22 glom() (gather the elements of each partition into a list)
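A minimal glom sketch, assuming `sc`:
# glom (sketch)
x = sc.parallelize([1,2,3,4,5], 2)
print(x.glom().collect())   # [[1, 2], [3, 4, 5]] (one list per partition)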

  • 2.2.23 groupBy

# groupBy
x = sc.parallelize([1,2,3])
y = x.groupBy(lambda x: 'A' if (x%2 == 1) else 'B' )
print(x.collect())
# y is nested, this iterates through it
print([(j[0],[i for i in j[1]]) for j in y.collect()])
[1, 2, 3]
[('A', [1, 3]), ('B', [2])]
  • 2.2.24 groupByKey
# groupByKey
x = sc.parallelize([('B',5),('B',4),('A',3),('A',2),('A',1)])
y = x.groupByKey()
print(x.collect())
print([(j[0],[i for i in j[1]]) for j in y.collect()])
[('B', 5), ('B', 4), ('A', 3), ('A', 2), ('A', 1)]
[('A', [3, 2, 1]), ('B', [5, 4])]
  • 2.2.25 groupWith (alias for cogroup)
# groupWith
x = sc.parallelize([('C',4),('B',(3,3)),('A',2),('A',(1,1))])
y = sc.parallelize([('B',(7,7)),('A',6),('D',(5,5))])
z = sc.parallelize([('D',9),('B',(8,8))])
a = x.groupWith(y,z)
print(x.collect())
print(y.collect())
print(z.collect())
print("Result:")
for key,val in list(a.collect()):
    print(key, [list(i) for i in val])
[('C', 4), ('B', (3, 3)), ('A', 2), ('A', (1, 1))]
[('B', (7, 7)), ('A', 6), ('D', (5, 5))]
[('D', 9), ('B', (8, 8))]
Result:
D [[], [(5, 5)], [9]]
C [[4], [], []]
B [[(3, 3)], [(7, 7)], [(8, 8)]]
A [[2, (1, 1)], [6], []]
  • 2.2.26 histogram
rdd = sc.parallelize(range(51))
rdd.histogram(2)
([0, 25, 50], [25, 26])
rdd.histogram([0, 5, 25, 50])
([0, 5, 25, 50], [5, 20, 26])  # (bucket boundaries, element count per bucket)
rdd.histogram([0, 15, 30, 45, 60]) # evenly spaced buckets
([0, 15, 30, 45, 60], [15, 15, 15, 6])
rdd = sc.parallelize(["ab", "ac", "b", "bd", "ef"])
rdd.histogram(("a", "b", "c"))
(('a', 'b', 'c'), [2, 2])
  • 2.2.27 join
# join
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)])
z = x.join(y)
print(x.collect())
print(y.collect())
print(z.collect())
[('C', 4), ('B', 3), ('A', 2), ('A', 1)]
[('A', 8), ('B', 7), ('A', 6), ('D', 5)]
[('A', (2, 8)), ('A', (2, 6)), ('A', (1, 8)), ('A', (1, 6)), ('B', (3, 7))]
  • 2.2.28 leftOuterJoin
# leftOuterJoin
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)])
z = x.leftOuterJoin(y)
print(x.collect())
print(y.collect())
print(z.collect())
[('C', 4), ('B', 3), ('A', 2), ('A', 1)]
[('A', 8), ('B', 7), ('A', 6), ('D', 5)]
[('A', (2, 8)), ('A', (2, 6)), ('A', (1, 8)), ('A', (1, 6)), ('C', (4, None)), ('B', (3, 7))]
  • 2.2.29 rightOuterJoin
# rightOuterJoin
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)])
z = x.rightOuterJoin(y)
print(x.collect())
print(y.collect())
print(z.collect())
[('C', 4), ('B', 3), ('A', 2), ('A', 1)]
[('A', 8), ('B', 7), ('A', 6), ('D', 5)]
[('A', (2, 8)), ('A', (2, 6)), ('A', (1, 8)), ('A', (1, 6)), ('B', (3, 7)), ('D', (None, 5))]
  • 2.2.30 keyBy
# keyBy
x = sc.parallelize([1,2,3])
y = x.keyBy(lambda x: x**2)
print(x.collect())
print(y.collect())
[1, 2, 3]
[(1, 1), (4, 2), (9, 3)]
  • 2.2.31 lookup(key)
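lookup has no example above; a minimal sketch, assuming `sc`:
# lookup (sketch)
x = sc.parallelize([('A',1),('B',2),('A',3)])
print(x.lookup('A'))   # [1, 3]
print(x.lookup('C'))   # []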
  • 2.2.32 mapPartitions
# mapPartitions
x = sc.parallelize([1,2,3], 2)
def f(iterator): yield sum(iterator)
y = x.mapPartitions(f)
# glom() flattens elements on the same partition
print(x.glom().collect())
print(y.glom().collect())
[[1], [2, 3]]
[[1], [5]]
  • 2.2.33 mapPartitionsWithIndex
# mapPartitionsWithIndex
x = sc.parallelize([1,2,3], 2)
def f(partitionIndex, iterator): yield (partitionIndex,sum(iterator))
y = x.mapPartitionsWithIndex(f)
# glom() flattens elements on the same partition
print(x.glom().collect())
print(y.glom().collect())
[[1], [2, 3]]
[[(0, 1)], [(1, 5)]]
  • 2.2.34 map, max, min, mean, name
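A combined sketch for these five, assuming `sc` (setName is used here only so that name() has something to return):
# map / max / min / mean / name (sketch)
x = sc.parallelize([1, 2, 3])
print(x.map(lambda v: v * 10).collect())   # [10, 20, 30]
print(x.max())    # 3
print(x.min())    # 1
print(x.mean())   # 2.0
x.setName('myRDD')
print(x.name())   # myRDD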
  • 2.2.35 mapValues
>>> x = sc.parallelize([("a", ["apple", "banana", "lemon"]), ("b", ["grapes"])])
>>> def f(x): return len(x)
>>> x.mapValues(f).collect()
[('a', 3), ('b', 1)]
  • 2.2.36 partitionBy
# partitionBy
x = sc.parallelize([(0,1),(1,2),(2,3)],2)
y = x.partitionBy(numPartitions = 3, partitionFunc = lambda x: x)
# only the key is passed to partitionFunc
print(x.glom().collect())
print(y.glom().collect())
[[(0, 1)], [(1, 2), (2, 3)]]
[[(0, 1)], [(1, 2)], [(2, 3)]]
  • 2.2.37 persist
# persist the RDD (serialized, spilling to disk) to reduce memory pressure
from pyspark import StorageLevel
reqrdd.persist(storageLevel=StorageLevel.MEMORY_AND_DISK_SER)
Note: cache() persists this RDD with the default storage level (MEMORY_ONLY).
  • 2.2.38 pipe
# pipe
x = sc.parallelize(['A', 'Ba', 'C', 'AD'])
y = x.pipe('grep -i "A"') # calls out to grep, may fail under Windows
print(x.collect())
print(y.collect())
['A', 'Ba', 'C', 'AD']
['A', 'Ba', 'AD']
  • 2.2.40 randomSplit
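randomSplit has no example above; a sketch, assuming `sc`. The split is random, so the contents shown are illustrative only:
# randomSplit (sketch)
x = sc.parallelize(range(10))
train, test = x.randomSplit([0.7, 0.3], seed=17)
print(train.collect())   # e.g. [0, 2, 3, 4, 5, 6, 9]
print(test.collect())    # e.g. [1, 7, 8]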
  • 2.2.41 reduce
# reduce
x = sc.parallelize([1,2,3])
y = x.reduce(lambda obj, accumulated: obj + accumulated) # computes a cumulative sum
print(x.collect())
print(y)
[1, 2, 3]
6
  • 2.2.42 reduceByKey
# reduceByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
y = x.reduceByKey(lambda agg, obj: agg + obj)
print(x.collect())
print(y.collect())
[('B', 1), ('B', 2), ('A', 3), ('A', 4), ('A', 5)]
[('A', 12), ('B', 3)]
  • 2.2.43 reduceByKeyLocally
# reduceByKeyLocally
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
y = x.reduceByKeyLocally(lambda agg, obj: agg + obj)
print(x.collect())
print(y)
[('B', 1), ('B', 2), ('A', 3), ('A', 4), ('A', 5)]
{'A': 12, 'B': 3}
Note: this also performs the merging locally on each mapper before sending results to a reducer, similar to a “combiner” in MapReduce.
  • 2.2.44 repartition
# repartition
x = sc.parallelize([1,2,3,4,5],2)
y = x.repartition(numPartitions=3)
print(x.glom().collect())
print(y.glom().collect())
[[1, 2], [3, 4, 5]]
[[], [1, 2, 3, 4], [5]]
  • 2.2.45 repartitionAndSortWithinPartitions
rdd = sc.parallelize([(0, 5), (3, 8), (2, 6), (0, 8), (3, 8), (1, 3)])
rdd2 = rdd.repartitionAndSortWithinPartitions(2, lambda x: x % 2, True)
rdd2.glom().collect()
[[(0, 5), (0, 8), (2, 6)], [(1, 3), (3, 8), (3, 8)]]
  • 2.2.46 sample
# sample
x = sc.parallelize(range(7))
# call 'sample' 5 times
ylist = [x.sample(withReplacement=False, fraction=0.5) for i in range(5)]
print('x = ' + str(x.collect()))
for cnt,y in zip(range(len(ylist)), ylist):
    print('sample:' + str(cnt) + ' y = ' + str(y.collect()))
x = [0, 1, 2, 3, 4, 5, 6]
sample:0 y = [0, 2, 5, 6]
sample:1 y = [2, 6]
sample:2 y = [0, 4, 5, 6]
sample:3 y = [0, 2, 6]
sample:4 y = [0, 3, 4]
  • 2.2.47 sampleByKey
# sampleByKey
x = sc.parallelize([('A',1),('B',2),('C',3),('B',4),('A',5)])
y = x.sampleByKey(withReplacement=False, fractions={'A':0.5, 'B':1, 'C':0.2})
print(x.collect())
print(y.collect())
[('A', 1), ('B', 2), ('C', 3), ('B', 4), ('A', 5)]
[('B', 2), ('C', 3), ('B', 4)]
  • 2.2.48 sampleStdev
# sampleStdev
x = sc.parallelize([1,3,2])
y = x.sampleStdev() # divides by N-1
print(x.collect())
print(y)
[1, 3, 2]
1.0
  • 2.2.49 sampleVariance
# sampleVariance
x = sc.parallelize([1,3,2])
y = x.sampleVariance() # divides by N-1
print(x.collect())
print(y)
[1, 3, 2]
1.0
  • 2.2.50 sortByKey
# sortByKey
x = sc.parallelize([('B',1),('A',2),('C',3)])
y = x.sortByKey()
print(x.collect())
print(y.collect())
[('B', 1), ('A', 2), ('C', 3)]
[('A', 2), ('B', 1), ('C', 3)]
  • 2.2.51 sortBy
tmp = [('a', 1), ('b', 2), ('1', 3), ('d', 4), ('2', 5)]
sc.parallelize(tmp).sortBy(lambda x: x[0]).collect()
[('1', 3), ('2', 5), ('a', 1), ('b', 2), ('d', 4)]
sc.parallelize(tmp).sortBy(lambda x: x[1]).collect()
[('a', 1), ('b', 2), ('1', 3), ('d', 4), ('2', 5)]
  • 2.2.52 stats, stdev, sum
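A combined sketch, assuming `sc`; note that stdev() divides by N (population stdev), unlike sampleStdev():
# stats / stdev / sum (sketch)
x = sc.parallelize([1, 2, 3, 4])
print(x.stats())   # count, mean, stdev, max, min in one pass
print(x.stdev())   # population stdev, about 1.118
print(x.sum())     # 10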
  • 2.2.53 subtract
x = sc.parallelize([("a", 1), ("b", 4), ("b", 5), ("a", 3)])
y = sc.parallelize([("a", 3), ("c", None)])
sorted(x.subtract(y).collect())
[('a', 1), ('b', 4), ('b', 5)]
  • 2.2.54 subtractByKey
x = sc.parallelize([("a", 1), ("b", 4), ("b", 5), ("a", 2)])
y = sc.parallelize([("a", 3), ("c", None)])
sorted(x.subtractByKey(y).collect())
[('b', 4), ('b', 5)]
  • 2.2.55 take
# take
x = sc.parallelize([1,3,1,2,3])
y = x.take(num = 3)
print(x.collect())
print(y)
[1, 3, 1, 2, 3]
[1, 3, 1]
  • 2.2.56 takeOrdered
# takeOrdered
x = sc.parallelize([1,3,1,2,3])
y = x.takeOrdered(num = 3)
print(x.collect())
print(y)
[1, 3, 1, 2, 3]
[1, 1, 2]
  • 2.2.57 takeSample
# takeSample
x = sc.parallelize(range(7))
# call 'takeSample' 5 times
ylist = [x.takeSample(withReplacement=False, num=3) for i in range(5)]
print('x = ' + str(x.collect()))
for cnt,y in zip(range(len(ylist)), ylist):
    print('sample:' + str(cnt) + ' y = ' + str(y))  # no collect on y
x = [0, 1, 2, 3, 4, 5, 6]
sample:0 y = [0, 2, 6]
sample:1 y = [6, 4, 2]
sample:2 y = [2, 0, 4]
sample:3 y = [5, 4, 1]
sample:4 y = [3, 1, 4]
  • 2.2.58 treeAggregate
add = lambda x, y: x + y
rdd = sc.parallelize([-5, -4, -3, -2, -1, 1, 2, 3, 4], 10)
rdd.treeAggregate(0, add, add)
-5
rdd.treeAggregate(0, add, add, 1)
-5
rdd.treeAggregate(0, add, add, 2)
-5
rdd.treeAggregate(0, add, add, 5)
-5
rdd.treeAggregate(0, add, add, 10)
-5
  • 2.2.59 treeReduce
add = lambda x, y: x + y
rdd = sc.parallelize([-5, -4, -3, -2, -1, 1, 2, 3, 4], 10)
rdd.treeReduce(add)
-5
rdd.treeReduce(add, 1)
-5
rdd.treeReduce(add, 2)
-5
rdd.treeReduce(add, 5)
-5
rdd.treeReduce(add, 10)
-5
  • 2.2.60 union, unpersist, values, variance
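A combined sketch, assuming `sc`; variance() divides by N (population variance):
# union / unpersist / values / variance (sketch)
x = sc.parallelize([('A',1),('B',2)])
y = sc.parallelize([('C',3)])
print(x.union(y).collect())    # [('A', 1), ('B', 2), ('C', 3)]
print(x.values().collect())    # [1, 2]
print(sc.parallelize([1,2,3]).variance())   # about 0.667
x.cache()
x.unpersist()   # drop the persisted blocks from memory/disk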
  • 2.2.61 zip
# zip
x = sc.parallelize(['B','A','A'])
# zip expects x and y to have same #partitions and #elements/partition
y = x.map(lambda x: ord(x))
z = x.zip(y)
print(x.collect())
print(y.collect())
print(z.collect())
['B', 'A', 'A']
[66, 65, 65]
[('B', 66), ('A', 65), ('A', 65)]
  • 2.2.62 zipWithIndex
# zipWithIndex
x = sc.parallelize(['B','A','A'],2)
y = x.zipWithIndex()
print(x.glom().collect())
print(y.collect())
[['B'], ['A', 'A']]
[('B', 0), ('A', 1), ('A', 2)]
  • 2.2.63 zipWithUniqueId
# zipWithUniqueId
x = sc.parallelize(['B','A','A'],2)
y = x.zipWithUniqueId()
print(x.glom().collect())
print(y.collect())
[['B'], ['A', 'A']]
[('B', 0), ('A', 1), ('A', 3)]
