In the previous article on Cluster-Singleton we looked at the programming support Akka provides for distributed applications: the message-driven computation model is particularly well suited to distributed programming, and we can build cluster-distributed programs with ordinary Actor programming, no special effort required. Cluster-Singleton guarantees that no matter what happens to individual cluster nodes, as long as any node remains online, computation continues safely; the pattern ensures that a single instance of some Actor runs safely and stably in the cluster environment. There is another scenario, though: many resource-hungry Actors need to run concurrently, and together they demand far more resources than a single server can provide. In that case we must distribute these Actors across multiple servers, or across a cluster built from multiple servers, and that is the problem the Cluster-Sharding pattern is designed to solve.

Let me share some of the goals that using Cluster-Sharding achieves, so we can examine together whether they include distributing Actors across cluster nodes:

First, I have an Actor whose name is a self-assigned code, constructed by Cluster-Sharding on some node in the cluster. Because this happens in a cluster environment, I do not know which node the Actor is on or what its concrete address is; I only need this code to communicate with it. If I have many such self-coded, resource-hungry Actors, I can use the shard number embedded in the code to have them constructed in other shards. Akka-Cluster can also rebalance shards across nodes as nodes join and leave the cluster, including rebuilding all the Actors of a node on other online nodes when that node drops out of the cluster for whatever reason. Seen this way, the Actor's self-assigned code is the core element of a Cluster-Sharding application. As usual, we demonstrate the use of Cluster-Sharding with an example. The Actor we want to shard is the Calculator discussed in the previous articles:

package clustersharding.entity

import akka.actor._
import akka.cluster._
import akka.persistence._
import scala.concurrent.duration._
import akka.cluster.sharding._

object Calculator {
  sealed trait Command
  case class Num(d: Double) extends Command
  case class Add(d: Double) extends Command
  case class Sub(d: Double) extends Command
  case class Mul(d: Double) extends Command
  case class Div(d: Double) extends Command
  case object ShowResult extends Command
  case object Disconnect extends Command //exit cluster

  sealed trait Event
  case class SetResult(d: Any) extends Event

  def getResult(res: Double, cmd: Command) = cmd match {
    case Num(x) => x
    case Add(x) => res + x
    case Sub(x) => res - x
    case Mul(x) => res * x
    case Div(x) =>
      val _ = res.toInt / x.toInt //yields ArithmeticException when dividing by 0.00
      res / x
    case _ => throw new ArithmeticException("Invalid Operation!")
  }

  case class State(result: Double = 0) {
    def updateState(evt: Event): State = evt match {
      case SetResult(n) => copy(result = n.asInstanceOf[Double])
    }
  }

  def props = Props(new Calculator)
}

class Calculator extends PersistentActor with ActorLogging {
  import Calculator._
  val cluster = Cluster(context.system)
  var state: State = State()

  override def persistenceId: String = self.path.parent.name + "-" + self.path.name

  override def receiveRecover: Receive = {
    case evt: Event => state = state.updateState(evt)
    case SnapshotOffer(_, st: State) => state = state.copy(result = st.result)
  }

  override def receiveCommand: Receive = {
    case Num(n) => persist(SetResult(getResult(state.result, Num(n))))(evt => state = state.updateState(evt))
    case Add(n) => persist(SetResult(getResult(state.result, Add(n))))(evt => state = state.updateState(evt))
    case Sub(n) => persist(SetResult(getResult(state.result, Sub(n))))(evt => state = state.updateState(evt))
    case Mul(n) => persist(SetResult(getResult(state.result, Mul(n))))(evt => state = state.updateState(evt))
    case Div(n) => persist(SetResult(getResult(state.result, Div(n))))(evt => state = state.updateState(evt))
    case ShowResult => log.info(s"Result on ${cluster.selfAddress.hostPort} is: ${state.result}")
    case Disconnect =>
      log.info(s"${cluster.selfAddress} is leaving cluster!!!")
      cluster.leave(cluster.selfAddress)
  }

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    log.info(s"Restarting calculator: ${reason.getMessage}")
    super.preRestart(reason, message)
  }
}

class CalcSupervisor extends Actor {
  def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
    case _: ArithmeticException => SupervisorStrategy.Resume
  }

  override def supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 5, withinTimeRange = 5 seconds) {
      decider.orElse(SupervisorStrategy.defaultDecider)
    }

  val calcActor = context.actorOf(Calculator.props, "calculator")

  override def receive: Receive = {
    case msg => calcActor.forward(msg)
  }
}

We can see that Calculator is an ordinary PersistentActor: its internal state is persisted and can be recovered when the Actor restarts. CalcSupervisor is Calculator's supervisor; we introduce it in order to install a custom SupervisorStrategy.

Calculator is the entity we intend to shard across the cluster. Shards of an Actor type are constructed in the cluster with Akka Cluster-Sharding's ClusterSharding.start method. We need to run this method on every node that will host shards in order to deploy them:

/**
* Register a named entity type by defining the [[akka.actor.Props]] of the entity actor and
* functions to extract entity and shard identifier from messages. The [[ShardRegion]] actor
* for this type can later be retrieved with the [[#shardRegion]] method.
*
* The default shard allocation strategy [[ShardCoordinator.LeastShardAllocationStrategy]]
* is used. [[akka.actor.PoisonPill]] is used as `handOffStopMessage`.
*
* Some settings can be configured as described in the `akka.cluster.sharding` section
* of the `reference.conf`.
*
* @param typeName the name of the entity type
* @param entityProps the `Props` of the entity actors that will be created by the `ShardRegion`
* @param settings configuration settings, see [[ClusterShardingSettings]]
* @param extractEntityId partial function to extract the entity id and the message to send to the
* entity from the incoming message, if the partial function does not match the message will
* be `unhandled`, i.e. posted as `Unhandled` messages on the event stream
* @param extractShardId function to determine the shard id for an incoming message, only messages
* that passed the `extractEntityId` will be used
* @return the actor ref of the [[ShardRegion]] that is to be responsible for the shard
*/
def start(
    typeName: String,
    entityProps: Props,
    settings: ClusterShardingSettings,
    extractEntityId: ShardRegion.ExtractEntityId,
    extractShardId: ShardRegion.ExtractShardId): ActorRef = {
  val allocationStrategy = new LeastShardAllocationStrategy(
    settings.tuningParameters.leastShardAllocationRebalanceThreshold,
    settings.tuningParameters.leastShardAllocationMaxSimultaneousRebalance)
  start(typeName, entityProps, settings, extractEntityId, extractShardId, allocationStrategy, PoisonPill)
}

start returns a ShardRegion, which is of type ActorRef. ShardRegion is a special Actor that manages the Actor instances, called entities, living in possibly multiple shards; those shards may be distributed over different cluster nodes, and the outside world communicates with the entities under a ShardRegion through that ShardRegion. From the entityProps parameter of start we can see that each shard allows only one kind of Actor. Concrete entity instances are constructed by another internal Actor, the shard; one shard can construct multiple entity instances. This multi-shard, multi-entity structure can be read off the two functions extractShardId and extractEntityId. We said earlier that the Actor's self-assigned code, the entity-id, is the core element of Cluster-Sharding. The entity-id also embeds a shard-id, so users can design the whole sharding layout, including the number of shards and entities under each ShardRegion, through the entity-id encoding scheme. When a ShardRegion receives an entity-id, it first extracts the shard-id; if that shard does not yet exist in the cluster, a new shard is constructed on one of the nodes according to the current load across cluster nodes. Then the entity-id is used to look up the entity within that shard; if it does not exist, a new entity instance is constructed. The whole shard-and-entity construction process is driven by the user-supplied functions extractShardId and extractEntityId; Cluster-Sharding uses these two functions to construct and use shards and entities exactly as the user requires. The encoding does not have to follow any particular order; it only needs to be unique. Below is an example encoding:

object CalculatorShard {
  import Calculator._

  case class CalcCommands(eid: String, msg: Command) //user should use it to talk to shardregion
  val shardName = "calcShard"
  val getEntityId: ShardRegion.ExtractEntityId = {
    case CalcCommands(id, msg) => (id, msg)
  }
  val getShardId: ShardRegion.ExtractShardId = {
    case CalcCommands(id, _) => id.head.toString
  }
  def entityProps = Props(new CalcSupervisor)
}

Users talk to the ShardRegion with CalcCommands, a wrapper message type designed specifically for communicating with the sharding system. Besides the Command messages that Calculator normally supports, the wrapper carries eid, the id of the target entity instance. The first character of the eid represents the shard-id, so we can directly designate the shard where the target entity lives, or pick a shard at random, e.g. Random.nextInt(9).toString. Since each shard holds only one type of Actor, distinct entity-ids mean that multiple instances of the same Actor type coexist, just like the Routers discussed earlier: all instances perform the same computation on different inputs. Typically users generate entity-ids with some algorithm, aiming at a balanced distribution of entities over the shards; Cluster-Sharding can then automatically redistribute shards at the cluster-node level according to the actual load of the cluster.
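As a sketch of such an id-generation approach (the names EntityIdGen, randomEntityId and shardOf are illustrative helpers, not part of the demo code): the first character picks one of nine shards at random, and a unique suffix distinguishes the entities within a shard.

```scala
import scala.util.Random

object EntityIdGen {
  // First character selects shard "0".."8" - the same rule getShardId uses.
  // The nanoTime suffix keeps ids unique; any unique suffix would do.
  def randomEntityId(): String =
    s"${Random.nextInt(9)}${System.nanoTime()}"

  // Mirrors CalculatorShard.getShardId: shard id = first character of the eid.
  def shardOf(eid: String): String = eid.head.toString
}
```

Because getShardId only looks at the first character, ids generated this way spread entities roughly evenly over shards 0 through 8.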

The following code demonstrates how to deploy shards on a cluster node:

package clustersharding.shard

import akka.persistence.journal.leveldb._
import akka.actor._
import akka.cluster.sharding._
import com.typesafe.config.ConfigFactory
import akka.util.Timeout
import scala.concurrent.duration._
import akka.pattern._
import clustersharding.entity.CalculatorShard

object CalcShards {
  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=${port}")
      .withFallback(ConfigFactory.load("sharding"))
    // Create an Akka system
    val system = ActorSystem("ShardingSystem", config)
    startupSharding(port, system)
  }

  def startupSharedJournal(system: ActorSystem, startStore: Boolean, path: ActorPath): Unit = {
    // Start the shared journal on one node (don't crash this SPOF)
    // This will not be needed with a distributed journal
    if (startStore)
      system.actorOf(Props[SharedLeveldbStore], "store")
    // register the shared journal
    import system.dispatcher
    implicit val timeout = Timeout(15.seconds)
    val f = (system.actorSelection(path) ? Identify(None))
    f.onSuccess {
      case ActorIdentity(_, Some(ref)) =>
        SharedLeveldbJournal.setStore(ref, system)
      case _ =>
        system.log.error("Shared journal not started at {}", path)
        system.terminate()
    }
    f.onFailure {
      case _ =>
        system.log.error("Lookup of shared journal at {} timed out", path)
        system.terminate()
    }
  }

  def startupSharding(port: Int, system: ActorSystem) = {
    startupSharedJournal(system, startStore = (port == 2551), path =
      ActorPath.fromString("akka.tcp://ShardingSystem@127.0.0.1:2551/user/store"))

    ClusterSharding(system).start(
      typeName = CalculatorShard.shardName,
      entityProps = CalculatorShard.entityProps,
      settings = ClusterShardingSettings(system),
      extractEntityId = CalculatorShard.getEntityId,
      extractShardId = CalculatorShard.getShardId
    )
  }
}

The actual deployment code lives in the startupSharding method. The following code demonstrates how to use the entities in the shards:

package clustersharding.demo

import akka.actor.ActorSystem
import akka.cluster.sharding._
import clustersharding.entity.CalculatorShard.CalcCommands
import clustersharding.entity._
import clustersharding.shard.CalcShards
import com.typesafe.config.ConfigFactory

object ClusterShardingDemo extends App {

  CalcShards.create(2551) //seed node; also hosts the shared journal store
  CalcShards.create(0)    //port 0 lets the node pick a random port
  CalcShards.create(0)
  CalcShards.create(0)

  Thread.sleep(1000)

  val shardingSystem = ActorSystem("ShardingSystem", ConfigFactory.load("sharding"))
  CalcShards.startupSharding(0, shardingSystem)

  Thread.sleep(1000)

  val calcRegion = ClusterSharding(shardingSystem).shardRegion(CalculatorShard.shardName)

  calcRegion ! CalcCommands("1012", Calculator.Num(13.0)) //shard 1, entity 1012
  calcRegion ! CalcCommands("1012", Calculator.Add(12.0))
  calcRegion ! CalcCommands("1012", Calculator.ShowResult) //shows address too
  calcRegion ! CalcCommands("1012", Calculator.Disconnect) //disengage cluster

  calcRegion ! CalcCommands("2012", Calculator.Num(10.0)) //shard 2, entity 2012
  calcRegion ! CalcCommands("2012", Calculator.Mul(3.0))
  calcRegion ! CalcCommands("2012", Calculator.Div(2.0))
  calcRegion ! CalcCommands("2012", Calculator.Div(0.0)) //divide by zero

  Thread.sleep(15000)

  calcRegion ! CalcCommands("1012", Calculator.ShowResult) //check if result restored on another node
  calcRegion ! CalcCommands("2012", Calculator.ShowResult)
}

The code above deliberately picks the shards and entity-ids, and includes taking a node out of the cluster. The results are as follows:

[INFO] [// ::49.414] [ShardingSystem-akka.actor.default-dispatcher-] [akka.tcp://ShardingSystem@127.0.0.1:50456/system/sharding/calcShard/1/1012/calculator] Result on ShardingSystem@127.0.0.1:50456 is: 25.0
[INFO] [// ::49.414] [ShardingSystem-akka.actor.default-dispatcher-] [akka.tcp://ShardingSystem@127.0.0.1:50456/system/sharding/calcShard/1/1012/calculator] akka.tcp://ShardingSystem@127.0.0.1:50456 is leaving cluster!!!
[WARN] [// ::49.431] [ShardingSystem-akka.actor.default-dispatcher-] [akka://ShardingSystem/system/sharding/calcShard/2/2012/calculator] / by zero
[INFO] [// ::01.320] [ShardingSystem-akka.actor.default-dispatcher-] [akka.tcp://ShardingSystem@127.0.0.1:50464/system/sharding/calcShard/2/2012/calculator] Result on ShardingSystem@127.0.0.1:50464 is: 15.0
[INFO] [// ::01.330] [ShardingSystem-akka.actor.default-dispatcher-] [akka.tcp://ShardingSystem@127.0.0.1:50457/system/sharding/calcShard/1/1012/calculator] Result on ShardingSystem@127.0.0.1:50457 is: 25.0

The results show that after node 50456 left the cluster, entity 1012 was migrated to node 50457 with its state preserved.
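The state survives the move because the entity's persistenceId is built from its actor path (parent name plus own name), not from any node address, so the entity recreated on node 50457 replays the same journal events. A minimal sketch of that derivation, where PersistenceIdDemo is a hypothetical helper mirroring the persistenceId override in Calculator:

```scala
object PersistenceIdDemo {
  // Mirrors `self.path.parent.name + "-" + self.path.name` in Calculator:
  // for path .../calcShard/1/1012/calculator the id is "1012-calculator",
  // identical on whichever node the entity is recreated.
  def persistenceIdOf(pathSegments: List[String]): String =
    pathSegments.takeRight(2).mkString("-")
}
```

With a shared (or distributed) journal reachable from every node, the recreated entity recovers its last persisted result, which is exactly what the log output above shows.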

Below is the full source code for this demonstration:

build.sbt

name := "cluster-sharding"

version := "1.0"

scalaVersion := "2.11.9"

resolvers += "Akka Snapshot Repository" at "http://repo.akka.io/snapshots/"

val akkaversion = "2.4.8"

libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-actor" % akkaversion,
"com.typesafe.akka" %% "akka-remote" % akkaversion,
"com.typesafe.akka" %% "akka-cluster" % akkaversion,
"com.typesafe.akka" %% "akka-cluster-tools" % akkaversion,
"com.typesafe.akka" %% "akka-cluster-sharding" % akkaversion,
"com.typesafe.akka" %% "akka-persistence" % "2.4.8",
"com.typesafe.akka" %% "akka-contrib" % akkaversion,
"org.iq80.leveldb" % "leveldb" % "0.7",
"org.fusesource.leveldbjni" % "leveldbjni-all" % "1.8")

resources/sharding.conf

akka.actor.warn-about-java-serializer-usage = off
akka.log-dead-letters-during-shutdown = off
akka.log-dead-letters = off

akka {
  loglevel = INFO
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://ShardingSystem@127.0.0.1:2551"]
    log-info = off
  }
  persistence {
    journal.plugin = "akka.persistence.journal.leveldb-shared"
    journal.leveldb-shared.store {
      # DO NOT USE 'native = off' IN PRODUCTION !!!
      native = off
      dir = "target/shared-journal"
    }
    snapshot-store.plugin = "akka.persistence.snapshot-store.local"
    snapshot-store.local.dir = "target/snapshots"
  }
}

Calculator.scala

package clustersharding.entity

import akka.actor._
import akka.cluster._
import akka.persistence._
import scala.concurrent.duration._
import akka.cluster.sharding._

object Calculator {
  sealed trait Command
  case class Num(d: Double) extends Command
  case class Add(d: Double) extends Command
  case class Sub(d: Double) extends Command
  case class Mul(d: Double) extends Command
  case class Div(d: Double) extends Command
  case object ShowResult extends Command
  case object Disconnect extends Command //exit cluster

  sealed trait Event
  case class SetResult(d: Any) extends Event

  def getResult(res: Double, cmd: Command) = cmd match {
    case Num(x) => x
    case Add(x) => res + x
    case Sub(x) => res - x
    case Mul(x) => res * x
    case Div(x) =>
      val _ = res.toInt / x.toInt //yields ArithmeticException when dividing by 0.00
      res / x
    case _ => throw new ArithmeticException("Invalid Operation!")
  }

  case class State(result: Double = 0) {
    def updateState(evt: Event): State = evt match {
      case SetResult(n) => copy(result = n.asInstanceOf[Double])
    }
  }

  def props = Props(new Calculator)
}

class Calculator extends PersistentActor with ActorLogging {
  import Calculator._
  val cluster = Cluster(context.system)
  var state: State = State()

  override def persistenceId: String = self.path.parent.name + "-" + self.path.name

  override def receiveRecover: Receive = {
    case evt: Event => state = state.updateState(evt)
    case SnapshotOffer(_, st: State) => state = state.copy(result = st.result)
  }

  override def receiveCommand: Receive = {
    case Num(n) => persist(SetResult(getResult(state.result, Num(n))))(evt => state = state.updateState(evt))
    case Add(n) => persist(SetResult(getResult(state.result, Add(n))))(evt => state = state.updateState(evt))
    case Sub(n) => persist(SetResult(getResult(state.result, Sub(n))))(evt => state = state.updateState(evt))
    case Mul(n) => persist(SetResult(getResult(state.result, Mul(n))))(evt => state = state.updateState(evt))
    case Div(n) => persist(SetResult(getResult(state.result, Div(n))))(evt => state = state.updateState(evt))
    case ShowResult => log.info(s"Result on ${cluster.selfAddress.hostPort} is: ${state.result}")
    case Disconnect =>
      log.info(s"${cluster.selfAddress} is leaving cluster!!!")
      cluster.leave(cluster.selfAddress)
  }

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    log.info(s"Restarting calculator: ${reason.getMessage}")
    super.preRestart(reason, message)
  }
}

class CalcSupervisor extends Actor {
  def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
    case _: ArithmeticException => SupervisorStrategy.Resume
  }

  override def supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 5, withinTimeRange = 5 seconds) {
      decider.orElse(SupervisorStrategy.defaultDecider)
    }

  val calcActor = context.actorOf(Calculator.props, "calculator")

  override def receive: Receive = {
    case msg => calcActor.forward(msg)
  }
}

object CalculatorShard {
  import Calculator._

  case class CalcCommands(eid: String, msg: Command) //user should use it to talk to shardregion
  val shardName = "calcShard"
  val getEntityId: ShardRegion.ExtractEntityId = {
    case CalcCommands(id, msg) => (id, msg)
  }
  val getShardId: ShardRegion.ExtractShardId = {
    case CalcCommands(id, _) => id.head.toString
  }
  def entityProps = Props(new CalcSupervisor)
}

CalcShard.scala

package clustersharding.shard

import akka.persistence.journal.leveldb._
import akka.actor._
import akka.cluster.sharding._
import com.typesafe.config.ConfigFactory
import akka.util.Timeout
import scala.concurrent.duration._
import akka.pattern._
import clustersharding.entity.CalculatorShard

object CalcShards {
  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=${port}")
      .withFallback(ConfigFactory.load("sharding"))
    // Create an Akka system
    val system = ActorSystem("ShardingSystem", config)
    startupSharding(port, system)
  }

  def startupSharedJournal(system: ActorSystem, startStore: Boolean, path: ActorPath): Unit = {
    // Start the shared journal on one node (don't crash this SPOF)
    // This will not be needed with a distributed journal
    if (startStore)
      system.actorOf(Props[SharedLeveldbStore], "store")
    // register the shared journal
    import system.dispatcher
    implicit val timeout = Timeout(15.seconds)
    val f = (system.actorSelection(path) ? Identify(None))
    f.onSuccess {
      case ActorIdentity(_, Some(ref)) =>
        SharedLeveldbJournal.setStore(ref, system)
      case _ =>
        system.log.error("Shared journal not started at {}", path)
        system.terminate()
    }
    f.onFailure {
      case _ =>
        system.log.error("Lookup of shared journal at {} timed out", path)
        system.terminate()
    }
  }

  def startupSharding(port: Int, system: ActorSystem) = {
    startupSharedJournal(system, startStore = (port == 2551), path =
      ActorPath.fromString("akka.tcp://ShardingSystem@127.0.0.1:2551/user/store"))

    ClusterSharding(system).start(
      typeName = CalculatorShard.shardName,
      entityProps = CalculatorShard.entityProps,
      settings = ClusterShardingSettings(system),
      extractEntityId = CalculatorShard.getEntityId,
      extractShardId = CalculatorShard.getShardId
    )
  }
}

ClusterShardingDemo.scala

package clustersharding.demo

import akka.actor.ActorSystem
import akka.cluster.sharding._
import clustersharding.entity.CalculatorShard.CalcCommands
import clustersharding.entity._
import clustersharding.shard.CalcShards
import com.typesafe.config.ConfigFactory

object ClusterShardingDemo extends App {

  CalcShards.create(2551) //seed node; also hosts the shared journal store
  CalcShards.create(0)    //port 0 lets the node pick a random port
  CalcShards.create(0)
  CalcShards.create(0)

  Thread.sleep(1000)

  val shardingSystem = ActorSystem("ShardingSystem", ConfigFactory.load("sharding"))
  CalcShards.startupSharding(0, shardingSystem)

  Thread.sleep(1000)

  val calcRegion = ClusterSharding(shardingSystem).shardRegion(CalculatorShard.shardName)

  calcRegion ! CalcCommands("1012", Calculator.Num(13.0)) //shard 1, entity 1012
  calcRegion ! CalcCommands("1012", Calculator.Add(12.0))
  calcRegion ! CalcCommands("1012", Calculator.ShowResult) //shows address too
  calcRegion ! CalcCommands("1012", Calculator.Disconnect) //disengage cluster

  calcRegion ! CalcCommands("2012", Calculator.Num(10.0)) //shard 2, entity 2012
  calcRegion ! CalcCommands("2012", Calculator.Mul(3.0))
  calcRegion ! CalcCommands("2012", Calculator.Div(2.0))
  calcRegion ! CalcCommands("2012", Calculator.Div(0.0)) //divide by zero

  Thread.sleep(15000)

  calcRegion ! CalcCommands("1012", Calculator.ShowResult) //check if result restored on another node
  calcRegion ! CalcCommands("2012", Calculator.ShowResult)
}
