In the previous article on Cluster-Singleton we saw the programming support Akka provides for distributed applications: the message-driven computation model is particularly well suited to distributed programming, and without any special effort — just by writing ordinary Actors — we can build clustered, distributed programs. Cluster-Singleton guarantees that no matter what happens to individual cluster nodes, computation continues safely as long as at least one node remains online; the pattern ensures that a single instance of a given Actor runs safely and stably in a cluster environment. There is another scenario, however: many resource-hungry Actors need to run at the same time, and together they demand far more resources than one server can supply. In that case we must distribute these Actors over multiple servers — that is, over a cluster — and this is the problem the Cluster-Sharding pattern is designed to solve.

Let me share some of the goals one achieves by using Cluster-Sharding, and together we can examine whether distributing Actors across cluster nodes is among them:

First, I have an Actor whose name is a self-assigned code, constructed by Cluster-Sharding on some node in the cluster. Because it lives in a cluster, I do not know which node it is on or what its concrete address is; this code alone is enough to communicate with it. If I have many such resource-consuming Actors, each with its own code, I can use the shard number embedded in each code to have them constructed in different shards. Akka-Cluster can also rebalance shards across nodes as nodes join or leave the cluster, including rebuilding all the Actors of a node on other live nodes when that node drops out of the cluster. Seen this way, the Actor's self-assigned code is the core element of a Cluster-Sharding application. As usual, we demonstrate with an example. The Actor we want to shard is the Calculator from the previous discussions:

package clustersharding.entity

import akka.actor._
import akka.cluster._
import akka.persistence._
import scala.concurrent.duration._
import akka.cluster.sharding._

object Calculator {
  sealed trait Command
  case class Num(d: Double) extends Command
  case class Add(d: Double) extends Command
  case class Sub(d: Double) extends Command
  case class Mul(d: Double) extends Command
  case class Div(d: Double) extends Command
  case object ShowResult extends Command
  case object Disconnect extends Command   //exit cluster

  sealed trait Event
  case class SetResult(d: Any) extends Event

  def getResult(res: Double, cmd: Command): Double = cmd match {
    case Num(x) => x
    case Add(x) => res + x
    case Sub(x) => res - x
    case Mul(x) => res * x
    case Div(x) =>
      val _ = res.toInt / x.toInt   //throws ArithmeticException when dividing by 0.00
      res / x
    case _ => throw new ArithmeticException("Invalid Operation!")
  }

  case class State(result: Double) {
    def updateState(evt: Event): State = evt match {
      case SetResult(n) => copy(result = n.asInstanceOf[Double])
    }
  }

  def props = Props(new Calculator)
}

class Calculator extends PersistentActor with ActorLogging {
  import Calculator._

  val cluster = Cluster(context.system)
  var state: State = State(0)

  override def persistenceId: String = self.path.parent.name + "-" + self.path.name

  override def receiveRecover: Receive = {
    case evt: Event => state = state.updateState(evt)
    case SnapshotOffer(_, st: State) => state = state.copy(result = st.result)
  }

  override def receiveCommand: Receive = {
    case Num(n) => persist(SetResult(getResult(state.result, Num(n))))(evt => state = state.updateState(evt))
    case Add(n) => persist(SetResult(getResult(state.result, Add(n))))(evt => state = state.updateState(evt))
    case Sub(n) => persist(SetResult(getResult(state.result, Sub(n))))(evt => state = state.updateState(evt))
    case Mul(n) => persist(SetResult(getResult(state.result, Mul(n))))(evt => state = state.updateState(evt))
    case Div(n) => persist(SetResult(getResult(state.result, Div(n))))(evt => state = state.updateState(evt))
    case ShowResult => log.info(s"Result on ${cluster.selfAddress.hostPort} is: ${state.result}")
    case Disconnect =>
      log.info(s"${cluster.selfAddress} is leaving cluster!!!")
      cluster.leave(cluster.selfAddress)
  }

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    log.info(s"Restarting calculator: ${reason.getMessage}")
    super.preRestart(reason, message)
  }
}

class CalcSupervisor extends Actor {
  def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
    case _: ArithmeticException => SupervisorStrategy.Resume
  }

  override def supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 5, withinTimeRange = 5.seconds) {
      decider.orElse(SupervisorStrategy.defaultDecider)
    }

  val calcActor = context.actorOf(Calculator.props, "calculator")

  override def receive: Receive = {
    case msg => calcActor.forward(msg)
  }
}

As we can see, Calculator is an ordinary PersistentActor: its internal state can be persisted and is restored when the Actor restarts. CalcSupervisor is Calculator's supervisor; we introduce it in order to install a custom SupervisorStrategy.
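The custom strategy above works by composing partial functions: `decider` handles ArithmeticException, and everything else falls through to SupervisorStrategy.defaultDecider. The composition mechanism is plain Scala PartialFunction chaining; here is a minimal, Akka-free sketch (the Directive stand-ins are ours, not Akka's) of how `orElse` resolves a thrown exception:

```scala
// Minimal, Akka-free sketch of decider composition; Directive stand-ins are ours.
object DeciderDemo {
  sealed trait Directive
  case object Resume  extends Directive
  case object Restart extends Directive

  // mirrors CalcSupervisor's decider: only ArithmeticException resumes
  val decider: PartialFunction[Throwable, Directive] = {
    case _: ArithmeticException => Resume
  }
  // stand-in for SupervisorStrategy.defaultDecider: restart on any other exception
  val defaultDecider: PartialFunction[Throwable, Directive] = {
    case _: Exception => Restart
  }
  // first match wins; unmatched throwables fall through to the default
  val combined: PartialFunction[Throwable, Directive] = decider.orElse(defaultDecider)
}

object DeciderDemoApp extends App {
  import DeciderDemo._
  println(combined(new ArithmeticException("/ by zero"))) // Resume
  println(combined(new IllegalStateException("boom")))    // Restart
}
```

This is why the order of composition matters: had we written `defaultDecider.orElse(decider)`, the ArithmeticException case would never be reached.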

Calculator is the entity we intend to shard across the cluster. A sharded Actor type is set up in the cluster with the ClusterSharding.start method of Akka's Cluster-Sharding. We need to run this method on every node that will host shards:

/**
* Register a named entity type by defining the [[akka.actor.Props]] of the entity actor and
* functions to extract entity and shard identifier from messages. The [[ShardRegion]] actor
* for this type can later be retrieved with the [[#shardRegion]] method.
*
* The default shard allocation strategy [[ShardCoordinator.LeastShardAllocationStrategy]]
* is used. [[akka.actor.PoisonPill]] is used as `handOffStopMessage`.
*
* Some settings can be configured as described in the `akka.cluster.sharding` section
* of the `reference.conf`.
*
* @param typeName the name of the entity type
* @param entityProps the `Props` of the entity actors that will be created by the `ShardRegion`
* @param settings configuration settings, see [[ClusterShardingSettings]]
* @param extractEntityId partial function to extract the entity id and the message to send to the
* entity from the incoming message, if the partial function does not match the message will
* be `unhandled`, i.e. posted as `Unhandled` messages on the event stream
* @param extractShardId function to determine the shard id for an incoming message, only messages
* that passed the `extractEntityId` will be used
* @return the actor ref of the [[ShardRegion]] that is to be responsible for the shard
*/
def start(
  typeName: String,
  entityProps: Props,
  settings: ClusterShardingSettings,
  extractEntityId: ShardRegion.ExtractEntityId,
  extractShardId: ShardRegion.ExtractShardId): ActorRef = {

  val allocationStrategy = new LeastShardAllocationStrategy(
    settings.tuningParameters.leastShardAllocationRebalanceThreshold,
    settings.tuningParameters.leastShardAllocationMaxSimultaneousRebalance)
  start(typeName, entityProps, settings, extractEntityId, extractShardId, allocationStrategy, PoisonPill)
}

start returns a ShardRegion, which is an ActorRef. The ShardRegion is a special Actor that manages the Actor instances, called entities, inside possibly many shards. These shards may be spread over different cluster nodes, and the outside world communicates with the entities through the ShardRegion. From the entityProps parameter of start we can see that each region hosts only one kind of Actor. Concrete entity instances are constructed by another internal Actor, the shard; one shard can construct many entity instances. The multi-shard, multi-entity structure shows up in the two functions extractShardId and extractEntityId. As noted above, the Actor's self-assigned code — the entity-id — is the core element of Cluster-Sharding. The entity-id also embeds a shard-id, so through the entity-id encoding rules the user designs the whole sharding layout, including the number of shards and entities under each ShardRegion. When a ShardRegion receives an entity-id, it first extracts the shard-id from it; if that shard does not yet exist in the cluster, a new shard is created on a node chosen according to the current load across the cluster. The entity-id is then used to look up the entity within that shard, and a new entity instance is constructed if none exists. Both steps go through the user-supplied functions extractShardId and extractEntityId; this is how Cluster-Sharding builds and addresses shards and entities according to the user's scheme. The codes need not follow any particular order; they only have to be unique. Below is an example encoding:

object CalculatorShard {
  import Calculator._

  case class CalcCommands(eid: String, msg: Command)  //users talk to the ShardRegion with this type
  val shardName = "calcShard"
  val getEntityId: ShardRegion.ExtractEntityId = {
    case CalcCommands(id, msg) => (id, msg)
  }
  val getShardId: ShardRegion.ExtractShardId = {
    case CalcCommands(id, _) => id.head.toString
  }
  def entityProps = Props(new CalcSupervisor)
}

Users communicate with the ShardRegion through CalcCommands, a wrapper message type dedicated to talking to the sharding system: besides the Command messages that Calculator normally supports, it carries the id of the target entity instance, eid. The first character of this eid represents the shard-id, so we can either pin the target entity to a specific shard or pick a shard at random, e.g. Random.nextInt(9).toString. Since each region holds only one type of Actor, distinct entity-ids mean multiple instances of the same Actor type running concurrently — just like the Routers discussed earlier: all instances perform the same kind of computation on different inputs. Typically users generate entity-ids by some algorithm, aiming for an even spread of entities across shards; Cluster-Sharding then automatically adjusts how shards are placed on cluster nodes according to the actual load.
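The encoding rule itself is ordinary string manipulation. A minimal sketch, assuming the same first-character-is-shard-id layout as getShardId above (the helper names are ours, not part of the demo project):

```scala
import scala.util.Random

// Hypothetical helpers for the "first character = shard-id" entity-id scheme.
object EidScheme {
  // compose an entity-id from a shard digit and an entity number
  def makeEid(shard: Int, entity: Int): String = s"$shard$entity"
  // pick one of 9 shards at random, as suggested in the text
  def randomEid(entity: Int): String = makeEid(Random.nextInt(9), entity)
  // same extraction rule as getShardId: the first character is the shard-id
  def shardIdOf(eid: String): String = eid.head.toString
}

object EidSchemeApp extends App {
  import EidScheme._
  println(shardIdOf("1012"))         // "1": entity 1012 lives in shard 1
  println(shardIdOf(makeEid(2, 12))) // "2"
  println(randomEid(12))             // same entity number, randomly chosen shard
}
```

Any scheme works as long as ids stay unique; the extraction functions are the single place where the layout is interpreted.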

The following code demonstrates how to deploy shards on a cluster node:

package clustersharding.shard

import akka.persistence.journal.leveldb._
import akka.actor._
import akka.cluster.sharding._
import com.typesafe.config.ConfigFactory
import akka.util.Timeout
import scala.concurrent.duration._
import akka.pattern._
import clustersharding.entity.CalculatorShard

object CalcShards {
  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=${port}")
      .withFallback(ConfigFactory.load("sharding"))
    // Create an Akka system
    val system = ActorSystem("ShardingSystem", config)
    startupSharding(port, system)
  }

  def startupSharedJournal(system: ActorSystem, startStore: Boolean, path: ActorPath): Unit = {
    // Start the shared journal on one node (don't crash this SPOF)
    // This will not be needed with a distributed journal
    if (startStore)
      system.actorOf(Props[SharedLeveldbStore], "store")
    // register the shared journal
    import system.dispatcher
    implicit val timeout = Timeout(15.seconds)
    val f = (system.actorSelection(path) ? Identify(None))
    f.onSuccess {
      case ActorIdentity(_, Some(ref)) =>
        SharedLeveldbJournal.setStore(ref, system)
      case _ =>
        system.log.error("Shared journal not started at {}", path)
        system.terminate()
    }
    f.onFailure {
      case _ =>
        system.log.error("Lookup of shared journal at {} timed out", path)
        system.terminate()
    }
  }

  def startupSharding(port: Int, system: ActorSystem) = {
    startupSharedJournal(system, startStore = (port == 2551), path =
      ActorPath.fromString("akka.tcp://ShardingSystem@127.0.0.1:2551/user/store"))

    ClusterSharding(system).start(
      typeName = CalculatorShard.shardName,
      entityProps = CalculatorShard.entityProps,
      settings = ClusterShardingSettings(system),
      extractEntityId = CalculatorShard.getEntityId,
      extractShardId = CalculatorShard.getShardId
    )
  }
}

The actual deployment happens in the startupSharding method. The following code demonstrates how to use the entities inside the shards:

package clustersharding.demo

import akka.actor.ActorSystem
import akka.cluster.sharding._
import clustersharding.entity.CalculatorShard.CalcCommands
import clustersharding.entity._
import clustersharding.shard.CalcShards
import com.typesafe.config.ConfigFactory

object ClusterShardingDemo extends App {

  CalcShards.create(2551)   //seed node, also hosts the shared journal
  CalcShards.create(0)
  CalcShards.create(0)
  CalcShards.create(0)

  Thread.sleep(1000)

  val shardingSystem = ActorSystem("ShardingSystem", ConfigFactory.load("sharding"))
  CalcShards.startupSharding(0, shardingSystem)

  Thread.sleep(1000)

  val calcRegion = ClusterSharding(shardingSystem).shardRegion(CalculatorShard.shardName)

  calcRegion ! CalcCommands("1012", Calculator.Num(13.0))   //shard 1, entity 1012
  calcRegion ! CalcCommands("1012", Calculator.Add(12.0))
  calcRegion ! CalcCommands("1012", Calculator.ShowResult)  //shows address too
  calcRegion ! CalcCommands("1012", Calculator.Disconnect)  //disengage cluster

  calcRegion ! CalcCommands("2012", Calculator.Num(10.0))   //shard 2, entity 2012
  calcRegion ! CalcCommands("2012", Calculator.Mul(3.0))
  calcRegion ! CalcCommands("2012", Calculator.Div(2.0))
  calcRegion ! CalcCommands("2012", Calculator.Div(0.0))    //divide by zero

  Thread.sleep(10000)

  calcRegion ! CalcCommands("1012", Calculator.ShowResult)  //check whether the state was restored on another node
  calcRegion ! CalcCommands("2012", Calculator.ShowResult)
}

The code above deliberately picks the shards and entity-ids, and it includes taking one node out of the cluster. The run produces the following output:

[INFO] [// ::49.414] [ShardingSystem-akka.actor.default-dispatcher-] [akka.tcp://ShardingSystem@127.0.0.1:50456/system/sharding/calcShard/1/1012/calculator] Result on ShardingSystem@127.0.0.1:50456 is: 25.0
[INFO] [// ::49.414] [ShardingSystem-akka.actor.default-dispatcher-] [akka.tcp://ShardingSystem@127.0.0.1:50456/system/sharding/calcShard/1/1012/calculator] akka.tcp://ShardingSystem@127.0.0.1:50456 is leaving cluster!!!
[WARN] [// ::49.431] [ShardingSystem-akka.actor.default-dispatcher-] [akka://ShardingSystem/system/sharding/calcShard/2/2012/calculator] / by zero
[INFO] [// ::01.320] [ShardingSystem-akka.actor.default-dispatcher-] [akka.tcp://ShardingSystem@127.0.0.1:50464/system/sharding/calcShard/2/2012/calculator] Result on ShardingSystem@127.0.0.1:50464 is: 15.0
[INFO] [// ::01.330] [ShardingSystem-akka.actor.default-dispatcher-] [akka.tcp://ShardingSystem@127.0.0.1:50457/system/sharding/calcShard/1/1012/calculator] Result on ShardingSystem@127.0.0.1:50457 is: 25.0

The output shows that after node 50456 left the cluster, entity 1012 was moved to node 50457 with its state preserved.
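The state survives the move because Calculator's persistenceId is built from path names, not from any node address: under sharding the entity path always ends in .../calcShard/&lt;shard-id&gt;/&lt;entity-id&gt;/calculator, so the same journal is replayed wherever the entity is rebuilt. A small sketch of that computation (path fragments taken from the log above):

```scala
object PersistenceIdDemo {
  // mirrors Calculator: self.path.parent.name + "-" + self.path.name
  def persistenceId(parentName: String, selfName: String): String =
    parentName + "-" + selfName
}

object PersistenceIdDemoApp extends App {
  import PersistenceIdDemo._
  // on node 50456 and on node 50457 the path ends identically: .../1012/calculator
  println(persistenceId("1012", "calculator")) // 1012-calculator, on either node
}
```

This is also why the journal must be shared (or distributed): every node needs access to the same event store for recovery to work after a rebalance.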

Below is the complete source code for this demonstration:

build.sbt

name := "cluster-sharding"

version := "1.0"

scalaVersion := "2.11.9"

resolvers += "Akka Snapshot Repository" at "http://repo.akka.io/snapshots/"

val akkaversion = "2.4.8"

libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-actor" % akkaversion,
"com.typesafe.akka" %% "akka-remote" % akkaversion,
"com.typesafe.akka" %% "akka-cluster" % akkaversion,
"com.typesafe.akka" %% "akka-cluster-tools" % akkaversion,
"com.typesafe.akka" %% "akka-cluster-sharding" % akkaversion,
"com.typesafe.akka" %% "akka-persistence" % "2.4.8",
"com.typesafe.akka" %% "akka-contrib" % akkaversion,
"org.iq80.leveldb" % "leveldb" % "0.7",
"org.fusesource.leveldbjni" % "leveldbjni-all" % "1.8")

resources/sharding.conf

akka.actor.warn-about-java-serializer-usage = off
akka.log-dead-letters-during-shutdown = off
akka.log-dead-letters = off

akka {
  loglevel = INFO
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://ShardingSystem@127.0.0.1:2551"]
    log-info = off
  }
  persistence {
    journal.plugin = "akka.persistence.journal.leveldb-shared"
    journal.leveldb-shared.store {
      # DO NOT USE 'native = off' IN PRODUCTION !!!
      native = off
      dir = "target/shared-journal"
    }
    snapshot-store.plugin = "akka.persistence.snapshot-store.local"
    snapshot-store.local.dir = "target/snapshots"
  }
}

Calculator.scala

package clustersharding.entity

import akka.actor._
import akka.cluster._
import akka.persistence._
import scala.concurrent.duration._
import akka.cluster.sharding._

object Calculator {
  sealed trait Command
  case class Num(d: Double) extends Command
  case class Add(d: Double) extends Command
  case class Sub(d: Double) extends Command
  case class Mul(d: Double) extends Command
  case class Div(d: Double) extends Command
  case object ShowResult extends Command
  case object Disconnect extends Command   //exit cluster

  sealed trait Event
  case class SetResult(d: Any) extends Event

  def getResult(res: Double, cmd: Command): Double = cmd match {
    case Num(x) => x
    case Add(x) => res + x
    case Sub(x) => res - x
    case Mul(x) => res * x
    case Div(x) =>
      val _ = res.toInt / x.toInt   //throws ArithmeticException when dividing by 0.00
      res / x
    case _ => throw new ArithmeticException("Invalid Operation!")
  }

  case class State(result: Double) {
    def updateState(evt: Event): State = evt match {
      case SetResult(n) => copy(result = n.asInstanceOf[Double])
    }
  }

  def props = Props(new Calculator)
}

class Calculator extends PersistentActor with ActorLogging {
  import Calculator._

  val cluster = Cluster(context.system)
  var state: State = State(0)

  override def persistenceId: String = self.path.parent.name + "-" + self.path.name

  override def receiveRecover: Receive = {
    case evt: Event => state = state.updateState(evt)
    case SnapshotOffer(_, st: State) => state = state.copy(result = st.result)
  }

  override def receiveCommand: Receive = {
    case Num(n) => persist(SetResult(getResult(state.result, Num(n))))(evt => state = state.updateState(evt))
    case Add(n) => persist(SetResult(getResult(state.result, Add(n))))(evt => state = state.updateState(evt))
    case Sub(n) => persist(SetResult(getResult(state.result, Sub(n))))(evt => state = state.updateState(evt))
    case Mul(n) => persist(SetResult(getResult(state.result, Mul(n))))(evt => state = state.updateState(evt))
    case Div(n) => persist(SetResult(getResult(state.result, Div(n))))(evt => state = state.updateState(evt))
    case ShowResult => log.info(s"Result on ${cluster.selfAddress.hostPort} is: ${state.result}")
    case Disconnect =>
      log.info(s"${cluster.selfAddress} is leaving cluster!!!")
      cluster.leave(cluster.selfAddress)
  }

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    log.info(s"Restarting calculator: ${reason.getMessage}")
    super.preRestart(reason, message)
  }
}

class CalcSupervisor extends Actor {
  def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
    case _: ArithmeticException => SupervisorStrategy.Resume
  }

  override def supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 5, withinTimeRange = 5.seconds) {
      decider.orElse(SupervisorStrategy.defaultDecider)
    }

  val calcActor = context.actorOf(Calculator.props, "calculator")

  override def receive: Receive = {
    case msg => calcActor.forward(msg)
  }
}

object CalculatorShard {
  import Calculator._

  case class CalcCommands(eid: String, msg: Command)  //users talk to the ShardRegion with this type
  val shardName = "calcShard"
  val getEntityId: ShardRegion.ExtractEntityId = {
    case CalcCommands(id, msg) => (id, msg)
  }
  val getShardId: ShardRegion.ExtractShardId = {
    case CalcCommands(id, _) => id.head.toString
  }
  def entityProps = Props(new CalcSupervisor)
}

CalcShard.scala

package clustersharding.shard

import akka.persistence.journal.leveldb._
import akka.actor._
import akka.cluster.sharding._
import com.typesafe.config.ConfigFactory
import akka.util.Timeout
import scala.concurrent.duration._
import akka.pattern._
import clustersharding.entity.CalculatorShard

object CalcShards {
  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=${port}")
      .withFallback(ConfigFactory.load("sharding"))
    // Create an Akka system
    val system = ActorSystem("ShardingSystem", config)
    startupSharding(port, system)
  }

  def startupSharedJournal(system: ActorSystem, startStore: Boolean, path: ActorPath): Unit = {
    // Start the shared journal on one node (don't crash this SPOF)
    // This will not be needed with a distributed journal
    if (startStore)
      system.actorOf(Props[SharedLeveldbStore], "store")
    // register the shared journal
    import system.dispatcher
    implicit val timeout = Timeout(15.seconds)
    val f = (system.actorSelection(path) ? Identify(None))
    f.onSuccess {
      case ActorIdentity(_, Some(ref)) =>
        SharedLeveldbJournal.setStore(ref, system)
      case _ =>
        system.log.error("Shared journal not started at {}", path)
        system.terminate()
    }
    f.onFailure {
      case _ =>
        system.log.error("Lookup of shared journal at {} timed out", path)
        system.terminate()
    }
  }

  def startupSharding(port: Int, system: ActorSystem) = {
    startupSharedJournal(system, startStore = (port == 2551), path =
      ActorPath.fromString("akka.tcp://ShardingSystem@127.0.0.1:2551/user/store"))

    ClusterSharding(system).start(
      typeName = CalculatorShard.shardName,
      entityProps = CalculatorShard.entityProps,
      settings = ClusterShardingSettings(system),
      extractEntityId = CalculatorShard.getEntityId,
      extractShardId = CalculatorShard.getShardId
    )
  }
}

ClusterShardingDemo.scala

package clustersharding.demo

import akka.actor.ActorSystem
import akka.cluster.sharding._
import clustersharding.entity.CalculatorShard.CalcCommands
import clustersharding.entity._
import clustersharding.shard.CalcShards
import com.typesafe.config.ConfigFactory

object ClusterShardingDemo extends App {

  CalcShards.create(2551)   //seed node, also hosts the shared journal
  CalcShards.create(0)
  CalcShards.create(0)
  CalcShards.create(0)

  Thread.sleep(1000)

  val shardingSystem = ActorSystem("ShardingSystem", ConfigFactory.load("sharding"))
  CalcShards.startupSharding(0, shardingSystem)

  Thread.sleep(1000)

  val calcRegion = ClusterSharding(shardingSystem).shardRegion(CalculatorShard.shardName)

  calcRegion ! CalcCommands("1012", Calculator.Num(13.0))   //shard 1, entity 1012
  calcRegion ! CalcCommands("1012", Calculator.Add(12.0))
  calcRegion ! CalcCommands("1012", Calculator.ShowResult)  //shows address too
  calcRegion ! CalcCommands("1012", Calculator.Disconnect)  //disengage cluster

  calcRegion ! CalcCommands("2012", Calculator.Num(10.0))   //shard 2, entity 2012
  calcRegion ! CalcCommands("2012", Calculator.Mul(3.0))
  calcRegion ! CalcCommands("2012", Calculator.Div(2.0))
  calcRegion ! CalcCommands("2012", Calculator.Div(0.0))    //divide by zero

  Thread.sleep(10000)

  calcRegion ! CalcCommands("1012", Calculator.ShowResult)  //check whether the state was restored on another node
  calcRegion ! CalcCommands("2012", Calculator.ShowResult)
}
