Many applications need a single, unique instance (an "only instance") of some type of Actor in the system. In a cluster environment this instance may live on any node, but it is guaranteed to be unique. Akka's Cluster-Singleton supports this singleton-actor pattern: when the node hosting the instance fails and has to leave the cluster, an identical Actor is automatically constructed on another node and control is handed over to it. Of course, since a brand-new Actor is constructed, internal state is lost in the process. Typical uses of a singleton actor include an interface to an external system that only supports a single connection, or an aggregator Actor that accumulates internal state from the results of many other Actors. If a singleton actor carries internal state, a PersistentActor can be used to recover that state automatically. This makes Cluster-Singleton a very practical pattern, applicable in many situations.

Precisely because of its uniqueness, the Cluster-Singleton pattern also carries some risks that deserve special attention: the singleton can easily become overloaded, it cannot be guaranteed to stay online, and message delivery to it is not guaranteed. All of these require extra handling in user code.
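Because delivery is not guaranteed, the sender side has to do its own bookkeeping. Below is a minimal, library-free sketch of the kind of at-least-once tracking a sender could keep alongside acknowledgement replies; the names `ResendBuffer` and `Pending` are made up for illustration and are not part of this example's source:

```scala
// Cluster-Singleton gives no delivery guarantee, so a sender can keep
// at-least-once bookkeeping: track each message until its Ack arrives,
// and on every resend tick re-send the unconfirmed ones.
final class ResendBuffer(maxAttempts: Int) {
  case class Pending(id: Long, msg: String, attempts: Int)

  private var pending = Map.empty[Long, Pending]
  private var nextId  = 0L

  /** Record a message before sending it; returns its delivery id. */
  def track(msg: String): Long = {
    nextId += 1
    pending += nextId -> Pending(nextId, msg, attempts = 1)
    nextId
  }

  /** An Ack from the singleton confirms delivery and drops the entry. */
  def confirm(id: Long): Unit = pending -= id

  /** On a resend tick: bump attempt counts and return what is still due. */
  def due: List[Pending] = {
    pending = pending.map { case (k, p) => k -> p.copy(attempts = p.attempts + 1) }
    pending.values.filter(_.attempts <= maxAttempts).toList
  }
}
```

A real implementation would drive `due` from a scheduler tick and give up (or escalate) once `maxAttempts` is exceeded; Akka's `AtLeastOnceDelivery` trait offers a ready-made version of this idea.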

Let's design an example to explore Cluster-Singleton. First, the singleton actor itself:

class SingletonActor extends PersistentActor with ActorLogging {
  import SingletonActor._
  val cluster = Cluster(context.system)

  var freeHoles = 0
  var freeTrees = 0
  var ttlMatches = 0

  override def persistenceId = self.path.parent.name + "-" + self.path.name

  def updateState(evt: Event): Unit = evt match {
    case AddHole =>
      if (freeTrees > 0) {
        ttlMatches += 1
        freeTrees -= 1
      } else freeHoles += 1
    case AddTree =>
      if (freeHoles > 0) {
        ttlMatches += 1
        freeHoles -= 1
      } else freeTrees += 1
  }

  override def receiveRecover: Receive = {
    case evt: Event => updateState(evt)
    case SnapshotOffer(_, ss: State) =>
      freeHoles = ss.nHoles
      freeTrees = ss.nTrees
      ttlMatches = ss.nMatches
  }

  override def receiveCommand: Receive = {
    case Dig =>
      persist(AddHole) { evt =>
        updateState(evt)
      }
      sender() ! AckDig    //notify sender message received
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")
    case Plant =>
      persist(AddTree) { evt =>
        updateState(evt)
      }
      sender() ! AckPlant  //notify sender message received
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")
    case Disconnect =>     //this node exits cluster; expect switch to another node
      log.info(s"${cluster.selfAddress} is leaving cluster ...")
      cluster.leave(cluster.selfAddress)
    case CleanUp =>
      //clean up ...
      self ! PoisonPill
  }
}

This SingletonActor is a special kind of Actor: it extends PersistentActor, so it must implement PersistentActor's abstract members. SingletonActor maintains several pieces of internal state, the running totals freeHoles, freeTrees and ttlMatches. It simulates a tree-planting scenario: on a Dig command it produces an AddHole event that registers a hole and updates the state; on a Plant command it produces an AddTree event and updates the state likewise. Because the Cluster-Singleton pattern cannot guarantee message delivery, we add acknowledgement replies AckDig and AckPlant so the sender can resend a message when needed. We use cluster.selfAddress to confirm which cluster node is currently hosting the actor.
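The hole/tree matching logic can be checked in isolation by writing it as a pure fold over events, which mirrors what receiveRecover does during journal replay, one event at a time. This is an illustrative sketch (the `TreeMatch` object and immutable `State` are hypothetical, not part of the example's source):

```scala
// The same matching logic as a pure function: replaying the journal is a
// left fold of events over an empty state.
object TreeMatch {
  sealed trait Event
  case object AddHole extends Event
  case object AddTree extends Event

  case class State(freeHoles: Int, freeTrees: Int, ttlMatches: Int) {
    def updated(evt: Event): State = evt match {
      case AddHole =>  // a new hole matches a waiting tree, or waits itself
        if (freeTrees > 0) copy(freeTrees = freeTrees - 1, ttlMatches = ttlMatches + 1)
        else copy(freeHoles = freeHoles + 1)
      case AddTree =>  // a new tree matches a waiting hole, or waits itself
        if (freeHoles > 0) copy(freeHoles = freeHoles - 1, ttlMatches = ttlMatches + 1)
        else copy(freeTrees = freeTrees + 1)
    }
  }

  def replay(events: Seq[Event]): State =
    events.foldLeft(State(0, 0, 0))(_ updated _)
}
```

For example, replaying `Seq(AddHole, AddHole, AddTree)` leaves one unmatched hole and one match. Keeping the transition pure like this also makes the actor's `updateState` trivially testable.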

We need to construct and deploy a ClusterSingletonManager on every cluster node that can host the SingletonActor, as follows:

def create(port: Int) = {
  val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
    .withFallback(ConfigFactory.parseString("akka.cluster.roles=[singleton]"))
    .withFallback(ConfigFactory.load())
  val singletonSystem = ActorSystem("SingletonClusterSystem", config)

  startupSharedJournal(singletonSystem, startStore = (port == 2551), path =
    ActorPath.fromString("akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/store"))

  val singletonManager = singletonSystem.actorOf(ClusterSingletonManager.props(
    singletonProps = Props[SingletonActor],
    terminationMessage = CleanUp,
    settings = ClusterSingletonManagerSettings(singletonSystem).withRole(Some("singleton"))
  ), name = "singletonManager")
}

As you can see, ClusterSingletonManager is itself an Actor; ClusterSingletonManager.props configures the SingletonActor it manages. Our main goal is to verify that when the node currently hosting the SingletonActor fails and has to leave the cluster, the SingletonActor automatically migrates to another node that is still online. ClusterSingletonManager works like this: it is constructed and deployed on all selected cluster nodes; the SingletonActor is started on the oldest of those nodes, and when that node becomes unreachable, the SingletonActor is automatically reconstructed and deployed on the next-oldest node.
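The hand-over rule can be illustrated with a toy model: assuming we keep the candidate nodes ordered by cluster join age, the singleton always lives on the head of the list, and a node leaving simply promotes the next-oldest. This is a sketch with made-up names, not Akka's actual implementation:

```scala
// Toy model of the manager's hand-over rule. Nodes are plain address
// strings here, ordered oldest-first by cluster join order.
object HandOver {
  /** The singleton runs on the oldest node, if any node is up. */
  def singletonHost(nodesByAge: List[String]): Option[String] =
    nodesByAge.headOption

  /** When a node leaves, it is removed and the order is preserved. */
  def afterLeave(nodesByAge: List[String], leaving: String): List[String] =
    nodesByAge.filterNot(_ == leaving)
}
```

This matches what the run log below shows: the singleton starts on the seed node (port 2551), and after that node leaves, it is identified on the next node in age order.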

ClusterSingletonProxy, also an Actor, invokes the SingletonActor by exchanging messages with the ClusterSingletonManager. It dynamically tracks the currently active SingletonActor and provides its ActorRef to the user. We can invoke the SingletonActor with the following code:

object SingletonUser {
  def create = {
    val config = ConfigFactory.parseString("akka.cluster.roles=[frontend]")
      .withFallback(ConfigFactory.load())
    val suSystem = ActorSystem("SingletonClusterSystem", config)

    val singletonProxy = suSystem.actorOf(ClusterSingletonProxy.props(
      singletonManagerPath = "/user/singletonManager",
      settings = ClusterSingletonProxySettings(suSystem).withRole(None)
    ), name = "singletonUser")

    import suSystem.dispatcher
    //send Dig messages every 2 seconds to SingletonActor through proxy
    suSystem.scheduler.schedule(1.second, 2.seconds, singletonProxy, SingletonActor.Dig)
    //send Plant messages every 3 seconds to SingletonActor through proxy
    suSystem.scheduler.schedule(2.seconds, 3.seconds, singletonProxy, SingletonActor.Plant)
    //send Disconnect to the hosting node every 30 seconds (initial delays are inferred)
    suSystem.scheduler.schedule(10.seconds, 30.seconds, singletonProxy, SingletonActor.Disconnect)
  }
}

At different intervals we send Dig and Plant messages to the SingletonActor through the ClusterSingletonProxy. Every 30 seconds we also send it a Disconnect message, telling its current node to start leaving the cluster. We then run everything with the following code:

package clustersingleton.demo

import clustersingleton.sa.SingletonActor
import clustersingleton.frontend.SingletonUser

object ClusterSingletonDemo extends App {

  SingletonActor.create(2551)   //seed-node

  SingletonActor.create(0)      //ClusterSingletonManager node
  SingletonActor.create(0)
  SingletonActor.create(0)
  SingletonActor.create(0)

  SingletonUser.create          //ClusterSingletonProxy node

}

The run produces the following output:

[INFO] [// ::28.210] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.334] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:2551]
[INFO] [// ::28.489] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.493] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55839]
[INFO] [// ::28.514] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.528] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55840]
[INFO] [// ::28.566] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.571] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55841]
[INFO] [// ::28.595] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.600] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55842]
[INFO] [// ::28.620] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.624] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55843]
[INFO] [// ::28.794] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55843/user/singletonUser] Singleton identified at [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton]
[INFO] [// ::28.817] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=0,ttlMatches=0
[INFO] [// ::29.679] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=1,freeTrees=0,ttlMatches=0
...
[INFO] [// ::38.676] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] akka.tcp://SingletonClusterSystem@127.0.0.1:2551 is leaving cluster ...
[INFO] [// ::39.664] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=1,ttlMatches=4
[INFO] [// ::40.654] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=2,ttlMatches=4
[INFO] [// ::41.664] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=1,ttlMatches=5
[INFO] [// ::42.518] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55843/user/singletonUser] Singleton identified at [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton]
[INFO] [// ::43.653] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=2,ttlMatches=5
[INFO] [// ::43.672] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=1,ttlMatches=6
[INFO] [// ::45.665] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=2,ttlMatches=6
[INFO] [// ::46.654] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=3,ttlMatches=6
...
[INFO] [// ::53.673] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] akka.tcp://SingletonClusterSystem@127.0.0.1:55839 is leaving cluster ...
[INFO] [// ::55.654] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=4,ttlMatches=9
[INFO] [// ::55.664] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=3,ttlMatches=10
[INFO] [// ::56.646] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55843/user/singletonUser] Singleton identified at [akka.tcp://SingletonClusterSystem@127.0.0.1:55840/user/singletonManager/singleton]
[INFO] [// ::57.662] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55840/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55840:freeHoles=0,freeTrees=4,ttlMatches=10
[INFO] [// ::58.652] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55840/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55840:freeHoles=0,freeTrees=5,ttlMatches=10

From the output we can observe that as nodes leave the cluster, the SingletonActor automatically moves to another cluster node and keeps running.

It is worth emphasizing that such a complex clustered, distributed program can be implemented with such simple code: a good illustration that Akka is a practical programming tool with broad prospects!

Below is the complete source code for this demonstration:

build.sbt

name := "cluster-singleton"

version := "1.0"

scalaVersion := "2.11.9"

resolvers += "Akka Snapshot Repository" at "http://repo.akka.io/snapshots/"

val akkaversion = "2.4.8"

libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-actor" % akkaversion,
"com.typesafe.akka" %% "akka-remote" % akkaversion,
"com.typesafe.akka" %% "akka-cluster" % akkaversion,
"com.typesafe.akka" %% "akka-cluster-tools" % akkaversion,
"com.typesafe.akka" %% "akka-cluster-sharding" % akkaversion,
"com.typesafe.akka" %% "akka-persistence" % "2.4.8",
"com.typesafe.akka" %% "akka-contrib" % akkaversion,
"org.iq80.leveldb" % "leveldb" % "0.7",
"org.fusesource.leveldbjni" % "leveldbjni-all" % "1.8")

application.conf

akka.actor.warn-about-java-serializer-usage = off
akka.log-dead-letters-during-shutdown = off
akka.log-dead-letters = off

akka {
  loglevel = INFO
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://SingletonClusterSystem@127.0.0.1:2551"]
    log-info = off
  }
  persistence {
    journal.plugin = "akka.persistence.journal.leveldb-shared"
    journal.leveldb-shared.store {
      # DO NOT USE 'native = off' IN PRODUCTION !!!
      native = off
      dir = "target/shared-journal"
    }
    snapshot-store.plugin = "akka.persistence.snapshot-store.local"
    snapshot-store.local.dir = "target/snapshots"
  }
}

SingletonActor.scala

package clustersingleton.sa

import akka.actor._
import akka.cluster._
import akka.persistence._
import com.typesafe.config.ConfigFactory
import akka.cluster.singleton._
import scala.concurrent.duration._
import akka.persistence.journal.leveldb._
import akka.util.Timeout
import akka.pattern._

object SingletonActor {
  sealed trait Command
  case object Dig extends Command
  case object Plant extends Command
  case object AckDig extends Command     //acknowledge
  case object AckPlant extends Command   //acknowledge
  case object Disconnect extends Command //force node to leave cluster
  case object CleanUp extends Command    //clean up when actor ends

  sealed trait Event
  case object AddHole extends Event
  case object AddTree extends Event

  case class State(nHoles: Int, nTrees: Int, nMatches: Int)

  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.parseString("akka.cluster.roles=[singleton]"))
      .withFallback(ConfigFactory.load())
    val singletonSystem = ActorSystem("SingletonClusterSystem", config)

    startupSharedJournal(singletonSystem, startStore = (port == 2551), path =
      ActorPath.fromString("akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/store"))

    val singletonManager = singletonSystem.actorOf(ClusterSingletonManager.props(
      singletonProps = Props[SingletonActor],
      terminationMessage = CleanUp,
      settings = ClusterSingletonManagerSettings(singletonSystem).withRole(Some("singleton"))
    ), name = "singletonManager")
  }

  def startupSharedJournal(system: ActorSystem, startStore: Boolean, path: ActorPath): Unit = {
    // Start the shared journal on one node (don't crash this SPOF)
    // This will not be needed with a distributed journal
    if (startStore)
      system.actorOf(Props[SharedLeveldbStore], "store")
    // register the shared journal
    import system.dispatcher
    implicit val timeout = Timeout(15.seconds)
    val f = (system.actorSelection(path) ? Identify(None))
    f.onSuccess {
      case ActorIdentity(_, Some(ref)) =>
        SharedLeveldbJournal.setStore(ref, system)
      case _ =>
        system.log.error("Shared journal not started at {}", path)
        system.terminate()
    }
    f.onFailure {
      case _ =>
        system.log.error("Lookup of shared journal at {} timed out", path)
        system.terminate()
    }
  }
}

class SingletonActor extends PersistentActor with ActorLogging {
  import SingletonActor._
  val cluster = Cluster(context.system)

  var freeHoles = 0
  var freeTrees = 0
  var ttlMatches = 0

  override def persistenceId = self.path.parent.name + "-" + self.path.name

  def updateState(evt: Event): Unit = evt match {
    case AddHole =>
      if (freeTrees > 0) {
        ttlMatches += 1
        freeTrees -= 1
      } else freeHoles += 1
    case AddTree =>
      if (freeHoles > 0) {
        ttlMatches += 1
        freeHoles -= 1
      } else freeTrees += 1
  }

  override def receiveRecover: Receive = {
    case evt: Event => updateState(evt)
    case SnapshotOffer(_, ss: State) =>
      freeHoles = ss.nHoles
      freeTrees = ss.nTrees
      ttlMatches = ss.nMatches
  }

  override def receiveCommand: Receive = {
    case Dig =>
      persist(AddHole) { evt =>
        updateState(evt)
      }
      sender() ! AckDig    //notify sender message received
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")
    case Plant =>
      persist(AddTree) { evt =>
        updateState(evt)
      }
      sender() ! AckPlant  //notify sender message received
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")
    case Disconnect =>     //this node exits cluster; expect switch to another node
      log.info(s"${cluster.selfAddress} is leaving cluster ...")
      cluster.leave(cluster.selfAddress)
    case CleanUp =>
      //clean up ...
      self ! PoisonPill
  }
}

SingletonUser.scala

package clustersingleton.frontend

import akka.actor._
import clustersingleton.sa.SingletonActor
import com.typesafe.config.ConfigFactory
import akka.cluster.singleton._
import scala.concurrent.duration._

object SingletonUser {
  def create = {
    val config = ConfigFactory.parseString("akka.cluster.roles=[frontend]")
      .withFallback(ConfigFactory.load())
    val suSystem = ActorSystem("SingletonClusterSystem", config)

    val singletonProxy = suSystem.actorOf(ClusterSingletonProxy.props(
      singletonManagerPath = "/user/singletonManager",
      settings = ClusterSingletonProxySettings(suSystem).withRole(None)
    ), name = "singletonUser")

    import suSystem.dispatcher
    //send Dig messages every 2 seconds to SingletonActor through proxy
    suSystem.scheduler.schedule(1.second, 2.seconds, singletonProxy, SingletonActor.Dig)
    //send Plant messages every 3 seconds to SingletonActor through proxy
    suSystem.scheduler.schedule(2.seconds, 3.seconds, singletonProxy, SingletonActor.Plant)
    //send Disconnect to the hosting node every 30 seconds (initial delays are inferred)
    suSystem.scheduler.schedule(10.seconds, 30.seconds, singletonProxy, SingletonActor.Disconnect)
  }
}

ClusterSingletonDemo.scala

package clustersingleton.demo

import clustersingleton.sa.SingletonActor
import clustersingleton.frontend.SingletonUser

object ClusterSingletonDemo extends App {

  SingletonActor.create(2551)   //seed-node

  SingletonActor.create(0)      //ClusterSingletonManager node
  SingletonActor.create(0)
  SingletonActor.create(0)
  SingletonActor.create(0)

  SingletonUser.create          //ClusterSingletonProxy node

}

Akka (12): Distributed Computing: Cluster-Singleton, letting a computation migrate automatically between cluster nodes
