Flink Standalone Cluster
Requirements
Software Requirements
Flink runs on all UNIX-like environments, e.g. Linux, Mac OS X, and Cygwin (for Windows), and expects the cluster to consist of one master node and one or more worker nodes. Before you start to set up the system, make sure you have the following software installed on each node:
- Java 1.8.x or higher,
- ssh (sshd must be running to use the Flink scripts that manage remote components)
If your cluster does not fulfill these software requirements you will need to install/upgrade it.
Having passwordless SSH and the same directory structure on all your cluster nodes will allow you to use our scripts to control everything.
Passwordless SSH login:
Run the following on the source machine:
ssh-keygen    # press Enter through every prompt; this generates a public key at ~/.ssh/id_rsa.pub
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<target-machine>    # copies the source machine's public key into ~/.ssh/authorized_keys on the target machine
Alternatively, manually append the source machine's public key to ~/.ssh/authorized_keys on the target machine. Note: the key must stay on a single line (no line breaks).
Done!
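A quick way to verify the setup (the host name below is a placeholder for your target machine): if the key was copied correctly, the following command prints the remote hostname without prompting for a password.
ssh root@<target-machine> hostname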
JAVA_HOME Configuration
If JAVA_HOME is already configured in the system environment, the following step is not needed.
Flink requires the JAVA_HOME environment variable to be set on the master and all worker nodes and point to the directory of your Java installation.
You can set this variable in conf/flink-conf.yaml via the env.java.home key.
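As a minimal sketch, the corresponding entry in conf/flink-conf.yaml looks like this (the JDK path below is an assumption; point it at your own Java installation):
env.java.home: /usr/lib/jvm/java-8-openjdk    # assumed path to the JDK root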
Flink Setup
Go to the downloads page and get the ready-to-run package. Make sure to pick the Flink package matching your Hadoop version. If you don’t plan to use Hadoop, pick any version.
After downloading the latest release, copy the archive to your master node and extract it:
tar xzf flink-*.tgz
cd flink-*
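For illustration, a concrete download and extraction might look as follows (the release version and Hadoop build in the file name are assumptions; pick the actual file from the downloads page):
wget https://archive.apache.org/dist/flink/flink-1.7.2/flink-1.7.2-bin-hadoop27-scala_2.11.tgz
tar xzf flink-1.7.2-bin-hadoop27-scala_2.11.tgz
cd flink-1.7.2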
Configuring Flink (without changing any settings, you can run bin/start-cluster.sh on a single node to start one JobManager and one TaskManager on that machine, which is convenient for debugging code)
After having extracted the system files, you need to configure Flink for the cluster by editing conf/flink-conf.yaml.
Set the jobmanager.rpc.address key to point to your master node. You should also define the maximum amount of main memory the JVM is allowed to allocate on each node by setting the jobmanager.heap.mb and taskmanager.heap.mb keys.
These values are given in MB. If some worker nodes have more main memory which you want to allocate to the Flink system you can overwrite the default value by setting the environment variable FLINK_TM_HEAP on those specific nodes.
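As a sketch, overriding the TaskManager heap on one larger worker might look like this (the value is an assumption and, like taskmanager.heap.mb, is given in MB; check bin/config.sh of your Flink version for the exact format):
export FLINK_TM_HEAP=4096    # assumed: 4096 MB for the TaskManager started on this node only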
Finally, you must provide a list of all nodes in your cluster which shall be used as worker nodes. Therefore, similar to the HDFS configuration, edit the file conf/slaves and enter the IP/host name of each worker node. Each worker node will later run a TaskManager.
The following example illustrates the setup with three nodes (with IP addresses from 10.0.0.1 to 10.0.0.3 and hostnames master, worker1, worker2) and shows the contents of the configuration files (which need to be accessible at the same path on all machines):

/path/to/flink/conf/flink-conf.yaml
jobmanager.rpc.address: 10.0.0.1
Note: this configuration must be present on every machine that will start a TaskManager.
/path/to/flink/conf/slaves
10.0.0.2
10.0.0.3
The Flink directory must be available on every worker under the same path. You can use a shared NFS directory, or copy the entire Flink directory to every worker node.
Please see the configuration page for details and additional configuration options.
In particular,
- the amount of available memory per JobManager (jobmanager.heap.mb),
- the amount of available memory per TaskManager (taskmanager.heap.mb),
- the number of available CPUs per machine (taskmanager.numberOfTaskSlots),
- the total number of CPUs in the cluster (parallelism.default) and
- the temporary directories (taskmanager.tmp.dirs)
are very important configuration values.
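As a rough sketch, these keys could be set in conf/flink-conf.yaml as follows (the values are illustrative assumptions; size them to your hardware):
jobmanager.heap.mb: 1024            # assumed: 1 GB heap for the JobManager JVM
taskmanager.heap.mb: 4096           # assumed: 4 GB heap per TaskManager JVM
taskmanager.numberOfTaskSlots: 4    # assumed: one slot per CPU core on each worker
parallelism.default: 8              # assumed: roughly the total number of CPUs in the cluster
taskmanager.tmp.dirs: /tmp/flink    # assumed: a local scratch directory on each node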
Starting Flink
The following script starts a JobManager on the local node and connects via SSH to all worker nodes listed in the slaves file to start a TaskManager on each node. Once it completes, your Flink system is up and running, and the JobManager on the local node will accept jobs at the configured RPC port.
Assuming that you are on the master node and inside the Flink directory:
bin/start-cluster.sh
To stop Flink, there is also a stop-cluster.sh script.
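A quick sanity check after starting the cluster, assuming the default web UI port of 8081 and the example addresses above:
jps                          # on the master this should list a JobManager process; on each worker, a TaskManager process
curl http://10.0.0.1:8081    # the JobManager web dashboard responds once the cluster is up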
Adding JobManager/TaskManager Instances to a Cluster
You can add both JobManager and TaskManager instances to your running cluster with the bin/jobmanager.sh and bin/taskmanager.sh scripts.
Adding a JobManager
bin/jobmanager.sh ((start|start-foreground) cluster)|stop|stop-all
Adding a TaskManager
bin/taskmanager.sh start|start-foreground|stop|stop-all
Make sure to call these scripts on the hosts on which you want to start/stop the respective instance.
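For example, to add one more worker (worker3 is a hypothetical new node; this assumes the Flink directory has already been copied to it at the same path):
ssh root@worker3
cd /path/to/flink
bin/taskmanager.sh start    # starts an additional TaskManager on this host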