一、Introduction to kafka-manager

To simplify the work of developers and service engineers who maintain Kafka clusters, Yahoo built a web-based management tool called Kafka Manager. It makes it easy to spot topics that are unevenly distributed across the cluster, or partitions that are unevenly spread across brokers. It supports managing multiple clusters, preferred replica election, replica reassignment, and topic creation. It is also a convenient way to get a quick overview of a cluster. Its features include:

  • Manage multiple clusters
  • Easy inspection of cluster state (topics, consumers, offsets, brokers, replica distribution, partition distribution)
  • Run preferred replica election
  • Generate partition assignments with an option to select the brokers to use
  • Run reassignment of partitions (based on generated assignments)
  • Create topics with optional topic configs (0.8.1.1 has different configs from 0.8.2+)
  • Delete topics (only supported on 0.8.2+; remember to set delete.topic.enable=true in the broker config)
  • The topic list indicates topics marked for deletion (only supported on 0.8.2+)
  • Batch-generate partition assignments for multiple topics, with an option to select the brokers to use
  • Batch-run reassignment of partitions for multiple topics
  • Add partitions to an existing topic
  • Update the config of an existing topic
  • Project repository: https://github.com/yahoo/kafka-manager/

二、Installing kafka-manager

1. Download the package

Clone with Git or download directly from the Releases page; this walkthrough uses version 1.3.3.18: https://github.com/yahoo/kafka-manager/releases

 wget https://github.com/yahoo/kafka-manager/archive/1.3.3.18.zip

2. Unpack the archive

[spark@master ~]$ cd /opt/
[spark@master opt]$ wget https://github.com/yahoo/kafka-manager/archive/1.3.3.18.zip
[spark@master opt]$ ls
...  kafka_2.11-1.1.0  kafka-manager-1.3.3.18.zip  ...  zookeeper-3.4.12
[spark@master opt]$ su root
Password:
[root@master opt]# unzip kafka-manager-1.3.3.18.zip
Archive:  kafka-manager-1.3.3.18.zip
8dcdbf8fabb0001691c9b52b447b656f498b4d7b
   creating: kafka-manager-1.3.3.18/
  ...
  inflating: kafka-manager-1.3.3.18/test/kafka/test/SeededBroker.scala
[root@master opt]# ls
...  kafka-manager-1.3.3.18  kafka-manager-1.3.3.18.zip  ...

3. Build with sbt

1) Install sbt with yum (kafka-manager is built with sbt)

[root@master opt]# cd kafka-manager-1.3.3.18
[root@master kafka-manager-1.3.3.18]# ls
app  build.sbt  conf  img  LICENCE  project  public  README.md  sbt  src  test
[root@master kafka-manager-1.3.3.18]# sbt
bash: sbt: command not found
[root@master kafka-manager-1.3.3.18]# cd ..
[root@master opt]# curl https://bintray.com/sbt/rpm/rpm > bintray-sbt-rpm.repo
[root@master opt]# mv bintray-sbt-rpm.repo /etc/yum.repos.d/
[root@master opt]# yum install sbt
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.zju.edu.cn
 * extras: mirrors.shu.edu.cn
 * updates: mirrors.zju.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package sbt.noarch will be installed
--> Finished Dependency Resolution
...
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : sbt.noarch
  Verifying  : sbt.noarch
Installed:
  sbt.noarch
Complete!

Change the repository list: sbt's default repositories download library files slowly and transfers are often interrupted. Create ~/.sbt/repositories in the user's home directory and point it at the Aliyun mirror (vi ~/.sbt/repositories).
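The same file can also be written non-interactively instead of through vi; a minimal sketch (mirror URLs taken from this walkthrough and may be stale today):

```shell
# Write ~/.sbt/repositories with the Aliyun mirror first, so sbt resolves
# dependencies through it before falling back to the other repositories.
mkdir -p "${HOME}/.sbt"
cat > "${HOME}/.sbt/repositories" <<'EOF'
[repositories]
local
aliyun-nexus: http://maven.aliyun.com/nexus/content/groups/public/
jcenter: https://jcenter.bintray.com/
typesafe-ivy-releases: https://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext], bootOnly
maven-central
EOF
head -n 1 "${HOME}/.sbt/repositories"
```

sbt reads this file on every launch, so the change takes effect the next time ./sbt runs.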

[root@master opt]# sbt
Getting org.scala-sbt sbt (this may take some time)...
^C
[root@master opt]# cd kafka-manager-1.3.3.18
[root@master kafka-manager-1.3.3.18]# ./sbt clean dist
Downloading sbt launcher:
  From  http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.13.9/sbt-launch.jar
    To  /root/.sbt/launchers/0.13.9/sbt-launch.jar
Download failed. Obtain the jar manually and place it at /root/.sbt/launchers/0.13.9/sbt-launch.jar
[root@master kafka-manager-1.3.3.18]# vi ~/.sbt/repositories
[repositories]
local
aliyun-nexus: http://maven.aliyun.com/nexus/content/groups/public/
jcenter: https://jcenter.bintray.com/
typesafe-ivy-releases: https://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext], bootOnly
maven-central

2) Build kafka-manager

[root@master kafka-manager-1.3.3.18]# ./sbt clean dist
Downloading sbt launcher:
  From  http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.13.9/sbt-launch.jar
    To  /root/.sbt/launchers/0.13.9/sbt-launch.jar
[info] Loading project definition from /opt/kafka-manager-1.3.3.18/project
[warn] Credentials file /root/.bintray/.credentials does not exist
[info] Set current project to kafka-manager (in build file:/opt/kafka-manager-1.3.3.18/)
...
[info] Done updating.
[warn] Scala version was updated by one of library dependencies
[warn] There may be incompatibilities among your library dependencies.
[warn] Run 'evicted' to see detailed eviction warnings
[info] Wrote /opt/kafka-manager-1.3.3.18/target/scala-2.11/kafka-manager_2.11-1.3.3.18.pom
[info] Compiling Scala sources and Java sources to /opt/kafka-manager-1.3.3.18/target/scala-2.11/classes...
[info] Main Scala API documentation successful.
[info] Packaging /opt/kafka-manager-1.3.3.18/target/scala-2.11/kafka-manager_2.11-1.3.3.18-javadoc.jar ...
[info] LESS compiling on source(s)
[info] Packaging /opt/kafka-manager-1.3.3.18/target/scala-2.11/kafka-manager_2.11-1.3.3.18-web-assets.jar ...
[info] Packaging /opt/kafka-manager-1.3.3.18/target/scala-2.11/kafka-manager_2.11-1.3.3.18.jar ...
[info] Packaging /opt/kafka-manager-1.3.3.18/target/scala-2.11/kafka-manager_2.11-1.3.3.18-sans-externalized.jar ...
[info] Done packaging.
[info]
[info] Your package is ready in /opt/kafka-manager-1.3.3.18/target/universal/kafka-manager-1.3.3.18.zip
[info]
[success] Total time: ...
[root@master kafka-manager-1.3.3.18]# ls
app  build.sbt  conf  img  LICENCE  project  public  README.md  sbt  src  target  test
[root@master kafka-manager-1.3.3.18]# cd target/universal
[root@master universal]# ls
kafka-manager-1.3.3.18.zip  scripts
[root@master universal]# scp kafka-manager-1.3.3.18.zip /opt/kafka-manager-1.3.3.18.zip

4. Install

Unpack the freshly built kafka-manager-1.3.3.18.zip and edit the configuration file; the key change is kafka-manager.zkhosts, which must point at your ZooKeeper ensemble.

[root@master universal]# cd /opt/
[root@master opt]# mv kafka-manager-1.3.3.18 kafka-manager-1.3.3.18-source
[root@master opt]# unzip kafka-manager-1.3.3.18.zip
Archive:  kafka-manager-1.3.3.18.zip
  inflating: kafka-manager-1.3.3.18/lib/kafka-manager.kafka-manager-1.3.3.18-sans-externalized.jar
  ...
  inflating: kafka-manager-1.3.3.18/share/doc/api/index/index-d.html
  inflating: kafka-manager-1.3.3.18/README.md
[root@master opt]# ls
...  kafka-manager-1.3.3.18  kafka-manager-1.3.3.18-source  kafka-manager-1.3.3.18.zip  ...
[root@master opt]# cd kafka-manager-1.3.3.18
[root@master kafka-manager-1.3.3.18]# ls
bin  conf  lib  README.md  share
[root@master kafka-manager-1.3.3.18]# cd conf/
[root@master conf]# ls
application.conf  consumer.properties  logback.xml  logger.xml  routes
[root@master conf]# vim application.conf

# Secret key
# ~~~~~
# The secret key is used to secure cryptographics functions.
# If you deploy your application to several instances be sure to use the same key!
play.crypto.secret="^<csmm5Fx4d=r2HEX8pelM3iBkFVv?k[mc;IZE<_Qoq8EkX_/7@Zt6dP05Pzea3U"
play.crypto.secret=${?APPLICATION_SECRET}

# The application languages
# ~~~~~
play.i18n.langs=["en"]

play.http.requestHandler = "play.http.DefaultHttpRequestHandler"
play.http.context = "/"
play.application.loader=loader.KafkaManagerLoader

#kafka-manager.zkhosts="kafka-manager-zookeeper:2181"
kafka-manager.zkhosts="192.168.0.120:2181,192.168.0.121:2181,192.168.0.122:2181"
kafka-manager.zkhosts=${?ZK_HOSTS}

pinned-dispatcher.type="PinnedDispatcher"
pinned-dispatcher.executor="thread-pool-executor"
application.features=["KMClusterManagerFeature","KMTopicManagerFeature","KMPreferredReplicaElectionFeature","KMReassignPartitionsFeature"]

akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "INFO"
}

akka.logger-startup-timeout = 60s

basicAuthentication.enabled=false
basicAuthentication.enabled=${?KAFKA_MANAGER_AUTH_ENABLED}
basicAuthentication.username="admin"
basicAuthentication.username=${?KAFKA_MANAGER_USERNAME}
basicAuthentication.password="password"
basicAuthentication.password=${?KAFKA_MANAGER_PASSWORD}
basicAuthentication.realm="Kafka-Manager"
basicAuthentication.excluded=["/api/health"] # ping the health of your instance without authentification

kafka-manager.consumer.properties.file=${?CONSUMER_PROPERTIES_FILE}
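Lines of the form key=${?VAR} in this file are HOCON optional overrides: if the named environment variable is set, it replaces the literal value given on the line above; otherwise the literal stands. A minimal shell sketch of the same fallback logic, using the username setting as the example:

```shell
# Mirrors the HOCON pattern:
#   basicAuthentication.username="admin"
#   basicAuthentication.username=${?KAFKA_MANAGER_USERNAME}
# "admin" is the literal from application.conf; the env var, if set, wins.
USERNAME="${KAFKA_MANAGER_USERNAME:-admin}"
echo "effective username: ${USERNAME}"
```

This is why the same application.conf works unchanged in environments (e.g. containers) that inject ZK_HOSTS or KAFKA_MANAGER_USERNAME from outside.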

5. Start the service

Start the ZooKeeper cluster and the Kafka cluster first, then start the kafka-manager service.

bin/kafka-manager listens on port 9000 by default; -Dhttp.port sets a different port, and -Dconfig.file=conf/application.conf points at the configuration file:

[root@master conf]# cd ..
[root@master kafka-manager-1.3.3.18]# ls
bin conf lib README.md share
[root@master kafka-manager-1.3.3.18]#
[root@master kafka-manager-1.3.3.18]# nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=19093 &
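The launch step can be wrapped in a small script; a sketch, where KM_PORT is a hypothetical override variable and 19093 is the port used in this walkthrough:

```shell
# Build the kafka-manager start command; run it from the install directory.
PORT="${KM_PORT:-19093}"          # hypothetical override; defaults to this walkthrough's port
CONF="conf/application.conf"
CMD="nohup bin/kafka-manager -Dconfig.file=${CONF} -Dhttp.port=${PORT} &"
echo "${CMD}"
# eval "${CMD}"                   # uncomment on the server to actually start the service
```

Keeping the command in one place makes it easy to run the same script on every node with only the port changed.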

Check with jps:

[root@master spark]# jps
QuorumPeerMain
Kafka
ProdServerStart
Jps
[root@master spark]#

Open the WebUI at http://192.168.0.120:19093/ ; if the page loads, the service started successfully.
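The service can also be checked non-interactively: application.conf excludes /api/health from basic authentication, so it can be probed without credentials. A sketch (host and port from this walkthrough; adjust to your deployment):

```shell
# Probe kafka-manager's unauthenticated health endpoint.
URL="http://192.168.0.120:19093/api/health"
echo "probing ${URL}"
# Uncomment on a machine that can reach the server:
# curl -fsS "${URL}" && echo "kafka-manager is up"
```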

Everything else — adding a cluster, creating topics, inspecting topics — is done through the UI.

References:

http://www.cnblogs.com/frankdeng/p/9584870.html

https://www.cnblogs.com/dadonggg/p/8205302.html
