Kafka: ZK + Kafka + Spark Streaming Cluster Setup (3): Installing Spark 2.2.1
For setting up and configuring the CentOS virtual machines, see Part 1 of this series (installing four CentOS VMs under VMware, with host-to-VM connectivity and internet access from inside the VMs).
For installing hadoop 2.9.0, see Part 2 of this series.
For configuring hadoop 2.9.0 HA, see Part 10 of this series (installing hadoop 2.9.0 and setting up HA).
Servers on which Spark is installed:
192.168.0.120 master
192.168.0.121 slave1
192.168.0.122 slave2
192.168.0.123 slave3
Download the Spark package from the official site:
Download page: http://spark.apache.org/downloads.html
Note: the previous article installed hadoop 2.9.0, but hadoop 2.9.0 is not among the selectable Hadoop versions for the Spark download, so the closest choice is "Pre-built for Apache Hadoop 2.7 and later".
Quite a few Spark releases are available; this series uses "2.2.1 (Dec 01 2017)".
With those options selected, the package offered for download is spark-2.2.1-bin-hadoop2.7.tgz.
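If you prefer to fetch the tarball from the command line, the Apache archive keeps this release; a sketch (the URL follows the standard Apache dist layout — verify it before relying on it):

# Download Spark 2.2.1 pre-built for Hadoop 2.7 from the Apache archive
wget -P /opt https://archive.apache.org/dist/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz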
Download spark-2.2.1-bin-hadoop2.7.tgz, upload it to /opt on master, and extract it:
[root@master opt]# tar -zxvf spark-2.2.1-bin-hadoop2.7.tgz
[root@master opt]# ls
hadoop-2.9.0  hadoop-2.9.0.tar.gz  jdk1.8.0_171  jdk-8u171-linux-x64.tar.gz  scala-2.11.0  scala-2.11.0.tgz  spark-2.2.1-bin-hadoop2.7  spark-2.2.1-bin-hadoop2.7.tgz
[root@master opt]#
Configure Spark
[root@master opt]# ls
hadoop-2.9.0  hadoop-2.9.0.tar.gz  jdk1.8.0_171  jdk-8u171-linux-x64.tar.gz  scala-2.11.0  scala-2.11.0.tgz  spark-2.2.1-bin-hadoop2.7  spark-2.2.1-bin-hadoop2.7.tgz
[root@master opt]# cd spark-2.2.1-bin-hadoop2.7/conf/
[root@master conf]# ls
docker.properties.template metrics.properties.template spark-env.sh.template
fairscheduler.xml.template slaves.template
log4j.properties.template spark-defaults.conf.template
[root@master conf]# scp spark-env.sh.template spark-env.sh
[root@master conf]# ls
docker.properties.template metrics.properties.template spark-env.sh
fairscheduler.xml.template slaves.template spark-env.sh.template
log4j.properties.template spark-defaults.conf.template
[root@master conf]# vi spark-env.sh
Append the following at the end of spark-env.sh (this is my configuration; adjust it to your own environment):
export SCALA_HOME=/opt/scala-2.11.0
export JAVA_HOME=/opt/jdk1.8.0_171
export HADOOP_HOME=/opt/hadoop-2.9.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
SPARK_MASTER_IP=master
SPARK_LOCAL_DIRS=/opt/spark-2.2.1-bin-hadoop2.7
SPARK_DRIVER_MEMORY=1G
Note: when setting the Worker process's CPU core count and memory size, respect the machine's actual hardware; if the configuration exceeds what the Worker node actually has, the Worker process will fail to start.
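For reference, the knobs that cap a Worker's resources in spark-env.sh are SPARK_WORKER_CORES and SPARK_WORKER_MEMORY; a minimal sketch, with illustrative values rather than ones taken from this cluster:

# Optional: limit the resources each Worker can hand out to executors
export SPARK_WORKER_CORES=2     # CPU cores available to executors on this node
export SPARK_WORKER_MEMORY=2g   # memory available to executors on this node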
Next, edit the slaves file and fill in the slave hostnames:
[root@master conf]# scp slaves.template slaves
[root@master conf]# vi slaves
The contents are:
#localhost
slave1
slave2
slave3
Distribute the configured spark-2.2.1-bin-hadoop2.7 directory to all the slaves:
scp -r /opt/spark-2.2.1-bin-hadoop2.7 spark@slave1:/opt/
scp -r /opt/spark-2.2.1-bin-hadoop2.7 spark@slave2:/opt/
scp -r /opt/spark-2.2.1-bin-hadoop2.7 spark@slave3:/opt/
Note: at this point slave1, slave2, and slave3 do not yet have /opt/spark-2.2.1-bin-hadoop2.7, so copying straight into /opt may fail for lack of permissions.
Workaround: on slave1, slave2, and slave3, create spark-2.2.1-bin-hadoop2.7 under /opt and grant it 777 permissions.
[root@slave1 opt]# mkdir spark-2.2.1-bin-hadoop2.7
[root@slave1 opt]# chmod 777 spark-2.2.1-bin-hadoop2.7
[root@slave1 opt]#
After that, re-running the copy succeeds.
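To avoid repeating this on every node, the same preparation can be done in one loop over ssh; a sketch, assuming master can ssh to the slaves as root:

# Create the target directory with open permissions on every slave
for h in slave1 slave2 slave3; do
  ssh root@$h "mkdir -p /opt/spark-2.2.1-bin-hadoop2.7 && chmod 777 /opt/spark-2.2.1-bin-hadoop2.7"
done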
Start Spark
The command below must be run from the Spark installation directory; on master that is /opt/spark-2.2.1-bin-hadoop2.7:
sbin/start-all.sh
At this point I start Spark as a non-root account (a user named spark), and Spark on master fails for lack of permission to write its logs:
[spark@master opt]$ cd /opt/spark-2.2.1-bin-hadoop2.7
[spark@master spark-2.2.1-bin-hadoop2.7]$ sbin/start-all.sh
mkdir: cannot create directory ‘/opt/spark-2.2.1-bin-hadoop2.7/logs’: Permission denied
chown: cannot access ‘/opt/spark-2.2.1-bin-hadoop2.7/logs’: No such file or directory
starting org.apache.spark.deploy.master.Master, logging to /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.master.Master-1-master.out
/opt/spark-2.2.1-bin-hadoop2.7/sbin/spark-daemon.sh: line : /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.master.Master-1-master.out: No such file or directory
failed to launch: nice -n 0 /opt/spark-2.2.1-bin-hadoop2.7/bin/spark-class org.apache.spark.deploy.master.Master --host master --port 7077 --webui-port 8080
tail: cannot open ‘/opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.master.Master-1-master.out’ for reading: No such file or directory
full log in /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.master.Master-1-master.out
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.worker.Worker-1-slave1.out
slave3: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.worker.Worker-1-slave3.out
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.worker.Worker-1-slave2.out
[spark@master spark-2.2.1-bin-hadoop2.7]$ cd ..
[spark@master opt]$ su root
Password:
[root@master opt]# chmod 777 spark-2.2.1-bin-hadoop2.7
[root@master opt]# su spark
[spark@master opt]$ cd spark-2.2.1-bin-hadoop2.7
[spark@master spark-2.2.1-bin-hadoop2.7]$ sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.master.Master-1-master.out
slave2: org.apache.spark.deploy.worker.Worker running as process <pid>. Stop it first.
slave3: org.apache.spark.deploy.worker.Worker running as process <pid>. Stop it first.
slave1: org.apache.spark.deploy.worker.Worker running as process <pid>. Stop it first.
[spark@master spark-2.2.1-bin-hadoop2.7]$ sbin/stop-all.sh
slave1: stopping org.apache.spark.deploy.worker.Worker
slave3: stopping org.apache.spark.deploy.worker.Worker
slave2: stopping org.apache.spark.deploy.worker.Worker
stopping org.apache.spark.deploy.master.Master
[spark@master spark-2.2.1-bin-hadoop2.7]$ sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.master.Master-1-master.out
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.worker.Worker-1-slave1.out
slave3: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.worker.Worker-1-slave3.out
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-2.2.1-bin-hadoop2.7/logs/spark-spark-org.apache.spark.deploy.worker.Worker-1-slave2.out
Fix: grant 777 permissions on master's Spark installation directory as well.
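Opening the directory to everyone works, but simply making the spark user own the install tree is tidier; a sketch, assuming a spark user and group exist on the node:

# Alternative to chmod 777: give the spark user ownership of the install directory
chown -R spark:spark /opt/spark-2.2.1-bin-hadoop2.7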
Verify that Spark installed successfully
Problems found during startup:
1) Running spark-shell in spark-on-yarn mode throws: ERROR cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful. For the fix, see Part 6 of this series, which covers exactly this exception for spark 2.2.1 started via yarn.
Check with jps; on master, a normal startup includes the following processes:
$ jps
Jps
SecondaryNameNode
Master
NameNode
ResourceManager
On each slave, a normal startup includes the following processes:
$ jps
DataNode
Worker
Jps
NodeManager
Open Spark's web management UI: http://192.168.0.120:8080
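A quick non-browser check that the Master UI is answering (a sketch; run it from any machine that can reach master):

# Expect 200 if the Spark Master web UI came up cleanly
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.0.120:8080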
Run the examples
Local test with two threads:
[spark@master spark-2.2.1-bin-hadoop2.7]$ cd /opt/spark-2.2.1-bin-hadoop2.7
[spark@master spark-2.2.1-bin-hadoop2.7]$ ./bin/run-example SparkPi --master local[2]
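If the local run works, SparkPi prints an approximation of pi among its log output; filtering makes it easy to spot (a sketch):

# The example logs a line like "Pi is roughly 3.14..."
./bin/run-example SparkPi --master local[2] 2>&1 | grep "Pi is roughly"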
Run in Spark Standalone cluster mode
[spark@master spark-2.2.1-bin-hadoop2.7]$ cd /opt/spark-2.2.1-bin-hadoop2.7
[spark@master spark-2.2.1-bin-hadoop2.7]$ ./bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master spark://master:7077 \
> examples/jars/spark-examples_2.11-2.2.1.jar
While it runs, the job's status is visible in the Spark monitoring UI.
Run on a Spark on YARN cluster in yarn-cluster mode
[spark@master spark-2.2.1-bin-hadoop2.7]$ cd /opt/spark-2.2.1-bin-hadoop2.7
[spark@master spark-2.2.1-bin-hadoop2.7]$ ./bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master yarn-cluster \
> /opt/spark-2.2.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.1.jar
Execution log:
[spark@master hadoop-2.9.0]$ cd /opt/spark-2.2.1-bin-hadoop2.7
[spark@master spark-2.2.1-bin-hadoop2.7]$ ./bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master yarn-cluster \
> /opt/spark-2.2.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.1.jar
Warning: Master yarn-cluster is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO client.RMProxy: Connecting to ResourceManager at master/192.168.0.120:8032
INFO yarn.Client: Requesting a new application from cluster with 3 NodeManagers
INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
INFO yarn.Client: Setting up container launch context for our AM
INFO yarn.Client: Setting up the launch environment for our AM container
INFO yarn.Client: Preparing resources for our AM container
WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
INFO yarn.Client: Uploading resource file:/opt/spark-2.2.1-bin-hadoop2.7/spark-f46b4dc7--4bb3-babd-c3124d1a7e07/__spark_libs__1523582418834894726.zip -> hdfs://master:9000/user/spark/.sparkStaging/application_1530369937777_0001/__spark_libs__1523582418834894726.zip
INFO yarn.Client: Uploading resource file:/opt/spark-2.2.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.1.jar -> hdfs://master:9000/user/spark/.sparkStaging/application_1530369937777_0001/spark-examples_2.11-2.2.1.jar
INFO yarn.Client: Uploading resource file:/opt/spark-2.2.1-bin-hadoop2.7/spark-f46b4dc7--4bb3-babd-c3124d1a7e07/__spark_conf__4967231916988729566.zip -> hdfs://master:9000/user/spark/.sparkStaging/application_1530369937777_0001/__spark_conf__.zip
INFO spark.SecurityManager: Changing view acls to: spark
INFO spark.SecurityManager: Changing modify acls to: spark
INFO spark.SecurityManager: Changing view acls groups to:
INFO spark.SecurityManager: Changing modify acls groups to:
INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); groups with view permissions: Set(); users with modify permissions: Set(spark); groups with modify permissions: Set()
INFO yarn.Client: Submitting application application_1530369937777_0001 to ResourceManager
INFO impl.YarnClientImpl: Submitted application application_1530369937777_0001
INFO yarn.Client: Application report for application_1530369937777_0001 (state: ACCEPTED)
INFO yarn.Client:
client token: N/A
diagnostics: AM container is launched, waiting for AM container to Register with RM
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time:
final status: UNDEFINED
tracking URL: http://master:8088/proxy/application_1530369937777_0001/
user: spark
INFO yarn.Client: Application report for application_1530369937777_0001 (state: ACCEPTED)
(the line above repeats while the application waits to be scheduled)
INFO yarn.Client: Application report for application_1530369937777_0001 (state: RUNNING)
INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.0.121
ApplicationMaster RPC port: 0
queue: default
start time:
final status: UNDEFINED
tracking URL: http://master:8088/proxy/application_1530369937777_0001/
user: spark
INFO yarn.Client: Application report for application_1530369937777_0001 (state: RUNNING)
(the line above repeats while the job runs)
INFO yarn.Client: Application report for application_1530369937777_0001 (state: FINISHED)
INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.0.121
ApplicationMaster RPC port: 0
queue: default
start time:
final status: SUCCEEDED
tracking URL: http://master:8088/proxy/application_1530369937777_0001/
user: spark
INFO util.ShutdownHookManager: Shutdown hook called
INFO util.ShutdownHookManager: Deleting directory /opt/spark-2.2.1-bin-hadoop2.7/spark-f46b4dc7--4bb3-babd-c3124d1a7e07
The task's execution can be followed from the hadoop YARN monitoring UI at http://master:8088.
You can also open http://slave1:8042 to view slave1's NodeManager information.
Note: Spark on YARN supports two run modes, yarn-cluster and yarn-client (the detailed differences are covered in a separate post). Broadly speaking, yarn-cluster suits production, while yarn-client suits interactive work and debugging, i.e. when you want to see the application's output quickly.
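As the deprecation warning in the log suggests, since Spark 2.0 the master and the deploy mode are specified separately; a sketch of the equivalent submission:

# Same SparkPi run without the deprecated yarn-cluster master URL
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  /opt/spark-2.2.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.1.jar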