Background

  A recent security scan flagged a Hadoop vulnerability: "Hadoop unauthorized access (principle scan)". Following the official documentation and other references, I enabled service-level authorization in a test environment. Along the way I ran into quite a few pitfalls, or at least things I had not thought through, so I am recording them here. The whole problem took two days to sort out.

Environment

  Hadoop version: 2.6.2

Steps

1. To enable service-level authorization, set the hadoop.security.authorization parameter to true in core-site.xml:

<property>
<name>hadoop.security.authorization</name>
<value>true</value>
<description>Is service-level authorization enabled?</description>
</property>

Note: per the official documentation, this works with the default simple authentication, i.e. the client's identity is taken from its OS user; setting this flag to true then enforces service-level ACLs on top of that. Service-level authorization is now enabled.

After adding this parameter, restart the NameNode:

sbin/hadoop-daemon.sh stop namenode
sbin/hadoop-daemon.sh start namenode

How do you know the setting actually took effect? Check the Hadoop security audit log (SecurityAuth-aiprd.audit here). If new entries with authorization information are being appended, the setting is active.
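As a sanity check, the interesting fields can be pulled out of an audit entry with a short script. The sample line below is hypothetical, in the format ServiceAuthorizationManager typically writes; verify the exact layout against your own SecurityAuth-*.audit file:

```python
import re

# Hypothetical sample entry in the ASSUMED audit-log format;
# the real layout may differ by Hadoop version.
SAMPLE = ("2019-08-15 17:20:00,000 INFO SecurityLogger.org.apache.hadoop.security."
          "authorize.ServiceAuthorizationManager: Authorization successful for "
          "aiprd (auth:SIMPLE) for protocol=interface "
          "org.apache.hadoop.hdfs.protocol.ClientProtocol")

AUDIT_RE = re.compile(
    r"Authorization (successful|failed) for (\S+) \(auth:(\w+)\)"
    r".*protocol=(?:interface )?(\S+)")

def parse_audit(line):
    """Return outcome/user/auth/protocol from one audit line, or None."""
    m = AUDIT_RE.search(line)
    if m is None:
        return None
    outcome, user, auth, protocol = m.groups()
    return {"outcome": outcome, "user": user, "auth": auth, "protocol": protocol}

print(parse_audit(SAMPLE))
```

Grepping for `Authorization failed` entries in this log is also a quick way to see who is being rejected and on which protocol.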

2. The per-service ACLs live in the hadoop-policy.xml configuration file:

<configuration>
<property>
<name>security.client.protocol.acl</name>
<value>*</value>
<description>ACL for ClientProtocol, which is used by user code
via the DistributedFileSystem.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.client.datanode.protocol.acl</name>
<value>*</value>
<description>ACL for ClientDatanodeProtocol, the client-to-datanode protocol
for block recovery.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.datanode.protocol.acl</name>
<value>*</value>
<description>ACL for DatanodeProtocol, which is used by datanodes to
communicate with the namenode.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.inter.datanode.protocol.acl</name>
<value>*</value>
<description>ACL for InterDatanodeProtocol, the inter-datanode protocol
for updating generation timestamp.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.namenode.protocol.acl</name>
<value>*</value>
<description>ACL for NamenodeProtocol, the protocol used by the secondary
namenode to communicate with the namenode.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.admin.operations.protocol.acl</name>
<value>*</value>
<description>ACL for AdminOperationsProtocol. Used for admin commands.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.refresh.user.mappings.protocol.acl</name>
<value>*</value>
<description>ACL for RefreshUserMappingsProtocol. Used to refresh
users mappings. The ACL is a comma-separated list of user and
group names. The user and group list is separated by a blank. For
e.g. "alice,bob users,wheel". A special value of "*" means all
users are allowed.</description>
</property>
<property>
<name>security.refresh.policy.protocol.acl</name>
<value>*</value>
<description>ACL for RefreshAuthorizationPolicyProtocol, used by the
dfsadmin and mradmin commands to refresh the security policy in-effect.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.ha.service.protocol.acl</name>
<value>*</value>
<description>ACL for HAService protocol used by HAAdmin to manage the
active and stand-by states of namenode.</description>
</property>
<property>
<name>security.zkfc.protocol.acl</name>
<value>*</value>
<description>ACL for access to the ZK Failover Controller
</description>
</property>
<property>
<name>security.qjournal.service.protocol.acl</name>
<value>*</value>
<description>ACL for QJournalProtocol, used by the NN to communicate with
JNs when using the QuorumJournalManager for edit logs.</description>
</property>
<property>
<name>security.mrhs.client.protocol.acl</name>
<value>*</value>
<description>ACL for HSClientProtocol, used by job clients to
communicate with the MR History Server job status etc.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<!-- YARN Protocols -->
<property>
<name>security.resourcetracker.protocol.acl</name>
<value>*</value>
<description>ACL for ResourceTrackerProtocol, used by the
ResourceManager and NodeManager to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.resourcemanager-administration.protocol.acl</name>
<value>*</value>
<description>ACL for ResourceManagerAdministrationProtocol, for admin commands.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.applicationclient.protocol.acl</name>
<value>*</value>
<description>ACL for ApplicationClientProtocol, used by the ResourceManager
and applications submission clients to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.applicationmaster.protocol.acl</name>
<value>*</value>
<description>ACL for ApplicationMasterProtocol, used by the ResourceManager
and ApplicationMasters to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.containermanagement.protocol.acl</name>
<value>*</value>
<description>ACL for ContainerManagementProtocol protocol, used by the NodeManager
and ApplicationMasters to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.resourcelocalizer.protocol.acl</name>
<value>*</value>
<description>ACL for ResourceLocalizer protocol, used by the NodeManager
and ResourceLocalizer to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.job.task.protocol.acl</name>
<value>*</value>
<description>ACL for TaskUmbilicalProtocol, used by the map and reduce
tasks to communicate with the parent tasktracker.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.job.client.protocol.acl</name>
<value>*</value>
<description>ACL for MRClientProtocol, used by job clients to
communicate with the MR ApplicationMaster to query job status etc.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
<property>
<name>security.applicationhistory.protocol.acl</name>
<value>*</value>
<description>ACL for ApplicationHistoryProtocol, used by the timeline
server and the generic history service client to communicate with each other.
The ACL is a comma-separated list of user and group names. The user and
group list is separated by a blank. For e.g. "alice,bob users,wheel".
A special value of "*" means all users are allowed.</description>
</property>
</configuration>

Note: every service protocol in this file defaults to *, meaning any user may access that service.

3. For now we only need to control which client users may access the NameNode, i.e. modify the security.client.protocol.acl parameter:

  <property>
<name>security.client.protocol.acl</name>
<value>aiprd</value>
<description>ACL for ClientProtocol, which is used by user code
via the DistributedFileSystem.</description>
</property>

Note: this means a client may access the NameNode only if its process runs as the OS user aiprd.

Refresh the ACL configuration:

bin/hdfs dfsadmin -refreshServiceAcl

The format of the value is as follows:

<property>
<name>security.job.submission.protocol.acl</name>
<value>user1,user2 group1,group2</value>
</property>

Note: within the value, users are comma-separated and groups are comma-separated, with a single space separating the user list from the group list. If there are no users, start the value with a space followed by the group list.
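That format can be captured in a small helper; this is a sketch of the format only, not Hadoop's actual AccessControlList parser:

```python
def parse_acl(acl):
    """Split a hadoop-policy.xml ACL value into (users, groups).

    Users are comma-separated, groups are comma-separated, and the two
    lists are separated by a single space; a leading space means
    "groups only", and "*" means everyone.
    """
    if acl.strip() == "*":
        return None  # wildcard: all users allowed
    if " " in acl:
        users_part, groups_part = acl.split(" ", 1)
    else:
        users_part, groups_part = acl, ""  # no space: users only
    users = [u for u in users_part.split(",") if u]
    groups = [g for g in groups_part.split(",") if g]
    return users, groups

print(parse_acl("user1,user2 group1,group2"))
```

For example, a value of `" hadoop"` (leading space) parses to no users and the single group hadoop.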

4. Verify by accessing HDFS files from a remote client

[aiprd@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/
Found items
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/hbase
drwxr-xr-x - aiprd hadoop -- : hdfs://hadoop1:9000/test01
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test02
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test03
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test07
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test08
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test09
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test10
drwxrwx--- - aiprd supergroup -- : hdfs://hadoop1:9000/test11
drwxr-xr-x - aiprd1 supergroup -- : hdfs://hadoop1:9000/test12

Note: on the client, the Hadoop programs are deployed under the aiprd user, and the command lists the files and directories. aiprd is also the user that starts the NameNode, i.e. the HDFS superuser, which is why the listed files are owned by aiprd.

5. Test whether another user can be granted or used for access

  <property>
<name>security.client.protocol.acl</name>
<value>aiprd1</value>
<description>ACL for ClientProtocol, which is used by user code
via the DistributedFileSystem.</description>
</property>

Refresh the ACL configuration:

bin/hdfs dfsadmin -refreshServiceAcl

The ACL user is now aiprd1, i.e. only a client whose process user is aiprd1 can connect.

6. On the client, try again with the Hadoop client still deployed under the aiprd user

[aiprd@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/
ls: User aiprd (auth:SIMPLE) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol, expected client Kerberos principal is null

Note: the aiprd user can no longer connect.

7. On the client, deploy the Hadoop client under the aiprd1 user and access HDFS again

[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
Found items
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/01
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/02
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/03
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/04
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/05
drwxr-xr-x - aiprd1 supergroup -- : hdfs://hadoop1:9000/test12/10

Note: access succeeds. So with user-based authentication, the OS user the client program runs as must match a user configured in hadoop-policy.xml; otherwise access is denied.

Since the service-level ACL value can contain groups as well as users, and users are now verified, the next step is to verify groups. This is where most of the pitfalls appeared.

1. Still the same parameter, security.client.protocol.acl, this time with a group:

  <property>
<name>security.client.protocol.acl</name>
<value>aiprd hadoop</value>
<description>ACL for ClientProtocol, which is used by user code
via the DistributedFileSystem.</description>
</property>

Refresh the ACL configuration:

bin/hdfs dfsadmin -refreshServiceAcl

Now the question: the user check was based on the OS user, so presumably the group check is too, i.e. it should check whether the connecting user belongs to the configured group.

2. On the client, the program under the aiprd user can still connect, as verified earlier.

3. On the client, a Hadoop client deployed under aiprd1 should normally be denied; after adding aiprd1 to the hadoop group, it should in theory be allowed:

[aiprd1@localhost ~]$ id aiprd1
uid=(aiprd1) gid=(aiprd1) groups=(aiprd1),(hadoop)
[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
ls: User aiprd1 (auth:SIMPLE) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol, expected client Kerberos principal is null

Verified: it is still denied, so the hadoop group membership is not taking effect.

I tried the following:

  • 1. Changed hadoop.security.group.mapping; in fact this parameter has a default value and does not need to be set.
  • 2. Created the hadoop group on every HDFS node; the problem remained.
  • 3. The default group of files in HDFS is supergroup, so I tried adding aiprd1 to supergroup; no effect.
  • 4. As the superuser aiprd, changed the group of the HDFS files to hadoop; no effect.
  • 5. Tried adding aiprd to the hadoop group on the NameNode; no effect.

Out of options, I enabled DEBUG logging and got the following:

WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user aiprd1: id: aiprd1: No such user

WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user aiprd1

...AuthorizationException: User aiprd1 (auth:SIMPLE) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol

The message says that when resolving groups for this user, no such user exists. That was puzzling, since the user clearly did exist. Searching on this error eventually led to a hint in the following post:

https://www.e-learn.cn/content/wangluowenzhang/1136832
To accomplish your goal you'd need to add your user account (clott) on the NameNode machine and add it to hadoop group there.

If you are going to run MapReduce with your user, you'd need your user account to be configured on NodeManager hosts as well.

4. Following that advice, create the aiprd1 user on the NameNode host and add it to the hadoop group there:

[root@hadoop1 ~]# useradd -G hadoop aiprd1
[root@hadoop1 ~]# id aiprd1
uid=503(aiprd1) gid=503(aiprd1) groups=503(aiprd1),502(hadoop)
[root@hadoop1 ~]# su - aiprd
[aiprd@hadoop1 ~]$ jps
15289 NameNode
15644 Jps

Note: this host runs the NameNode.

5. On the Hadoop client, run the query again as aiprd1:

[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
Found items
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/01
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/02
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/03
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/04
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/05
drwxr-xr-x - aiprd1 supergroup -- : hdfs://hadoop1:9000/test12/10

The query now succeeds.

On the client, remove aiprd1 from the hadoop group:

[aiprd1@localhost ~]$ id
uid=(aiprd1) gid=(aiprd1) groups=(aiprd1)

Run the query again:

[aiprd1@localhost ~]$ hdfs dfs -ls hdfs://hadoop1:9000/test12
Found items
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/01
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/02
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/03
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/04
drwxr-xr-x - aiprd supergroup -- : hdfs://hadoop1:9000/test12/05
drwxr-xr-x - aiprd1 supergroup -- : hdfs://hadoop1:9000/test12/10

It still works. Evidently the ACL group has nothing to do with which groups the user belongs to on the client; the group membership must be configured on the NameNode host.

The official documentation explains it as follows:

Once a username has been determined as described above, the list of groups is determined by a group mapping service, configured by the hadoop.security.group.mapping property. The default implementation, org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback, will determine if the Java Native Interface (JNI) is available. If JNI is available, the implementation will use the API within hadoop to resolve a list of groups for a user. If JNI is not available then the shell implementation, org.apache.hadoop.security.ShellBasedUnixGroupsMapping, is used. This implementation shells out with the bash -c groups command (for a Linux/Unix environment) or the net group command (for a Windows environment) to resolve a list of groups for a user.

An alternate implementation, which connects directly to an LDAP server to resolve the list of groups, is available via org.apache.hadoop.security.LdapGroupsMapping. However, this provider should only be used if the required groups reside exclusively in LDAP, and are not materialized on the Unix servers. More information on configuring the group mapping service is available in the Javadocs.

For HDFS, the mapping of users to groups is performed on the NameNode. Thus, the host system configuration of the NameNode determines the group mappings for the users.

Note that HDFS stores the user and group of a file or directory as strings; there is no conversion from user and group identity numbers as is conventional in Unix.
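The shell-based fallback described above can be approximated in a few lines. The `id -Gn` call stands in for the `bash -c groups` invocation the documentation mentions; either way, only users that exist in the local host's account database resolve, which is exactly why the lookup on the NameNode failed:

```python
import subprocess

def unix_groups(user):
    # Rough analogue of ShellBasedUnixGroupsMapping: shell out to the OS to
    # resolve a user's groups. An unknown user on THIS host yields no groups,
    # which is the "id: aiprd1: No such user" case seen in the DEBUG log.
    result = subprocess.run(["id", "-Gn", user],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return []
    return result.stdout.split()
```

On the NameNode this resolves against the NameNode's own account database, which is why creating aiprd1 there (and only there) made group-based access work.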

For HDFS, the mapping from users to groups is performed on the NameNode, so the NameNode's host configuration determines the group mappings.

Only after the experiment did this sink in. I had previously assumed the client resolved the user's groups and sent them to the NameNode for the check.

So at this point, both user- and group-based service-level ACLs can be configured; the other services can be set up the same way as needed. All this controls is which users and groups are allowed to connect.

Summary

  1. Setting hadoop.security.authorization to true enables service-level authorization with simple authentication, i.e. based on the OS user; restart the NameNode after changing it.

  2. For user-based ACLs, access is granted when the OS user of the client process matches a user configured in the service's ACL.

  3. For group-based ACLs, if the client connects as user A, then user A must be created on the NameNode host and added to the ACL group there. The check works like this: the NameNode takes the client's user name A and resolves A's groups on the NameNode host. If A does not exist there, authorization fails; if A exists but belongs to none of the ACL groups, authorization fails; if A exists and belongs to an ACL group, authorization succeeds.

  4. The groups configured in the ACL are unrelated to the groups the client program's user belongs to on the client host.

  5. After every change to hadoop-policy.xml, remember to run the refresh command.
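The group resolution described in point 3 can be sketched as follows; the group table is a hypothetical stand-in for the NameNode host's user database:

```python
def is_authorized(user, acl_users, acl_groups, namenode_group_db):
    """Simplified sketch of the service-level ACL decision.

    namenode_group_db maps user -> groups as resolved on the NameNode host;
    the client host's group memberships play no part in the decision.
    """
    if user in acl_users:
        return True
    groups = namenode_group_db.get(user, [])  # unknown user -> no groups -> deny
    return any(g in acl_groups for g in groups)

# Hypothetical NameNode-side account database, for illustration only.
NN_DB = {"aiprd": ["aiprd", "hadoop"], "aiprd1": ["aiprd1", "hadoop"]}
```

With an ACL value of `aiprd hadoop`, both aiprd (as a named user) and aiprd1 (via the hadoop group on the NameNode) are admitted, while a user unknown to the NameNode is rejected.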

One more note: parameter names and configuration can differ between versions, so consult the documentation that matches your Hadoop version.

https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html

https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Group_Mapping

Document created: 2019-08-15 17:30:24
