Hadoop: deprecated configuration property names in Hadoop 2.x and their replacements

Deprecated Properties

The following table lists the configuration property names that are deprecated in this version of Hadoop, and their replacements.

Note: based on Hadoop 2.7.4.

| Deprecated property name | New property name |
|---|---|
| create.empty.dir.if.nonexist | mapreduce.jobcontrol.createdir.ifnotexist |
| dfs.access.time.precision | dfs.namenode.accesstime.precision |
| dfs.backup.address | dfs.namenode.backup.address |
| dfs.backup.http.address | dfs.namenode.backup.http-address |
| dfs.balance.bandwidthPerSec | dfs.datanode.balance.bandwidthPerSec |
| dfs.block.size | dfs.blocksize |
| dfs.data.dir | dfs.datanode.data.dir |
| dfs.datanode.max.xcievers | dfs.datanode.max.transfer.threads |
| dfs.df.interval | fs.df.interval |
| dfs.federation.nameservice.id | dfs.nameservice.id |
| dfs.federation.nameservices | dfs.nameservices |
| dfs.http.address | dfs.namenode.http-address |
| dfs.https.address | dfs.namenode.https-address |
| dfs.https.client.keystore.resource | dfs.client.https.keystore.resource |
| dfs.https.need.client.auth | dfs.client.https.need-auth |
| dfs.max.objects | dfs.namenode.max.objects |
| dfs.max-repl-streams | dfs.namenode.replication.max-streams |
| dfs.name.dir | dfs.namenode.name.dir |
| dfs.name.dir.restore | dfs.namenode.name.dir.restore |
| dfs.name.edits.dir | dfs.namenode.edits.dir |
| dfs.permissions | dfs.permissions.enabled |
| dfs.permissions.supergroup | dfs.permissions.superusergroup |
| dfs.read.prefetch.size | dfs.client.read.prefetch.size |
| dfs.replication.considerLoad | dfs.namenode.replication.considerLoad |
| dfs.replication.interval | dfs.namenode.replication.interval |
| dfs.replication.min | dfs.namenode.replication.min |
| dfs.replication.pending.timeout.sec | dfs.namenode.replication.pending.timeout-sec |
| dfs.safemode.extension | dfs.namenode.safemode.extension |
| dfs.safemode.threshold.pct | dfs.namenode.safemode.threshold-pct |
| dfs.secondary.http.address | dfs.namenode.secondary.http-address |
| dfs.socket.timeout | dfs.client.socket-timeout |
| dfs.umaskmode | fs.permissions.umask-mode |
| dfs.write.packet.size | dfs.client-write-packet-size |
| fs.checkpoint.dir | dfs.namenode.checkpoint.dir |
| fs.checkpoint.edits.dir | dfs.namenode.checkpoint.edits.dir |
| fs.checkpoint.period | dfs.namenode.checkpoint.period |
| fs.default.name | fs.defaultFS |
| hadoop.configured.node.mapping | net.topology.configured.node.mapping |
| hadoop.job.history.location | mapreduce.jobtracker.jobhistory.location |
| hadoop.native.lib | io.native.lib.available |
| hadoop.net.static.resolutions | mapreduce.tasktracker.net.static.resolutions |
| hadoop.pipes.command-file.keep | mapreduce.pipes.commandfile.preserve |
| hadoop.pipes.executable.interpretor | mapreduce.pipes.executable.interpretor |
| hadoop.pipes.executable | mapreduce.pipes.executable |
| hadoop.pipes.java.mapper | mapreduce.pipes.isjavamapper |
| hadoop.pipes.java.recordreader | mapreduce.pipes.isjavarecordreader |
| hadoop.pipes.java.recordwriter | mapreduce.pipes.isjavarecordwriter |
| hadoop.pipes.java.reducer | mapreduce.pipes.isjavareducer |
| hadoop.pipes.partitioner | mapreduce.pipes.partitioner |
| heartbeat.recheck.interval | dfs.namenode.heartbeat.recheck-interval |
| io.bytes.per.checksum | dfs.bytes-per-checksum |
| io.sort.factor | mapreduce.task.io.sort.factor |
| io.sort.mb | mapreduce.task.io.sort.mb |
| io.sort.spill.percent | mapreduce.map.sort.spill.percent |
| jobclient.completion.poll.interval | mapreduce.client.completion.pollinterval |
| jobclient.output.filter | mapreduce.client.output.filter |
| jobclient.progress.monitor.poll.interval | mapreduce.client.progressmonitor.pollinterval |
| job.end.notification.url | mapreduce.job.end-notification.url |
| job.end.retry.attempts | mapreduce.job.end-notification.retry.attempts |
| job.end.retry.interval | mapreduce.job.end-notification.retry.interval |
| job.local.dir | mapreduce.job.local.dir |
| keep.failed.task.files | mapreduce.task.files.preserve.failedtasks |
| keep.task.files.pattern | mapreduce.task.files.preserve.filepattern |
| key.value.separator.in.input.line | mapreduce.input.keyvaluelinerecordreader.key.value.separator |
| local.cache.size | mapreduce.tasktracker.cache.local.size |
| map.input.file | mapreduce.map.input.file |
| map.input.length | mapreduce.map.input.length |
| map.input.start | mapreduce.map.input.start |
| map.output.key.field.separator | mapreduce.map.output.key.field.separator |
| map.output.key.value.fields.spec | mapreduce.fieldsel.map.output.key.value.fields.spec |
| mapred.acls.enabled | mapreduce.cluster.acls.enabled |
| mapred.binary.partitioner.left.offset | mapreduce.partition.binarypartitioner.left.offset |
| mapred.binary.partitioner.right.offset | mapreduce.partition.binarypartitioner.right.offset |
| mapred.cache.archives | mapreduce.job.cache.archives |
| mapred.cache.archives.timestamps | mapreduce.job.cache.archives.timestamps |
| mapred.cache.files | mapreduce.job.cache.files |
| mapred.cache.files.timestamps | mapreduce.job.cache.files.timestamps |
| mapred.cache.localArchives | mapreduce.job.cache.local.archives |
| mapred.cache.localFiles | mapreduce.job.cache.local.files |
| mapred.child.tmp | mapreduce.task.tmp.dir |
| mapred.cluster.average.blacklist.threshold | mapreduce.jobtracker.blacklist.average.threshold |
| mapred.cluster.map.memory.mb | mapreduce.cluster.mapmemory.mb |
| mapred.cluster.max.map.memory.mb | mapreduce.jobtracker.maxmapmemory.mb |
| mapred.cluster.max.reduce.memory.mb | mapreduce.jobtracker.maxreducememory.mb |
| mapred.cluster.reduce.memory.mb | mapreduce.cluster.reducememory.mb |
| mapred.committer.job.setup.cleanup.needed | mapreduce.job.committer.setup.cleanup.needed |
| mapred.compress.map.output | mapreduce.map.output.compress |
| mapred.data.field.separator | mapreduce.fieldsel.data.field.separator |
| mapred.debug.out.lines | mapreduce.task.debugout.lines |
| mapred.healthChecker.interval | mapreduce.tasktracker.healthchecker.interval |
| mapred.healthChecker.script.args | mapreduce.tasktracker.healthchecker.script.args |
| mapred.healthChecker.script.path | mapreduce.tasktracker.healthchecker.script.path |
| mapred.healthChecker.script.timeout | mapreduce.tasktracker.healthchecker.script.timeout |
| mapred.heartbeats.in.second | mapreduce.jobtracker.heartbeats.in.second |
| mapred.hosts.exclude | mapreduce.jobtracker.hosts.exclude.filename |
| mapred.hosts | mapreduce.jobtracker.hosts.filename |
| mapred.inmem.merge.threshold | mapreduce.reduce.merge.inmem.threshold |
| mapred.input.dir.formats | mapreduce.input.multipleinputs.dir.formats |
| mapred.input.dir.mappers | mapreduce.input.multipleinputs.dir.mappers |
| mapred.input.dir | mapreduce.input.fileinputformat.inputdir |
| mapred.input.pathFilter.class | mapreduce.input.pathFilter.class |
| mapred.jar | mapreduce.job.jar |
| mapred.job.classpath.archives | mapreduce.job.classpath.archives |
| mapred.job.classpath.files | mapreduce.job.classpath.files |
| mapred.job.id | mapreduce.job.id |
| mapred.jobinit.threads | mapreduce.jobtracker.jobinit.threads |
| mapred.job.map.memory.mb | mapreduce.map.memory.mb |
| mapred.job.name | mapreduce.job.name |
| mapred.job.priority | mapreduce.job.priority |
| mapred.job.queue.name | mapreduce.job.queuename |
| mapred.job.reduce.input.buffer.percent | mapreduce.reduce.input.buffer.percent |
| mapred.job.reduce.markreset.buffer.percent | mapreduce.reduce.markreset.buffer.percent |
| mapred.job.reduce.memory.mb | mapreduce.reduce.memory.mb |
| mapred.job.reduce.total.mem.bytes | mapreduce.reduce.memory.totalbytes |
| mapred.job.reuse.jvm.num.tasks | mapreduce.job.jvm.numtasks |
| mapred.job.shuffle.input.buffer.percent | mapreduce.reduce.shuffle.input.buffer.percent |
| mapred.job.shuffle.merge.percent | mapreduce.reduce.shuffle.merge.percent |
| mapred.job.tracker.handler.count | mapreduce.jobtracker.handler.count |
| mapred.job.tracker.history.completed.location | mapreduce.jobtracker.jobhistory.completed.location |
| mapred.job.tracker.http.address | mapreduce.jobtracker.http.address |
| mapred.jobtracker.instrumentation | mapreduce.jobtracker.instrumentation |
| mapred.jobtracker.job.history.block.size | mapreduce.jobtracker.jobhistory.block.size |
| mapred.job.tracker.jobhistory.lru.cache.size | mapreduce.jobtracker.jobhistory.lru.cache.size |
| mapred.job.tracker | mapreduce.jobtracker.address |
| mapred.jobtracker.maxtasks.per.job | mapreduce.jobtracker.maxtasks.perjob |
| mapred.job.tracker.persist.jobstatus.active | mapreduce.jobtracker.persist.jobstatus.active |
| mapred.job.tracker.persist.jobstatus.dir | mapreduce.jobtracker.persist.jobstatus.dir |
| mapred.job.tracker.persist.jobstatus.hours | mapreduce.jobtracker.persist.jobstatus.hours |
| mapred.jobtracker.restart.recover | mapreduce.jobtracker.restart.recover |
| mapred.job.tracker.retiredjobs.cache.size | mapreduce.jobtracker.retiredjobs.cache.size |
| mapred.job.tracker.retire.jobs | mapreduce.jobtracker.retirejobs |
| mapred.jobtracker.taskalloc.capacitypad | mapreduce.jobtracker.taskscheduler.taskalloc.capacitypad |
| mapred.jobtracker.taskScheduler | mapreduce.jobtracker.taskscheduler |
| mapred.jobtracker.taskScheduler.maxRunningTasksPerJob | mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob |
| mapred.join.expr | mapreduce.join.expr |
| mapred.join.keycomparator | mapreduce.join.keycomparator |
| mapred.lazy.output.format | mapreduce.output.lazyoutputformat.outputformat |
| mapred.line.input.format.linespermap | mapreduce.input.lineinputformat.linespermap |
| mapred.linerecordreader.maxlength | mapreduce.input.linerecordreader.line.maxlength |
| mapred.local.dir | mapreduce.cluster.local.dir |
| mapred.local.dir.minspacekill | mapreduce.tasktracker.local.dir.minspacekill |
| mapred.local.dir.minspacestart | mapreduce.tasktracker.local.dir.minspacestart |
| mapred.map.child.env | mapreduce.map.env |
| mapred.map.child.java.opts | mapreduce.map.java.opts |
| mapred.map.child.log.level | mapreduce.map.log.level |
| mapred.map.max.attempts | mapreduce.map.maxattempts |
| mapred.map.output.compression.codec | mapreduce.map.output.compress.codec |
| mapred.mapoutput.key.class | mapreduce.map.output.key.class |
| mapred.mapoutput.value.class | mapreduce.map.output.value.class |
| mapred.mapper.regex.group | mapreduce.mapper.regexmapper..group |
| mapred.mapper.regex | mapreduce.mapper.regex |
| mapred.map.task.debug.script | mapreduce.map.debug.script |
| mapred.map.tasks | mapreduce.job.maps |
| mapred.map.tasks.speculative.execution | mapreduce.map.speculative |
| mapred.max.map.failures.percent | mapreduce.map.failures.maxpercent |
| mapred.max.reduce.failures.percent | mapreduce.reduce.failures.maxpercent |
| mapred.max.split.size | mapreduce.input.fileinputformat.split.maxsize |
| mapred.max.tracker.blacklists | mapreduce.jobtracker.tasktracker.maxblacklists |
| mapred.max.tracker.failures | mapreduce.job.maxtaskfailures.per.tracker |
| mapred.merge.recordsBeforeProgress | mapreduce.task.merge.progress.records |
| mapred.min.split.size | mapreduce.input.fileinputformat.split.minsize |
| mapred.min.split.size.per.node | mapreduce.input.fileinputformat.split.minsize.per.node |
| mapred.min.split.size.per.rack | mapreduce.input.fileinputformat.split.minsize.per.rack |
| mapred.output.compression.codec | mapreduce.output.fileoutputformat.compress.codec |
| mapred.output.compression.type | mapreduce.output.fileoutputformat.compress.type |
| mapred.output.compress | mapreduce.output.fileoutputformat.compress |
| mapred.output.dir | mapreduce.output.fileoutputformat.outputdir |
| mapred.output.key.class | mapreduce.job.output.key.class |
| mapred.output.key.comparator.class | mapreduce.job.output.key.comparator.class |
| mapred.output.value.class | mapreduce.job.output.value.class |
| mapred.output.value.groupfn.class | mapreduce.job.output.group.comparator.class |
| mapred.permissions.supergroup | mapreduce.cluster.permissions.supergroup |
| mapred.pipes.user.inputformat | mapreduce.pipes.inputformat |
| mapred.reduce.child.env | mapreduce.reduce.env |
| mapred.reduce.child.java.opts | mapreduce.reduce.java.opts |
| mapred.reduce.child.log.level | mapreduce.reduce.log.level |
| mapred.reduce.max.attempts | mapreduce.reduce.maxattempts |
| mapred.reduce.parallel.copies | mapreduce.reduce.shuffle.parallelcopies |
| mapred.reduce.slowstart.completed.maps | mapreduce.job.reduce.slowstart.completedmaps |
| mapred.reduce.task.debug.script | mapreduce.reduce.debug.script |
| mapred.reduce.tasks | mapreduce.job.reduces |
| mapred.reduce.tasks.speculative.execution | mapreduce.reduce.speculative |
| mapred.seqbinary.output.key.class | mapreduce.output.seqbinaryoutputformat.key.class |
| mapred.seqbinary.output.value.class | mapreduce.output.seqbinaryoutputformat.value.class |
| mapred.shuffle.connect.timeout | mapreduce.reduce.shuffle.connect.timeout |
| mapred.shuffle.read.timeout | mapreduce.reduce.shuffle.read.timeout |
| mapred.skip.attempts.to.start.skipping | mapreduce.task.skip.start.attempts |
| mapred.skip.map.auto.incr.proc.count | mapreduce.map.skip.proc-count.auto-incr |
| mapred.skip.map.max.skip.records | mapreduce.map.skip.maxrecords |
| mapred.skip.on | mapreduce.job.skiprecords |
| mapred.skip.out.dir | mapreduce.job.skip.outdir |
| mapred.skip.reduce.auto.incr.proc.count | mapreduce.reduce.skip.proc-count.auto-incr |
| mapred.skip.reduce.max.skip.groups | mapreduce.reduce.skip.maxgroups |
| mapred.speculative.execution.slowNodeThreshold | mapreduce.job.speculative.slownodethreshold |
| mapred.speculative.execution.slowTaskThreshold | mapreduce.job.speculative.slowtaskthreshold |
| mapred.speculative.execution.speculativeCap | mapreduce.job.speculative.speculativecap |
| mapred.submit.replication | mapreduce.client.submit.file.replication |
| mapred.system.dir | mapreduce.jobtracker.system.dir |
| mapred.task.cache.levels | mapreduce.jobtracker.taskcache.levels |
| mapred.task.id | mapreduce.task.attempt.id |
| mapred.task.is.map | mapreduce.task.ismap |
| mapred.task.partition | mapreduce.task.partition |
| mapred.task.profile | mapreduce.task.profile |
| mapred.task.profile.maps | mapreduce.task.profile.maps |
| mapred.task.profile.params | mapreduce.task.profile.params |
| mapred.task.profile.reduces | mapreduce.task.profile.reduces |
| mapred.task.timeout | mapreduce.task.timeout |
| mapred.tasktracker.dns.interface | mapreduce.tasktracker.dns.interface |
| mapred.tasktracker.dns.nameserver | mapreduce.tasktracker.dns.nameserver |
| mapred.tasktracker.events.batchsize | mapreduce.tasktracker.events.batchsize |
| mapred.tasktracker.expiry.interval | mapreduce.jobtracker.expire.trackers.interval |
| mapred.task.tracker.http.address | mapreduce.tasktracker.http.address |
| mapred.tasktracker.indexcache.mb | mapreduce.tasktracker.indexcache.mb |
| mapred.tasktracker.instrumentation | mapreduce.tasktracker.instrumentation |
| mapred.tasktracker.map.tasks.maximum | mapreduce.tasktracker.map.tasks.maximum |
| mapred.tasktracker.memory_calculator_plugin | mapreduce.tasktracker.resourcecalculatorplugin |
| mapred.tasktracker.memorycalculatorplugin | mapreduce.tasktracker.resourcecalculatorplugin |
| mapred.tasktracker.reduce.tasks.maximum | mapreduce.tasktracker.reduce.tasks.maximum |
| mapred.task.tracker.report.address | mapreduce.tasktracker.report.address |
| mapred.task.tracker.task-controller | mapreduce.tasktracker.taskcontroller |
| mapred.tasktracker.taskmemorymanager.monitoring-interval | mapreduce.tasktracker.taskmemorymanager.monitoringinterval |
| mapred.tasktracker.tasks.sleeptime-before-sigkill | mapreduce.tasktracker.tasks.sleeptimebeforesigkill |
| mapred.temp.dir | mapreduce.cluster.temp.dir |
| mapred.text.key.comparator.options | mapreduce.partition.keycomparator.options |
| mapred.text.key.partitioner.options | mapreduce.partition.keypartitioner.options |
| mapred.textoutputformat.separator | mapreduce.output.textoutputformat.separator |
| mapred.tip.id | mapreduce.task.id |
| mapreduce.combine.class | mapreduce.job.combine.class |
| mapreduce.inputformat.class | mapreduce.job.inputformat.class |
| mapreduce.job.counters.limit | mapreduce.job.counters.max |
| mapreduce.jobtracker.permissions.supergroup | mapreduce.cluster.permissions.supergroup |
| mapreduce.map.class | mapreduce.job.map.class |
| mapreduce.outputformat.class | mapreduce.job.outputformat.class |
| mapreduce.partitioner.class | mapreduce.job.partitioner.class |
| mapreduce.reduce.class | mapreduce.job.reduce.class |
| mapred.used.genericoptionsparser | mapreduce.client.genericoptionsparser.used |
| mapred.userlog.limit.kb | mapreduce.task.userlog.limit.kb |
| mapred.userlog.retain.hours | mapreduce.job.userlog.retain.hours |
| mapred.working.dir | mapreduce.job.working.dir |
| mapred.work.output.dir | mapreduce.task.output.dir |
| min.num.spills.for.combine | mapreduce.map.combine.minspills |
| reduce.output.key.value.fields.spec | mapreduce.fieldsel.reduce.output.key.value.fields.spec |
| security.job.submission.protocol.acl | security.job.client.protocol.acl |
| security.task.umbilical.protocol.acl | security.job.task.protocol.acl |
| sequencefile.filter.class | mapreduce.input.sequencefileinputfilter.class |
| sequencefile.filter.frequency | mapreduce.input.sequencefileinputfilter.frequency |
| sequencefile.filter.regex | mapreduce.input.sequencefileinputfilter.regex |
| session.id | dfs.metrics.session-id |
| slave.host.name | dfs.datanode.hostname |
| slave.host.name | mapreduce.tasktracker.host.name |
| tasktracker.contention.tracking | mapreduce.tasktracker.contention.tracking |
| tasktracker.http.threads | mapreduce.tasktracker.http.threads |
| topology.node.switch.mapping.impl | net.topology.node.switch.mapping.impl |
| topology.script.file.name | net.topology.script.file.name |
| topology.script.number.args | net.topology.script.number.args |
| user.name | mapreduce.job.user.name |
| webinterface.private.actions | mapreduce.jobtracker.webinterface.trusted |
| yarn.app.mapreduce.yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts | yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts |
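To illustrate how the renames above are applied in practice, here is a minimal configuration fragment using the new names in place of the deprecated `fs.default.name` and `dfs.block.size` (the host, port, and block-size values are hypothetical examples, not recommendations):

```xml
<!-- core-site.xml: fs.defaultFS replaces the deprecated fs.default.name -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value> <!-- hypothetical NameNode address -->
  </property>
</configuration>

<!-- hdfs-site.xml: dfs.blocksize replaces the deprecated dfs.block.size -->
<configuration>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value> <!-- 128 MB, the 2.x default -->
  </property>
</configuration>
```

Hadoop 2.x still accepts the deprecated names and logs a deprecation warning, but configuration files should be migrated to the new names.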
The following table lists additional changes to some configuration properties:
| Deprecated property name | New property name |
|---|---|
| mapred.create.symlink | NONE - symlinking is always on |
| mapreduce.job.cache.symlink.create | NONE - symlinking is always on |
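Because the renames in the tables above are one-to-one, they can be applied mechanically to an existing set of properties. The following is a minimal Python sketch under the assumption that properties are held in a plain dict; the mapping dictionary here is only a five-entry excerpt of the full table and would need to be extended for real use:

```python
# Excerpt of the deprecation table above; extend with the remaining rows as needed.
DEPRECATED_TO_NEW = {
    "fs.default.name": "fs.defaultFS",
    "dfs.block.size": "dfs.blocksize",
    "mapred.map.tasks": "mapreduce.job.maps",
    "mapred.reduce.tasks": "mapreduce.job.reduces",
    "mapred.job.queue.name": "mapreduce.job.queuename",
}

def modernize(props):
    """Return a copy of props with deprecated keys renamed to their new names.

    If both the deprecated and the new key are present, the value set under
    the new key wins, regardless of iteration order.
    """
    out = {}
    for key, value in props.items():
        new_key = DEPRECATED_TO_NEW.get(key, key)
        # Do not let a deprecated key overwrite an explicitly set new-style key.
        if new_key not in out or key == new_key:
            out[new_key] = value
    return out
```

For example, `modernize({"fs.default.name": "hdfs://nn:8020", "mapred.reduce.tasks": "4"})` yields `{"fs.defaultFS": "hdfs://nn:8020", "mapreduce.job.reduces": "4"}`.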