Java Web Crawler: HttpClient
Overview: HttpClient began as a subproject of Apache Jakarta Commons and is now part of Apache HttpComponents. It is an efficient, feature-rich client-side toolkit for programming against the HTTP protocol. Its main features:
- Implements all HTTP methods: GET, POST, PUT, HEAD, etc.
- Automatic redirect handling
- HTTPS support
- Proxy server support
For details on the HTTP request methods, see this well-organized blog post:
https://www.cnblogs.com/williamjie/p/9099940.html
I. Environment Setup
1. JDK 1.8
2. IntelliJ IDEA
3. IDEA's bundled Maven
Create a Maven project named itcast-crawler-first and add the following dependencies to pom.xml:
<dependencies>
    <!-- HttpClient -->
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.5.3</version>
    </dependency>
    <!-- Logging -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.25</version>
    </dependency>
</dependencies>
The logging configuration file (log4j.properties):
log4j.rootLogger=DEBUG,A1
log4j.logger.cn.itcast=DEBUG
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c]-[%p] %m%n
log4j can write logs to a file or print them to the console, and it lets you configure the output format and how log files are generated (append vs. overwrite, maximum file size, and so on). Here the logs are simply printed to the console via org.apache.log4j.ConsoleAppender.
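As a sketch of the file-based option mentioned above, a rolling file appender could be added roughly like this (the appender name A2, the file path, and the size limits are illustrative assumptions, not part of the original setup):
# Hypothetical second appender that writes to a rolling file
log4j.appender.A2=org.apache.log4j.RollingFileAppender
log4j.appender.A2.File=logs/crawler.log
log4j.appender.A2.Append=true
log4j.appender.A2.MaxFileSize=10MB
log4j.appender.A2.MaxBackupIndex=5
log4j.appender.A2.layout=org.apache.log4j.PatternLayout
log4j.appender.A2.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c]-[%p] %m%n
# Register it alongside the console appender: log4j.rootLogger=DEBUG,A1,A2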
II. Writing the Code
Let's write the simplest possible crawler, which fetches the Itcast homepage: http://www.itcast.cn/
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class CrawlerFirst {
    public static void main(String[] args) throws Exception {
        // 1. "Open the browser": create the HttpClient object
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // 2. "Enter the URL": create an HttpGet object for the request
        HttpGet httpGet = new HttpGet("http://www.itcast.cn");
        // 3. "Press Enter": send the request and receive the response
        CloseableHttpResponse response = httpClient.execute(httpGet);
        // 4. Parse the response and extract the data
        // Check whether the status code is 200
        if (response.getStatusLine().getStatusCode() == 200) {
            HttpEntity httpEntity = response.getEntity();
            String content = EntityUtils.toString(httpEntity, "utf8");
            System.out.println(content);
        }
    }
}
III. GET Request
package cn.itcast.crawler.test;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

import java.io.IOException;

public class HttpGetTest {
    public static void main(String[] args) {
        // Create the HttpClient object
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // Create the HttpGet object and set the URL to request
        HttpGet httpGet = new HttpGet("http://www.itcast.cn");
        CloseableHttpResponse response = null;
        try {
            // Send the request with HttpClient and obtain the response
            response = httpClient.execute(httpGet);
            // Parse the response
            if (response.getStatusLine().getStatusCode() == 200) {
                String content = EntityUtils.toString(response.getEntity(), "utf8");
                System.out.println(content.length());
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            // Close the response (guard against a null response if execute() threw)
            if (response != null) {
                try {
                    response.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            try {
                httpClient.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
IV. GET Request with Parameters
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

import java.io.IOException;

public class HttpGetParamTest {
    public static void main(String[] args) throws Exception {
        // Create the HttpClient object
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // Target URL: http://yun.itheima.com/search?keys=Java
        // Create a URIBuilder
        URIBuilder uriBuilder = new URIBuilder("http://yun.itheima.com/search");
        // Set the query parameter
        uriBuilder.setParameter("keys", "Java");
        // Create the HttpGet object with the built URI
        HttpGet httpGet = new HttpGet(uriBuilder.build());
        System.out.println("Request: " + httpGet);
        CloseableHttpResponse response = null;
        try {
            // Send the request with HttpClient and obtain the response
            response = httpClient.execute(httpGet);
            // Parse the response
            if (response.getStatusLine().getStatusCode() == 200) {
                String content = EntityUtils.toString(response.getEntity(), "utf8");
                System.out.println(content.length());
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            // Close the response (guard against a null response if execute() threw)
            if (response != null) {
                try {
                    response.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            try {
                httpClient.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
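When a request needs several query parameters, the URIBuilder calls can be chained, since setParameter returns the builder itself. A minimal sketch (the "page" parameter is a made-up illustration, not part of the itheima search API):
URIBuilder uriBuilder = new URIBuilder("http://yun.itheima.com/search");
uriBuilder.setParameter("keys", "Java")
          .setParameter("page", "2");  // hypothetical extra parameter
HttpGet httpGet = new HttpGet(uriBuilder.build());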
V. POST Request
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

import java.io.IOException;

public class HttpPostTest {
    public static void main(String[] args) {
        // Create the HttpClient object
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // Create the HttpPost object and set the URL to request
        HttpPost httpPost = new HttpPost("http://www.itcast.cn");
        CloseableHttpResponse response = null;
        try {
            // Send the request with HttpClient and obtain the response
            response = httpClient.execute(httpPost);
            // Parse the response
            if (response.getStatusLine().getStatusCode() == 200) {
                String content = EntityUtils.toString(response.getEntity(), "utf8");
                System.out.println(content.length());
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            // Close the response (guard against a null response if execute() threw)
            if (response != null) {
                try {
                    response.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            try {
                httpClient.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
VI. POST Request with Parameters
import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.util.EntityUtils;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class HttpPostParamTest {
    public static void main(String[] args) throws Exception {
        // Create the HttpClient object
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // Create the HttpPost object and set the URL to request
        HttpPost httpPost = new HttpPost("http://yun.itheima.com/search");
        // Declare a List to hold the form parameters
        List<NameValuePair> params = new ArrayList<NameValuePair>();
        // Equivalent of requesting http://yun.itheima.com/search?keys=Java,
        // except the parameter travels in the request body
        params.add(new BasicNameValuePair("keys", "Java"));
        // Create the form Entity: the first argument is the form data, the second is the encoding
        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(params, "utf8");
        // Attach the form Entity to the POST request
        httpPost.setEntity(formEntity);
        CloseableHttpResponse response = null;
        try {
            // Send the request with HttpClient and obtain the response
            response = httpClient.execute(httpPost);
            // Parse the response
            if (response.getStatusLine().getStatusCode() == 200) {
                String content = EntityUtils.toString(response.getEntity(), "utf8");
                System.out.println(content.length());
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            // Close the response (guard against a null response if execute() threw)
            if (response != null) {
                try {
                    response.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            try {
                httpClient.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
VII. HTTP Connection Pool
If a new HttpClient is created for every request, the constant creation and destruction becomes costly; a connection pool solves this problem.
Run the following code and set a breakpoint to confirm that a different HttpClient instance is obtained each time, while the underlying connections are reused from the pool.
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

import java.io.IOException;

public class HttpClientPoolTest {
    public static void main(String[] args) {
        // Create the pooling connection manager
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        // Set the maximum total number of connections
        cm.setMaxTotal(100);
        // Set the maximum number of connections per route (host)
        cm.setDefaultMaxPerRoute(10);
        // Send requests through the connection manager
        doGet(cm);
        doGet(cm);
    }

    private static void doGet(PoolingHttpClientConnectionManager cm) {
        // Instead of creating a new HttpClient each time, obtain one backed by the pool
        CloseableHttpClient httpClient = HttpClients.custom().setConnectionManager(cm).build();
        HttpGet httpGet = new HttpGet("http://www.itcast.cn");
        CloseableHttpResponse response = null;
        try {
            response = httpClient.execute(httpGet);
            if (response.getStatusLine().getStatusCode() == 200) {
                String content = EntityUtils.toString(response.getEntity(), "utf8");
                System.out.println(content.length());
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (response != null) {
                try {
                    response.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            // Do not close the HttpClient here; its connections belong to the pool
            // httpClient.close();
        }
    }
}
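A follow-up note: because the pooled clients are never closed, it is the connection manager itself that should be released when the application shuts down. A minimal sketch, assuming the cm manager from the example above:
// Release all pooled connections on application shutdown.
// PoolingHttpClientConnectionManager implements Closeable; close() shuts the pool down.
cm.close();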
VIII. Configuring Connections with RequestConfig
When building a crawler you usually need to configure quite a few settings: the connection-request timeout (how long to wait for a connection from the pool), the connect timeout (how long to wait to establish a connection), the socket timeout (how long to wait for data), a proxy, whether redirects are allowed, and so on.
In HttpClient, these settings are made through Builder, an inner class of RequestConfig.
The Builder source is reproduced below (it is long, so it was collapsed in the original post).
public static class Builder {
private boolean expectContinueEnabled;
private HttpHost proxy;
private InetAddress localAddress;
private boolean staleConnectionCheckEnabled;
private String cookieSpec;
private boolean redirectsEnabled;
private boolean relativeRedirectsAllowed;
private boolean circularRedirectsAllowed;
private int maxRedirects;
private boolean authenticationEnabled;
private Collection<String> targetPreferredAuthSchemes;
private Collection<String> proxyPreferredAuthSchemes;
private int connectionRequestTimeout;
private int connectTimeout;
private int socketTimeout;
private boolean contentCompressionEnabled;
Builder() {
super();
this.staleConnectionCheckEnabled = false;
this.redirectsEnabled = true;
this.maxRedirects = 50;
this.relativeRedirectsAllowed = true;
this.authenticationEnabled = true;
this.connectionRequestTimeout = -1;
this.connectTimeout = -1;
this.socketTimeout = -1;
this.contentCompressionEnabled = true;
}
public Builder setExpectContinueEnabled(final boolean expectContinueEnabled) {
this.expectContinueEnabled = expectContinueEnabled;
return this;
}
public Builder setProxy(final HttpHost proxy) {
this.proxy = proxy;
return this;
}
public Builder setLocalAddress(final InetAddress localAddress) {
this.localAddress = localAddress;
return this;
}
/**
* @deprecated (4.4) Use {@link
* org.apache.http.impl.conn.PoolingHttpClientConnectionManager#setValidateAfterInactivity(int)}
*/
@Deprecated
public Builder setStaleConnectionCheckEnabled(final boolean staleConnectionCheckEnabled) {
this.staleConnectionCheckEnabled = staleConnectionCheckEnabled;
return this;
}
public Builder setCookieSpec(final String cookieSpec) {
this.cookieSpec = cookieSpec;
return this;
}
public Builder setRedirectsEnabled(final boolean redirectsEnabled) {
this.redirectsEnabled = redirectsEnabled;
return this;
}
public Builder setRelativeRedirectsAllowed(final boolean relativeRedirectsAllowed) {
this.relativeRedirectsAllowed = relativeRedirectsAllowed;
return this;
}
public Builder setCircularRedirectsAllowed(final boolean circularRedirectsAllowed) {
this.circularRedirectsAllowed = circularRedirectsAllowed;
return this;
}
public Builder setMaxRedirects(final int maxRedirects) {
this.maxRedirects = maxRedirects;
return this;
}
public Builder setAuthenticationEnabled(final boolean authenticationEnabled) {
this.authenticationEnabled = authenticationEnabled;
return this;
}
public Builder setTargetPreferredAuthSchemes(final Collection<String> targetPreferredAuthSchemes) {
this.targetPreferredAuthSchemes = targetPreferredAuthSchemes;
return this;
}
public Builder setProxyPreferredAuthSchemes(final Collection<String> proxyPreferredAuthSchemes) {
this.proxyPreferredAuthSchemes = proxyPreferredAuthSchemes;
return this;
}
public Builder setConnectionRequestTimeout(final int connectionRequestTimeout) {
this.connectionRequestTimeout = connectionRequestTimeout;
return this;
}
public Builder setConnectTimeout(final int connectTimeout) {
this.connectTimeout = connectTimeout;
return this;
}
public Builder setSocketTimeout(final int socketTimeout) {
this.socketTimeout = socketTimeout;
return this;
}
/**
* @deprecated (4.5) Set {@link #setContentCompressionEnabled(boolean)} to {@code false} and
* add the {@code Accept-Encoding} request header.
*/
@Deprecated
public Builder setDecompressionEnabled(final boolean decompressionEnabled) {
this.contentCompressionEnabled = decompressionEnabled;
return this;
}
public Builder setContentCompressionEnabled(final boolean contentCompressionEnabled) {
this.contentCompressionEnabled = contentCompressionEnabled;
return this;
}
public RequestConfig build() {
return new RequestConfig(
expectContinueEnabled,
proxy,
localAddress,
staleConnectionCheckEnabled,
cookieSpec,
redirectsEnabled,
relativeRedirectsAllowed,
circularRedirectsAllowed,
maxRedirects,
authenticationEnabled,
targetPreferredAuthSchemes,
proxyPreferredAuthSchemes,
connectionRequestTimeout,
connectTimeout,
socketTimeout,
contentCompressionEnabled);
}
}
Timeout configuration
HttpClient supports three timeouts: connectionRequestTimeout (waiting for a connection from the pool), connectTimeout (establishing the connection), and socketTimeout (waiting for data). A sample configuration using RequestConfig:
// Set all three timeouts to 10 seconds
RequestConfig requestConfig = RequestConfig.custom()
        .setSocketTimeout(10000)
        .setConnectTimeout(10000)
        .setConnectionRequestTimeout(10000)
        .build();
// Apply the configuration to the HttpClient
HttpClient httpClient = HttpClients.custom()
        .setDefaultRequestConfig(requestConfig)
        .build();
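The config above becomes the client-wide default. A RequestConfig can also be attached to a single request via setConfig, overriding the default for just that call. A minimal sketch (the 30-second value is illustrative):
// Per-request override: only this request gets the longer socket timeout
HttpGet slowGet = new HttpGet("http://www.itcast.cn");
slowGet.setConfig(RequestConfig.custom()
        .setSocketTimeout(30000)
        .build());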
Proxy configuration
// Add a proxy
RequestConfig defaultRequestConfig = RequestConfig.custom()
        .setProxy(new HttpHost("171.97.67.160", 3128, null))
        .build();
// Apply the configuration to the HttpClient
HttpClient httpClient = HttpClients.custom()
        .setDefaultRequestConfig(defaultRequestConfig)
        .build();
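The automatic redirect support mentioned in the introduction is driven by the same Builder, using the setters visible in the source above. A minimal sketch (the cap of 10 is illustrative):
// Redirects are enabled by default with maxRedirects = 50 (see the Builder constructor).
// A crawler that wants to record 301/302 responses itself can disable them:
RequestConfig noRedirects = RequestConfig.custom()
        .setRedirectsEnabled(false)
        .build();
// Or keep them enabled but cap the redirect chain:
RequestConfig cappedRedirects = RequestConfig.custom()
        .setMaxRedirects(10)
        .build();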