Introduction to Jsoup
Jsoup is a Java HTML parser, analogous to an XML parser for XML. It is designed to handle real-world HTML.
What can you do with Jsoup?
●Scrape and parse HTML from a URL, a file, or a string
●Find and extract data, using DOM traversal or CSS selectors
●Manipulate HTML elements, attributes, and text
●Clean user-submitted content against a safe whitelist to prevent XSS attacks
●Output tidy HTML
Installation
<dependency>
<groupId>org.jsoup</groupId>
<artifactId>jsoup</artifactId>
<version>1.12.1</version>
</dependency>
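On its own, Jsoup already covers static pages. A minimal sketch (the HTML string here is made up for illustration; parsing a fetched page body works the same way):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupQuickStart {
    public static void main(String[] args) {
        // Parse an HTML string into a Document
        String html = "<html><body><p class='intro'>Hello</p>"
                + "<img src='a.png'><img src='b.jpg'></body></html>";
        Document doc = Jsoup.parse(html);

        // Extract data with CSS selectors
        Element p = doc.selectFirst("p.intro");
        System.out.println(p.text());                             // Hello
        System.out.println(doc.select("img[src$=.png]").size());  // 1
    }
}
```

The `img[src$=.png]` selector (attribute value ends with `.png`) is the same one used to pick out images later in this article.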
Introduction to HtmlUnit
HtmlUnit is an open-source Java page-analysis tool. Its advantage is that it gives you the page as it looks after JavaScript has executed, which makes it suitable for scraping dynamic pages.
Installation
<dependency>
<groupId>net.sourceforge.htmlunit</groupId>
<artifactId>htmlunit</artifactId>
<version>2.35.0</version>
</dependency>
Using Jsoup + HtmlUnit
public String getHtmlPageResponse(String url) throws Exception {
    // Request timeout, default 20 seconds
    int timeout = 20000;
    // Time to wait for asynchronous JS to finish, default 20 seconds
    int waitForBackgroundJavaScript = 20000;
    final WebClient webClient = new WebClient(BrowserVersion.CHROME);
    webClient.getOptions().setThrowExceptionOnScriptError(false); // don't throw when JS errors
    webClient.getOptions().setThrowExceptionOnFailingStatusCode(false); // don't throw on non-200 status
    webClient.getOptions().setActiveXNative(false);
    webClient.getOptions().setCssEnabled(false); // CSS not needed for scraping
    webClient.getOptions().setJavaScriptEnabled(true); // important: enable JS
    webClient.setAjaxController(new NicelyResynchronizingAjaxController()); // important: support AJAX
    webClient.getOptions().setTimeout(timeout); // the "browser's" request timeout
    webClient.setJavaScriptTimeout(timeout); // JS execution timeout
    HtmlPage page;
    try {
        page = webClient.getPage(url);
    } catch (Exception e) {
        webClient.close();
        throw e;
    }
    webClient.waitForBackgroundJavaScript(waitForBackgroundJavaScript); // blocks the current thread
    String result = page.asXml();
    webClient.close();
    return result;
}
Downloading the images we want from a page
public void getHtmlContent(String url) {
    // Fetch the JS-rendered page
    String content = null;
    try {
        content = getHtmlPageResponse(url);
    } catch (Exception e) {
        e.printStackTrace();
    }
    // Parse the page into a Document
    Document doc = Jsoup.parse(content);
    // Select the <img> elements whose src ends with .png
    Elements elements = doc.select("img[src$=.png]");
    List<String> picList = new ArrayList<>();
    for (Element element : elements) {
        // Read the src attribute of each <img>
        String imageUrl = element.attr("src");
        // Keep only the images we want (WEATHER_PIC_BASE_URL is a class field)
        if (imageUrl.contains(WEATHER_PIC_BASE_URL)) {
            picList.add(imageUrl);
        }
    }
    // List the file names already in the save folder (savePath is a class field)
    List<String> fileNameList = getFileList(savePath);
    for (String imgUrl : picList) {
        String fileName = getFileNameWihtUrl(imgUrl);
        // Skip images we already have
        boolean exists = fileNameList.contains(fileName);
        if (!exists) {
            // download the image
        }
    }
}
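The snippet above calls two helpers, getFileList and getFileNameWihtUrl, that are not shown. A minimal stdlib-only sketch of what they might look like (the method names follow the original, typo included; deriving the name by stripping the extension is an assumption):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class FileHelpers {

    // List the names of all regular files under savePath (non-recursive).
    public static List<String> getFileList(String savePath) {
        List<String> names = new ArrayList<>();
        File[] files = new File(savePath).listFiles();
        if (files != null) {
            for (File f : files) {
                if (f.isFile()) {
                    names.add(f.getName());
                }
            }
        }
        return names;
    }

    // Derive a file name from an image URL, e.g.
    // "http://example.com/pics/weather.png" -> "weather".
    public static String getFileNameWihtUrl(String imgUrl) {
        String name = imgUrl.substring(imgUrl.lastIndexOf('/') + 1);
        int dot = name.lastIndexOf('.');
        return dot > 0 ? name.substring(0, dot) : name;
    }
}
```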
Saving images to disk
public void downloadImages(String url, String fileName, String savePath, String imageFormat, Integer headerType) {
    // Create an HttpClient instance
    CloseableHttpClient httpclient = HttpClients.createDefault();
    // Issue the HTTP request
    try {
        HttpGet httpGet = new HttpGet(url);
        CloseableHttpResponse pictureResponse = httpclient.execute(httpGet);
        HttpEntity pictureEntity = pictureResponse.getEntity();
        InputStream inputStream = pictureEntity.getContent();
        // Write the image to disk with commons-io; note that file names must be unique
        FileUtils.copyToFile(inputStream, new File(savePath + fileName + imageFormat));
        pictureResponse.close(); // close the response
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Installing commons-io (used for the download)
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.6</version>
</dependency>
For most sites, that is all you need to scrape data. But then I hit a problem: I had extracted all of the JS-rendered image URLs on a page, yet the images I downloaded were corrupt and would not open. While debugging, I copied one of the scraped image addresses into the browser, and it showed as an invalid link. What??
I opened the browser's developer tools and copied the Request URL into a new tab: same invalid link.
A guess
In the dev tools' Network panel, double-clicking the captured image request displays the image every time, but pasting its Request URL into a new page shows an invalid link. My guess: the server returns a Cookie when the page is visited, and the image download has to send that Cookie back for validation.
The fix
Modify the HttpClient doGet method, using addHeader to mimic the browser's request as closely as possible:
// cookies collects Set-Cookie values across requests (class-level field)
private static final List<String> cookies = new ArrayList<>();

public static String doGetExpansion(String url, String charset) {
    CloseableHttpClient httpClient = HttpClients.createDefault();
    HttpGet httpGet = null;
    String result = null;
    try {
        httpGet = new HttpGet(url);
        // Set common request headers to mimic a real browser
        httpGet.addHeader("Cache-Control", "no-cache");
        httpGet.addHeader("Pragma", "no-cache");
        httpGet.addHeader("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36");
        httpGet.addHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8");
        httpGet.addHeader("Accept-Encoding", "gzip, deflate");
        httpGet.addHeader("accept-secret", "");
        httpGet.addHeader("Accept-Language", "zh-CN,zh;q=0.9");
        httpGet.addHeader("Upgrade-Insecure-Requests", "1");
        httpGet.addHeader("Connection", "keep-alive");
        // Send back any cookies captured from earlier responses
        for (int i = 0; i < cookies.size(); i++) {
            httpGet.addHeader("Cookie", cookies.get(i));
        }
        HttpResponse response = httpClient.execute(httpGet);
        if (response != null) {
            // Remember Set-Cookie values for subsequent requests
            for (Header header : response.getAllHeaders()) {
                if (header.getName().equalsIgnoreCase("Set-Cookie")) {
                    cookies.add(header.getValue());
                }
            }
            HttpEntity resEntity = response.getEntity();
            if (resEntity != null) {
                result = EntityUtils.toString(resEntity, charset);
                httpGet.releaseConnection();
            }
        }
    } catch (Exception ex) {
        ex.printStackTrace();
        if (httpGet != null) {
            httpGet.releaseConnection();
        }
    }
    return result;
}
If the image download also requires this validation, modify downloadImages the same way and the images will download correctly.
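One detail worth noting: the loop above adds a separate Cookie header for each captured value, which most servers tolerate, but per RFC 6265 clients send a single Cookie header with "; "-separated name=value pairs. A small stdlib-only helper (hypothetical, not in the original) that folds the captured Set-Cookie values into one header value:

```java
import java.util.Arrays;
import java.util.List;

public class CookieHeader {

    // Fold captured Set-Cookie values into one Cookie header value.
    // A Set-Cookie value such as "JSESSIONID=abc; Path=/; HttpOnly"
    // contributes only its leading name=value pair.
    public static String build(List<String> setCookieValues) {
        StringBuilder sb = new StringBuilder();
        for (String sc : setCookieValues) {
            String pair = sc.split(";", 2)[0].trim();
            if (pair.isEmpty()) {
                continue;
            }
            if (sb.length() > 0) {
                sb.append("; ");
            }
            sb.append(pair);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> captured = Arrays.asList(
                "JSESSIONID=abc123; Path=/; HttpOnly",
                "theme=dark");
        // Then: httpGet.addHeader("Cookie", CookieHeader.build(captured));
        System.out.println(build(captured)); // JSESSIONID=abc123; theme=dark
    }
}
```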