Sometimes I come across documentation that I would like to save as a PDF, but it has far too many pages to save by hand. So I went looking for a way to do it in Python: pdfkit.
More at:
http://www.mknight.cn
wkhtmltopdf
- wkhtmltopdf is a command-line tool that renders HTML to PDF.
- pdfkit is a Python wrapper around wkhtmltopdf. It supports converting URLs, local files, and raw strings to PDF, and ultimately calls the wkhtmltopdf command. Of the Python PDF-generation tools I have tried so far, it produces some of the best-looking output.
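As a quick sketch of those three entry points (the input and output file names here are only illustrative, not from the original post):

import pdfkit

# Convert a URL, a local HTML file, and a raw HTML string to PDF.
pdfkit.from_url('http://python3-cookbook.readthedocs.io/zh_CN/latest/', 'from_url.pdf')
pdfkit.from_file('page.html', 'from_file.pdf')
pdfkit.from_string('<h1>Hello, PDF</h1>', 'from_string.pdf')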
Installation
yum and pip
yum install wkhtmltopdf
pip install pdfkit
tar.xz
If yum cannot find the package, download the release tarball manually:
wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.4/wkhtmltox-0.12.4_linux-generic-amd64.tar.xz
tar -xvf wkhtmltox-0.12.4_linux-generic-amd64.tar.xz
cd wkhtmltox/bin
cp ./* /usr/sbin/
Verify
[root@xxx tmp]# wkhtmltopdf -V
wkhtmltopdf 0.12.4 (with patched qt)
For a more detailed introduction, see the pdfkit and wkhtmltopdf documentation.
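If the wkhtmltopdf binary is not on PATH, pdfkit can also be pointed at it explicitly. A minimal sketch, assuming the /usr/sbin location used in the install step above:

import pdfkit

# Tell pdfkit where the wkhtmltopdf binary lives.
config = pdfkit.configuration(wkhtmltopdf='/usr/sbin/wkhtmltopdf')
pdfkit.from_url('http://www.example.com', 'example.pdf', configuration=config)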
Scrapy
Create a project
scrapy startproject fox
Directory structure:
.
├── scrapy.cfg
└── fox
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders
        └── __init__.py
Workflow
The site to scrape is http://python3-cookbook.readthedocs.io/zh_CN/latest/ (the Chinese translation of python3-cookbook on Read the Docs).
Extract the chapter URLs → extract the content → save it as HTML → generate the PDF.
Edit the spider file
spiders/read.py
import scrapy
from scrapy.selector import Selector
from scrapy.http import Request
from fox.items import ReadItem

base_url = 'http://python3-cookbook.readthedocs.io/zh_CN/latest/'


class ReadSpider(scrapy.spiders.Spider):
    name = "read"
    start_urls = [
        'http://python3-cookbook.readthedocs.io/zh_CN/latest/',
    ]

    def parse(self, response):
        links = []
        s = Selector(response)
        # Loop over every second-level TOC entry and take its href
        for url in s.xpath('//li[@class="toctree-l2"]/a/@href').extract():
            if 'c0' in url or 'c1' in url:
                # Keep only the chapter pages and join them onto the base URL
                links.append(base_url + url)
            else:
                # Skip unrelated links
                print('no', url)
        for link in links:
            print(link)
            yield Request(link, callback=self.get_content)

    def get_content(self, response):
        # Fetch a chapter page and extract its main content block
        print('######################### fetching HTML #########################')
        item = ReadItem()
        item['content'] = response.xpath('//div[@class="section"]').extract()[0]
        item['url'] = response.url
        yield item
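With the spider saved, it can be run with scrapy crawl read. Below is a minimal sketch of launching it from a plain Python script instead; this is my own addition, assuming it is executed from the project root:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from fox.spiders.read import ReadSpider

# Load the project's settings.py (including ITEM_PIPELINES) and start the crawl
process = CrawlerProcess(get_project_settings())
process.crawl(ReadSpider)
process.start()  # blocks until the crawl finishes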
items.py
import scrapy


class ReadItem(scrapy.Item):
    content = scrapy.Field()
    url = scrapy.Field()
pipelines.py
import os

html_template = """
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
</head>
<body>
{content}
</body>
</html>
"""


class ReadPipeline(object):
    def process_item(self, item, spider):
        print('######################### got content #########################')
        url = item['url']
        content = item['content']
        html = html_template.format(content=content)
        # Build the HTML file name to save from the chapter and section parts of the URL path
        file_name = url.split('/')[5][1:] + url.split('/')[6][1:3] + '.html'
        file_name = os.path.join(os.path.abspath('.'), 'htmls', file_name)
        print(file_name)
        # Append the assembled HTML to the file
        with open(file_name, 'a+', encoding='utf-8') as f:
            f.write(html)
        return item
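Note that the pipeline writes into an htmls/ directory, which must already exist. One way to guarantee that is an open_spider hook; the method below is my own addition (a sketch), not part of the original pipeline:

import os

class ReadPipeline(object):  # add this method to the ReadPipeline above
    def open_spider(self, spider):
        # Runs once when the crawl starts: make sure htmls/ exists
        os.makedirs(os.path.join(os.path.abspath('.'), 'htmls'), exist_ok=True)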
settings.py
ITEM_PIPELINES = {
    # Enable the pipeline defined above
    'fox.pipelines.ReadPipeline': 300,
}
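Depending on the Scrapy version and the target site, two other settings may be worth a look; the values below are my own suggestions, not from the original post:

# settings.py (optional; assumed values)
ROBOTSTXT_OBEY = True     # new Scrapy projects honour robots.txt by default
DOWNLOAD_DELAY = 0.5      # small delay to be polite to the documentation host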
HTML → PDF
import os
import pdfkit

options = {
    'page-size': 'Letter',
    'margin-top': '0.75in',
    'margin-right': '0.75in',
    'margin-bottom': '0.75in',
    'margin-left': '0.75in',
    'encoding': "UTF-8",
    'custom-header': [
        ('Accept-Encoding', 'gzip')
    ],
    'cookie': [
        ('cookie-name1', 'cookie-value1'),
        ('cookie-name2', 'cookie-value2'),
    ],
    'outline-depth': 10,
}

filedir = os.path.join(os.path.abspath('.'), 'htmls')
# Sort so the chapters are concatenated in order
files = sorted(os.listdir(filedir))
desc_file = os.path.join(os.path.abspath('.'), 'all.html')

for i in files:
    # Append each chapter file to the combined all.html
    print(i)
    cc = os.path.join(os.path.abspath('.'), 'htmls', i)
    with open(cc, 'r', encoding='utf-8') as f:
        with open(desc_file, 'a+', encoding='utf-8') as new:
            new.write(f.read())

# Render the combined HTML into a single PDF
pdf = pdfkit.from_file('all.html', 'out.pdf', options=options)
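As an aside, pdfkit documents that from_file also accepts a list of input files, so the manual concatenation into all.html could probably be skipped. A sketch under that assumption:

import os
import pdfkit

filedir = os.path.join(os.path.abspath('.'), 'htmls')
# Pass the chapter files directly, in sorted order, instead of building all.html
chapter_files = [os.path.join(filedir, name) for name in sorted(os.listdir(filedir))]
pdfkit.from_file(chapter_files, 'out.pdf', options=options)  # reuse the options dict from above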
Verify
That is all it takes to produce the PDF. One pitfall I ran into myself: if the HTML files definitely contain <h1> and <h2> tags but the PDF comes out without an outline (table of contents), it is a Python version problem. Python 3.6!!!