Picking up from the previous post, let's create a new crawler project.
In your Python working directory, create a Scrapy project from the command line. It works much like creating a Django project, only the command differs.
D:\untitled>scrapy startproject zufang
New Scrapy project 'zufang', using template directory 'D:\\Python35\\Lib\\site-packages\\scrapy\\templates\\project', created in:
D:\untitled\zufang
You can start your first spider with:
cd zufang
scrapy genspider example example.com
D:\untitled>
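For reference, startproject generates the standard Scrapy layout (newer Scrapy versions also add a middlewares.py):
zufang/
    scrapy.cfg          <- deployment configuration
    zufang/
        __init__.py
        items.py        <- item definitions (edited below)
        pipelines.py    <- item pipelines (edited below)
        settings.py     <- project settings (edited below)
        spiders/
            __init__.py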
The zufang project now exists in the working directory; open it with PyCharm. The spider code:
import scrapy

class GanjiSpider(scrapy.Spider):
    name = "zufang"
    start_urls = ['http://sh.ganji.com/fang1/']

    def parse(self, response):
        print(response)
        # Grab the title and the price of every listing on the page
        title_list = response.xpath(".//*[@class='f-list-item ershoufang-list']/dl/dd[1]/a[1]/text()").extract()
        money_list = response.xpath(".//*[@class='f-list-item ershoufang-list']/dl/dd[5]/div[1]/span[1]/text()").extract()
        # Pair each title with its price and print them
        for i, j in zip(title_list, money_list):
            print(i, ":", j)
The main methods used here were all covered in the previous article.
In PyCharm's terminal, scrapy list shows which spiders the project contains, and scrapy crawl zufang actually runs the crawl. For example:
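A sample session (run from inside the project directory; the crawl output is the title : price pairs printed by parse):
D:\untitled\zufang>scrapy list
zufang
D:\untitled\zufang>scrapy crawl zufang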
The video tutorial uses an SQLite database; I switched to MySQL, since that is what I use most often.
1. Create the database & configuration
Create the database and table on the local machine:
MariaDB [(none)]> create database zufang default character set 'utf8';
Query OK, 1 row affected (0.05 sec)
MariaDB [(none)]> use zufang
Database changed
MariaDB [zufang]> create table zufang (title varchar(512),money varchar(128)) ;
Query OK, 0 rows affected (0.11 sec)
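You can confirm the table definition with describe; it should show the two columns:
MariaDB [zufang]> describe zufang;
+-------+--------------+------+-----+---------+-------+
| Field | Type         | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| title | varchar(512) | YES  |     | NULL    |       |
| money | varchar(128) | YES  |     | NULL    |       |
+-------+--------------+------+-----+---------+-------+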
Add the following settings to settings.py:
# start MySQL database configure setting
MYSQL_HOSTS = '127.0.0.1'
MYSQL_DB = 'zufang'
MYSQL_USER = 'root'
MYSQL_PASSWORD = '123456'
MYSQL_PORT = '3306'
CHARSET = 'utf8'
# end of MySQL database configure setting
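Defining these settings only makes them available; code still has to read them. A minimal sketch of pulling them out via Scrapy's from_crawler hook (hypothetical class name; the pipeline later in this post hard-codes the values instead, for simplicity):
import MySQLdb

class MySQLConnectedPipeline(object):
    # Hypothetical pipeline showing how to read the settings above
    def __init__(self, conn_kwargs):
        self.conn_kwargs = conn_kwargs

    @classmethod
    def from_crawler(cls, crawler):
        # crawler.settings exposes everything defined in settings.py
        s = crawler.settings
        return cls(dict(
            host=s.get('MYSQL_HOSTS'),
            db=s.get('MYSQL_DB'),
            user=s.get('MYSQL_USER'),
            passwd=s.get('MYSQL_PASSWORD'),
            port=s.getint('MYSQL_PORT'),
            charset=s.get('CHARSET'),
        ))

    def open_spider(self, spider):
        self.con = MySQLdb.connect(**self.conn_kwargs)

    def close_spider(self, spider):
        self.con.close()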
2. Write the persistence code
1. Enable the pipeline in settings.py:
ITEM_PIPELINES = {
    'zufang.pipelines.ZufangPipeline': 300,
}
2. items.py:
import scrapy

class ZufangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    money = scrapy.Field()
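One thing to note: the spider at the top of this post only prints results, so the pipeline would never receive anything. For items to reach the pipeline, parse() must yield them. A minimal sketch of the adjusted spider (same XPaths as before):
import scrapy
from zufang.items import ZufangItem

class GanjiSpider(scrapy.Spider):
    name = "zufang"
    start_urls = ['http://sh.ganji.com/fang1/']

    def parse(self, response):
        title_list = response.xpath(".//*[@class='f-list-item ershoufang-list']/dl/dd[1]/a[1]/text()").extract()
        money_list = response.xpath(".//*[@class='f-list-item ershoufang-list']/dl/dd[5]/div[1]/span[1]/text()").extract()
        for title, money in zip(title_list, money_list):
            item = ZufangItem()
            item['title'] = title
            item['money'] = money
            yield item  # hand the item off to the pipeline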
3. pipelines.py:
import MySQLdb

class ZufangPipeline(object):
    def open_spider(self, spider):
        # Open one connection when the spider starts
        # (these values match the settings.py configuration above)
        self.con = MySQLdb.connect(host="127.0.0.1", user="root", passwd="123456",
                                   db="zufang", port=3306, charset="utf8")
        self.cu = self.con.cursor()

    def process_item(self, item, spider):
        print(spider.name, 'pipelines')
        # Parameterized query: quotes in a title can't break the SQL
        insert_sql = "insert into zufang (title, money) values (%s, %s)"
        self.cu.execute(insert_sql, (item['title'], item['money']))
        self.con.commit()
        return item

    def close_spider(self, spider):
        # The hook must be named close_spider; spider_close would never be called
        self.con.close()
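A note on the driver: on Python 3 the MySQLdb module comes from the mysqlclient package (the original MySQL-python only supports Python 2):
pip install mysqlclient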
4. Run
Run the spider from the terminal: scrapy crawl zufang. It runs successfully, and the data is now in the database.
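You can verify from the MariaDB prompt, for example:
MariaDB [zufang]> select count(*) from zufang;
MariaDB [zufang]> select title, money from zufang limit 5;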
<End!>