The project needs to scrape all of the data from a certain website, and its anti-crawler defenses are fairly extreme.
The known anti-scraping measures include:
- accounts cannot be registered publicly; every application gets a phone callback for verification;
- the whole site is loaded asynchronously and dynamically;
- pagination state cannot be resumed, so any interruption means paging through from the very beginning again (lll¬ω¬)
- no visible API has turned up either;
- after a successful login the cookies expire in half an hour;
- plain paging, scrolling, and other actions that do not click into another category button do not refresh the cookie lifetime (see the sketch after this list);
- when the cookies expire, a popup announces that the session has expired and has to be dismissed with a confirmation click;
- after several expirations the account is forcibly logged out ...
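Since only clicking into another category refreshes the cookies, and the expiry dialog blocks everything until it is confirmed, the crawler has to weave a keep-alive click and a popup-dismiss step into its loop. Below is a minimal sketch of that idea, assuming Selenium drives the browser; the `.category-tab` and `.session-expired-ok` selectors are hypothetical placeholders for the real elements.

import time

from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

KEEPALIVE_INTERVAL = 25 * 60  # re-click a category well before the 30-minute expiry

def dismiss_session_popup(driver):
    # Click the confirm button of the "session expired" dialog if it is showing.
    try:
        driver.find_element(By.CSS_SELECTOR, '.session-expired-ok').click()
        return True
    except NoSuchElementException:
        return False

def keep_session_alive(driver, last_refresh):
    # Only clicks on category buttons refresh the cookies, so force one periodically.
    if time.time() - last_refresh < KEEPALIVE_INTERVAL:
        return last_refresh
    dismiss_session_popup(driver)  # clear the dialog first if it has already popped
    driver.find_element(By.CSS_SELECTOR, '.category-tab').click()
    return time.time()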
Roughly speaking, the crawler keeps executing steps like these:
for cat in category:
    enter cat
    for splr in cat:
        enter splr
        for p in pages:
            grab data
            next page
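Fleshed out, the walk looks roughly like the sketch below; enter_category, list_suppliers, enter_supplier, grab_page and goto_next_page are hypothetical stand-ins for the actual page interactions, and the keep-alive helper from the sketch above is called once per page.

import time

def crawl(driver, categories):
    # Hypothetical skeleton of the three-level walk above; every helper it calls
    # (enter_category, list_suppliers, enter_supplier, grab_page, goto_next_page)
    # is a placeholder for the real page interactions.
    last_refresh = time.time()
    for cat in categories:
        enter_category(driver, cat)
        for splr in list_suppliers(driver, cat):
            enter_supplier(driver, splr)
            while True:
                last_refresh = keep_session_alive(driver, last_refresh)
                grab_page(driver, cat, splr)    # parse and persist the current page
                if not goto_next_page(driver):  # False on the last page
                    break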
Below I only record the table structures this run uses to store the category -> supplier -> part number mapping, from which the crawl path is generated.
import datetime

from sqlalchemy import Boolean, Column, DateTime, Integer, String, create_engine, false
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()
class SECategory(Base):
    """A top-level category on the site; one row per category button."""
    __tablename__ = 'se_category'

    id = Column(Integer, primary_key=True)
    name = Column(String(length=100), nullable=False)

    def __repr__(self):
        return "<Category(name='{}')>".format(self.name)
class SESupplier(Base):
    """A supplier under a category; the crawl path is generated from these rows."""
    __tablename__ = 'se_supplier'

    id = Column(Integer, primary_key=True)
    category = Column(String(length=100))
    name = Column(String(length=100))
    # Flipped to True once all of this supplier's pages have been grabbed,
    # so an interrupted run can skip it on resume.
    has_visited = Column(Boolean, server_default=false())

    def __repr__(self):
        return "<Supplier(name='{}')>".format(self.name)
class SEPart(Base):
    """One scraped part row: part number, manufacturer, description, crosses and prices."""
    __tablename__ = 'se_part'

    id = Column(Integer, primary_key=True)
    partnumber = Column(String(length=100))
    manufacturer = Column(String(length=100))
    descp = Column(String(length=400))
    crosses_options = Column(String(length=100))
    budgetary_prices = Column(String(length=100))
    supplier = Column(String(length=100))
    category = Column(String(length=100))
    # Pass the callable, not now(): otherwise every row gets the import-time timestamp.
    created_at = Column(DateTime, default=datetime.datetime.now)

    def __repr__(self):
        return "<Part(partnumber='{}', manufacturer='{}', descp='{}', crosses_options='{}', budgetary_prices='{}')>".format(
            self.partnumber, self.manufacturer, self.descp, self.crosses_options, self.budgetary_prices)
engine = create_engine('dbms://user:pwd@host/dbname')
Base.metadata.create_all(engine)
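As a hypothetical usage sketch (not part of the original script), the crawl path can be rebuilt from the unvisited suppliers, and has_visited flipped as each one finishes, so an interrupted run resumes where it left off instead of starting over:

Session = sessionmaker(bind=engine)
session = Session()

# Suppliers still waiting to be crawled, grouped by category to match the walk order.
todo = (
    session.query(SESupplier)
    .filter(SESupplier.has_visited.is_(False))
    .order_by(SESupplier.category, SESupplier.name)
    .all()
)

for splr in todo:
    # ... grab every page of this supplier and session.add() the SEPart rows ...
    splr.has_visited = True
    session.commit()  # commit per supplier so progress survives an interruption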