Free Proxy IP Crawler

Scraping free proxy IPs (for reference only; please don't use them for anything malicious).

The open-source Crawler4j framework is used to crawl the whole target site.

HttpHelper is a utility class that rotates the user-agent automatically; the method below looks up the geographic location of an IP:

    /**
     * Look up the geographic location of an IP address via the ip138 service.
     * @param ip the IP address to look up
     * @return the location string, or null if the lookup fails
     */
    public static String getIpLocation(String ip){
        String api = "http://www.ip138.com/ips138.asp?ip=";
        String url = api + ip + "&datatype=text";
        String location = null;
        try {
            Document document = Jsoup.connect(url).get();
            // The first <li> holds the result; strip the leading label before the location text
            location = document.select("li").get(0).text().substring(5);
        } catch (IOException e) {
            e.printStackTrace();
        }
        return location;
    }
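
The user-agent rotation mentioned above is not included in the excerpt. A minimal sketch of how HttpHelper could pick a user-agent at random per request (the USER_AGENTS array and randomUserAgent() method are assumptions, not the original code):

    // Hypothetical sketch: rotate user-agents by picking one at random for each request
    private static final String[] USER_AGENTS = {
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3192.0 Safari/537.36",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36"
    };
    private static final java.util.Random RANDOM = new java.util.Random();

    /** Returns a randomly chosen user-agent string. */
    public static String randomUserAgent() {
        return USER_AGENTS[RANDOM.nextInt(USER_AGENTS.length)];
    }

Requests made with Jsoup could then use it via Jsoup.connect(url).userAgent(randomUserAgent()).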

Checking whether a proxy is usable, testing against Baidu and Google separately:

    /**
     * Quickly check whether a proxy can reach Google.
     * @param proxy the proxy to test
     * @return true if the request through the proxy returns HTTP 200
     */
    public static boolean checkGoogleProxy(Proxy proxy){
        URL url = null;
        try {
            url = new URL("https://www.google.com/");
        } catch (MalformedURLException e) {
            System.out.println(e.getMessage());
        }
        // Build the proxy address from the candidate's ip and port
        InetSocketAddress addr = new InetSocketAddress(proxy.getIp(), Integer.parseInt(proxy.getPort()));
        java.net.Proxy mProxy = new java.net.Proxy(java.net.Proxy.Type.HTTP, addr);
        HttpURLConnection conn = null;
        try {
            if (url != null) {
                conn = (HttpURLConnection) url.openConnection(mProxy);
                conn.setConnectTimeout(3000);
                conn.connect();
                int code = conn.getResponseCode();
                if (code == 200) {
                    return true;
                } else {
                    System.out.println(code);
                    return false;
                }
            }
        } catch (IOException e) {
            System.out.println(e.getMessage());
        } finally {
            if (conn != null) {
                conn.disconnect();
            }
        }
        return false;
    }

    /**
     * Quickly check whether a proxy can reach Baidu.
     * @param proxy the proxy to test
     * @return true if the request through the proxy returns HTTP 200
     */
    public static boolean checkBaiduProxy(Proxy proxy){
        URL url = null;
        try {
            url = new URL("http://www.baidu.com/");
        } catch (MalformedURLException e) {
            System.out.println(e.getMessage());
        }
        // Build the proxy address from the candidate's ip and port
        InetSocketAddress addr = new InetSocketAddress(proxy.getIp(), Integer.parseInt(proxy.getPort()));
        java.net.Proxy mProxy = new java.net.Proxy(java.net.Proxy.Type.HTTP, addr);
        HttpURLConnection conn = null;
        try {
            if (url != null) {
                conn = (HttpURLConnection) url.openConnection(mProxy);
                conn.setConnectTimeout(3000);
                conn.connect();
                int code = conn.getResponseCode();
                if (code == 200) {
                    return true;
                } else {
                    System.out.println(code);
                    return false;
                }
            }
        } catch (IOException e) {
            System.out.println(e.getMessage());
        } finally {
            if (conn != null) {
                conn.disconnect();
            }
        }
        return false;
    }
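
The check methods above take a simple Proxy bean that is not shown in the excerpt. A minimal sketch of the fields and accessors it appears to need, inferred from how it is used in this post (the exact class is an assumption):

    // Hypothetical sketch of the Proxy bean; fields and accessors inferred from the calls above
    public class Proxy {
        private String ip;
        private String port;
        private String location;
        private String type;   // set to "google" or "baidu" after the availability check

        public Proxy(String ip, String port, String location) {
            this.ip = ip;
            this.port = port;
            this.location = location;
        }

        public String getIp() { return ip; }
        public String getPort() { return port; }
        public String getLocation() { return location; }
        public void setType(String type) { this.type = type; }

        @Override
        public String toString() {
            // Output format is a guess; the original toString() is not shown
            return ip + ":" + port + " " + location + " " + type;
        }
    }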

FileHelper is the file-writing helper; it appends content to the end of the file:

    /**
     * Append content to a file using FileWriter.
     *
     * @param fileName the target file path
     * @param content  the text to append
     */
    public static void writeLine(String fileName, String content) {
        FileWriter writer = null;
        try {
            File file = new File(fileName);
            if (!file.exists()) {
                file.createNewFile();
            }
            // Open a writer; the second constructor argument (true) means append mode
            writer = new FileWriter(fileName, true);
            writer.write(content);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (writer != null) {
                    writer.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
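
Note that the crawler below runs several worker threads that all append to the same file through writeLine; if interleaved writes are a concern, the calls could be serialized with a small synchronized wrapper (a sketch, not part of the original code):

    // Hypothetical wrapper: serialize appends when multiple crawler threads share one output file
    public static synchronized void writeLineSafe(String fileName, String content) {
        writeLine(fileName, content);
    }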

The Crawler4j controller, using the xici proxy site (xicidaili.com) as an example:

public class XiciController {

    public static void main(String[] args) throws Exception {
        String crawlStorageFolder = "/home/xuantang/IdeaProjects/FreeProxy/data";
        int numberOfCrawlers = 8;

        CrawlConfig config = new CrawlConfig();

        config.setFollowRedirects(false);
        config.setCrawlStorageFolder(crawlStorageFolder);

        HashSet<BasicHeader> collections = new HashSet<BasicHeader>();
        collections.add(new BasicHeader("User-Agent","Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3192.0 Safari/537.36"));
        collections.add(new BasicHeader("Accept","text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8"));
        collections.add(new BasicHeader("Accept-Encoding", "gzip,deflate,sdch"));
        collections.add(new BasicHeader("Accept-Language", "zh-CN,zh;q=0.8,en;q=0.6"));
        collections.add(new BasicHeader("Content-Type","application/x-www-form-urlencoded;charset=UTF-8"));
        collections.add(new BasicHeader("Connection", "keep-alive"));
        config.setDefaultHeaders(collections);
        /*
         * Instantiate the controller for this crawl.
         */
        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);

        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        /*
         * For each crawl, you need to add some seed urls. These are the first
         * URLs that are fetched and then the crawler starts following links
         * which are found in these pages
         */
        controller.addSeed("http://www.xicidaili.com/nt");
        /*
         * Start the crawl. This is a blocking operation, meaning that your code
         * will reach the line after this only when crawling is finished.
         */
        controller.start(XiciCrawler.class, numberOfCrawlers);
    }
}

Page processing:

public class XiciCrawler extends WebCrawler {

    private static int count = 0;
    private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|gif|jpg"
            + "|png|mp3|mp4|zip|gz))$");

    /**
     * This method receives two parameters. The first parameter is the page
     * in which we have discovered this new url and the second parameter is
     * the new url. You should implement this function to specify whether
     * the given url should be crawled or not (based on your crawling logic).
     * Here we ignore urls with css, js, gif, ... extensions and only accept
     * urls that start with "http://www.xicidaili.com/nt". In this case, we
     * don't need the referringPage parameter to make the decision.
     */
    @Override
    public boolean shouldVisit(Page referringPage, WebURL url) {
        String href = url.getURL().toLowerCase();
        return !FILTERS.matcher(href).matches() && href.startsWith("http://www.xicidaili.com/nt");
    }

    /**
     * This function is called when a page is fetched and ready
     * to be processed by your program.
     */
    @Override
    public void visit(Page page) {

        System.out.println(page.getContentCharset());
        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
            String html = htmlParseData.getHtml();
            Document document = Jsoup.parse(html);
            // The proxy table on the page has id "ip_list"; each <tr> is one proxy entry
            Element content = document.getElementById("ip_list");
            Elements dds = content.getElementsByTag("tr");
            for (Element element : dds) {
                if (element.getElementsByTag("td").text().length() != 0) {
                    try{
                        // The joined <td> text is assumed to be space-separated: ip, port, location, ...
                        String text = element.getElementsByTag("td").text();
                        String[] result = text.split(" ");
                        String ip = result[0];
                        String port = result[1];
                        String location = result[2];
                        Proxy proxy = new Proxy(ip, port, location);
                        count++;
                        if(HttpHelper.checkGoogleProxy(proxy)){
                            proxy.setType("google");
                            FileHelper.writeLine("/d1/lab409/FreeProxy/data/proxy.txt", proxy.toString() + "\n");
                            System.out.println(count + " " + "this proxy is ok for google: " + proxy.getIp()
                                    + " " + proxy.getPort() + " " + proxy.getLocation() + "------------------" +
                                    "--------------------------------------------------------------");
                        }else if(HttpHelper.checkBaiduProxy(proxy)){
                            proxy.setType("baidu");
                            FileHelper.writeLine("/d1/lab409/FreeProxy/data/proxy.txt", proxy.toString() + "\n");
                            System.out.println(count + " " + "this proxy is ok for baidu: " + proxy.getIp()
                                    + " " + proxy.getPort() + " " + proxy.getLocation());
                        }
                    }catch (Exception e){
                        System.out.println(e.getMessage());
                    }

                }

            }
        }
    }
}

With Spring Boot, the service can expose an API that returns usable proxies.

  • Data persistence: MyBatis handles the database operations.

@Mapper
public interface ProxyMapper {
    /**
     * Insert one proxy record into the proxy table.
     * @param proxy the proxy to persist
     * @return the number of rows inserted
     */
    @Insert("INSERT INTO proxy (ip, port, location) VALUES (#{ip},#{port},#{location});")
    int insert(Proxy proxy);

    @Select("select ip, port, location from proxy")
    List<Proxy> get();
}

The service endpoint inserts into and queries the database (a sketch of the query endpoint follows the controller below).

@RestController
public class ProxyController {

    @Autowired
    ProxyMapper proxyMapper;

    @RequestMapping(value = "/proxys", method = RequestMethod.GET,
            produces = "application/json;charset=UTF-8")
    @ResponseBody
    public List<Proxy> getProxy(){
        List<Proxy> canUseProxys = ENCrawler.getCanUseProxys(10000);
        for (Proxy proxy : canUseProxys) {
            if (proxy.getLocation() != null) {
                try {
                    proxyMapper.insert(proxy);
                } catch (Exception e) {
                    System.out.println(e.getMessage());
                }
            }
        }
        return canUseProxys;
    }
}
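
The mapper's get() method is not used by the controller above; a sketch of a second endpoint that serves the proxies already stored in the database (the "/proxys/db" path is an assumption):

    // Hypothetical sketch: expose the proxies that have already been persisted
    @RequestMapping(value = "/proxys/db", method = RequestMethod.GET,
            produces = "application/json;charset=UTF-8")
    @ResponseBody
    public List<Proxy> getStoredProxys() {
        return proxyMapper.get();
    }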

Source code
