
Multithreaded web page fetching and parsing with Python and BeautifulSoup

Date: 04-08 | Source: 老王python | Tags: web page parsing

Lately I have been doing some web page parsing work in Python, and it has been a long while since I updated this blog, so today's post makes up for it. The code below uses:

1. Python multithreading

2. The web parsing library BeautifulSoup, which is much more powerful than the Python SGMLParser library I shared earlier; take a look if you are interested.
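For readers new to BeautifulSoup, here is a minimal parsing sketch. It uses the modern bs4 package and Python 3 (the listing below targets the older BeautifulSoup 3 on Python 2) and parses a static HTML string rather than a live page:

```python
from bs4 import BeautifulSoup

html = ("<html><head><title>demo</title></head>"
        "<body><a href='/a'>first</a><a href='/b'>second</a></body></html>")
soup = BeautifulSoup(html, "html.parser")

print(soup.title.string)                        # text of the <title> tag -> demo
print([a["href"] for a in soup.find_all("a")])  # all link targets -> ['/a', '/b']
```

`find_all` and attribute access like `soup.title` are the same navigation style the spider below relies on once a page's HTML has been fetched.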


#encoding=utf-8
#@description: spider that fetches and parses page content.

import Queue
import threading
import urllib,urllib2
import time
from BeautifulSoup import BeautifulSoup

hosts = ["http://www.baidu.com", "http://www.163.com"]  # pages to crawl

queue = Queue.Queue()
out_queue = Queue.Queue()

class ThreadUrl(threading.Thread):
    """Threaded Url Grab"""
    def __init__(self, queue, out_queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.out_queue = out_queue

    def run(self):
        while True:
            host = self.queue.get()
            proxy_support = urllib2.ProxyHandler({'http': 'http://xxx.xxx.xxx.xxxx'})  # proxy IP
            opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler)
            urllib2.install_opener(opener)
            # urllib2.urlopen honors the installed opener; urllib.urlopen would bypass the proxy
            url = urllib2.urlopen(host)
            chunk = url.read()
            self.out_queue.put(chunk)
            self.queue.task_done()

class DatamineThread(threading.Thread):
    """Threaded Url Grab"""
    def __init__(self, out_queue):
        threading.Thread.__init__(self)
        self.out_queue = out_queue

    def run(self):
        while True:
            chunk = self.out_queue.get()
            soup = BeautifulSoup(chunk)
            print soup
            self.out_queue.task_done()

start = time.time()  # start timing before any threads are launched

def main():
    # spawn the fetcher thread
    t = ThreadUrl(queue, out_queue)
    t.setDaemon(True)
    t.start()

    # feed it the URLs to crawl
    for host in hosts:
        queue.put(host)

    # spawn the parser thread once, outside the loop
    dt = DatamineThread(out_queue)
    dt.setDaemon(True)
    dt.start()

    # wait until every URL has been fetched and every page parsed
    queue.join()
    out_queue.join()

main()
print "Elapsed Time: %s" % (time.time() - start)
