Goal: crawl the complaint posts on the Sunshine Hotline government-inquiry platform (陽光熱線問政平臺) and extract each post's title, content, number, and URL.
The CrawlSpider version works as follows:
1. Create the crawler project dongguan
scrapy startproject dongguan
2. Set up the items.py file
# -*- coding: utf-8 -*-
import scrapy


class DongguanItem(scrapy.Item):
    # link to the post
    url = scrapy.Field()
    # post title
    title = scrapy.Field()
    # post number
    number = scrapy.Field()
    # post content
    content = scrapy.Field()
3. In the spiders directory, create and write the spider file sun.py
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from dongguan.items import DongguanItem


class SunSpider(CrawlSpider):
    name = 'dg'
    allowed_domains = ['wz.sun0769.com']
    start_urls = ['http://wz.sun0769.com/html/top/report.shtml']

    # rules is a collection of Rule objects; every rule is applied to each response.
    # If the web server fights crawlers by returning fake URLs, the process_links
    # argument of Rule can point to a custom function that rewrites the extracted
    # links into real URLs before they are requested.
    rules = (
        # Every URL has a unique fingerprint, and each project keeps a dedupe queue.
        # A Rule without a callback follows matched links by default: each matched link
        # is requested, and links matched in its response are followed again, just
        # without a callback to process the response data.
        # With follow=False the rule only extracts links from the current page; with
        # follow=True it keeps requesting and matching until nothing is left. Python's
        # default recursion limit is 1000, beyond which an exception is raised.
        Rule(LinkExtractor(allow="page=")),
        Rule(LinkExtractor(allow=r'http://wz.sun0769.com/html/question/\d+/\d+\.shtml'),
             callback='parse_item')
    )

    def parse_item(self, response):
        print(response.url)
        item = DongguanItem()
        item['url'] = response.url
        item['title'] = response.xpath('//div[@class="pagecenter p3"]//strong/text()').extract()[0]
        item['number'] = response.xpath('//div[@class="pagecenter p3"]//strong/text()').extract()[0].split(' ')[-1].split(':')[-1]
        # Posts with images keep their text in a div with class="contentext"; posts
        # without images have no such div, so use its presence to pick the content node.
        if len(response.xpath('//div[@class="contentext"]')) == 0:
            item['content'] = ''.join(response.xpath('//div[@class="c1 text14_2"]/text()').extract())
        else:
            item['content'] = ''.join(response.xpath('//div[@class="contentext"]/text()').extract())
        yield item
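As mentioned in the comments above, Rule accepts a process_links argument for cleaning up links that the server deliberately mangles. Below is a minimal sketch of how such a hook can be wired in; the spider name dg_sketch, the deal_links helper, and the strip() cleanup are illustrative placeholders only, not part of the original project:

# A minimal sketch of Rule's process_links hook; deal_links and its cleanup
# are assumptions for illustration, not the original project's code.
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class SunSketchSpider(CrawlSpider):
    name = 'dg_sketch'
    allowed_domains = ['wz.sun0769.com']
    start_urls = ['http://wz.sun0769.com/html/top/report.shtml']

    rules = (
        # process_links receives every Link the extractor matched and must return
        # the (possibly rewritten) list before requests are scheduled
        Rule(LinkExtractor(allow="page="), process_links='deal_links', follow=True),
        Rule(LinkExtractor(allow=r'/html/question/\d+/\d+\.shtml'), callback='parse_item'),
    )

    def deal_links(self, links):
        for link in links:
            link.url = link.url.strip()  # put the real URL fix here
        return links

    def parse_item(self, response):
        self.logger.info(response.url)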
4. Write the pipeline file pipelines.py
# -*- coding: utf-8 -*-
import json


class DongguanPipeline(object):
    def __init__(self):
        self.file = open('dongguan.json', 'w')

    def process_item(self, item, spider):
        # Python 2 style: encode the JSON string to UTF-8 bytes before writing
        content = json.dumps(dict(item), ensure_ascii=False).encode('utf-8') + '\n'
        self.file.write(content)
        return item

    def close_spider(self, spider):
        self.file.close()
5. Write the settings.py file
# -*- coding: utf-8 -*-

BOT_NAME = 'dongguan'

SPIDER_MODULES = ['dongguan.spiders']
NEWSPIDER_MODULE = 'dongguan.spiders'

# The log file is saved in the current directory by default; with the level below,
# only messages at INFO level or above are written to it
LOG_FILE = 'dongguan.log'
LOG_LEVEL = 'INFO'

# Crawl depth limit
# DEPTH_LIMIT = 1

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'dongguan (+http://www.yourdomain.com)'

# Obey robots.txt rules
# ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'dongguan.pipelines.DongguanPipeline': 300,
}
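Because LOG_FILE redirects Scrapy's output to dongguan.log, the console stays quiet during the crawl; on Linux or macOS the progress can be watched from a second terminal, for example:

tail -f dongguan.log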
6. Test-run the crawler by executing the following command in a terminal (anywhere inside the project directory works)
scrapy crawl dg
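Before running the full crawl, the XPaths used in parse_item can be checked interactively with scrapy shell. The post URL below is only a placeholder matching the /html/question/\d+/\d+.shtml pattern from the rules; substitute a real post URL from the site:

scrapy shell "http://wz.sun0769.com/html/question/201708/123456.shtml"
# then, inside the shell:
>>> response.xpath('//div[@class="pagecenter p3"]//strong/text()').extract()
>>> response.xpath('//div[@class="c1 text14_2"]/text()').extract()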
The Spider version works as follows:
1. Create the crawler project newdongguan
scrapy startproject newdongguan
2. Set up the items.py file
# -*- coding: utf-8 -*-
import scrapy


class NewdongguanItem(scrapy.Item):
    # link to the post
    url = scrapy.Field()
    # post title
    title = scrapy.Field()
    # post number
    number = scrapy.Field()
    # post content
    content = scrapy.Field()
3. In the spiders directory, create and write the spider file newsun.py
# -*- coding: utf-8 -*-
import scrapy
from newdongguan.items import NewdongguanItem


class NewsunSpider(scrapy.Spider):
    name = 'ndg'
    # Restrict the crawl to these domains. This is optional, but without it the crawl
    # is not limited by domain and the spider may wander out of control.
    allowed_domains = ['wz.sun0769.com']
    offset = 0
    url = 'http://wz.sun0769.com/index.php/question/report?page=' + str(offset)
    start_urls = [url]

    def parse(self, response):
        link_list = response.xpath("//a[@class='news14']/@href").extract()
        for each in link_list:
            # Request each post on the page; deal_link extracts the wanted fields
            # and yields them to the pipeline
            yield scrapy.Request(each, callback=self.deal_link)
        self.offset += 30
        if self.offset <= 124260:
            url = 'http://wz.sun0769.com/index.php/question/report?page=' + str(self.offset)
            # Request the next listing page and let parse handle its response
            yield scrapy.Request(url, callback=self.parse)

    # Extract the data from each post and return it to the pipeline
    def deal_link(self, response):
        item = NewdongguanItem()
        item['url'] = response.url
        item['title'] = response.xpath("//div[@class='pagecenter p3']//strong[@class='tgray14']/text()").extract()[0]
        item['number'] = response.xpath("//div[@class='pagecenter p3']//strong[@class='tgray14']/text()").extract()[0].split(' ')[-1].split(':')[-1]
        if len(response.xpath("//div[@class='contentext']")) == 0:
            item['content'] = ''.join(response.xpath("//div[@class='c1 text14_2']/text()").extract())
        else:
            item['content'] = ''.join(response.xpath("//div[@class='contentext']/text()").extract())
        yield item
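The number field is sliced out of the same heading text as the title by two chained splits. A minimal worked example, assuming the heading ends with a space-separated segment like '編號:191166' using the ASCII colon the code splits on (the sample string is made up; if the page actually uses the full-width colon ':', split on that character instead):

# -*- coding: utf-8 -*-
# Made-up sample heading text; the real page may differ
heading = u'投訴道路積水 編號:191166'
number = heading.split(' ')[-1].split(':')[-1]
print(number)  # -> 191166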
4. Write the pipeline file pipelines.py
# -*- coding: utf-8 -*-
import codecs
import json


class NewdongguanPipeline(object):
    def __init__(self):
        # Open the file with codecs so the encoding is set once on the file object,
        # instead of encoding the content on every write
        self.file = codecs.open('newdongguan.json', 'w', encoding='utf-8')
        # old way:
        # self.file = open('newdongguan.json', 'w')

    def process_item(self, item, spider):
        print(item['title'])
        content = json.dumps(dict(item), ensure_ascii=False) + '\n'
        # old way:
        # self.file.write(content.encode('utf-8'))
        self.file.write(content)
        return item

    def close_spider(self, spider):
        self.file.close()
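The pipeline writes one JSON object per line, so the result can be sanity-checked afterwards by reading the file back line by line. A small standalone sketch (the file name matches the pipeline above):

# -*- coding: utf-8 -*-
# Read back the JSON-lines file written by NewdongguanPipeline
import codecs
import json

with codecs.open('newdongguan.json', encoding='utf-8') as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        item = json.loads(line)
        print(item['title'], item['number'])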
5. Write the settings.py file
# -*- coding: utf-8 -*-

BOT_NAME = 'newdongguan'

SPIDER_MODULES = ['newdongguan.spiders']
NEWSPIDER_MODULE = 'newdongguan.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'newdongguan (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)'

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'newdongguan.pipelines.NewdongguanPipeline': 300,
}
6. Test-run the crawler by executing the following command in a terminal
scrapy crawl ndg
Note: in Markdown, code block indentation can be handled with the Tab key, while plain text lines such as "The Spider version works as follows:" and "1. Create the crawler project newdongguan" only need a line break.
That's all for this article. I hope it helps with your learning, and please continue to support 腳本之家.