Scraping Baidu Baike Data with Python

NeiFallis · 8 years ago
## Preface

This article is adapted from the imooc course 《Python开发简单爬虫》 (Developing a Simple Crawler with Python) and records the whole process of crawling the Baidu Baike "python" entry page and its related entry pages.

## Crawling Strategy

![Crawling strategy](https://simg.open-open.com/show/f30f95a41718345bab05e30b00edc8db.jpg)

Determine the target: decide which site, which pages, and which data to crawl. This example scrapes the title and summary of the Baidu Baike python entry page and of the related entry pages it links to.

Analyze the target: work out the format of the URLs to crawl, which bounds the crawl; the format of the target data, which here means the markup around the title and the summary; and the page encoding, which the HTML parser must be given before it can decode the pages correctly.

Write the code: the HTML parser in particular is written from the results of the target analysis.

Run the crawler: perform the crawl and collect the data.

## Analyzing the Target

1. URL format

On the Baidu Baike python entry page, the links to related entries are fairly uniform, mostly of the form /view/xxx.htm.

![URL format](https://simg.open-open.com/show/388ecba783ec287651bb8f3500075801.jpg)

2. Data format

The title sits in the h1 child of an element with class lemmaWgt-lemmaTitle-title; the summary sits under an element with class lemma-summary.

![Title markup](https://simg.open-open.com/show/49ba1a6f7576da5e3659ff66ee649d1b.jpg) ![Summary markup](https://simg.open-open.com/show/c8ea3cd6c1b71808b4e9db1c23b9bfa2.jpg)

3. Encoding

Checking the page encoding shows it is utf-8.

![Page encoding](https://simg.open-open.com/show/c0502c3299f08fadaf8105acc894e9b7.jpg)

The analysis above can be summarized as follows:

![Analysis summary](https://simg.open-open.com/show/62ee0d3196656a87072f06e44b29cd1e.jpg)

## Writing the Code

## Project Structure

In Sublime, create a folder baike-spider as the project root.

Create spider_main.py, the crawler's main scheduler.

Create url_manager.py, the URL manager.

Create html_downloader.py, the HTML downloader.

Create html_parser.py, the HTML parser.

Create html_outputer.py, the tool that writes the collected data out.

The final project structure looks like this:

![Project structure](https://simg.open-open.com/show/456bea6cbdf3f4ca7e6870951b517a4e.jpg)

## spider_main.py

```python
# coding:utf-8
import url_manager, html_downloader, html_parser, html_outputer


class SpiderMain(object):
    def __init__(self):
        # Wire together the four components.
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self, root_url):
        count = 1
        self.urls.add_new_url(root_url)
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                print('craw %d : %s' % (count, new_url))
                html_cont = self.downloader.download(new_url)
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)

                if count == 10:  # stop after 10 pages
                    break

                count = count + 1
            except Exception:
                print('craw failed')

        self.outputer.output_html()


if __name__ == '__main__':
    root_url = 'http://baike.baidu.com/view/21087.htm'
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)
```

## url_manager.py

```python
# coding:utf-8
class UrlManager(object):
    def __init__(self):
        self.new_urls = set()  # URLs waiting to be crawled
        self.old_urls = set()  # URLs already crawled

    def add_new_url(self, url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
```
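Since UrlManager keeps all of its state in two sets, its deduplication can be exercised on its own before wiring it into the crawler. A minimal sketch, assuming the url_manager.py above is on the import path:

```python
# Quick check of UrlManager's dedup behaviour (illustration only).
from url_manager import UrlManager

manager = UrlManager()
manager.add_new_urls(['http://baike.baidu.com/view/21087.htm',
                      'http://baike.baidu.com/view/21087.htm'])  # duplicate is dropped
assert manager.has_new_url()

url = manager.get_new_url()   # moves the URL from new_urls to old_urls
manager.add_new_url(url)      # already crawled, so it is not re-queued
assert not manager.has_new_url()
```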
## html_downloader.py

```python
# coding:utf-8
import urllib.request


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        response = urllib.request.urlopen(url)
        if response.getcode() != 200:
            return None
        return response.read()
```

## html_parser.py

```python
# coding:utf-8
import re
from urllib.parse import urljoin

from bs4 import BeautifulSoup


class HtmlParser(object):
    def _get_new_urls(self, page_url, soup):
        new_urls = set()
        # Related entries use relative links such as /view/123.htm.
        links = soup.find_all('a', href=re.compile(r'/view/\d+\.htm'))
        for link in links:
            new_url = link['href']
            new_full_url = urljoin(page_url, new_url)
            new_urls.add(new_full_url)
        return new_urls

    def _get_new_data(self, page_url, soup):
        res_data = {}
        res_data['url'] = page_url
        # <dd class="lemmaWgt-lemmaTitle-title"> <h1>Python</h1>
        title_node = soup.find('dd', class_='lemmaWgt-lemmaTitle-title').find('h1')
        res_data['title'] = title_node.get_text()
        # <div class="lemma-summary" label-module="lemmaSummary">
        summary_node = soup.find('div', class_='lemma-summary')
        res_data['summary'] = summary_node.get_text()
        return res_data

    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return
        soup = BeautifulSoup(html_cont, 'html.parser')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data
```

## html_outputer.py

```python
# coding:utf-8
class HtmlOutputer(object):
    def __init__(self):
        self.datas = []

    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)

    def output_html(self):
        # The explicit utf-8 matters here; see the encoding discussion below.
        fout = open('output.html', 'w', encoding='utf-8')

        fout.write('<html>')
        fout.write('<body>')
        fout.write('<table>')

        for data in self.datas:
            fout.write('<tr>')
            fout.write('<td>%s</td>' % data['url'])
            fout.write('<td>%s</td>' % data['title'])
            fout.write('<td>%s</td>' % data['summary'])
            fout.write('</tr>')

        fout.write('</table>')
        fout.write('</body>')
        fout.write('</html>')

        fout.close()
```

## Running

From the command line, run `python spider_main.py`.

## Encoding Issues

Problem: UnicodeEncodeError: 'gbk' codec can't encode character '\xa0' in position ...

This error comes up in most cases where Python writes text to a file, in particular when writing a network data stream to a local file. Many articles online treat it purely as an encode/decode problem, but that is not the real cause: you can call decode and encode with every encoding there is — utf8, utf-8, gbk, gb2312 and so on — and still hit the error, which is maddening.

When writing Python scripts on Windows, encoding problems are serious. Writing a network data stream to a file involves three encodings:

1. The `# encoding='XXX'` declaration

The encoding declared on the first line of the Python script refers to the encoding of the script file itself and is irrelevant here; it only has to match the file's actual encoding. For example, Notepad++ lets you set the file encoding in its Format menu; that setting just has to agree with the declared XXX, otherwise an error is reported.

2. The encoding of the network data stream

When fetching a web page, the stream's encoding is the page's encoding, and it needs to be decoded into unicode.

3. The encoding of the target file

Writing the network data stream to a new file, with code like:

```python
fout = open('output.html', 'w')
fout.write(str)
```
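To see the failure in isolation: a minimal sketch, assuming a Windows locale where gbk is the default file encoding:

```python
# '\xa0' (a non-breaking space) exists in unicode but has no gbk mapping.
text = 'no-break\xa0space'

# Explicit form of what fout.write(text) does implicitly when the file
# was opened without an encoding on a gbk-locale Windows machine:
text.encode('gbk')  # UnicodeEncodeError: 'gbk' codec can't encode character '\xa0' ...
```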
On Windows, the default encoding for a new file is gbk, so when writing, Python encodes our network data stream str with gbk. But str has already been decoded into unicode and can contain characters, such as '\xa0', that gbk cannot represent, which produces the error above. The fix is to set the target file's encoding explicitly:

```python
fout = open('output.html', 'w', encoding='utf-8')
```

## Results

![Run result](https://simg.open-open.com/show/198c8d450ac17bf2c730cd565b15029e.jpg) ![Run result](https://simg.open-open.com/show/d4224fcdbe3450986f7da8d6a7092a1b.jpg)

Source: http://python.jobbole.com/87320/