While learning data visualization I was short of data to practice on, so I decided to scrape some takeout restaurant information from Ele.me.
The main goal is just to grab the data, so the code is fairly simple. Here it is:
```python
import csv
import json

import requests


def crawler_ele(page=0):
    def get_page(page):
        # latitude/longitude are elided here; fill in your own coordinates
        url = ('https://h5.ele.me/restapi/shopping/v3/restaurants'
               '?latitude=xxxx&longitude=xxxx'
               '&offset={page}&limit=8&terminal=h5').format(page=page * 8)
        headers = {
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                          'AppleWebKit/537.36 (KHTML, like Gecko) '
                          'Chrome/71.0.3578.80 Safari/537.36',
            'cookie': r'xxxx',  # elided; paste your own login cookie
        }
        return json.loads(requests.get(url, headers=headers).text)

    res = get_page(page)
    if res.get('items'):
        with open('data.csv', 'a', newline='', encoding='utf-8') as f:
            writer = csv.DictWriter(f, fieldnames=[
                'name', 'monthly_sales', 'delivery_fee', 'min_order',
                'flavor', 'rating', 'delivery_time', 'rating_count',
                'distance', 'address'])
            writer.writeheader()
            for item in res.get('items'):
                restaurant = item.get('restaurant')
                info = {
                    'name': restaurant.get('name'),
                    'monthly_sales': restaurant.get('recent_order_num'),
                    'delivery_fee': restaurant.get('float_delivery_fee'),
                    'min_order': restaurant.get('float_minimum_order_amount'),
                    'flavor': restaurant.get('flavors')[0].get('name'),
                    'rating': restaurant.get('rating'),
                    'delivery_time': restaurant.get('order_lead_time'),
                    'rating_count': restaurant.get('rating_count'),
                    'distance': restaurant.get('distance'),
                    'address': restaurant.get('address'),
                }
                writer.writerow(info)
                # print(info)
    if res.get('has_next'):
        crawler_ele(page + 1)  # fixed: the original called an undefined crawler_page


crawler_ele(0)
```
A few brief notes:
The latitude and longitude in the url have been removed; you can look up the coordinates of the location you want to crawl and fill them in yourself, or obtain them by calling a map API;
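As a sketch of the map-API route, the snippet below targets the Amap (Gaode) geocoding web service; `YOUR_KEY` is a placeholder for your own API key, and `parse_location` is a hypothetical helper written against Amap's documented response shape, which returns the coordinates as a single `"lng,lat"` string:

```python
import requests


def parse_location(geocode_json):
    """Extract (longitude, latitude) from an Amap geocoding response.

    Amap returns the coordinates as a single "lng,lat" string under
    geocodes[0]["location"].
    """
    location = geocode_json['geocodes'][0]['location']
    lng, lat = location.split(',')
    return float(lng), float(lat)


def geocode(address, key):
    """Call Amap's geocoding API; requires a web-service API key."""
    resp = requests.get(
        'https://restapi.amap.com/v3/geocode/geo',
        params={'address': address, 'key': key},  # key='YOUR_KEY'
    )
    return parse_location(resp.json())


# Offline demo using the documented response shape:
sample = {'status': '1', 'geocodes': [{'location': '116.483038,39.990633'}]}
print(parse_location(sample))  # → (116.483038, 39.990633)
```

The returned longitude and latitude can then be substituted into the crawl url.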
headers must include the Cookie, otherwise the login restriction will limit how many pages you can crawl;
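The cookie you copy from the browser's developer tools is one long `name=value; name=value` string; you can paste it straight into the headers as above, or split it into a dict and pass it to requests' `cookies` parameter instead. A small hypothetical helper (the cookie names in the demo are made up):

```python
def cookie_str_to_dict(cookie_str):
    """Split a raw 'k1=v1; k2=v2' cookie header into a dict for requests."""
    cookies = {}
    for pair in cookie_str.split(';'):
        if '=' in pair:
            name, _, value = pair.strip().partition('=')
            cookies[name] = value
    return cookies


print(cookie_str_to_dict('SID=abc123; track_id=xyz; ut=1'))
# → {'SID': 'abc123', 'track_id': 'xyz', 'ut': '1'}
```

It would be used as `requests.get(url, headers=headers, cookies=cookie_str_to_dict(raw_cookie))`.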
The final call is recursive rather than a loop, so the saved csv file will contain multiple duplicate header rows; you can open it in Excel and remove the duplicates.
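If you'd rather clean up the duplicate header rows in code instead of Excel, one way (a sketch; `data.csv` stands in for the file the crawler writes) is to keep the first row and drop every later row that is identical to it:

```python
import csv


def drop_duplicate_headers(in_path, out_path):
    """Copy a CSV, keeping the first row and skipping repeats of it."""
    with open(in_path, newline='', encoding='utf-8') as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    cleaned = [header] + [row for row in body if row != header]
    with open(out_path, 'w', newline='', encoding='utf-8') as f:
        csv.writer(f).writerows(cleaned)


# Demo on a tiny file with a repeated header row:
with open('data.csv', 'w', newline='', encoding='utf-8') as f:
    csv.writer(f).writerows([['name', 'rating'], ['A', '4.5'],
                             ['name', 'rating'], ['B', '4.8']])
drop_duplicate_headers('data.csv', 'data_clean.csv')
with open('data_clean.csv', newline='', encoding='utf-8') as f:
    print(list(csv.reader(f)))
# → [['name', 'rating'], ['A', '4.5'], ['B', '4.8']]
```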