Requests is a versatile HTTP library for Python with a variety of applications. One of them is downloading files from the web using a file's URL.
Installation: First, you need the requests library. You can install it directly using pip by typing the following command:
pip install requests
Or download it directly from here and install it manually.
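Once installed, you can quickly check that the library is importable (requests exposes the standard __version__ attribute):

import requests
print(requests.__version__)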
Downloading a file
# import the requests library
import requests
image_url = "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png"
# URL of the image to be downloaded is defined as image_url
r = requests.get(image_url) # create HTTP response object
# send a HTTP request to the server and save
# the HTTP response in a response object called r
with open("python_logo.png", 'wb') as f:
    # saving received content as a png file in
    # binary format
    # write the contents of the response (r.content)
    # to a new file in binary mode
    f.write(r.content)
This small piece of code will download the following image from the web. Now check your local directory (the folder where this script resides), and you will find this image:
All we need is the URL of the image source. (You can get the URL of the image source by right-clicking the image and selecting the View Image option.)
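Note that the snippet above never checks whether the request actually succeeded; if the server returns an error page, that page gets written to disk. A minimal hardened variant, reusing the same URL and filename, could call raise_for_status() before saving:

import requests

image_url = "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png"
r = requests.get(image_url)
# raise an HTTPError for 4xx/5xx responses instead of
# silently saving an error page as the image
r.raise_for_status()
with open("python_logo.png", 'wb') as f:
    f.write(r.content)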
Downloading large files
The HTTP response content (r.content) is nothing but a byte string holding the file data. So, in the case of large files, it is not feasible to hold all the data in a single string. To overcome this problem, we make some changes to our program:
r = requests.get(URL, stream = True)
Setting the stream parameter to True causes only the response headers to be downloaded, while the connection is kept open. This avoids reading all the content into memory at once for large responses. A fixed-size chunk is loaded on each iteration of r.iter_content.
Here is an example:
import requests
file_url = "http://codex.cs.yale.edu/avi/db-book/db4/slide-dir/ch1-2.pdf"
r = requests.get(file_url, stream = True)
with open("python.pdf", "wb") as pdf:
    for chunk in r.iter_content(chunk_size=1024):
        # writing one chunk at a time to the pdf file
        if chunk:
            pdf.write(chunk)
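Because stream = True fetches only the headers up front, the (optional) Content-Length header can be used to report progress while the chunks arrive. A sketch, assuming the server actually sends that header:

import requests

file_url = "http://codex.cs.yale.edu/avi/db-book/db4/slide-dir/ch1-2.pdf"
r = requests.get(file_url, stream = True)
# Content-Length is optional, so fall back to 0 when it is absent
total = int(r.headers.get('content-length', 0))
downloaded = 0
with open("python.pdf", "wb") as pdf:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:
            pdf.write(chunk)
            downloaded += len(chunk)
            if total:
                print("\r%.1f%% downloaded" % (100 * downloaded / total), end="")
print()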
Downloading videos
In this example, we are interested in downloading all the video lectures available on this webpage. All the archives of the lecture are available here. So we first scrape the webpage to extract all the video links, and then download the videos one by one.
import requests
from bs4 import BeautifulSoup
'''
URL of the archive web-page which provides link to
all video lectures. It would have been tiring to
download each video manually.
In this example, we first crawl the webpage to extract
all the links and then download videos.
'''
# specify the URL of the archive here
archive_url = "http://www-personal.umich.edu/~csev/books/py4inf/media/"
def get_video_links():
    # create response object
    r = requests.get(archive_url)
    # create beautiful-soup object
    soup = BeautifulSoup(r.content, 'html5lib')
    # find all links on the web-page
    links = soup.find_all('a')
    # filter the links ending with .mp4
    video_links = [archive_url + link['href'] for link in links if link['href'].endswith('mp4')]
    return video_links

def download_video_series(video_links):
    '''iterate through all links in video_links
    and download them one by one'''
    for link in video_links:
        # obtain filename by splitting the url and taking the
        # last string
        file_name = link.split('/')[-1]
        print("Downloading file:%s" % file_name)
        # create response object
        r = requests.get(link, stream = True)
        # download started
        with open(file_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size = 1024*1024):
                if chunk:
                    f.write(chunk)
        print("%s downloaded!\n" % file_name)
    print("All videos downloaded!")
    return

if __name__ == "__main__":
    # getting all video links
    video_links = get_video_links()
    # download all videos
    download_video_series(video_links)
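As a possible refinement (not part of the original script), each download could add a timeout and use the response as a context manager, which Requests supports, so the underlying connection is released even if something fails mid-download. The helper name below is hypothetical:

# requests is already imported above
def download_file(link, timeout=30):
    # hypothetical helper, a hedged variant of the loop body above
    file_name = link.split('/')[-1]
    # the context manager guarantees the connection is released
    # even if writing fails part-way through
    with requests.get(link, stream = True, timeout=timeout) as r:
        r.raise_for_status()
        with open(file_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size = 1024*1024):
                if chunk:
                    f.write(chunk)
    return file_name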
The advantages of using the Requests library for downloading web files are:
- You can easily download web directories by recursively traversing the website!
- It is a browser-independent method and much faster!
- One can simply scrape a webpage to get all the file URLs on it and, hence, download all the files with a single command:
Implementing Web Scraping in Python with BeautifulSoup