📅  Last modified: 2020-10-31 14:38:29             🧑  Author: Mango
To execute your spider, run the following command from the first_scrapy directory -
scrapy crawl first
where first is the spider name specified when the spider was created.
Once the spider has crawled, you will see output similar to the following -
2016-08-09 18:13:07-0400 [scrapy] INFO: Scrapy started (bot: tutorial)
2016-08-09 18:13:07-0400 [scrapy] INFO: Optional features available: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Overridden settings: {}
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled extensions: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled downloader middlewares: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled spider middlewares: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled item pipelines: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Spider opened
2016-08-09 18:13:08-0400 [scrapy] DEBUG: Crawled (200) (referer: None)
2016-08-09 18:13:09-0400 [scrapy] DEBUG: Crawled (200) (referer: None)
2016-08-09 18:13:09-0400 [scrapy] INFO: Closing spider (finished)
As you can see in the output, each crawled URL has a log line ending in (referer: None), indicating that these URLs are start URLs and therefore have no referrer. Next, you should see two new files named Books.html and Resources.html created in the first_scrapy directory.
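Those two files are produced by the spider's parse() callback, which Scrapy invokes once per crawled response. A minimal sketch of how such a callback might derive the output filename and save the page body is shown below; filename_from_url is a hypothetical helper, not part of Scrapy itself, and the exact logic may differ from the spider defined earlier in this tutorial.

```python
# Hypothetical sketch (not the tutorial's exact spider code).
# Scrapy calls parse() once for each downloaded response; this version
# names the output file after the last path segment of the crawled URL.

def filename_from_url(url):
    # e.g. "http://www.dmoz.org/.../Python/Books/" -> "Books.html"
    return url.rstrip("/").split("/")[-1] + ".html"

def parse(self, response):
    # Write the raw page body to a file in the working directory,
    # which is why Books.html and Resources.html appear in first_scrapy.
    filename = filename_from_url(response.url)
    with open(filename, "wb") as f:
        f.write(response.body)
```

With start URLs ending in .../Books/ and .../Resources/, this naming scheme yields exactly the Books.html and Resources.html files mentioned above.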