:mod:`robotparser` --- Parser for robots.txt
=============================================
.. module:: robotparser
   :synopsis: Loads a robots.txt file and answers questions about
              fetchability of other URLs.
.. sectionauthor:: Skip Montanaro <skip@pobox.com>
.. index::
   single: World Wide Web
This module provides a single class, :class:`RobotFileParser`, which answers
questions about whether or not a particular user agent can fetch a URL on the
Web site that published the :file:`robots.txt` file.  For more details on the
structure of :file:`robots.txt` files, see http://www.robotstxt.org/orig.html.
.. class:: RobotFileParser()

   This class provides a set of methods to read, parse and answer questions
   about a single :file:`robots.txt` file.
   .. method:: set_url(url)

      Sets the URL referring to a :file:`robots.txt` file.
   .. method:: read()

      Reads the :file:`robots.txt` URL and feeds it to the parser.
   .. method:: parse(lines)

      Parses the *lines* argument, a list of lines from a
      :file:`robots.txt` file.
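      A minimal sketch of feeding :meth:`parse` lines you have downloaded
      yourself rather than through :meth:`read` (:mod:`urllib2` is used
      here only as one of several ways to fetch the file)::

         >>> import urllib2
         >>> import robotparser
         >>> rp = robotparser.RobotFileParser()
         >>> data = urllib2.urlopen("http://www.musi-cal.com/robots.txt").read()
         >>> rp.parse(data.splitlines())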
   .. method:: can_fetch(useragent, url)

      Returns ``True`` if the *useragent* is allowed to fetch the *url*
      according to the rules contained in the parsed :file:`robots.txt`
      file.
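      A short sketch showing that different user agents can receive
      different answers; the agent names and rules below are made up for
      illustration::

         >>> import robotparser
         >>> rp = robotparser.RobotFileParser()
         >>> rp.parse(["User-agent: BadBot", "Disallow: /"])
         >>> rp.can_fetch("BadBot", "http://example.com/index.html")
         False
         >>> rp.can_fetch("GoodBot", "http://example.com/index.html")
         True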
   .. method:: mtime()

      Returns the time the ``robots.txt`` file was last fetched.  This is
      useful for long-running web spiders that need to check for new
      ``robots.txt`` files periodically.
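      A hedged sketch of that periodic re-check pattern; the one-hour
      threshold and the ``allowed`` helper are arbitrary choices for
      illustration::

         import time
         import robotparser

         rp = robotparser.RobotFileParser("http://www.musi-cal.com/robots.txt")
         rp.read()

         def allowed(url, max_age=3600):
             # Re-read robots.txt when the cached copy is older than
             # max_age seconds; mtime() reports when it was last fetched.
             if time.time() - rp.mtime() > max_age:
                 rp.read()
             return rp.can_fetch("*", url)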
   .. method:: modified()

      Sets the time the ``robots.txt`` file was last fetched to the current
      time.
The following example demonstrates basic use of the :class:`RobotFileParser`
class. ::
   >>> import robotparser
   >>> rp = robotparser.RobotFileParser()
   >>> rp.set_url("http://www.musi-cal.com/robots.txt")
   >>> rp.read()
   >>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
   False
   >>> rp.can_fetch("*", "http://www.musi-cal.com/")
   True