Fetcher
    URLFetcher

class Fetcher
    Fetcher: represents a file which is being downloaded from a web
    site, or some other source which could take a while.

    The idea is to use this object in a loop:
        while not fetcher.is_done():
            fetcher.work()
    One could equally well manage several fetchers in parallel. (But see
    the caveat in URLFetcher.)

    Fetcher(loader) -- constructor
    The argument must be a PackageCollection. (When the download is
    complete, the loader's downloaded_files map will be updated.)

    (This is an abstract base class, so creating a Fetcher directly is
    not of any particular use.)
    Methods defined here:

    __init__(self, loader)

    is_done(self)
        is_done() -> bool
        Return whether the fetching process is complete.

    work(self)
        work() -> None
        Do another increment of work.
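The polling pattern above can be sketched with a toy subclass. PackageCollection and the real loader bookkeeping come from the surrounding library and are omitted here; the Fetcher base, the CountdownFetcher subclass, and the step count are all hypothetical, for illustration only.

```python
class Fetcher:
    """Abstract base: subclasses override is_done() and work()."""
    def __init__(self, loader):
        self.loader = loader

    def is_done(self):
        raise NotImplementedError

    def work(self):
        raise NotImplementedError


class CountdownFetcher(Fetcher):
    """Pretends to fetch something in a fixed number of increments."""
    def __init__(self, loader, steps=3):
        Fetcher.__init__(self, loader)
        self.remaining = steps

    def is_done(self):
        return self.remaining == 0

    def work(self):
        if self.remaining > 0:
            self.remaining -= 1


fetcher = CountdownFetcher(loader=None, steps=3)
while not fetcher.is_done():
    fetcher.work()
# After three work() calls, is_done() returns True.
```

Several such fetchers could be kept in a list and stepped round-robin, which is what "manage several fetchers in parallel" amounts to.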
class URLFetcher(Fetcher)
    URLFetcher: represents a file which is being downloaded via a URL.

    URLFetcher(loader, url, filename) -- constructor
    The loader argument must be a PackageCollection. The URL is the one
    to download; the data will be written to filename. (The directory
    containing filename must already exist.)

    Caveat: Python's urllib2 always uses a blocking socket. Therefore,
    any call to work() may take arbitrarily long. (It will actually take
    as long as necessary to get another 1000 bytes, which could be
    forever.) This is not the way the Fetcher class is supposed to work,
    but it's what we've got.
    Methods defined here:

    __del__(self)

    __init__(self, loader, url, filename)

    closeall(self)
        closeall() -> None
        Close the in (HTTP) and out (file write) streams, if necessary.
        (This is an internal method. Do not call.)

    is_done(self)
        is_done() -> bool
        Return whether the fetching process is complete.

    work(self)
        work() -> None
        Do another increment of work. If the download is complete, close
        the files.
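A minimal sketch of the incremental-download behavior described above, using the modern urllib.request in place of the urllib2 module the class actually wraps. The 1000-byte chunk size matches the caveat; the loader argument and the downloaded_files bookkeeping are omitted, and the class name is hypothetical.

```python
import urllib.request


class URLFetcherSketch:
    CHUNK = 1000  # bytes read per work() call, per the caveat above

    def __init__(self, url, filename):
        # urlopen uses a blocking socket, so each read may stall.
        self.instream = urllib.request.urlopen(url)
        self.outstream = open(filename, 'wb')
        self.done = False

    def is_done(self):
        return self.done

    def work(self):
        """Read one chunk; close both streams when the download ends."""
        if self.done:
            return
        data = self.instream.read(self.CHUNK)
        if data:
            self.outstream.write(data)
        else:
            self.closeall()
            self.done = True

    def closeall(self):
        """Close the in and out streams (internal)."""
        self.instream.close()
        self.outstream.close()
```

Driving it with the standard loop (`while not f.is_done(): f.work()`) downloads the file 1000 bytes at a time and closes both streams when the source is exhausted.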