
Commit cf14a8fe authored by J.W. Smith

refactor: Scraper uses http get immediately

An error occurred once in which the request content was apparently lost
before it could be used. Fetching the content and writing it out
immediately may mitigate the risk of that happening again.
parent 150c301d
@@ -50,12 +50,10 @@ def get_news_reports_from_url(job_dir: str, url = URL_DEFAULT) -> None:
     assert os.path.exists(job_dir)
-    r = requests.get(url)
-    # write a local copy of the zip file
+    # Get the zip file from the url and immediately write a local copy
     fn_zip = f"{job_dir}/data.zip"
     with open(fn_zip,'wb') as zipfile:
-        zipfile.write(r.content)
+        zipfile.write(requests.get(url).content)
     # unzip it
     dir_unzip = f"{job_dir}/data/"
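
For orientation, here is a minimal, self-contained sketch of how the patched function might look after this change. The URL_DEFAULT value, the unzip step, and the file-handle name are assumptions for illustration only; the actual repository defines these elsewhere, and only the lines shown in the hunk above come from the source.

    import os
    import zipfile

    import requests

    # Assumed placeholder; the real default URL is defined elsewhere in the module.
    URL_DEFAULT = "https://example.org/news_reports.zip"

    def get_news_reports_from_url(job_dir: str, url=URL_DEFAULT) -> None:
        assert os.path.exists(job_dir)
        # Get the zip file from the url and immediately write a local copy,
        # so the response content is consumed as soon as it arrives.
        fn_zip = f"{job_dir}/data.zip"
        with open(fn_zip, 'wb') as f:
            f.write(requests.get(url).content)
        # unzip it (assumed step; the unzip code lies outside the visible hunk)
        dir_unzip = f"{job_dir}/data/"
        os.makedirs(dir_unzip, exist_ok=True)
        with zipfile.ZipFile(fn_zip) as zf:
            zf.extractall(dir_unzip)

Note that the hunk itself binds the open file object to the name zipfile, which would shadow Python's zipfile module if a later unzip step relies on it; the sketch uses f instead so the import stays usable.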