I am familiar with how to use the Google Chrome Web Inspector to manually save a webpage as a HAR file with the content. I would like to automate this.
In my searches for tools to automate the generation of a HAR file, I have found some solutions, but none of them save the content of the resources.
I have tried the following without any luck:
- https://github.com/ariya/phantomjs/blob/master/examples/netsniff.js
- https://github.com/cyrus-and/chrome-har-capturer
Getting the content of the page I requested (the raw HTML) is doable, but getting the content of every other network resource that loads (CSS, JavaScript, images, etc.) is where my problem lies.
asked Feb 10, 2014 by Teddy
- Did you find a way to do this? – Monodeep, Jan 17, 2015
- @Monodeep I never found a solution for this. – Teddy, Jan 18, 2015
- Thanks for the reply. I found a solution and I am using it successfully. It uses Selenium, Firebug & NetExport (Firefox extensions). If you still need it I can post the code here (I have written it in Python). – Monodeep, Feb 22, 2015 (a rough sketch of that setup follows below)
- FYI chrome-har-capturer does that: see the --content option. – cYrus, Jun 9, 2016
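For reference, the Firefox route mentioned in the comments (Selenium driving Firefox with the Firebug and NetExport extensions, which can auto-export a HAR that includes response bodies) would look roughly like the sketch below. The .xpi paths, output directory, and preference keys are assumptions about a typical NetExport auto-export setup, not the commenter's actual code, so adjust them to your installed extension versions.

from selenium import webdriver
import time

# Build a Firefox profile with Firebug and NetExport installed; the .xpi paths are placeholders
profile = webdriver.FirefoxProfile()
profile.add_extension("path/to/firebug.xpi")
profile.add_extension("path/to/netExport.xpi")

# The preference keys below are assumptions based on a typical Firebug/NetExport configuration
profile.set_preference("extensions.firebug.currentVersion", "2.0")  # skip the first-run page
profile.set_preference("extensions.firebug.allPagesActivation", "on")
profile.set_preference("extensions.firebug.defaultPanelName", "net")
profile.set_preference("extensions.firebug.net.enableSites", True)
profile.set_preference("extensions.firebug.netexport.alwaysEnableAutoExport", True)
profile.set_preference("extensions.firebug.netexport.includeResponseBodies", True)
profile.set_preference("extensions.firebug.netexport.defaultLogDir", "path/to/har/output")
profile.set_preference("extensions.firebug.netexport.showPreview", False)

driver = webdriver.Firefox(firefox_profile=profile)
driver.get("http://stackoverflow.com")
time.sleep(10)  # give NetExport time to write the .har file to the log directory
driver.quit()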
3 Answers
I think the most reliable way to automate HAR generation is to use BrowserMob Proxy along with ChromeDriver and Selenium.
Here is a Python script that generates the HAR file programmatically and can be integrated into your development cycle. It also captures content.
from browsermobproxy import Server
from selenium import webdriver
import json
import os

# Start the BrowserMob Proxy server and create a proxy for this capture
server = Server("path/to/browsermob-proxy")
server.start()
proxy = server.create_proxy()

# Route Chrome's traffic through the proxy so every request/response is recorded
chromedriver = "path/to/chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--proxy-server={0}".format(proxy.proxy))
driver = webdriver.Chrome(chromedriver, chrome_options=chrome_options)

# Start a new HAR; captureContent=True makes the proxy store response bodies
proxy.new_har("http://stackoverflow.com",
              options={'captureHeaders': True, 'captureContent': True})
driver.get("http://stackoverflow.com")

result = json.dumps(proxy.har, ensure_ascii=False)
print(result)

proxy.stop()
driver.quit()
server.stop()
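If you would rather write the HAR to disk and confirm that response bodies were actually captured, you can walk the standard HAR structure (log → entries → response → content) before stopping the proxy. The snippet below is a minimal sketch that continues from the script above; the capture.har filename is just an example.

# Continuing from the script above (run this before proxy.stop()):
har = proxy.har

# Save the HAR to disk instead of printing it
with open("capture.har", "w") as f:
    json.dump(har, f, ensure_ascii=False)

# List each captured resource with its MIME type and body length;
# 'text' is only populated when captureContent was enabled
for entry in har["log"]["entries"]:
    content = entry["response"]["content"]
    body = content.get("text", "")
    print("{0} {1} {2} chars".format(entry["request"]["url"],
                                     content.get("mimeType", ""),
                                     len(body)))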
You can also check out Speedprofile, a tool that generates HAR and Navigation Timing data headlessly from both Chrome and Firefox.
You might take a look at PhantomJS; it looks like it can export network traffic as HAR: http://phantomjs.org/network-monitoring.html
You can use an HTTP proxy to save the contents. On Windows, you can use the free Fiddler; on Mac and Linux, you can use Charles Proxy, but it is not free.
In Fiddler, you can choose to save the captured requests in all their glory, including headers.