
javascript - Capybara/Selenium gets a Net::ReadTimeout randomly on location.reload() - Stack Overflow


I'm using Capybara, the selenium-webdriver gem, and chromedriver in order to drive my javascript enabled tests.

The problem is that about 50% of our builds fail due to a Net::ReadTimeout error. At first this was manifesting as a 'could not find element' error, but after I upped Capybara's default max wait time to 30 seconds, I started seeing the timeout.

Examining the screenshots taken when the timeout happens, I can see it's stuck on a 'Successfully logged in' modal that we show briefly before using the JavaScript function location.reload() to reload the page.

I've run the test locally and can sometimes reproduce it, also seemingly at random. Sometimes it zips past this modal and does the reload so fast you can barely see it, and other times it just hangs forever.

I don't think it's an asset compilation issue, since the site has already loaded at that point in order for the user to reach the login form.

Wondering if anyone has seen this before and knows a solution.

The specific code:

    visit login_path

    page.within '#sign-in-pane__body' do
      fill_in 'Email', with: user.email
      click_button 'Submit'
    end

    expect(page).to have_content 'Enter Password'

    page.within '#sign-in-pane__body' do
      fill_in 'Password', with: user.password
      click_button 'Submit'
    end

    expect(page).to have_text 'Home page landing text'

The hang-up happens between the second click_button 'Submit' and the expectation of the home page text.

The flow of the logic causing the timeout: the user submits the login form, and we wait for the server to render a .js.erb template that triggers a JS event upon successful login. When that event fires we show a modal saying that login was successful, then execute location.reload().
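To make the window where it hangs explicit, the end of the spec can also wait on the modal itself before the final assertion. This is only a rough sketch; the '#login-success-modal' selector is hypothetical and our actual markup differs:

    # After the second click_button 'Submit':
    # the success modal should appear, then disappear once location.reload() finishes.
    expect(page).to have_css '#login-success-modal'     # hypothetical selector
    expect(page).to have_no_css '#login-success-modal'
    expect(page).to have_text 'Home page landing text'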

asked Apr 20, 2017 at 0:03 by Zachary Wright (edited Apr 20, 2017 at 0:11)
  • Do you have anything in your app (rack-attack, etc) that throttles requests? If not, check your test.log for info about whether the request was actually made and what the app was doing. Also, what do you have Capybara.server set to? – Thomas Walpole Commented Apr 20, 2017 at 1:02
  • I know I said yesterday I was able to reproduce locally, but having a hard time doing that today to check the log. No request throttling. I'm not manually setting the server to anything, so it's whatever the default is. – Zachary Wright Commented Apr 20, 2017 at 13:05
  • If you haven't set Capybara.server to anything it defaults to WEBrick, which can have issues with multiple simultaneous requests. Try setting Capybara.server = :puma (see the sketch after these comments) and see if that makes a difference. – Thomas Walpole Commented Apr 20, 2017 at 13:40
  • I've tried out puma, which didn't seem to work, so I switched back to the default for the time being. What's interesting is that after several runs I've actually started seeing the Net::ReadTimeout in other places. It's very rare but sometimes happens just doing a visit path. – Zachary Wright Commented Apr 20, 2017 at 18:41
  • If I watch the specs run, I can see it hitting the point where it times out, and the loading indicator in Chrome continuously spins, but nothing ever happens. Looking through the test log, everything appears normal as far as Rails is concerned: it sees itself responding and rendering pages normally. In the case of the JS issue I saw earlier, it renders the templates I would expect it to. – Zachary Wright Commented Apr 20, 2017 at 18:49
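For reference, the Capybara.server suggestion from the comments above is a one-line change in the test configuration. A minimal sketch, assuming Puma is available in the app's test environment:

# e.g. in rails_helper.rb / spec_helper.rb
# Serve the app under test with Puma instead of the default WEBrick,
# which can struggle with multiple simultaneous requests.
Capybara.server = :puma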

3 Answers


It turned out this wasn't exclusive to doing a location.reload() in JS. It sometimes happened just visiting a page.

The solution for me was to create an HTTP client for the selenium driver and specify a longer timeout:

Capybara.register_driver :chrome do |app|
  # Build a Selenium HTTP client with a longer read timeout than the default 60 seconds
  client = Selenium::WebDriver::Remote::Http::Default.new
  client.read_timeout = 120

  Capybara::Selenium::Driver.new(app, browser: :chrome, http_client: client)
end
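To actually run specs through this driver, Capybara still needs to be pointed at it, for example (the driver name matches the register_driver call above):

# Use the :chrome driver registered above for JS-enabled specs
Capybara.javascript_driver = :chrome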

I solved a similar problem by using my own version of the visit method:

def safe_visit(url)
  # Retry `visit` a few times when Selenium raises Net::ReadTimeout,
  # then give up and abort the run.
  max_retries = 3
  times_retried = 0
  begin
    visit url
  rescue Net::ReadTimeout => error
    if times_retried < max_retries
      times_retried += 1
      puts "Failed to visit #{url}, retry #{times_retried}/#{max_retries}"
      retry
    else
      puts error.message
      puts error.backtrace.inspect
      exit(1)
    end
  end
end
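Then call it wherever you would normally call visit in a spec, e.g.:

safe_visit login_path   # instead of: visit login_path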

Here is what you need to do if you want to configure it for headless Chrome:

Capybara.register_driver :headless_chrome do |app|
  client = Selenium::WebDriver::Remote::Http::Default.new
  client.timeout = 120 # instead of the default 60
  options = Selenium::WebDriver::Chrome::Options.new
  options.headless!

  Capybara::Selenium::Driver.new(app, {
    browser: :chrome,
    http_client: client,
    options: options
  })
end

Capybara.default_driver = :headless_chrome
Capybara.javascript_driver = :headless_chrome

Passing the headless argument in capabilities was not working for me:

capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
  chromeOptions: { args: %w[headless disable-gpu] }
)

Here are more details about why headless in capabilities was not working.
