
Selenium Web Scraping: Missing 11 Sectional Times from Race Data Grid


I’m new to Python (version 3.13.2) and using Selenium with Firefox to scrape data from a racecard webpage, specifically the "Sectional Times" tab at a URL like https://www.attheraces.com/racecard/Cheltenham/11-March-2025/1320. My goal is to collect 88 sectional times, which means 8 per horse for 11 horses, but I’m only getting 77 values. I think the 11 missing ones might be related to horse numbers being filtered out, and I need help capturing all 88.
I’ve attached a screenshot named after_times_wait.png to show the grid layout.
Can anyone help with these questions?
How can I adjust the filter to get all 88 sectional times, especially if some have just one decimal place?

Is skipping the first 69 cells to avoid headers correct, or should I find a better way to detect where the horse data starts?

Any suggestions to ensure all 11 horses are processed, even if data is incomplete?

Thanks for any assistance!

I loaded the webpage using Selenium, clicked the "Sectional Times" tab, scrolled to load all content, and scraped 168 total cells. I tried filtering the cells to keep only decimal numbers with 1 to 2 digits before and after the decimal point (like 61.58 or 6.5), excluding values of 10 or less, which I think are horse numbers (e.g., 5). I processed 9 out of 11 horses, each with 8 sectionals, by padding with 0.00 when needed. I also skipped the first 69 cells assuming they were headers based on the raw data pattern.
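
To make the rule concrete, here is a minimal standalone sketch of the filter I’m describing (the is_sectional helper is hypothetical, written only to illustrate the rule):

import re

# 1-2 digits, a decimal point, then 1-2 digits (e.g. 61.58 or 6.5).
SECTIONAL_RE = re.compile(r"\d{1,2}\.\d{1,2}")

def is_sectional(text):
    """True for values that look like sectional times rather than horse numbers."""
    cleaned = text.strip().replace(' ', '').replace('x', '')
    return bool(SECTIONAL_RE.fullmatch(cleaned)) and float(cleaned) > 10

print(is_sectional('61.58'))  # True
print(is_sectional('5'))      # False: no decimal point, likely a horse number
print(is_sectional('6.5'))    # False: matches the pattern but fails the > 10 cut-off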

I expected to get 88 sectional times, with 8 valid sectionals for each of the 11 horses (e.g., one horse starting with 61.58), resulting in a complete dataset for all horses. Instead, I got only 77 cells and processed 9 horses.

atr sectional page

I’ve included my code and debug output below for reference.

Here’s the debug information I gathered: All time cells found: 168

First 10 raw time cells: ['Pos', 'Silk', 'Horse', '1', '', ...]

Filtered time cells found: 77

Accepted samples: ['61.58', '56.55', '30.02', '27.18', '26.97']

Rejected samples: ['5.', '11.', '6.', '4.', '7.', '10.', '2.', '8.', '1.', '3.']

Horses Found in final data: 9

And here’s my code attempt:

from selenium import webdriver
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
import time

# Firefox service
service = Service(executable_path="C:\\Scraping\\geckodriver.exe")
driver = webdriver.Firefox(service=service)

try:
    # Racecard URL
    url = "https://www.attheraces.com/racecard/Cheltenham/11-March-2025/1320"
    driver.get(url)
    time.sleep(15)  # Extended wait to allow full page load
    driver.save_screenshot("C:\\Scraping\\initial_load.png")
    print("Initial load captured")

    # Debug initial page source
    with open("C:\\Scraping\\page_source_initial.html", "w", encoding="utf-8") as f:
        f.write(driver.page_source)
    print("Initial page source saved")

    # Accept cookies
    try:
        cookie_button = WebDriverWait(driver, 10).until(
            EC.element_to_be_clickable((By.XPATH, '//button[text()="Accept All"]'))
        )
        cookie_button.click()
        print("Cookies accepted")
        time.sleep(5)
    except Exception as e:
        print("No cookie prompt found:", e)

    # Click Sectional Times tab with presence check and forced click
    print("Attempting to click Sectional Times tab...")
    try:
        tab = WebDriverWait(driver, 40).until(
            EC.presence_of_element_located((By.XPATH, '//a[contains(@class, "tab") and contains(normalize-space(.), "Sectional")]'))
        )
        driver.execute_script("arguments[0].click();", tab)
        print("Tab clicked")
    except Exception as e:
        print(f"Tab not found or click failed: {e}")
        driver.save_screenshot("C:\\Scraping\\tab_click_error.png")
    time.sleep(15)

    # Wait for tab container
    print("Waiting for tab container...")
    WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.XPATH, "//div[@id='tab-sectionals-times']"))
    )
    print("Tab container found")

    # Multiple scroll attempts to load content
    print("Scrolling to load content...")
    for _ in range(5):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(5)
        driver.execute_script("window.scrollTo(0, 0);")
        time.sleep(2)
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(5)
    driver.save_screenshot("C:\\Scraping\\after_scroll.png")
    print("Scroll completed")

    # Wait for sectional times with retry and extended timeout
    print("Waiting for sectional times...")
    max_attempts = 3
    for attempt in range(max_attempts):
        try:
            WebDriverWait(driver, 90).until(  # Increased to 90 seconds
                EC.presence_of_all_elements_located((By.XPATH, "//div[@id='tab-sectionals-times']//div[contains(@class, 'card-cell')]"))
            )
            print("Sectional times found")
            break
        except Exception as e:
            print(f"Attempt {attempt + 1} failed: {e}")
            if attempt < max_attempts - 1:
                print("Retrying...")
                time.sleep(10)
            else:
                print("Max retries reached, raising error")
                driver.save_screenshot("C:\\Scraping\\error_screenshot.png")
                raise
    driver.save_screenshot("C:\\Scraping\\after_times_wait.png")

    # Debug page source
    with open("C:\\Scraping\\page_source_after_scroll.html", "w", encoding="utf-8") as f:
        f.write(driver.page_source)

    # Headers
    headers = ["start-12f", "12f-8f", "8f-6f", "6f-4f", "4f-2f", "2f-1f", "1f-finish", "finish"]
    all_data = []

    # Race info
    race_name = driver.find_element(By.TAG_NAME, "h1").text.strip() if driver.find_elements(By.TAG_NAME, "h1") else "Unknown Race"
    distance = driver.find_element(By.CLASS_NAME, "p--large.font-weight--semibold").text.strip() if driver.find_elements(By.CLASS_NAME, "p--large.font-weight--semibold") else "0f"
    meeting = "cheltenham"

    # Scrape horse names
    print("Scraping horse names...")
    horse_elements = driver.find_elements(By.XPATH, "//div[contains(@class, 'card-entry')]//h2//a[contains(@class, 'horse__link')]")
    horses = [elem.text.strip() for elem in horse_elements if elem.text.strip() and elem.is_displayed()]
    print("Horses found:", len(horses))

    # Scrape sectional times with corrected alignment
    print("Scraping sectional times...")
    time_cells = driver.find_elements(By.XPATH, "//div[@id='tab-sectionals-times']//div[contains(@class, 'card-cell')]")
    print("All time cells found:", len(time_cells))
    print("First 10 raw time cells:", [cell.text.strip() for cell in time_cells[:10]])  # Debug raw data
    # Filter for time cells, accepting cleanable decimals after skipping headers
    time_cells_filtered = []
    accepted_samples = []
    rejected_samples = []
    header_count = 69  # Estimated based on raw data pattern
    for i, cell in enumerate(time_cells[header_count:]):  # Skip initial headers
        spans = cell.find_elements(By.XPATH, ".//span[contains(@class, 'visible')]")
        for span in spans:
            text = span.text.strip()
            cleaned = text.replace(' ', '').replace('x', '')
            if '.' in text and cleaned.replace('.', '').isdigit():
                parts = cleaned.split('.')
                if (len(parts) == 2 and all(part.isdigit() for part in parts)
                        and len(parts[0]) in [1, 2] and len(parts[1]) in [1, 2]
                        and float(cleaned) > 10):
                    time_cells_filtered.append(cell)
                    accepted_samples.append(text)
                else:
                    rejected_samples.append(text)
    print("Filtered time cells found:", len(time_cells_filtered))
    if accepted_samples:
        print("Accepted samples:", accepted_samples[:5])  # Show first 5 accepted
    if rejected_samples:
        print("Rejected samples:", rejected_samples[:10])  # Show first 10 rejected

    if len(time_cells_filtered) > 0 and len(horses) == 11:  # Process any available data
        print("Warning: Expected 88 cells, found", len(time_cells_filtered))
        # Assign filtered sectionals directly to horses
        expected_horses = 11
        remaining_cells = len(time_cells_filtered)
        for i in range(0, remaining_cells, 8):
            horse_index = i // 8
            if horse_index < expected_horses:
                horse_name = horses[horse_index]
                cells = time_cells_filtered[i:i + 8]
                times = [
                    cell.find_element(By.XPATH, ".//span[contains(@class, 'visible')]").text.strip().replace(' ', '').replace('x', '')
                    if cell.find_elements(By.XPATH, ".//span[contains(@class, 'visible')]") else ''
                    for cell in cells
                ]
                times = [
                    t if '.' in t
                    and t.replace(' ', '').replace('x', '').replace('.', '').isdigit()
                    and len(t.split('.')[0]) in [1, 2]
                    and len(t.split('.')[1]) in [1, 2]
                    and float(t.replace(' ', '').replace('x', '')) > 10
                    else '0.00'
                    for t in times
                ]
                # Pad or truncate to 8 sectionals
                if len(times) < 8:
                    times.extend(['0.00'] * (8 - len(times)))
                elif len(times) > 8:
                    times = times[:8]
                all_data.append([meeting, race_name, horse_name] + times)
                print(f"{horse_name}: {times}")
    else:
        print(f"Insufficient data: Time cells={len(time_cells_filtered)}, Horses={len(horses)}")

    # Debug
    print("Horses Found in final data:", len(all_data))
    print("Distance:", distance)

    # Save to CSV
    df = pd.DataFrame(all_data, columns=["meeting", "race", "horses"] + headers)
    df.to_csv("C:\\Scraping\\results_2025-03-11.csv", index=False)
    print("Saved to results_2025-03-11.csv")

except Exception as e:
    print(f"Error: {e}")
    driver.save_screenshot("C:\\Scraping\\error_screenshot.png")

finally:
    time.sleep(3)
    driver.quit()


1 Answer


You are getting only 77 values instead of 88 because, in Firefox, the last column's value cannot be obtained with the .text property. Use .get_attribute('innerText') instead. Also, the values in the other columns carry some labels that need to be stripped out if you take the text directly from the div element.
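
As a quick illustration (assuming row is one of the card-entry divs selected in the code below):

# In Firefox, .text on the finish-time div can come back empty,
# while innerText returns the rendered value:
cell = row.find_element(By.CSS_SELECTOR, "div.card-sectional--finish")
print(repr(cell.text))                        # may print ''
print(repr(cell.get_attribute('innerText')))  # e.g. '3m 52.04s'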

In the following code I have combined two selectors with the | pipe operator into a single XPath: one selects the span for the regular columns, the other the div for the finish column.
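
Isolated for readability (the SECTIONALS_XPATH constant is just for illustration), the combined XPath is the two relative selectors joined with |; the matched elements come back in document order, so the finish value ends up last in each row's list:

SECTIONALS_XPATH = (
    ".//div[contains(@class,'card-sectional')]//span[@class='visible']"  # regular sectional columns
    " | .//div[contains(@class,'card-sectional--finish')]"               # finish-time column
)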

Also, you can select only the items that are visible in the table; there is no need to select all the card-cell divs and filter them afterwards.

This gives me all 88 values.

from selenium import webdriver
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
import time

driver = webdriver.Firefox()
wait = WebDriverWait(driver, 30)

try:
    # Racecard URL
    url = "https://www.attheraces.com/racecard/Cheltenham/11-March-2025/1320"
    driver.get(url)

    # Accept cookies
    try:
        wait.until(EC.element_to_be_clickable((By.XPATH, '//button[text()="Accept All"]'))).click()
        print("Cookies accepted")
    except Exception as e:
        print("No cookie prompt found:", e)

    # Click Sectional Times tab with presence check and forced click
    print("Attempting to click Sectional Times tab...")
    try:
        tab = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a[data-name='sectional-times']")))
        driver.execute_script("arguments[0].click();", tab)
        print("Tab clicked")
    except Exception as e:
        print(f"Tab not found or click failed: {e}")
        driver.save_screenshot("C:\\Scraping\\tab_click_error.png")

    # Wait for tab container
    print("Waiting for tab container...")
    wait.until(EC.presence_of_element_located((By.XPATH, "//div[@id='tab-sectionals-times']")))
    print("Tab container found")

    # Headers
    headers = [h.get_attribute('innerText').strip() for h in driver.find_elements(By.CSS_SELECTOR,"div.card-column__main div.card-header div.card-header__th:not(.is-hidden)")]
    all_data = []

    # Race info
    race_name = driver.find_element(By.CSS_SELECTOR, "div.race-header__details > p > b").text.strip() if driver.find_elements(By.CSS_SELECTOR, "div.race-header__details > p > b") else "Unknown Race"
    distance = driver.find_element(By.CLASS_NAME, "p--large.font-weight--semibold").text.strip() if driver.find_elements(By.CLASS_NAME, "p--large.font-weight--semibold") else "0f"
    meeting = "cheltenham"

    # Scrape
    rows = driver.find_elements(By.CSS_SELECTOR, "div.card-column__main div.card-body > div.card-entry")
    for row in rows:
        horse_name = row.find_element(By.CSS_SELECTOR, "div.horse a").get_attribute('innerText').strip()
        times = [
            t.get_attribute('innerText').strip()
            for t in row.find_elements(By.XPATH, ".//div[contains(@class,'card-sectional')]//span[@class='visible'] | .//div[contains(@class,'card-sectional--finish')]")
        ]
        all_data.append([meeting, race_name, horse_name] + times)

    # Save to CSV
    df = pd.DataFrame(all_data, columns=["meeting", "race", "horses"] + headers)
    df.to_csv("results_2025-03-11.csv", index=False)
    print("Saved to results_2025-03-11.csv")
    print(df)

except Exception as e:
    print(f"Error: {e}")
    driver.save_screenshot("C:\\Scraping\\error_screenshot.png")

finally:
    time.sleep(3)
    driver.quit()

The Result:

       meeting                                        race             horses Start-12f 12f-8f  8f-6f  6f-4f  4f-2f  2f-1f 1f-Finish     Finish
0   cheltenham  Michael O'Sullivan Supreme Novices' Hurdle   Kopek Des Bordes     61.58  56.55  30.02  27.18  26.97  13.44     16.28  3m 52.04s
1   cheltenham  Michael O'Sullivan Supreme Novices' Hurdle      William Munny     61.87  57.04  29.50  27.47  26.70  13.81     16.01  3m 52.40s
2   cheltenham  Michael O'Sullivan Supreme Novices' Hurdle       Romeo Coolio     61.07  56.87  29.83  27.28  27.46  14.57     16.59  3m 53.65s
3   cheltenham  Michael O'Sullivan Supreme Novices' Hurdle          Karniquet     61.63  57.13  29.98  27.69  26.95  14.66     16.47  3m 54.50s
4   cheltenham  Michael O'Sullivan Supreme Novices' Hurdle     Salvator Mundi     61.74  57.55  29.93  28.39  27.46  14.37     16.77  3m 56.20s
5   cheltenham  Michael O'Sullivan Supreme Novices' Hurdle       Tutti Quanti     62.13  58.39  30.21  28.69  28.69  14.32     15.75  3m 58.18s
6   cheltenham  Michael O'Sullivan Supreme Novices' Hurdle             Irancy     61.77  57.48  29.72  28.44  28.58  15.57     17.71  3m 59.27s
7   cheltenham  Michael O'Sullivan Supreme Novices' Hurdle           Sky Lord     61.87  57.26  29.96  27.57  28.65  15.78     18.30  3m 59.38s
8   cheltenham  Michael O'Sullivan Supreme Novices' Hurdle  Funiculi Funicula     61.99  57.64  29.52  28.52  29.84  16.08     18.44   4m 2.05s
9   cheltenham  Michael O'Sullivan Supreme Novices' Hurdle             Karbau     61.66  57.39  30.43  29.37  30.10  16.53     19.30   4m 4.79s
10  cheltenham  Michael O'Sullivan Supreme Novices' Hurdle          Workahead     60.72  56.96  30.79  30.99  33.04  16.42     19.06   4m 7.99s