Building a resilient sports data pipeline

December 14, 2024 · gourbeyd

How to build a resilient sports data pipeline? A short dive into FootX's architectural choices.

Defining needed data

The most important part of data visualisation and algo-betting is the data itself.
A lot of time should be spent figuring out which interesting features will be used.
For football, these can range from classical stats (number of shots, number of goals, number of passes …) to more advanced ones such as the preferred side to lead an offense, pressure, or passes made into the box …
Once the features are identified, we have to determine which data sources can provide this information.

Soccer data sources

  • API (free, paid)
    • Lots of resources out there; some free plans offer classical stats for many leagues, with rate limiting.
    • Paid sources such as StatsBomb are very high quality with many more statistics, but this comes at a price (multiple thousands of dollars for a season of a league). These are the sources used by bookmakers.
  • good ol’ scraping
    • Some websites show very interesting data, but scraping is needed. A free alternative, paid for with scraping effort and compute time.

Scraping pipelines

This project relies on scraping at some point. I’ve implemented it with Python and the help of the selenium/beautifulsoup libraries. While very handy, I’ve faced some consistency issues, which a retry wrapper like the sketch below helps to smooth over.
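As a minimal sketch of such a wrapper: render the page with Selenium, parse it with BeautifulSoup, and retry on transient failures. The URL and the `.stat-shots` selector are hypothetical placeholders, not actual FootX targets.

```python
# Minimal fetch-with-retries sketch; selectors and URL are placeholders.
import time

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options


def fetch_match_stats(url: str, retries: int = 3, backoff: float = 2.0) -> dict:
    """Render the page, parse it, and retry on transient failures."""
    options = Options()
    options.add_argument("--headless=new")
    for attempt in range(1, retries + 1):
        driver = webdriver.Chrome(options=options)
        try:
            driver.get(url)
            soup = BeautifulSoup(driver.page_source, "html.parser")
            shots = soup.select_one(".stat-shots")  # hypothetical selector
            if shots is None:
                # Typical "consistency issue": the page did not fully render
                raise ValueError("page not fully rendered")
            return {"shots": int(shots.text)}
        except Exception:
            if attempt == retries:
                raise
            time.sleep(backoff * attempt)  # simple linear backoff
        finally:
            driver.quit()  # fresh driver per attempt avoids stale sessions
```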

About resilience

Whether it is scraping or API fetching, retrieving data will sometimes fail. To avoid relaunching pipelines all day, solutions are needed.

FootX - Pipeline architecture
On this schema, a blue background indicates a topic of a pub/sub mechanism, orange indicates pipelines needing scraping or API fetching, and green indicates pure computation.

I chose to use a pub/sub mechanism: tasks to be done, such as fetching a game’s data, are stored in a topic and then consumed by workers.
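To make this concrete, here is a producer sketch that publishes one fetch task per game. I’m assuming RabbitMQ via the pika library; the actual broker, queue name, and message shape used by FootX are not specified in this post, so all of these are placeholders.

```python
# Hypothetical producer: one "fetch this game" task per message.
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="game-fetch", durable=True)  # survives broker restarts

for game_id in ["game-001", "game-002"]:  # placeholder identifiers
    channel.basic_publish(
        exchange="",
        routing_key="game-fetch",
        body=json.dumps({"game_id": game_id}),
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )
connection.close()
```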

Why use a pub/sub mechanism?

Consumers that need to perform scraping or API calls only mark a message as consumed once they have successfully accomplished their task. This allows easy restarts without having to worry about which game’s data was correctly fetched. A minimal consumer sketch of this ack-on-success pattern follows.
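Again assuming RabbitMQ/pika (an assumption on my part), the key point is that the acknowledgement happens only after the fetch succeeds, so unacked tasks are redelivered after a crash or restart. `fetch_game_data` stands in for the real scraping or API call.

```python
# Consumer sketch: ack only after success, so failed tasks are retried.
import json

import pika


def fetch_game_data(game_id: str) -> None:
    """Placeholder for the real scraping/API call."""
    ...


def handle(ch, method, properties, body):
    task = json.loads(body)
    try:
        fetch_game_data(task["game_id"])
        ch.basic_ack(delivery_tag=method.delivery_tag)  # mark as consumed
    except Exception:
        # Leave unacked and requeue: the task will be retried later
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)


connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="game-fetch", durable=True)
channel.basic_qos(prefetch_count=1)  # one in-flight task per worker
channel.basic_consume(queue="game-fetch", on_message_callback=handle)
channel.start_consuming()
```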

Such a stack could also allow live processing, though I have not yet implemented it in my projects.

Storage choices

I personally went with MongoDB for the following reasons:

  • Kinda close to my data source format, since everything is JSON.
    • I did not want to store only features but all available game data, so that I can perform further feature extraction later.
  • Easy to self-host, easy to set up replication, and well integrated with any processing tool I use …
  • When fetching data, my queries are based on specific fields, which can easily be indexed in MongoDB.

A few notes on getting the best out of MongoDB:

  • One collection per data group (i.e. games, players …)
  • Index the fields most used for queries; they will be much faster. For the games collection, in my case this includes: date, league, teamIdentifier, season.
  • Follow MongoDB best practices:
    • For example, to include odds in the data, is it better to embed them in the game document, or to create another collection and reference it? => I chose to embed them, as odds data is small. (See the sketch after this list.)
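A quick sketch of these choices with pymongo. The database name, field values, and document shape are illustrative, not the actual FootX schema:

```python
# Indexing and embedding sketch; names and values are placeholders.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
games = client["footx"]["games"]  # one collection per data group

# Index the fields most used for queries
games.create_index([("date", ASCENDING)])
games.create_index([("league", ASCENDING), ("season", ASCENDING)])
games.create_index([("teamIdentifier", ASCENDING)])

# Odds are small, so they are embedded in the game document
# rather than referenced from a separate collection.
games.insert_one({
    "date": "2024-12-14",
    "league": "L1",  # placeholder league code
    "season": "2024-2025",
    "teamIdentifier": "team-42",
    "stats": {"shots": 13, "goals": 2, "passes": 512},
    "odds": {"home": 1.85, "draw": 3.4, "away": 4.2},  # embedded odds
})

# Indexed query: all games for a team in a given season
for game in games.find({"teamIdentifier": "team-42", "season": "2024-2025"}):
    print(game["date"], game["stats"]["goals"])
```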

Final words

In the end, new games can easily be processed and added to the datasets, which will allow for more coverage in the future. Transposing this to other sports seems trivial, as nothing here is really football specific.

Thanks for reading! Do not hesitate to contact us to discuss any topic!