The iconic red-light countdown. The roar of the crowd as cars scream past at 200mph. The unmistakable smell of burning rubber, engine oil and sizzling tarmac. The sights, sounds and smells of Formula 1 racing make it unique, drawing the biggest crowds in motorsports and creating some unforgettable moments of drama. What those thousands of spectators don’t see is the drama going on in the garages, in the pits, and inside the carbon-fibre bodies of the cars as each team’s IT infrastructure works overtime to get its cars over the finish line first.
There are few teams in F1 – in fact, in all international sport – with the pedigree of Williams. Since Frank Williams founded the team in 1977, it has won a combined 16 drivers’ and constructors’ championships.
As with all F1 teams, IT underpins everything Williams does, and the team puts its technology through some of the most demanding applications and conditions on the planet. From the sweltering humidity of Singapore and Brazil to the baking dust of Bahrain and Abu Dhabi, and the famously unpredictable British summer, equipment has to function wherever it happens to be.
Transporting a high-performance racing and engineering operation around the world is no walk in the park either. Williams moves all its infrastructure, along with around 120 people, from race to race every couple of weeks between March and November. The costs of this can be staggering, even by the standards of top-level sport, so any opportunity to save is welcome.
Like many companies with highly virtualized IT infrastructures, Williams found that its traditional network-attached storage was becoming a bottleneck. For example, trackside engineers have just one hour between Friday practice sessions to analyze data, which is then used to fine-tune the car. It could take up to three minutes to open a single data set, and when you consider that engineers have to work with dozens of data sets, load times become a critical issue.
Because the team has a close working relationship with Dell, the topic came up in one of their regular meetings, and Williams decided to test a hyper-converged solution, in which compute resources and storage capacity are combined in a single chassis. In this case, it was the Dell XC Series hyper-converged appliances, powered by Intel and Nutanix.
“The benefits were so huge that moving to a hyper-converged architecture was a no-brainer. It was so far beyond our previous architecture. We simulated different workloads and achieved between 10 and 11 times the throughput that we were getting on our network attached storage,” one senior staff member said.
And that was just the start of the journey. The XC Series gave Williams the storage integration it was looking for, but it also brought some unexpected benefits, such as saving the team over $100,000 a year in freight fees by consolidating storage and compute. To tell this story, we’ve put together an interactive Formula 1 track where you can check out some of the highlights and see how a company like Williams solved its problems with a hyper-converged solution. If a team like Williams can get great results in some of the toughest environments in the world by going hyper-converged, so can you.
You can also read the full spotlight to see how Williams stays ahead of the competition.