Welcome to Canopy Simulations. We are a new startup kicking off with one simple aim: to make cutting-edge simulation fast, affordable and accessible at every level of motorsport. Between the members of our team we have 26 years’ experience of developing and using simulation software at the forefront of F1 (mostly for McLaren and Ferrari). We’ve seen the state of the art, and founded the company in the belief we could improve on it. So what is it, you might ask, that we think we can improve on?
Over the years each of us at Canopy Simulations has seen the following storyline play out.
- In the early days someone writes a simulation. Everyone is very happy that we can now accurately predict car behaviour without the need for expensive and messy track testing.
- Next, you run two lap time simulations to predict a lap time delta for a common parameter change (fuel mass, engine power, ride height etc.). Now we have trends in our lap time, and everyone is even happier (notwithstanding the fact that the immaturity of the software sometimes means you seem to go faster when you reduce power…).
- Later on we realise that a sweep of about 10 simulations across each of our common parameter changes would be much more useful. We have about 10 different parameter changes we’re interested in, so now we need to run 100 simulations. We might even need to hire someone to do it.
- Finally the message gets home that the cross correlations between variables are also vitally important, so we really need to explore a 10 dimensional setup space, and to do this we will struggle to get away with fewer than 1,000 simulations.
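To make the scale of that last step concrete, here is a minimal sketch of how such a study might be constructed. A full 10 × 10 × … × 10 grid over ten parameters would need 10^10 runs, so in practice you sample the space with a space-filling design; the parameter names and ranges below are purely illustrative, not any team's actual setup variables.

```python
import random

# Hypothetical setup parameters and ranges -- illustrative only.
parameters = {
    "fuel_mass_kg":         (100.0, 110.0),
    "engine_power_kw":      (550.0, 600.0),
    "front_ride_height_mm": (20.0, 35.0),
    # ... seven more parameters would make this a 10-dimensional space
}

def latin_hypercube(parameters, n_samples, seed=0):
    """Generate n_samples setups covering the space evenly in each
    dimension: far cheaper than a full grid, while still exposing
    cross-correlations between parameters."""
    rng = random.Random(seed)
    columns = {}
    for name, (lo, hi) in parameters.items():
        # One stratified, shuffled column per parameter.
        strata = list(range(n_samples))
        rng.shuffle(strata)
        columns[name] = [lo + (s + rng.random()) / n_samples * (hi - lo)
                         for s in strata]
    return [{name: columns[name][i] for name in parameters}
            for i in range(n_samples)]

study = latin_hypercube(parameters, n_samples=1000)
```

Each of the 1,000 setups is then one simulation to run, which is where the computing problem below begins.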
At this point major issues start to rear their heads. Those 1,000 simulations will probably take between 5 and 15 hours to run. You therefore have three options:
- Downgrade your expectations.
- Start running your simulation studies overnight.
- Start looking for a lot more computing power and parallelise your computation.
Option 1 involves surrendering some competitiveness, so is probably a non-starter. Option 2 cripples your turnaround times: waiting overnight for results, only to find that you used the wrong set of car parameters, is “not very F1”, to quote an old colleague of mine. Option 3 is much more appealing, but where is this computing power going to come from? Who is going to sign the capex forms? Are you going to need to spend a fortune on licenses for your favoured development tools (e.g. Matlab/Simulink/Dymola)? And even if you manage to get hold of 10 spare PCs, you still have to make sure they are all running the same version of your in-house code, and you are still waiting between 30 and 90 minutes for your simulations. What if Aero-Analysis and Tyre-Analysis both want to run massive studies at the same time?
I am generally met with disbelief when describing the inner workings of an F1 team; the assumption is that behind the shiny outer layer are unlimited budgets, massive development teams and impossibly advanced software solutions. However, we are aware of more than one team that has attempted option 3 (modest parallelism) and gradually given up and gone back to option 2 (run overnight). F1, as it turns out, is “not very F1”.
So where does Canopy fit in here? Our approach is relatively simple:
- Write everything in a fast, low-level language, with no dependence on licensed software tools (like Matlab etc.).
- Provide a user environment in which finding lap time is intuitive.
- Leverage the power of the cloud to turn things around fast.
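The third point deserves a sketch. Because every simulation in a study is independent of the others, the work is embarrassingly parallel and can simply be fanned out across workers. In the toy example below, `run_simulation` is a stand-in for a real lap time solver, and a thread pool stands in for a fleet of cloud machines; both names are illustrative, not any real API.

```python
from concurrent.futures import ThreadPoolExecutor

def run_simulation(setup):
    # Stand-in for a real lap time solver: returns a fake lap time
    # (seconds) derived from a hypothetical engine power parameter.
    return 90.0 - 0.001 * setup["engine_power_kw"]

def run_study(setups, max_workers=100):
    """Run every simulation in a study concurrently. The runs are
    independent, so turnaround time shrinks roughly in proportion
    to the number of workers available."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_simulation, setups))

# A toy 50-point power sweep, dispatched to 8 workers.
setups = [{"engine_power_kw": 550.0 + i} for i in range(50)]
lap_times = run_study(setups, max_workers=8)
```

The pattern is the same whether the workers are local cores or cloud instances; what changes is how far `max_workers` can scale.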
So now we can get on to the meat of this post: the transformative power of speed. Using the Canopy Platform we can scale up our computing power to an arbitrarily large number of cores, but let’s assume we can access 100 cores to do our work for us. We can now turn around a 1,000-simulation study in about 4 minutes. This makes a massive difference to the way simulation tools are used:
- Anyone can run a large and complex study to understand the car behaviour: spending several days cranking the handle on an ‘aeroscan’ is a thing of the past, and no longer requires a specialist handle-turner. Just by making simulation faster, the user group suddenly expands, usage goes up and so does the value extracted.
- You can perform large and complex studies right up to the race (or even during practice sessions, as more information about the track or car becomes available). If you have to wait overnight, not only do you need a specialist handle-turner, but she needs to turn the handle a very long way in advance, lest she need to re-run the simulations. Furthermore, her vast experience and expertise would probably be better used finding lap time than turning the handle.
- Studies of marginal gains do get done. A 12 hour wait for results means that only performance studies with nailed-on value get completed. A 4 minute wait means that you are free to hoover up marginal gains.
- Last but not least: time is money. Waiting is just wasteful; very highly qualified and skilled people get diverted into a life of specialist handle-turning. The cost of this in lost productivity adds up very fast, in terms of money, performance and the disillusionment of your specialist handle-turners.
Factors such as these, therefore, mean that the benefit of bringing your simulation execution times down is out of all proportion to the first-order time saving you achieve.
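The turnaround figures quoted above hang together under some quick arithmetic: 1,000 simulations in 5 to 15 hours implies between 18 and 54 seconds per run, which gives 30 to 90 minutes on 10 PCs and 3 to 9 minutes on 100 cores, bracketing the "about 4 minutes" quoted earlier.

```python
# Back-of-envelope check of the turnaround figures quoted above.
study_size = 1000                  # simulations per study
for total_hours in (5, 15):        # quoted serial study duration
    per_sim_s = total_hours * 3600 / study_size           # 18 s .. 54 s per run
    on_10_pcs_min = study_size * per_sim_s / 10 / 60      # 30 .. 90 minutes
    on_100_cores_min = study_size * per_sim_s / 100 / 60  # 3 .. 9 minutes
    print(f"{per_sim_s:.0f} s/sim -> {on_10_pcs_min:.0f} min on 10 PCs, "
          f"{on_100_cores_min:.0f} min on 100 cores")
```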
“Is that it? The entire purpose of this platform is to enable teams to do something they can already do, but faster?” Well, no, that isn’t it. There are about five or six teams in F1 who have really high quality simulation technology, but there are several hundred motorsport teams in various formulae who would benefit from access to similar technology. These teams would not typically have regarded offline simulation as falling into the category of “low hanging fruit”, but our tools bring accessible performance gains to all teams, at a fraction of the cost of in-house development. Not only are we improving on the in-house simulation tools of the F1 teams; we are aiming to put anyone who wants to use them ahead of the best F1 teams.
We’ve focussed heavily on speed in this article. However the claim that “anyone can run a large and complex study” requires a bit of justification. Is this true? How can it be done? And how do you extract value from having run 1,000 simulations? This will be the focus of our next post.