In the past we were utilising CSVs for data storage and transfer. Five different CSVs were provided on a daily basis: one contained weather data for each location used in wildfire prediction, one contained wind data, another contained fire-station data, and a final one contained the wildfire predictions themselves. Every time a user entered the application, three of these CSVs were parsed and merged using nested for loops before the final JSON was sent to the frontend. In computer science terminology, the asymptotic complexity of this process was O(n^3).
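The old merge can be sketched roughly as follows. This is a minimal illustration, not the original code: the file contents, column names (`location`, `temp`, `wind_speed`, `risk`) and the join key are hypothetical stand-ins for the real daily CSVs.

```python
import csv
import io
import json

# Hypothetical stand-ins for three of the daily CSV files.
weather_csv = "location,temp\nAustin,35\nBoise,28\n"
wind_csv = "location,wind_speed\nAustin,12\nBoise,9\n"
predictions_csv = "location,risk\nAustin,high\nBoise,low\n"

def parse(text):
    return list(csv.DictReader(io.StringIO(text)))

weather = parse(weather_csv)
wind = parse(wind_csv)
predictions = parse(predictions_csv)

# Three nested loops: every row of each file is compared against
# every row of the other two, hence O(n^3) work on every request.
merged = []
for w in weather:
    for v in wind:
        for p in predictions:
            if w["location"] == v["location"] == p["location"]:
                merged.append({**w, **v, **p})

payload = json.dumps(merged)  # what was sent to the frontend
```

Because the whole merge re-ran on every page load, the cubic cost was paid per user visit rather than once per daily data drop.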
This asymptotic complexity meant that the application became considerably slower as we added more locations. The effect of this processing was noticeable to the user and was a bottleneck that prevented us from moving beyond 350 locations in the United States.
We are now centralising all of our data within a single SQLite database containing multiple tables. Our Django application accesses it through its "models" feature. This considerably cuts down on processing time and on the potential for mishaps during data preparation, and improves overall performance. It also means the application now benefits from a much cleaner, more streamlined and more maintainable architecture end-to-end.
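The gain can be sketched with Python's standard-library sqlite3 module: the triple loop collapses into a single JOIN that the database resolves using its indexes. The table and column names here are hypothetical, and the real application issues the equivalent query through Django's "models" ORM rather than raw SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical tables mirroring three of the old daily CSVs.
conn.executescript("""
    CREATE TABLE weather (location TEXT PRIMARY KEY, temp REAL);
    CREATE TABLE wind (location TEXT PRIMARY KEY, wind_speed REAL);
    CREATE TABLE prediction (location TEXT PRIMARY KEY, risk TEXT);
    INSERT INTO weather VALUES ('Austin', 35), ('Boise', 28);
    INSERT INTO wind VALUES ('Austin', 12), ('Boise', 9);
    INSERT INTO prediction VALUES ('Austin', 'high'), ('Boise', 'low');
""")

# One query replaces the O(n^3) nested loops: SQLite joins on the
# indexed primary keys instead of comparing every row combination.
rows = conn.execute("""
    SELECT w.location, w.temp, v.wind_speed, p.risk
    FROM weather AS w
    JOIN wind AS v ON v.location = w.location
    JOIN prediction AS p ON p.location = w.location
    ORDER BY w.location
""").fetchall()
```

Moving the merge into the database also means the data is prepared once, when the daily tables are loaded, rather than re-merged on every user visit.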
Current performance data can be seen below: