Archive for the ‘Optimization’ Category.
A few years back, when I posted an analysis of how TValue is very slow, it prompted a lot of responses from the community. Various people ran their own benchmarks and started building or optimizing their own meta-value types; some are still working on that today. But one of the most interesting things was Robert Love’s response. He looked at the TValue code and found a way it could be optimized for the common case to speed things up. Continue reading ‘The next RTTI bottleneck’ »
One of the biggest challenges in working on the TURBU engine has been minimizing load times. Some large projects have a whole lot of data to work with, which could take the better part of a minute to load if I tried to load it all up front. No one wants to sit and wait for that, so I’ve pared the loading down so that only the data that’s needed right away gets loaded from the project database at startup.
And yet, on one of my larger test projects, that wasn’t enough. One of the things that has to be loaded up front is map tile data, so that the maps can draw. Unfortunately, this project has over 200 different tilesets, and it was taking quite a while to load that much data. I’ve got an RTTI-based deserializer that can turn dataset records into objects, but it was taking a completely unreasonable 3.3 seconds to read the tile data.
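The actual TURBU deserializer isn’t shown here, but the general pattern behind an RTTI-based record-to-object mapper looks something like this minimal sketch. It assumes a TDataSet whose field names match writable properties on the target object; the procedure name `LoadFromRecord` is a hypothetical stand-in, not TURBU’s API:

```delphi
uses
  System.Rtti, Data.DB;

// Hypothetical sketch of RTTI-based deserialization: copy each
// dataset field into the same-named property on the target object.
procedure LoadFromRecord(obj: TObject; dataset: TDataSet);
var
  ctx: TRttiContext;
  rType: TRttiType;
  prop: TRttiProperty;
  field: TField;
begin
  ctx := TRttiContext.Create;
  try
    rType := ctx.GetType(obj.ClassType);
    for field in dataset.Fields do
    begin
      // Look up a property matching the field name, if any
      prop := rType.GetProperty(field.FieldName);
      if Assigned(prop) and prop.IsWritable then
        prop.SetValue(obj, TValue.FromVariant(field.Value));
    end;
  finally
    ctx.Free;
  end;
end;
```

The convenience of this approach is exactly what makes it slow at scale: every record pays for name lookups and TValue conversions, which is where the 3.3 seconds went.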
What’s wrong with this code?
for i := 0 to Self.CheckedCount - 1 do
A couple of posts ago, I mentioned that I’ve been working with code generation lately. This is for a part of the TURBU project. An RPG relies pretty heavily on scripting, and RPG Maker, the system I created TURBU to replace, has a fairly extensive, if limited, scripting system. The limitations were one of the things that made me say “I could do better than this,” in fact: no functions; no local variables; callable procedures exist, but parameters don’t, so any “passing” has to be done through global variables; only two data types, integer and boolean; no event handlers; minimal looping support; and so on.
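To make the “parameters via globals” limitation concrete, here is a hypothetical Delphi rendering of that calling convention (RPG Maker itself uses event commands, not Delphi; all names here are invented for illustration). With no parameter lists, the caller stuffs inputs into shared globals before the call and reads the result out of another global afterward:

```delphi
program GlobalsDemo;

var
  Param1, Param2, CallResult: Integer;  // global "parameter" slots

procedure ComputeDamage;  // no parameter list possible
begin
  CallResult := Param1 + Param2;
end;

begin
  Param1 := 12;         // "pass" the arguments...
  Param2 := 30;
  ComputeDamage;        // ...then make the call
  WriteLn(CallResult);  // prints 42
end.
```

Besides being clumsy, this style makes reentrancy and any kind of nested call impossible, which is part of why I wanted real functions and locals in TURBU’s scripting.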
Last week at work, I was asked to look at one of our verification modules that was taking about three times longer to run than it had in an earlier version. This module takes a set of result files, compares them against another file showing expected results, and reports any discrepancies that are outside the defined margin of error. It’s some pretty heavy work involving hundreds of thousands of data points, and the old version already took more than ten minutes. Increasing the running time by a factor of three just wasn’t acceptable. So I started to look at what was going on.