Unfortunately not. Our devs added a feature to an internal viz testing tool we use that dumps that data out to a database, but it wasn't easy, and it required running the tool serially against one viz at a time rather than with multiple threads. Serial runs were necessary because the perf recording contained nothing that tied it back to the viz it had analyzed, so the only way to map a recording to a viz was the fact that it was the last recording generated.
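To make the constraint concrete, here's a minimal sketch of that serial-run workaround. All of the names (`run_serially`, `render_viz`, `collect_recording`) are illustrative stand-ins, not the real tool's API; the point is just that the viz-to-recording pairing rests entirely on ordering, which is why parallelism breaks it.

```python
def run_serially(vizzes, render_viz, collect_recording):
    """Render one viz at a time and pair each with the recording
    produced immediately afterward. Since a recording carries no ID
    tying it to a viz, running in parallel would make this
    last-recording heuristic ambiguous."""
    results = []
    for viz in vizzes:
        render_viz(viz)                   # triggers a perf recording
        recording = collect_recording()   # grab the most recent one
        results.append((viz, recording))  # pairing rests on ordering alone
    return results

# Toy demonstration with stub functions:
if __name__ == "__main__":
    recordings = iter(["rec_a", "rec_b"])
    pairs = run_serially(
        ["viz1", "viz2"],
        render_viz=lambda v: None,
        collect_recording=lambda: next(recordings),
    )
    print(pairs)  # [('viz1', 'rec_a'), ('viz2', 'rec_b')]
```

If the recording format ever gained a field identifying the viz that produced it, this ordering dependency would disappear and the runs could be parallelized.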
Also, on a more theoretical level, I think you want two ways of testing viz performance. One is synthetic tests run at regular intervals and kept consistent over time. The other is real-world measurement tied to a specific user's experience. The former lets you measure performance even when no one is looking at a viz, and gives you better baselines over time. The latter lets you detect issues that people are actually experiencing, relative to those benchmarks. It'd be great to get better long-term performance data at the high level as well as at a more granular level. That's my dream.