Looks like the queue has just been growing, at least since yesterday…
Any news on this @Zwift???
The numbers seem to be going down now so at least it’s not completely stuck anymore, just slooooow.
Still 20k to go, and there are ZRL races again tomorrow. It won’t clear in time.
…and then WTRL on Thu and the Tour of NY on the weekend, maybe we should start guessing what the Zwiftpower peak will be? Can we get to 40k? 50k?
Or is it actually just the analysis of live power data, from people who didn’t make their .fit files public, that’s stuck now?
Guess the last line in your screenshot says it all…
It did get down to about 14,000, but it’s back up to over 21,000 pending now. Looks like there’s a lack of resources to process the number of .fit files.
Disappointed that Zwift hasn’t fixed this. It’s been getting slower and slower. With so much money recently invested into Zwift, some faster servers to process .fit files would be appreciated. Zwift owns Zwift Power now, which means customers are paying for this service as part of the Zwift experience. I understand that Zwift has gotten much busier this year, and I’m not an IT expert, but I would expect that faster servers would make it work properly. There doesn’t appear to be a lack of funding for this sort of thing.
I was thinking that myself. My guess is it’s on track for 100k by Sunday evening.
Perhaps sooner than that - it’s 37k now, only 2 hrs later. 6k added in two hours, extrapolated out = not good!
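Rough napkin math, if anyone wants to play along: this just assumes the ~3k/hr net growth from the last two hours holds (it won’t, races come in bursts), and guesses roughly four days until Sunday evening.

```python
# Back-of-envelope extrapolation of the Zwiftpower .fit queue.
# Figures from this thread; the time horizon is my guess, not a fact.
start = 37_000                 # pending files right now
growth_per_hour = 6_000 / 2    # ~6k added over the last 2 hours
hours_to_sunday = 4 * 24       # assumption: ~4 days to Sunday evening

projected = start + growth_per_hour * hours_to_sunday
print(f"Projected backlog: {projected:,.0f}")  # → Projected backlog: 325,000
```

So even if the growth rate halves, 100k by Sunday looks conservative.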
A reminder of why we had some outages recently:
Fortunately there are no big, popular races happening shortly.
Yeah, and everybody is probably working double overtime to meet their quarterly or annual KPIs so nobody has any time to ride indoors the rest of the month anyway. Bah humbug.
At the current pace, there will still be a huge backlog when Thursday’s TTT starts.
I thought they had moved to faster servers?
Has the number of racers increased or are the servers just not as fast as they thought?
I believe the former developer had .fit file processing on a dedicated server.
Wasn’t the upgrade supposed to make the whole thing more scalable? Or did they just change the front end and leave the data analysis running on a C64? (It seemed to me like the queue processing went a bit faster after the update, but that could of course just have been due to fewer entries at the time.)