I'm currently developing a serverless time-series cloud service, and we've reached the point of designing a new cost metric. Our goal is to move away from the convention of billing by provisioned memory and CPU, and instead offer customers a more precise, usage-based pricing method.

For write requests, measurement is fairly straightforward: we can bill by ingested data size. Read requests are harder. They often involve varying levels of analytical work, which can lead to substantial differences in resource utilization for queries of similar size.

I'm struggling to find a fair methodology for measuring the cost of read requests, particularly complex analytical queries. I'd appreciate suggestions for approaches that accurately account for such diverse resource requirements in a pricing model.
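For concreteness, here's the rough kind of multi-dimensional metering I've been considering: decompose a read charge into independently metered dimensions (bytes scanned, CPU time, result size), similar in spirit to the scan-based billing some query services use. All rates, field names, and dimensions below are placeholders I made up, not a finished design:

```python
from dataclasses import dataclass


@dataclass
class ReadUsage:
    bytes_scanned: int   # data read from storage to answer the query
    cpu_millis: float    # CPU time spent in analysis operators
    rows_returned: int   # size of the result set

# Illustrative unit rates (currency per unit) -- placeholder values.
RATE_PER_GB_SCANNED = 0.005
RATE_PER_CPU_SECOND = 0.0001
RATE_PER_MILLION_ROWS = 0.01


def read_request_cost(usage: ReadUsage) -> float:
    """Combine independently metered dimensions into one charge."""
    scan_cost = usage.bytes_scanned / 1e9 * RATE_PER_GB_SCANNED
    cpu_cost = usage.cpu_millis / 1000 * RATE_PER_CPU_SECOND
    row_cost = usage.rows_returned / 1e6 * RATE_PER_MILLION_ROWS
    return scan_cost + cpu_cost + row_cost


# A plain scan and a heavy aggregation over the same data now price differently:
simple = ReadUsage(bytes_scanned=10**9, cpu_millis=50, rows_returned=1_000)
heavy = ReadUsage(bytes_scanned=10**9, cpu_millis=60_000, rows_returned=10)
```

The appeal of this shape is that each dimension is cheap to meter per request, and a CPU-heavy aggregation naturally costs more than a simple scan over the same bytes. What I'm unsure about is whether these are the right dimensions, and how to keep the formula predictable for customers.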

Any help or pointers would be greatly appreciated. Thanks in advance.