This paper presents a new research direction for online Multi-Level Aggregation (MLA) with delays. Given an edge-weighted rooted tree $T$ as input, a sequence of requests arriving at its vertices needs to be served in an online manner. A request $r$ is characterized by two parameters: its arrival time $t(r) > 0$ and its location $l(r)$, a vertex of $T$. Once $r$ arrives, we can either serve it immediately or postpone this action until any time $t > t(r)$. A request is called pending from its arrival until the moment it is served. Several pending requests can be served together, at a service cost equal to the weight of the minimal subtree of $T$ containing the root and the locations of all served requests. Postponing the service of a request $r$ to time $t > t(r)$ incurs an additional delay cost of $t - t(r)$. The goal is to serve all requests in an online manner so that the total cost (i.e., the sum of all service and delay costs) is minimized. The MLA problem generalizes several well-studied problems, including TCP Acknowledgment (trees of depth 1), Joint Replenishment (depth 2), and Multi-Level Message Aggregation (arbitrary depth). The best known algorithm achieves a competitive ratio of $O(d^2)$, where $d$ denotes the depth of the tree.
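To make the cost model concrete, here is a minimal Python sketch that evaluates the total cost of a given service schedule under these definitions. The `parent`/`weight` dictionaries encoding the tree are an illustrative representation chosen for this sketch, not notation from the paper.

```python
from typing import Dict, List, Tuple

def service_cost(locations: List[str],
                 parent: Dict[str, str],
                 weight: Dict[str, float]) -> float:
    """Weight of the minimal subtree connecting the root and the served locations."""
    used_edges = set()          # each edge identified by its lower endpoint
    for v in locations:
        while v in parent:      # climb towards the root
            used_edges.add(v)
            v = parent[v]
    return sum(weight[e] for e in used_edges)

def schedule_cost(services: List[Tuple[float, List[Tuple[float, str]]]],
                  parent: Dict[str, str],
                  weight: Dict[str, float]) -> float:
    """Total cost of a schedule given as (service time, [(arrival, location), ...])."""
    total = 0.0
    for t, batch in services:
        total += service_cost([loc for _, loc in batch], parent, weight)
        total += sum(t - arrival for arrival, _ in batch)   # delay costs
    return total

# Example: a path root -> a -> b with unit edge weights; serving both pending
# requests together at time 3 costs 2 (service) + 3 (delay) = 5.
parent = {"a": "root", "b": "a"}
weight = {"a": 1.0, "b": 1.0}
print(schedule_cost([(3.0, [(1.0, "b"), (2.0, "a")])], parent, weight))  # 5.0
```

Note that an edge shared by the root-to-location paths of several served requests is counted only once, which is what makes aggregating services profitable.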
Here, we consider a stochastic version of MLA in which the requests follow a Poisson arrival process. We present a deterministic online algorithm that achieves a constant ratio of expectations: the expected cost of the solution it generates is within a constant factor of the expected cost of the optimal offline solution. Our algorithm is obtained by carefully combining two strategies: the first plans periodic oblivious visits to a subset of frequent vertices, while the second greedily serves the pending requests at the remaining vertices. The problem is rich enough to exhibit a rare phenomenon: neither "single-minded" nor "sample-average" strategies alone suffice in stochastic optimization.
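The abstract does not spell out the algorithm's parameters, but the sketch below illustrates how the two strategies could interact on a finite horizon. The period, the frequent-vertex set, and the rent-or-buy style trigger used for the remaining vertices are placeholder choices for illustration, not the paper's tuned rules.

```python
from typing import Dict, List, Set, Tuple

def path_cost(v: str, parent: Dict[str, str], weight: Dict[str, float]) -> float:
    """Weight of the root-to-v path: the cost of a service visiting only v."""
    c = 0.0
    while v in parent:
        c += weight[v]
        v = parent[v]
    return c

def hybrid_schedule(requests: List[Tuple[float, str]],
                    frequent: Set[str],
                    period: float,
                    horizon: float,
                    parent: Dict[str, str],
                    weight: Dict[str, float]) -> List[Tuple[float, list]]:
    """Schematic combination of the two strategies.

    Strategy 1: at times period, 2*period, ... serve every pending request
    located at a frequent vertex (the oblivious visits).
    Strategy 2: for each remaining vertex, serve its pending requests once the
    delay accrued there matches the vertex's path cost -- a rent-or-buy style
    trigger used purely as an illustration, checked only at event times.
    """
    visits = {k * period for k in range(1, int(horizon / period) + 1)}
    events = sorted(visits | {t for t, _ in requests})
    services, pending, i = [], [], 0
    for t in events:
        while i < len(requests) and requests[i][0] <= t:
            pending.append(requests[i]); i += 1
        if t in visits:                                     # Strategy 1
            batch = [r for r in pending if r[1] in frequent]
            if batch:
                services.append((t, batch))
                pending = [r for r in pending if r[1] not in frequent]
        for v in {loc for _, loc in pending if loc not in frequent}:
            batch = [r for r in pending if r[1] == v]
            if sum(t - a for a, _ in batch) >= path_cost(v, parent, weight):
                services.append((t, batch))                 # Strategy 2
                pending = [r for r in pending if r[1] != v]
    return services

# Example on the path root -> a -> b: vertex a is visited obliviously every
# 2 time units, while the request at the rare vertex b waits until its delay
# reaches path_cost(b) = 2.
parent = {"a": "root", "b": "a"}
weight = {"a": 1.0, "b": 1.0}
print(hybrid_schedule([(0.5, "b"), (1.0, "a")], frequent={"a"}, period=2.0,
                      horizon=4.0, parent=parent, weight=weight))
# [(2.0, [(1.0, 'a')]), (4.0, [(0.5, 'b')])]
```

The intuition behind the split is that frequent vertices accumulate requests fast enough for fixed-period visits to amortize well under Poisson arrivals, while rare vertices are better handled reactively, request by request.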