Abstract
Let $\{t_k\}$ be a sequence of points in $d$-dimensional Euclidean space. Let $\{X_k\}$ be a sequence of random variables with zero mean, i.i.d. or nearly so. If $\mathscr{A}$ is a class of subsets of $R^d$, let $$M_n(\omega) = \sup_{A\in\mathscr{A}} \sum_{\{k\leqslant n:\, t_k \in A\}} X_k(\omega).$$ $M_n$ is related to a commonly used estimator in monotone regression. Under various conditions on $\mathscr{A}$ and the points $\{t_k\}$, we study the a.s. convergence to zero of $M_n/n$ as $n \rightarrow \infty$.
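The quantity $M_n/n$ can be approximated by simulation. Below is a minimal illustrative sketch (not taken from the paper) for the case $d = 1$, assuming the class $\mathscr{A}$ consists of half-lines $(-\infty, a]$, the design points are $t_k = k/n$, and the errors are i.i.d. standard normal; all of these choices are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def M_n(X, t, thresholds):
    """sup over A = (-inf, a] of the sum of X_k over {k : t_k in A}."""
    best = 0.0  # a half-line lying left of all points gives the empty sum 0
    for a in thresholds:
        best = max(best, X[t <= a].sum())
    return best

for n in (100, 1000, 10000):
    t = np.arange(1, n + 1) / n   # design points t_k in (0, 1]
    X = rng.standard_normal(n)    # zero-mean i.i.d. errors (an assumed distribution)
    print(n, M_n(X, t, np.unique(t)) / n)  # M_n / n, which should shrink toward 0 as n grows
```

In this one-dimensional setting the supremum reduces to the maximum of the partial sums of the $X_k$ ordered by $t_k$, which is why $M_n/n \to 0$ here follows from the strong law; the paper's interest is in richer classes $\mathscr{A}$ and more general point configurations.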
Citation
R. T. Smythe. "Maxima of Partial Sums and a Monotone Regression Estimator." Ann. Probab. 8(3): 630-635, June 1980. https://doi.org/10.1214/aop/1176994734