Abstract
Two-sided markets, such as those run by ride-sharing companies, often involve a group of subjects making sequential decisions across time and/or location. With the rapid development of smartphones and the internet of things, these markets have substantially transformed the transportation landscape. In this paper we consider large-scale fleet management in ride-sharing companies, which involves multiple units in different areas receiving sequences of products (or treatments) over time. Major technical challenges, such as policy evaluation, arise in these studies because: (i) spatial and temporal proximities induce interference between locations and times, and (ii) the large number of locations results in the curse of dimensionality. To address both challenges simultaneously, we introduce a multiagent reinforcement learning (MARL) framework for carrying out policy evaluation in these studies. We propose novel estimators for mean outcomes under different products that are consistent despite the high dimensionality of the state-action space. The proposed estimator performs favorably in simulation experiments. We further illustrate our method using a real dataset obtained from a two-sided marketplace company to evaluate the effects of applying different subsidizing policies. A Python implementation of our proposed method is available in the Supplementary Material and also at https://github.com/RunzheStat/CausalMARL.
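To make the policy-evaluation setting concrete, the sketch below shows a minimal, hypothetical direct-method estimator: an outcome model is fit per region from logged data and then used to predict the mean outcome under a fixed target treatment, averaged across regions. All variable names and the simulated data are assumptions for illustration; this is not the paper's MARL estimator and it ignores the spatial/temporal interference the paper is designed to handle.

```python
# Hypothetical illustration only: a naive per-region direct-method
# off-policy value estimate, NOT the CausalMARL estimator.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n_regions, n_times = 5, 200
# Simulated log data: a scalar state (e.g., local supply-demand gap),
# a binary action (e.g., whether a subsidy was applied), and an outcome
# (e.g., completed orders) for each (region, time) pair.
states = rng.normal(size=(n_regions, n_times))
actions = rng.integers(0, 2, size=(n_regions, n_times))
outcomes = (1.0 + 0.5 * states + 0.8 * actions
            + rng.normal(scale=0.1, size=(n_regions, n_times)))

def region_value(states_r, actions_r, outcomes_r, target_action):
    """Fit an outcome model for one region and predict its mean outcome
    if the target policy always chose `target_action`."""
    X = np.column_stack([states_r, actions_r])
    model = LinearRegression().fit(X, outcomes_r)
    X_target = np.column_stack([states_r, np.full_like(actions_r, target_action)])
    return model.predict(X_target).mean()

# Estimated mean outcomes under "always subsidize" vs. "never subsidize",
# averaged over regions.
value_subsidize = np.mean([region_value(states[r], actions[r], outcomes[r], 1)
                           for r in range(n_regions)])
value_control = np.mean([region_value(states[r], actions[r], outcomes[r], 0)
                         for r in range(n_regions)])
print(f"subsidize: {value_subsidize:.3f}, control: {value_control:.3f}")
```

In contrast to this naive sketch, the paper's estimator accounts for interference between neighboring regions and adjacent time points and remains consistent despite the high-dimensional state-action space.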
Funding Statement
Shi’s research was partly supported by the EPSRC Grant EP/W014971/1.
Song’s research was partially supported by NSF Grant DMS-2003637.
Acknowledgments
We thank the Associate Editor and two anonymous referees for their constructive comments and suggestions.
Citation
Chengchun Shi. Runzhe Wan. Ge Song. Shikai Luo. Hongtu Zhu. Rui Song. "A multiagent reinforcement learning framework for off-policy evaluation in two-sided markets." Ann. Appl. Stat. 17 (4) 2701 - 2722, December 2023. https://doi.org/10.1214/22-AOAS1700