Abstract:
A flexible airport bus timetable is essential for improving service quality and operational efficiency, yet existing timetables are typically static and cannot adapt to real-time fluctuations in passenger flow. To address this, a dynamic airport bus timetable optimization method based on deep reinforcement learning is proposed. The timetable optimization problem is modeled as a Markov Decision Process, and a Deep Q-Network (DQN) is used to determine departure times. The state features include the current time, the number of arriving passengers, the total number of waiting passengers, the total passenger waiting time, the maximum waiting time, the occupancy rate, and the number of stranded passengers. To balance the interests of the airport and the passengers, the reward function incorporates passenger waiting time, occupancy rate, and the number of stranded passengers. The proposed method is validated on real-world data from Xianyang Airport shuttle buses. Compared with the actual timetable, the dynamically optimized schedule reduces the number of departures by 5, raises the average vehicle occupancy rate from 0.47 to 0.60, and increases the maximum passenger waiting time by only 1.29 minutes. Furthermore, experiments under varying passenger-flow conditions demonstrate that the method effectively adjusts departure times in response to real-time demand, reducing operational costs while enhancing service quality.
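The state features and reward components named above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the field names, the linear reward form, and the weights `w_wait`, `w_occ`, and `w_strand` are all assumptions introduced here.

```python
from dataclasses import dataclass


@dataclass
class BusState:
    """State features listed in the abstract; field names are illustrative."""
    current_time: float        # minutes since start of service
    arriving_passengers: int   # passengers arriving in the current step
    waiting_passengers: int    # total passengers waiting at the stop
    total_wait_time: float     # cumulative passenger waiting time (minutes)
    max_wait_time: float       # longest individual wait so far (minutes)
    occupancy_rate: float      # load factor of the departing bus, in [0, 1]
    stranded_passengers: int   # passengers left behind after a departure


def reward(state: BusState,
           w_wait: float = 1.0,
           w_occ: float = 10.0,
           w_strand: float = 2.0) -> float:
    """One plausible reward balancing airport and passenger interests:
    reward high occupancy, penalize average waiting time and stranding.
    The weights are hypothetical, not taken from the paper."""
    avg_wait = state.total_wait_time / max(state.waiting_passengers, 1)
    return (w_occ * state.occupancy_rate
            - w_wait * avg_wait
            - w_strand * state.stranded_passengers)


# Example: 30 passengers waiting 150 minutes in total, 60% occupancy,
# 2 passengers stranded -> reward = 10*0.6 - 150/30 - 2*2 = -3.0
s = BusState(480, 12, 30, 150.0, 18.0, 0.6, 2)
print(reward(s))  # -3.0
```

In a full DQN setup, `BusState` would be flattened into the network's input vector and `reward` evaluated after each depart/hold decision.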