Abstract:
Effective and feasible procedures for validating microscopic, stochastic traffic simulation models are in short supply. Exercising such micro-simulators many times on specific (real) networks can produce traffic gridlock (simulation failures) in some or all replications. While the absence of failures may not assure the validity of the simulator for predicting performance, the occurrence of failures can provide clues for identifying deficiencies of the simulation model and invite strategies for model improvement.
We define a failure as a severe malfunction in one or more traffic links of the network in which vehicles are unable to discharge for an unusually long period. Such malfunctions can be detected using link-based time traces of vehicle trips; identifying the locations where malfunctions arise requires further spatial analysis. A procedure for identifying “whether”, “when” and “where” failures occur is described. CORSIM serves as the test-bed simulator for the proposed methodology, but the procedure is applicable to any comparable microscopic model; real-world traffic networks are simulated as case studies.
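To make the link-based detection idea concrete, the sketch below illustrates one way such time traces could be scanned for links that stop discharging while still occupied. The data layout, the stall threshold, and the function name are illustrative assumptions, not the paper’s actual procedure.

```python
# Minimal sketch (assumed data format): flag links whose cumulative discharge
# count stays flat for an unusually long stretch while vehicles remain queued.
from typing import Dict, List, Tuple

def detect_failed_links(
    traces: Dict[str, List[Tuple[float, int, int]]],  # link_id -> [(time_s, cum_discharges, vehicles_on_link)]
    stall_threshold_s: float = 900.0,                  # "unusually long" period; assumed value (15 min)
) -> Dict[str, float]:
    """Return, for each failed link, the time at which the stall began ("when")."""
    failures = {}
    for link_id, trace in traces.items():
        stall_start = None
        prev_discharges = None
        for t, discharges, occupancy in trace:
            stalled = (
                prev_discharges is not None
                and discharges == prev_discharges   # no vehicle left the link
                and occupancy > 0                   # yet vehicles are present
            )
            if stalled:
                if stall_start is None:
                    stall_start = t                 # stall begins here
                if t - stall_start >= stall_threshold_s:
                    failures[link_id] = stall_start  # "where" (link) and "when" (time)
                    break
            else:
                stall_start = None                  # discharge resumed; reset
            prev_discharges = discharges
    return failures
```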
Possible root causes of detected failures are: (1) flaws in the simulator's behavioral algorithms, (2) improper calibration of inputs, and (3) capacity problems specific to the network under study (for example, when heavy traffic demand projections are simulated). Strategies for identifying “why” failures occur are applied in the case studies.
The results indicate that the proposed failure detection and diagnosis process is an effective (and essential) way to explore the validity of traffic simulators and to identify where improvements are needed.
Keywords:
Traffic micro-simulation, traffic anomaly detection, simulation failure, model validation
