Count Data Models
Published in Statistical and Econometric Methods for Transportation Data Analysis, 2020
Simon Washington, Matthew Karlaftis, Fred Mannering, Panagiotis Anastasopoulos
The likelihood ratio test is a common test used to assess two competing models. It provides evidence in support of one model, usually a full or complete model, over another competing model that is restricted by having a reduced number of model parameters. The likelihood ratio test statistic is X² = −2[LL(βR) − LL(βU)], where LL(βR) is the log-likelihood at convergence of the restricted model and LL(βU) is the log-likelihood at convergence of the unrestricted model.
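A minimal numerical sketch of this statistic, assuming hypothetical log-likelihood values and a three-parameter restriction (none of the numbers come from the chapter); the statistic is compared against a chi-squared distribution whose degrees of freedom equal the number of restricted parameters:

```python
# Likelihood ratio test sketch: all log-likelihoods and the degrees of
# freedom below are hypothetical, chosen only to illustrate the calculation.
from scipy.stats import chi2

ll_restricted = -1250.4    # LL(beta_R): restricted model (hypothetical)
ll_unrestricted = -1242.1  # LL(beta_U): unrestricted, full model (hypothetical)
df = 3                     # number of parameters removed by the restriction (hypothetical)

x2 = -2.0 * (ll_restricted - ll_unrestricted)  # X^2 = -2[LL(beta_R) - LL(beta_U)]
p_value = chi2.sf(x2, df)                      # upper-tail chi-squared probability

print(f"X^2 = {x2:.2f}, df = {df}, p-value = {p_value:.4f}")
```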
Heterogeneous preferences of green vehicles by vehicle size: Analysis of Seoul case
Published in International Journal of Sustainable Transportation, 2018
Jin-Seok Hahn, Jang-Ho Lee, Keechoo Choi
A likelihood ratio test is a statistical test used to compare the goodness of fit of two models, one of which (the null model) is a special case of the other (the alternative model). The probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom equal to the difference in the number of free parameters between the alternative and null models (Greene, 2002). From the likelihood ratio test, we cannot reject the IIA assumption: the likelihood ratio statistic is 0.62, which is smaller than the chi-squared critical value (6.63) corresponding to the additional parameter in the unrestricted model at the 0.01 level of significance.
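A quick check of the reported comparison, assuming one degree of freedom for the single additional parameter mentioned in the text (the implementation is a sketch, not from the article):

```python
# Verify that an LR statistic of 0.62 does not exceed the 0.01-level
# chi-squared critical value for one degree of freedom (~6.63).
from scipy.stats import chi2

lr_stat = 0.62
critical_value = chi2.ppf(0.99, df=1)  # ~6.63
p_value = chi2.sf(lr_stat, df=1)

print(f"critical value = {critical_value:.2f}, p-value = {p_value:.3f}")
# lr_stat < critical_value, so the IIA assumption cannot be rejected at the 0.01 level.
```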
Evaluating the impact of waiting time reliability on route choice using smart card data
Published in Transportmetrica A: Transport Science, 2023
Sanmay Shelat, Oded Cats, Niels van Oort, J. W. C. van Lint
We also separately validate the off-peak hour models (with and without reliability parameters), using a k-fold procedure where each fold is assigned the observations from a unique OD pair. In one iteration of this procedure, we hold out the observations of one OD pair as the test dataset, estimate the choice model using the observations from the remaining OD pairs, and calculate the likelihood of the test data under the estimated model. This is repeated until every OD pair has served as the test dataset. After all iterations have been completed for a particular model, the prediction likelihoods are aggregated by taking their product. The two models can then be compared using the likelihood ratio test.
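A sketch of this leave-one-OD-pair-out procedure, assuming the observations sit in a pandas DataFrame with an od_pair column and that estimate_model and log_likelihood are user-supplied wrappers around the choice-model estimation (these names are placeholders, not from the paper):

```python
# Leave-one-OD-pair-out cross-validation sketch; the estimation and
# likelihood functions are hypothetical placeholders supplied by the user.
import pandas as pd

def cross_validate(df: pd.DataFrame, estimate_model, log_likelihood) -> float:
    """Aggregate the prediction (log-)likelihood over all OD-pair folds."""
    total_log_likelihood = 0.0
    for od_pair, test in df.groupby("od_pair"):
        train = df[df["od_pair"] != od_pair]                  # all other OD pairs
        model = estimate_model(train)                         # fit the choice model on the training folds
        total_log_likelihood += log_likelihood(model, test)   # likelihood of the held-out OD pair
    # Aggregating likelihoods by taking their product is equivalent to
    # summing log-likelihoods, which is numerically more stable.
    return total_log_likelihood
```

The aggregated values for the models with and without reliability parameters can then be compared with the likelihood ratio test as described above.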
A separate analysis of crash frequency for the highways involving traffic hazards and involving no traffic hazards
Published in Journal of Transportation Safety & Security, 2021
Jinxian Weng, Xiafan Gan, Jia Chen
In this study, we claim that it is better to develop separate spatial-temporal interaction effect models for the highways involving traffic hazards and for the highways involving no traffic hazards. To support this claim, we carry out a model transferability test, judging whether the model for the highways involving traffic hazards is significantly different from the one for the highways involving no traffic hazards. The likelihood ratio test has been widely applied to validate significant differences between two models. The likelihood ratio test is a statistical test used to compare the goodness-of-fit of two models (a null model against an alternative model). According to Washington, Karlaftis, and Mannering (2011), the test statistic D can be calculated as D = −2[LL(βC) − LL(βH) − LL(βN)] (12), where D follows the chi-square distribution, LL(βC) is the log-likelihood at convergence of the model using the combined data (combining the traffic hazard–based data and the no traffic hazard–based data), LL(βH) is the log-likelihood at convergence of the model built on the traffic hazard–based data, and LL(βN) is the log-likelihood at convergence of the model using the no traffic hazard–based data. According to Eq. (12), the statistic D takes a value of 85.26, with a corresponding p value < .0001, suggesting that there is a significant difference between the two developed spatial-temporal interaction effect models. In other words, these results confirm the appropriateness of not combining the traffic hazard–based and no traffic hazard–based data for the analysis of crash frequency.
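A sketch of this transferability calculation with hypothetical log-likelihood values (chosen only so that the statistic reproduces the reported D = 85.26) and a hypothetical degrees-of-freedom count; neither the individual values nor the count comes from the paper:

```python
# Transferability (likelihood ratio) test sketch; all inputs are hypothetical.
from scipy.stats import chi2

ll_combined = -2100.00   # LL at convergence, model on combined data (hypothetical)
ll_hazard = -1030.00     # LL at convergence, traffic hazard-based model (hypothetical)
ll_no_hazard = -1027.37  # LL at convergence, no traffic hazard-based model (hypothetical)
df = 15                  # extra parameters in the two separate models (hypothetical)

d_stat = -2.0 * (ll_combined - ll_hazard - ll_no_hazard)  # D = -2[LL_C - LL_H - LL_N]
p_value = chi2.sf(d_stat, df)

print(f"D = {d_stat:.2f}, p-value = {p_value:.2e}")  # D = 85.26, p well below 0.0001
```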