Data Statistics and Analytics
Published in Paresh Chandra Deka, A Primer on Machine Learning Applications in Civil Engineering, 2019
This is a procedure for testing the hypothesis that K population means are equal, where K > 2. The one-way ANOVA compares the means of the samples or groups in order to make inferences about the population means. It is also called a single-factor analysis of variance because there is only one independent variable or factor. In an ANOVA there are two kinds of variables, independent and dependent. The independent variable is controlled by the researcher; it is a categorical or discrete variable used to form the groupings of observations. In the one-way ANOVA, only one independent variable is considered, but that variable may have two or more (theoretically, any finite number of) levels. The independent variable is typically categorical, and it (the factor) divides individuals into two or more groups or levels. There are two types of independent variables: active and attribute. If the independent variable is an active variable, the researcher manipulates its values to study their effect on another variable; an attribute independent variable, by contrast, is not altered during the study.
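As a concrete illustration of this setup, the sketch below runs a one-way ANOVA on three groups defined by a single categorical independent variable. The group labels and measurements are made-up example values (not from the cited chapter), and SciPy's f_oneway is used as the test.

```python
# One-way ANOVA: one categorical independent variable (the factor) with three
# levels, and one continuous dependent variable (the measurement).
# The data below are hypothetical example values.
from scipy import stats

group_a = [23.1, 24.5, 22.8, 25.0, 23.7]   # level 1 of the factor
group_b = [26.4, 27.1, 25.9, 26.8, 27.5]   # level 2 of the factor
group_c = [22.0, 21.5, 23.2, 22.7, 21.9]   # level 3 of the factor

# H0: the three population means are equal; H1: at least one mean differs.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```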
Project Control System
Published in Adedeji B. Badiru, Project Management, 2019
Define objectives for the experiment. Develop and prioritize the goals you want the experiment to accomplish. Have a focus! Determine the problem to be solved. Specify the experimental criterion (i.e., the dependent variable or response variable). The nature of the criterion helps to determine what statistical tests are applicable. Is the response measurable? How accurately can it be measured? What independent variables are involved? Are they measurable? Can their levels be manipulated? Over what range can they be manipulated? Are the levels fixed or random?
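To make this checklist concrete, here is a minimal sketch of enumerating the treatment combinations of a designed experiment. The factor names and levels are hypothetical, chosen only to illustrate fixed, manipulable levels of independent variables.

```python
# Hypothetical two-factor design: each independent variable (factor) has
# fixed, manipulable levels; the full factorial enumerates every combination.
from itertools import product

factors = {
    "temperature_C": [20, 40, 60],      # quantitative factor, 3 levels
    "material": ["steel", "aluminum"],  # categorical factor, 2 levels
}

# 3 x 2 = 6 treatment combinations; the response (dependent variable)
# would be measured at least once for each combination.
for run, combo in enumerate(product(*factors.values()), start=1):
    print(run, dict(zip(factors.keys(), combo)))
```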
Research Methods in Human Factors
Published in Robert W. Proctor, Trisha Van Zandt, Human Factors in Simple and Complex Systems, 2018
Robert W. Proctor, Trisha Van Zandt
Another way to classify variables is in terms of whether they are manipulated or measured. An independent variable is one that is manipulated by the researcher. Most often, the manipulated variable is a stimulus variable, such as the level of illumination. We manipulate independent variables to determine their effects on other variables, which are referred to as dependent variables. Dependent variables are usually behavioral variables. These are sometimes called criterion variables (Sanders & McCormick, 1993) and can be grouped into measures that reflect performance (such as speed and force of responding), physiological indexes (such as heart rate and EEG recordings), or subjective responses (such as preferences and estimates of effort). The distinction between independent and dependent variables forms a cornerstone of the true experiment, because it allows us to establish causal relations.
Selected AI optimization techniques and applications in geotechnical engineering
Published in Cogent Engineering, 2023
Kennedy C. Onyelowe, Farid F. Mojtahedi, Ahmed M. Ebid, Amirhossein Rezaei, Kolawole J. Osinubi, Adrian O. Eberemu, Bunyamin Salahudeen, Emmanuel W. Gadzama, Danial Rezazadeh, Hashem Jahangir, Paul Yohanna, Michael E. Onyia, Fazal E. Jalal, Mudassir Iqbal, Chidozie Ikpa, Ifeyinwa I. Obianyo, Zia Ur Rehman
Analysis of variance (ANOVA) is a statistical analysis tool which separates the observed aggregate variability within a data set into random and systematic factors. The random factors do not statistically influence the data set, while the systematic factors do. Thus, analysts in diverse fields of study use the ANOVA test to determine the effect that independent variables have on the dependent variable in a statistical regression study. The variables that are measured are called the dependent variables, whereas the variables that are controlled or manipulated are called the independent variables or factors. ANOVA is an extension of the t- and z-tests and is also called the Fisher analysis of variance. It is used to investigate whether a significant difference exists between the means of two or more groups. The assumptions underlying ANOVA are that the underlying distributions are normal and that the variances of the distributions being compared are similar (Smith, 2018). The formula for ANOVA is given as:
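The formula itself is cut off in this excerpt; the statistic conventionally meant here is the one-way ANOVA F-ratio, which in standard form is

$$ F = \frac{\mathrm{MST}}{\mathrm{MSE}} = \frac{\sum_{i=1}^{k} n_i(\bar{y}_i - \bar{y})^2 / (k-1)}{\sum_{i=1}^{k}\sum_{j=1}^{n_i} (y_{ij} - \bar{y}_i)^2 / (N-k)} $$

where k is the number of groups, n_i and \(\bar{y}_i\) are the size and mean of group i, \(\bar{y}\) is the grand mean, N is the total number of observations, MST is the mean sum of squares due to treatment (between groups), and MSE is the mean sum of squares due to error (within groups). This is the textbook form of the statistic, not necessarily the exact notation printed in the cited article.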
Reliable corridor level travel time estimation using probe vehicle data
Published in Transportation Letters, 2020
Rahul Sakhare, Lelitha Vanajakshi
Linear regression is a linear approach to explaining the relationship between a dependent variable and one or more explanatory variables. The dependent variable is the outcome variable, determined by the other, independent variables. If only one independent variable is used to model the outcome, the model is termed a simple linear regression, whereas the use of two or more independent variables to model the outcome is termed multivariate linear regression. Residuals are the differences between the data points and the regressed line, and linear regression models typically minimize the sum of the squares of the residuals; this is known as the least squares approach. It is one of the most common approaches used for linear regression models, though other approaches are available, such as minimizing the lack of fit or minimizing a penalized version of the least squares cost function. For the present study, the least squares approach was used to fit the linear regression model in MATLAB. The following section discusses the functional form of the model and the different explanatory variables that were used to build it.
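The study fitted its model in MATLAB; as a language-agnostic illustration of the least squares approach described above, the sketch below fits a simple linear regression with NumPy and reports the residual sum of squares. The data are made-up example values.

```python
# Ordinary least squares for a simple linear regression y = b0 + b1*x:
# choose b0 and b1 to minimize the sum of squared residuals.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # independent (explanatory) variable
y = np.array([2.1, 4.3, 6.2, 8.1, 9.8])   # dependent (outcome) variable

X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept column
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
b0, b1 = coef

residuals = y - (b0 + b1 * x)              # data points minus regressed line
print(f"intercept = {b0:.3f}, slope = {b1:.3f}")
print(f"sum of squared residuals = {residuals @ residuals:.4f}")
```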
Implementation of a novel taxonomy based on cognitive work analysis in the assessment of safety performance
Published in International Journal of Occupational Safety and Ergonomics, 2018
Linear regression uses an independent variable (activities of the management) to explain the outcome of the dependent variables (sociotechnical work conditions and contexts, see Appendix 1) [23]. An independent variable is defined as the variable that is changed in a scientific experiment; it represents the cause or reason for an outcome (see Appendix 1). A change in the independent variable directly causes a change in the dependent variable. The effect on the dependent variable is measured. A dependent variable is what is measured in the experiment and what is affected during the experiment. It is called dependent because it ‘depends’ on the independent variable.