Artificial Intelligence in the Construction Industry
Published in M.Z. Naser, Leveraging Artificial Intelligence in Engineering, Management, and Safety of Infrastructure, 2023
Amir H. Behzadan, Nipun D. Nath, Reza Akhavian
In the self-assessment approach, data are collected through workers' diaries, self-reports, interviews, and questionnaires. For example, Li et al. (2017) surveyed 445 construction workers in China to evaluate their perception of construction safety. Gerdle et al. (2008) interviewed 9,952 participants in Sweden through mail questionnaires to study the effect of work status on anatomical pain. Campo et al. (2008) surveyed 882 U.S. participants on WMSDs, and Östergren et al. (2005) studied 4,919 people in Sweden to correlate work-related physical and psychological factors with shoulder and neck pain. More recently, researchers have also used video- and web-based questionnaires to gather self-assessment data. For example, Sunindijo and Zou (2012) collected the opinions of 353 construction personnel on the importance of soft skills for construction safety management through a web-based online survey. Although the self-assessment approach is straightforward, it often adds to the cost of data collection, particularly when a large sample size is desired, and it calls for special skills in analyzing and interpreting the results (David, 2005). Moreover, previous studies have reported that workers' self-assessments may be imprecise, unreliable, and biased. For example, Balogh et al. (2004) found that, for a similar level of exposure (e.g., physical exertion) to WMSDs, workers who had complained of body pain during the previous 12 months reported the exposure as significantly higher than workers who had no prior complaints.
Why Data Science?
Published in Chong Ho Alex Yu, Data Mining and Exploration, 2022
As mentioned in Chapter 1, how people behave in experimental settings might not reflect how they actually act in the real world. Moreover, the quality of self-report data in non-experimental studies (e.g., surveys and interviews) is affected by privacy concerns, social desirability, and faulty memory. Further, when the metric is known, some people might game the metric in order to obtain a desirable result. Big data analytics can rectify the situation by using behavioral data collected in naturalistic settings. In his seminal work Everybody Lies, Seth Stephens-Davidowitz (2017) found ample evidence that most people do not do what they say and do not say what they do. For example, in response to polls, most voters declared that the ethnicity of a candidate was unimportant. However, by examining search terms in Google, Stephens-Davidowitz found otherwise. Specifically, when Google users entered the word “Obama,” they frequently associated his name with words related to race. When Stephens-Davidowitz decided to use Google's search data for research, some conventional researchers were skeptical because this type of data source was considered unorthodox. Nevertheless, using these behavioral data, Stephens-Davidowitz exposed much of the dark side of the human mind, because when users can hide behind the Internet and remain completely anonymous, they feel free to do as they like.
Human-Machine System Performance in Spaceflight: A Guide for Measurement
Published in Mustapha Mouloua, Peter A. Hancock, James Ferraro, Human Performance in Automated and Autonomous Systems, 2019
Kimberly Stowers, Shirley Sonesh, Chelsea Iwig, Eduardo Salas
Performance is not always so simple to quantify. Often there are instances in which false positives or false negatives are prevalent. In such cases, it is helpful to use techniques such as calculating “false alarm demand” (FAD) (Elara et al., 2009; Elara et al., 2010), which measures the effects of false alarms on human-robot team performance and extends the neglect tolerance model (which allows human operators to switch control based on acceptable performance levels) to situations where false positives and negatives are prevalent. Performance can even be conceptualized from a subjective standpoint: do operators believe that they achieved their performance targets? For this reason, it can sometimes be helpful to also measure performance using self-report methods. While self-report methods have inherent biases, they can be used to complement more objective measures, which will be discussed later.
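As a minimal illustration of quantifying performance when false positives and negatives are prevalent, the Python sketch below counts false alarms and misses against ground-truth events. It is a hypothetical example, not the FAD metric itself (whose definition is given in Elara et al.); the function name and 0/1 data format are assumptions made for illustration.

def detection_error_rates(alarms, ground_truth):
    # alarms and ground_truth are equal-length lists of 0/1 flags, one per
    # observation window (a hypothetical data format chosen for illustration).
    false_alarms = sum(1 for a, g in zip(alarms, ground_truth) if a == 1 and g == 0)
    misses = sum(1 for a, g in zip(alarms, ground_truth) if a == 0 and g == 1)
    negatives = ground_truth.count(0)
    positives = ground_truth.count(1)
    false_alarm_rate = false_alarms / negatives if negatives else 0.0
    miss_rate = misses / positives if positives else 0.0
    return false_alarm_rate, miss_rate

# Invented example data: alarm output vs. actual events over ten windows.
alarms = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
events = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
print(detection_error_rates(alarms, events))  # -> (0.33..., 0.25)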
Curtailing smartphone use: a field experiment evaluating two interventions
Published in Behaviour & Information Technology, 2022
The first point refers to some interesting differences in the pattern of results between objective usage time and subjectively estimated usage time. Objective usage time decreased while estimated usage time did not, revealing some inconsistency between objective and subjective data. While we had expected usage time to decrease from week 1 to week 2 for the experimental groups (H1 & H3), the results revealed that it also decreased, unexpectedly, for the control group. At the same time, subjective usage time did not vary for any group between weeks 1 and 2. Overall, this result is in line with previous studies, which also found that subjective and objective usage time did not correlate (Andrews et al. 2015; Wilcockson, Ellis, and Shaw 2018). It is generally considered difficult for users to provide an accurate estimate of their usage time (Scharkow 2016). If self-report data carry a risk of being flawed, these findings raise a more general methodological concern (Schwarz and Oyserman 2001). Given the problems associated with self-report data, a broad methodological approach needs to be adopted, comprising subjective and objective measures alike.
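A minimal sketch of the kind of subjective-versus-objective check described above is shown below in Python: logged usage is correlated with self-reported estimates. The data and variable names are invented for illustration, and Pearson's r is simply one common choice, not necessarily the statistic used in the cited studies.

from scipy.stats import pearsonr

# Hypothetical per-participant daily averages (minutes).
objective_minutes = [182, 240, 95, 310, 150, 200, 120, 275]   # logged by the phone
estimated_minutes = [120, 150, 100, 180, 160, 140, 130, 170]  # self-reported

r, p = pearsonr(objective_minutes, estimated_minutes)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
# A weak or non-significant r would echo the finding that subjective and
# objective usage time do not correlate.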
Two studies of the perceptions of risk, benefits and likelihood of undertaking password management behaviours
Published in Behaviour & Information Technology, 2022
A weakness of self-report data is that they are based on what people say they would do, not what they actually do. To mitigate this effect as much as possible, respondents to the surveys also completed a short social desirability scale, the short 10-item (X1) version of the MCSDS. In the first study, with the MTurk sample, there was evidence of a small but significant correlation with the Likelihood ratings for the PW/Change component, so respondents might have been hiding their password change/reuse habits by answering in a socially desirable way. However, in the second sample, there was no effect of susceptibility to social desirability on the Likelihood of undertaking the behaviour. This difference may be because MTurkers are more inclined to answer in socially desirable ways, or in ways that they think the researchers want (known as the ‘demand characteristics’ of the research situation [Rosenthal and Rosnow 1991]), but it could also be due to the different age profiles of the two samples. There is some research showing that older participants are more susceptible to social desirability effects (Fraboni and Cooper 1989); here the relevant contrast is people of working age versus students, although age is only one of the distinctions between the two samples. Further research is needed to establish whether these explanations account for the results.
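A minimal sketch of this kind of social desirability check, in Python, follows. The scores and variable names are hypothetical, and Spearman's rank correlation is used here only because it is a reasonable choice for ordinal ratings; it is not necessarily the statistic the authors used.

from scipy.stats import spearmanr

mcsds_scores       = [3, 7, 5, 8, 2, 6, 4, 9, 5, 7]   # hypothetical social desirability scores
likelihood_ratings = [4, 2, 3, 1, 5, 2, 4, 1, 3, 2]   # hypothetical likelihood-of-behaviour ratings

rho, p = spearmanr(mcsds_scores, likelihood_ratings)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A significant association would suggest socially desirable responding is
# colouring the likelihood ratings, as observed in the first (MTurk) sample.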
Eudaimonia and Hedonia in the Design and Evaluation of a Cooperative Game for Psychosocial Well-Being
Published in Human–Computer Interaction, 2020
Katie Seaborn, Peter Pennefather, Deborah I. Fels
The self-report data may have been subject to participant bias, notably social desirability bias (which may explain some of the conflicting or unexpected results), acquiescence bias (common with scales such as the Likert scale), and demand characteristics (the in-lab setting; taking part in a research study). Further, well-being orientations were captured post-game rather than during the game to avoid disrupting the experience; even though the items were carefully phrased to refer to the experience itself, having participants report afterward may have captured reflective ratings rather than immediate reactions. Future work can illuminate the extent of these issues by comparing during-experience and post-experience surveying, triangulating with other measures, and using diverse experimental designs that include in-home observation and studies conducted by an independent third party (rather than the named researchers).