Academic spin-outs
Published in Paul Trott, Dap Hartmann, Patrick van der Duin, Victor Scholten, Roland Ortt, Managing Technology Entrepreneurship and Innovation, 2015
Various groups of stakeholders emphasize different performance indicators for academic start-ups, so it is helpful to discuss a number of success indicators in detail. In general, we can distinguish between objective and subjective measures of start-up success. Objective measures are hard data about the success of the new firm, such as traditional growth-rate measures. Subjective measures are ‘softer’ and express the success of the new firm through the perceptions of individuals. The extent to which a performance measure is appropriate depends on how reliable and valid it is. Reliability reflects whether the measure captures its object systematically and repeatedly with the same outcome; for different entrepreneurs, it should therefore provide similar answers. Validity refers to the extent to which it measures what it is supposed to measure. This often depends on whether the entrepreneur understands the question in the way intended and provides the right information.
Assessment
Published in Suzanne K. Kearns, Timothy J. Mavin, Steven Hodge, Competency-Based Education in Aviation, 2017
A perennial concern for assessment designers and assessors is the extent to which evidence can be regarded as evidence of true competence as described in a competency text. Validity refers to the fit between the evidence and the subject of that evidence. High validity implies that the assessment being used measures what it claims to measure; low validity applies to an instrument that measures something unrelated to the actual purpose of the assessment task (Linn and Gronlund 1995). Although there are many types of validity, four main types are used in education: content validity, criterion-related validity, construct validity and face validity.
Logic
Published in Rowan Garnier, John Taylor, Discrete Mathematics, 2020
Logic is used to establish the validity of arguments. It is not so much concerned with what the argument is about but more with providing rules so that the general form of the argument can be judged as sound or unsound. The rules which logic provides allow us to assess whether the conclusion drawn from stated premises is consistent with those premises or whether there is some faulty step in the deductive process which claims to support the validity of the conclusion.
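To make this rule-based notion of validity concrete, the sketch below checks a propositional argument by brute-force truth-table enumeration: an argument is valid exactly when no assignment of truth values makes every premise true and the conclusion false. The example arguments and the helper names are illustrative additions, not taken from the text.

```python
from itertools import product

def is_valid(variables, premises, conclusion):
    """Check propositional validity by truth-table enumeration.

    An argument is valid iff no assignment of truth values makes
    every premise true while the conclusion is false.
    """
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample: premises true, conclusion false
    return True

# Modus ponens: from "p implies q" and "p", infer "q" -- a valid form.
print(is_valid(["p", "q"],
               [lambda e: (not e["p"]) or e["q"],  # p -> q
                lambda e: e["p"]],
               lambda e: e["q"]))                   # True

# Affirming the consequent: from "p implies q" and "q", infer "p" -- invalid.
print(is_valid(["p", "q"],
               [lambda e: (not e["p"]) or e["q"],  # p -> q
                lambda e: e["q"]],
               lambda e: e["p"]))                   # False
```

The second call returns False because the assignment p = False, q = True satisfies both premises while falsifying the conclusion, which is exactly the kind of faulty deductive step the rules of logic are designed to expose.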
Evaluation of spring-loaded cane ground reaction forces during walking using time and frequency domain analysis
Published in Journal of Medical Engineering & Technology, 2018
Carlos Zerpa, Ayan Sadhu, Mohamed Khalil Zammali, Katelyn Varga
The researchers decided to analyse only the vertical and braking-propulsive ground reaction forces in the time and frequency domains, because these forces provide the most stable measures for detecting any deviation from a normal walking pattern due to a pathology [19,20]. The medio-lateral force, by contrast, yields more unreliable measures owing to the instability of the subject or patient during the support phase, which reduces accuracy in detecting walking pattern abnormalities [19,20]. Based on these findings in the existing literature, the researchers analysed the vertical ground reaction forces in the time domain and the braking-propulsive ground reaction forces in the frequency domain to provide evidence of validity for the inferences made from the interpretation of the results. Validity is defined as the degree to which the inferences made from the interpretation of a measure are supported by a theoretical or empirical rationale [24].
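To illustrate the time-domain versus frequency-domain distinction, here is a minimal sketch of how a single force trace can be summarized in each domain. It uses a synthetic signal and assumed parameters (sampling rate, signal shape, variable names) and is not a reproduction of the study's actual processing pipeline.

```python
import numpy as np

# Hypothetical example: a synthetic braking-propulsive force trace sampled
# at 1000 Hz; a real analysis would use the recorded ground reaction forces.
fs = 1000                          # sampling frequency (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)      # one 1-s support phase
force = 50 * np.sin(2 * np.pi * 1.0 * t) + 5 * np.sin(2 * np.pi * 8.0 * t)

# Time-domain summary (as used for the vertical force): peak magnitude.
peak = np.max(np.abs(force))

# Frequency-domain summary (as used for the braking-propulsive force):
# single-sided amplitude spectrum from the FFT.
spectrum = np.abs(np.fft.rfft(force)) / len(force) * 2
freqs = np.fft.rfftfreq(len(force), d=1 / fs)

print(f"peak force: {peak:.1f} N")
print(f"dominant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")
```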
Measurement properties and feasibility of the Loughborough soccer passing test: A systematic review
Published in Journal of Sports Sciences, 2018
Daizong Wen, Sam Robertson, Guopeng Hu, Benhao Song, Haichun Chen
In the final step, we extracted the reliability, validity, responsiveness, and interpretability results from each article to assess the measurement properties and feasibility. Reliability, including test-retest reliability, inter/intra-rater reliability and internal consistency reliability, was defined as the degree to which a measurement is free from error (Baumgartner & Jackson, 1998). Correlation coefficient (r) or intraclass correlation coefficient (ICC) values of < 0.4, ≥ 0.4 to < 0.8 and ≥ 0.8 were rated as poor, moderate, and excellent, respectively (Helmerhorst, Brage, Warren, Besson, & Ekelund, 2012; Streiner, Norman, & Cairney, 2014). Validity is the degree to which a test measures the construct it claims to measure; it consists of content validity, construct validity (discriminative and convergent) and criterion validity (concurrent and predictive) (Portney & Watkins, 2009).

Responsiveness reflects the ability of an instrument to detect change over time and is generally estimated by testing the statistical significance of mean change scores. However, two important but often overlooked properties are often considered more practically meaningful for evaluating responsiveness: the smallest worthwhile change (SWC), calculated as 20% of the between-participants standard deviation (Hopkins, 2004), and the minimal detectable change (MDC), estimated as the measurement error at a given level of confidence (Beaton, 2000). Both indicators also overcome the limitations of relying on a “statistically significant difference” (Beaton, 2000; Copay, Subach, Glassman, Polly, & Schuler, 2007). We therefore calculated the SWC and MDC values following a previous recommendation for interpreting changes in athletic performance tests (Hopkins, 2004). If the MDC is greater than the SWC, the measurement error of the test is considered “large”. Only when an observed change exceeds the SWC, or the MDC when MDC > SWC, can the investigator be confident that it predominantly reflects a real change rather than measurement error (Hopkins, 2004).

With respect to feasibility, we primarily focused on the interpretability (Mokkink et al., 2010) of the SWC or MDC (when MDC > SWC) for discriminating between performers at different levels of a construct (such as playing level) and for detecting a change in performance caused by an external intervention. Specific attention was paid to the extent to which a test is easy to perform and administer: only when a test can be undertaken without excessive costs (e.g., long duration, a requirement for multiple examiners, expensive high-end equipment or complex processes) can it be easily applied in practical environments such as teams and clubs.
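A short sketch below turns the SWC and MDC definitions above into code. The SWC follows the review's stated rule (20% of the between-participants SD, per Hopkins, 2004); for the MDC, a common formulation consistent with Beaton's description is assumed, MDC = z·√2·SEM with SEM = SD·√(1 − ICC), and the numeric inputs are illustrative values only.

```python
import math

def swc(between_sd: float) -> float:
    """Smallest worthwhile change: 20% of the between-participants SD
    (Hopkins, 2004)."""
    return 0.2 * between_sd

def mdc(between_sd: float, icc: float, z: float = 1.96) -> float:
    """Minimal detectable change at a given confidence level.

    Assumed common formulation: SEM = SD * sqrt(1 - ICC) and
    MDC = z * sqrt(2) * SEM (95% confidence when z = 1.96).
    """
    sem = between_sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# Hypothetical passing-test scores: between-participants SD of 4.0 s and a
# test-retest ICC of 0.90 (illustrative values, not from the review).
sd, icc = 4.0, 0.90
print(f"SWC   = {swc(sd):.2f} s")    # 0.80 s
print(f"MDC95 = {mdc(sd, icc):.2f} s")  # ~3.51 s
# Here MDC95 > SWC, so the test's measurement error is 'large' and a change
# must exceed the MDC95 (not merely the SWC) to be trusted as real.
```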