Sensor- and Recognition-Based Input for Interaction
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
In time-varying systems, we are often concerned with the frequency with which we receive new samples from the sensor. An overly high sampling rate can result in too much data to process and can be reduced by downsampling. A low sampling rate, by contrast, imparts latency. Latency, or lag, refers to any delay in the sensor's response to a change in the sensed property of the world, and can limit the responsiveness of an interactive system built on the sensor (MacKenzie and Ware 1993); many interactive systems will seem to lose their responsiveness if the overall latency exceeds 100 milliseconds. Such latency may be remedied by predictive techniques such as the Kalman filter.
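As a minimal sketch of this predictive idea (not code from the chapter), the following assumes a one-dimensional constant-velocity Kalman filter: each new sensor sample corrects the filter's state, and extrapolating the filtered state forward by the known lag estimates where the tracked value will be when the output is actually displayed. The class name and noise parameters are illustrative choices, not values from the source.

```python
import numpy as np

class Kalman1D:
    """1-D constant-velocity Kalman filter for latency compensation (sketch)."""

    def __init__(self, dt, process_var=1.0, measurement_var=0.5):
        self.x = np.zeros(2)                        # state: [position, velocity]
        self.P = np.eye(2)                          # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics
        self.H = np.array([[1.0, 0.0]])             # we observe position only
        self.Q = process_var * np.array([[dt**4 / 4, dt**3 / 2],
                                         [dt**3 / 2, dt**2]])
        self.R = np.array([[measurement_var]])

    def update(self, z):
        # Predict forward one sample interval, then correct with measurement z.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R     # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

    def predict_ahead(self, lag):
        # Extrapolate the filtered position `lag` seconds into the future
        # to mask sensor and processing latency.
        return self.x[0] + self.x[1] * lag
```

For example, feeding samples arriving at 20 Hz (`dt=0.05`) and calling `predict_ahead(0.1)` each frame predicts the sensed value 100 ms ahead, roughly offsetting a latency at the threshold of perceptible sluggishness.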
An advanced order batching approach for automated sequential auctions with forecasting and postponement
Published in International Journal of Production Research, 2023
Xiang T. R. Kong, Miaohui Zhu, Yu Liu, Kaida Qin, George Q. Huang
We use system response time to measure system responsiveness. A shorter response time indicates higher responsiveness. The system response time refers to the average of the buyers' response times $R_k$. Let the number of buyers be $K$. The system response time $R$ can be calculated as follows:

$$R = \frac{1}{K}\sum_{k=1}^{K} R_k.$$

According to the practice of the auction market, each buyer needs to pick up the goods as soon as s/he finishes transactions. The difference between the order processing completion time and the ultimate order arrival time of each buyer is called the buyer's response time. For buyer $k$, the buyer response time can be derived by

$$R_k = \max_{i}\{x_{ik} C_i\} - \max_{i}\{x_{ik} A_i\},$$

where $x_{ik} = 1$ if auction order $i$ belongs to buyer $k$, and is zero otherwise. $C_i$ and $A_i$ are the completion time and arrival time of auction order $i$, respectively. The order completion time depends on the starting and processing time of the batch to which it belongs:

$$C_i = S_{b(i)} + P_{b(i)},$$

where $S_b$ and $P_b$ are the starting time and processing time of batch $b$, and $b(i)$ denotes the batch containing order $i$.
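The measure as reconstructed above is straightforward to compute. Below is a minimal sketch, not the paper's implementation; the data layout (tuples for orders, dictionaries keyed by batch id) is an assumption made for illustration.

```python
from collections import defaultdict

def system_response_time(orders, batch_start, batch_proc):
    """Average buyer response time R.

    orders:      list of (order_id, buyer_id, arrival_time, batch_id)
    batch_start: dict batch_id -> S_b, the batch starting time
    batch_proc:  dict batch_id -> P_b, the batch processing time
    """
    by_buyer = defaultdict(list)
    for oid, buyer, arrival, batch in orders:
        completion = batch_start[batch] + batch_proc[batch]  # C_i = S_b + P_b
        by_buyer[buyer].append((completion, arrival))

    # R_k: completion of the buyer's last-finished order minus the
    # arrival time of the buyer's ultimate (latest-arriving) order.
    response_times = [
        max(c for c, _ in items) - max(a for _, a in items)
        for items in by_buyer.values()
    ]
    return sum(response_times) / len(response_times)  # R = mean over buyers
```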
Measuring what matters in isometric multi-joint rate of force development
Published in Journal of Sports Sciences, 2019
David Drake, Rodney A. Kennedy, Eric S. Wallace
Responsiveness (also termed sensitivity to change) is the ability of a measure to detect change over time (Norman, Wyrwich, & Patrick, 2007). Despite being identified as a critical component of validity (Impellizzeri & Marcora, 2009; Norman et al., 2007; Robertson, Kremer, Aisbett, Tran, & Cerin, 2017), the responsiveness of performance tests is scarcely evaluated within sports science (Fanchini et al., 2015). The predominant focus has been on the reliability of measures, which provides evidence for the "noise" of a measure in a population but not for its ability to detect change. Indeed, a measure with a large typical error ("noise") that responds to training with a large magnitude (signal) can be more responsive, and therefore more useful, than a measure with a low typical error that responds to training with a low magnitude (Buchheit, 2014). As such, decision-making on the efficacy of performance measures should be evaluated in terms of responsiveness and not based on reliability in isolation (Fanchini et al., 2014; Impellizzeri & Marcora, 2009). This concept has not been investigated using isometric multi-joint tests. A common view is that RFD measures are less reliable than peak force (Maffiuletti et al., 2016), which has led to certain neuromuscular measures being disregarded in practice on the basis of arbitrary reliability thresholds. For example, Bazyler, Sato, Wassinger, Lamont, and Stone (2014) state that "RFD at 50 and 90 ms with 120° were excluded because of low test-retest reliability (ICC < 0.7)". Therefore, assessing the responsiveness of isometric multi-joint tests, including comparisons of how differing testing protocols affect responsiveness, would offer greater evidence for this critical component of test validity.
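To make the signal-versus-noise argument concrete, the following is a minimal sketch (assumed, not from the article) of how the two quantities are commonly operationalised: typical error computed from test-retest difference scores quantifies the "noise", and the mean training-induced change divided by that error gives a simple signal-to-noise responsiveness index. Function names and the index itself are illustrative conventions, not the authors' method.

```python
import numpy as np

def typical_error(test1, test2):
    # Typical error = SD of test-retest difference scores / sqrt(2),
    # i.e. the within-subject "noise" of the measure.
    diffs = np.asarray(test2) - np.asarray(test1)
    return diffs.std(ddof=1) / np.sqrt(2)

def responsiveness_index(pre, post, te):
    # Signal (mean change from pre- to post-training) divided by
    # noise (typical error): a higher ratio means the measure is
    # better able to detect a real training effect.
    mean_change = (np.asarray(post) - np.asarray(pre)).mean()
    return mean_change / te
```

On this view, a measure with a large typical error can still score well if its training response (the numerator) is proportionally larger, which is precisely why reliability alone is an incomplete basis for discarding a test.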