Codes of Conduct, Compliance, and Reporting
Published in Rebecca Mirsky, John Schaufelberger, Professional Ethics for the Construction Industry, 2022
Rebecca Mirsky, John Schaufelberger
A familiar saying in business is that you can’t manage or improve something that you don’t measure. In other words, unless you have some way to measure how effective your ethics program is, you won’t know if it’s working. As we continue to move around the wheel of our Effective Ethics and Business Conduct Program, we reach the wedge for Program Assessment and Evaluation. Earlier, we discussed the importance of record keeping and tracking to identify any common themes and ensure that corrective action is being taken. It is also important to evaluate the effectiveness of the methods being used to communicate with and train employees on the contents and use of the code of conduct. Surveys and focus groups are two widely used ways to perform this evaluation. Ethics program effectiveness can be evaluated internally by an assigned group or task force, or an independent outside consultant can be engaged to remove any concerns about potential conflicts of interest.
Evaluation, Separation, and Adoption: Helping Clients Make Virtual Group Coaching What They Do All the Time
Published in William J. Rothwell, Cho Hyun Park, Virtual Coaching to Improve Group Relationships, 2020
Normally, two types of evaluation are conducted: formative and summative. Formative evaluation can be conducted before moving on to the next step of virtual group coaching. Its purpose is to check whether progress is on track toward the intended outcomes. If formative evaluation reveals challenges or obstacles, their causes should be analyzed so that corrective action can be taken to get back on track. For example, in the diagnosis step, a dataset is collected and analyzed for a specific purpose. If the analysis shows that the dataset is insufficient or inappropriate for that purpose, the reason should be identified and the plan adjusted so the diagnosis purpose can still be achieved. Once formative evaluation determines that diagnosis has been conducted properly, the next step, action planning, can proceed. In the same manner, formative evaluation can determine whether the action plan has been developed in alignment with the goal of virtual group coaching before moving on to the intervention step. Thus, formative evaluation helps prevent the entire virtual group coaching process from being misdirected.
Machine Intelligence and Managerial Decision-Making
Published in Jay Liebowitz, Data Analytics and AI, 2020
Evaluation is the third step in the decision-making process. An evaluation is an appraisal of something to determine the extent of the opportunity or challenge. There are three main types of evaluation: process (are we satisfied with how we have produced the results), impact (has the process under evaluation produced the desired impact), and outcome (has the impact of the process provoked the targeted outcome). An example can be found in reviewing the data of a sales process: should we evaluate the sales tasks and activities, the impact of the sales campaign, or changes in consumer purchasing patterns? Will improving the sales process mean doing things more efficiently, more effectively, making better use of organizational resources, or responding more precisely to customer needs?
Research on dynamic visual attraction evaluation method of commercial street based on eye movement perception
Published in Journal of Asian Architecture and Building Engineering, 2022
Guo Xiangmin, Cui Weiqiang, Lo Tiantian, Hou Shumeng
However, the traditional design kit lacks scientific methods to meet this requirement. On the one hand, design evaluation based on visual appeal has received too little attention; as a result, the facade design of many commercial streets lacks scientific, objective guidance and is often highly similar, without distinctive character, and therefore uncompetitive in commercial development. On the other hand, existing tools struggle to meet the requirements for evaluating visual attractiveness. Traditional evaluation tools, mainly questionnaire surveys and interviews, are dominated by designers or professionals, and personal factors such as personality and emotion greatly hinder users from expressing their visual-attractiveness evaluations. At the same time, existing research on visual attractiveness is mainly based on evaluating static pictures; no research has combined it with the actual situation of users walking dynamically along a commercial street. It is therefore necessary to innovate evaluation tools: the evaluation process should be based on the user group rather than the designer alone, and analytical methods based on dynamic eye-movement data should be explored.
Predicting production-output performance within a complex business environment: from singular to multi-dimensional observations in evaluation
Published in International Journal of Production Research, 2021
Henry J. Liu, Peter E.D. Love, Le Ma, Michael C.P. Sing
Performance evaluation (measurement) is a systematic method that is used to quantify the effectiveness and/or efficiency of a system, programme or project (Neely, Gregory, and Platts 2005). Program Theory identifies two types of evaluation: (1) formative and (2) summative (Ainsworth and Viegut 2006). At a project level, formative evaluation takes the form of a pre-project study to determine an investment’s feasibility and predict profitability (e.g. net present value). Contrastingly, summative evaluation is a comparison between the forecasted outcomes and the actual outputs and/or impacts (Irani et al. 2005; Irani, Sharif, and Love 2005). Both formative and summative evaluations have been widely applied to not only interpret the present performance of organisations/projects but also to measure their future outcomes (European Commission 2001; Irani et al. 2005; Irani, Sharif, and Love 2005; Irani, Ghoneim, and Love 2006).
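The net present value mentioned above can be made concrete with a short sketch. The helper below is an illustrative assumption, not taken from the cited studies; the discount rate and cash flows are invented for the example. It shows how the same calculation serves both roles: applied to forecast cash flows it is a formative, pre-project feasibility check, and applied later to actual cash flows it supports a summative comparison of forecast against outcome.

```python
def npv(rate, cash_flows):
    """Net present value: discount each cash flow back to t = 0.

    cash_flows[0] is the initial outlay (typically negative) at t = 0;
    cash_flows[t] arrives at the end of period t.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Formative (pre-project): forecast profitability; a positive NPV
# suggests the investment is feasible at a 10% discount rate.
forecast = npv(0.10, [-1000, 500, 500, 500])

# Summative (post-project): re-evaluate with the actual cash flows
# and compare the realised value against the forecast.
actual = npv(0.10, [-1000, 420, 480, 530])
variance = actual - forecast
```

In this framing the formative evaluation decides whether to proceed, while the summative evaluation measures how far the realised outputs diverged from the forecast that justified the investment.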
Knowledge, context and problemsheds: a critical realist method for interdisciplinary water studies
Published in Water International, 2020
Denyer et al. (2008, p. 394) state that ‘the distinction between knowledge for solving theoretical problems and knowledge for solving field problems is fundamental’. Though purpose is qualitatively different for these two types of research, their framing logic (for formulating questions/propositions about generative causality) is very similar. This means, I would suggest, that the difference is only fundamental at first glance. The findings of an evaluation study can be used for theory development, and the findings of explanatory research can be inserted into evaluation research design (as they unavoidably are when evaluators posit possible/desirable causal chains). There is, in my view, no fundamental reason why these different purposes could not be part of the same research exercise (though there are of course more everyday reasons why they rarely are in practice).