Communication and Information Literacy Skills
Published in Hudson Jackson, Kassim Tarhini, Compendium of Civil Engineering Education Strategies, 2022
Grading rubrics were developed to ensure that the important components of writing and IL (such as scope of research, variety of sources, and use of sources) were assessed consistently by different instructors. USCGA faculty opted to develop rubrics that would accomplish the goal of communications assessment and improvement. A review of initiatives at other institutions suggested that overly complicated rubrics were difficult to use, less sustainable, and could discourage faculty members from embracing assessment and development. For assessment tools that use a rubric, students received the grading rubric together with the assignment so that the instructors' expectations were clear. Details of the assessment using these rubrics are discussed in Chapter 6.
Engineering Education
Published in Quamrul H. Mazumder, Introduction to Engineering, 2018
Assessment data must be collected by the instructors teaching these courses. The program faculty may use rubrics for each learning outcome to assess students' work against target achievement levels. Developing a rubric requires performance indicators and measurable levels of performance. An example of performance indicators for outcomes a–e is presented in Table 2.2. Table 2.3 shows a sample rubric for student learning outcome a with six levels of performance: missing, emerging, developing, practicing, maturing, and mastering. For example, the program faculty may set a performance criterion that more than 70% of the students in an introduction to engineering course are expected to achieve. To assess learning outcomes, the instructor may select work submitted by students, such as homework, examinations, and assignments, and use the rubrics to determine the level of performance of each student in the class.
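The workflow described above, checking whether a class meets a target achievement level on a rubric, can be sketched in a few lines. The level names and the 70% target come from the text; the function name, threshold choice, and sample scores are illustrative assumptions, not part of the source.

```python
# Hypothetical sketch of rubric-based outcome assessment.
# Level names and the 70% target follow the text; all data are invented.

LEVELS = ["missing", "emerging", "developing", "practicing", "maturing", "mastering"]

def meets_target(student_levels, threshold_level="practicing", target=0.70):
    """Return True if at least `target` fraction of students score at or
    above `threshold_level` on the rubric."""
    threshold = LEVELS.index(threshold_level)
    achieved = sum(1 for lvl in student_levels if LEVELS.index(lvl) >= threshold)
    return achieved / len(student_levels) >= target

# Example class: 8 of 10 students at or above "practicing" (80% >= 70%).
sample = ["practicing", "maturing", "mastering", "practicing", "developing",
          "practicing", "maturing", "practicing", "emerging", "mastering"]
print(meets_target(sample))  # True
```

In practice the threshold level and target fraction would be set per outcome by the program faculty, as the excerpt describes.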
Design dimensions of project courses
Published in Varun Gupta, Anh Nguyen-Duc, Real-World Software Projects for Computer Science and Engineering Students, 2021
An assessment rubric is a documented approach for the systematic evaluation of student work. It specifies which elements of student performance matter most and how the work is to be judged. Rubrics are often written to ensure a shared understanding of expectations among examiners, resulting in fair assessment. Rubric-based assessment has been successfully adopted in many SE courses (Petkov and Petkova 2006; Feldt et al. 2009). We should note that rubrics are not checklists: they involve developing criteria and rating scales for evaluating the product against those criteria. With the rubric and its associated criteria, an examiner should be able to carry out a systematic evaluation of the student work and, ideally, provide a repeatable evaluation.
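The distinction drawn above, criteria plus a shared rating scale rather than a checklist, can be made concrete with a minimal sketch. The criterion names, descriptors, and scores below are invented for illustration; only the structure (criteria rated on a common scale, yielding a repeatable total) reflects the text.

```python
# Minimal sketch of a rubric: named criteria rated on a shared scale.
# All names and numbers are hypothetical examples.

RATING_SCALE = {1: "poor", 2: "fair", 3: "good", 4: "excellent"}

RUBRIC = {
    "requirements coverage": "How completely the work addresses the task",
    "design quality": "Soundness and clarity of the chosen design",
    "documentation": "Completeness and readability of accompanying docs",
}

def evaluate(ratings):
    """Apply the rubric: each criterion gets a rating on the shared scale,
    so two examiners assigning the same ratings reach the same total."""
    for criterion, score in ratings.items():
        assert criterion in RUBRIC, f"unknown criterion: {criterion}"
        assert score in RATING_SCALE, f"score outside rating scale: {score}"
    total = sum(ratings.values())
    return total, total / (len(RUBRIC) * max(RATING_SCALE))

total, fraction = evaluate({"requirements coverage": 4,
                            "design quality": 3,
                            "documentation": 2})
print(total, fraction)  # 9 0.75
```

Because every submission is scored against the same fixed criteria and scale, the evaluation is systematic and, ideally, repeatable across examiners.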
Investigating the impact of innovation competence instruction in higher engineering education
Published in European Journal of Engineering Education, 2023
A.R. Ovbiagbonhia, Bas Kollöffel, Perry Den Brok
Third, this study yielded concrete examples of teaching materials and tools in the form of rubrics for assessing innovation competence. For instance, teachers can use the assessment rubrics to measure the development of students' innovation competence throughout the degree programmes, as well as to measure the effectiveness of their own teaching interventions. The assessment rubrics can also help students to actively monitor and regulate their own learning by reflecting on the individual items in the rubrics. In this way, students will develop a better understanding of their own innovation competence levels, which supports students' metacognitive skills. Results revealed that despite teachers' awareness of the significance of innovation for society, they initially did not explicitly support their students' innovation competence. Providing teachers with teaching tools will not only encourage them but may also help them better focus their innovation competence instruction in practice.
Archive as Laboratory: Engaging STEM Students & STEM Collections
Published in Engineering Studies, 2019
Tracy B. Grimm, Sharra Vostral
Vostral and Grimm began by determining the learning outcomes for each archive module. The outcomes needed to meet Grimm's archival literacy instruction goals as well as Vostral's coverage of subject matter for her courses. Both sets of outcomes require students to apply and practice critical thinking skills as they learn to locate, interpret, evaluate, and use information sources. These outcomes can range from beginner to expert, and the expectation for the students is at the introductory level (despite the course numbers reflecting a 200 or 300 level). For Grimm, lesson planning and discussion are guided by a rubric of archival literacy learning outcomes based on Peter Carini's proposed competencies for student learning.19 Carini's competencies stress the student's ability to recognize, interpret, evaluate, use, and access primary sources. The rubric can be adjusted to reflect the goals of the course and the levels of the students. Within each competency, outcomes on a scale from novice to expert are determined through partnership discussions in order to align Vostral's and Grimm's learning outcomes for the archive module.20
Revisiting the design intent concept in the context of mechanical CAD education
Published in Computer-Aided Design and Applications, 2018
Jeffrey Otey, Pedro Company, Manuel Contero, Jorge D. Camba
Research shows that rubrics can be a useful tool for facilitating standardized communication of design intent. Rubrics are important not only for assessment but also for communicating expectations. Of current interest is how to define the qualities of design intent (and model quality) in a manner that lends itself to easy assessment. More precise definitions of these terms are vital to productive research. The authors envision further development of these concepts to construct assessment rubrics, with the goal of standardizing such definitions and assessment strategies. These rubrics must be adaptive to the individual user's state of knowledge and preferences (i.e., the rubric changes on a system-driven basis). They must also be adaptable, with personalization controlled and steered by the user (i.e., user-driven). It is the authors' conviction that CAD model quality should not be a secondary goal attempted only after basic skills are cultivated, but a major goal from the outset of instruction.