Whenever we give feedback, it inevitably reflects our priorities and expectations about the assignment. In other words, we're using a rubric to choose which elements (e.g., right/wrong answer, work shown, thesis analysis, style, etc.) receive more or less feedback and what counts as a "good thesis" or a "less good thesis." When we evaluate student work, that is, we always have a rubric. The question is how consciously we’re applying it, whether we’re transparent with students about what it is, whether it’s aligned with what students are learning in our course, and whether we’re applying it consistently. The more we’re doing all of the following, the more consistent and equitable our feedback and grading will be:

Being conscious of your rubric ideally means having one written out, with explicit criteria and concrete features that describe more/less successful versions of each criterion. If you don't have a rubric written out, you can use this assignment prompt decoder for TFs & TAs to determine which elements and criteria should be the focus of your rubric.

Being transparent with students about your rubric means sharing it with them ahead of time and making sure they understand it. This assignment prompt decoder for students is designed to facilitate this discussion between students and instructors.

Aligning your rubric with your course means articulating the relationship between “this” assignment and the ones that scaffold up to it and build from it, which ideally involves giving students the chance to practice different elements of the assignment and get formative feedback before they’re asked to submit material that will be graded. For more ideas and advice on how this looks, see the “Formative Assignments” page at Gen Ed Writes.

Applying your rubric consistently means using a stable vocabulary when making your comments and keeping your feedback focused on the criteria in your rubric.
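
One practical way to keep that vocabulary stable is a comment bank keyed to the rubric’s criteria, so that every paper gets feedback phrased in the same terms. Here is a minimal sketch in Python; the criteria names and stock phrases are invented for illustration, not drawn from any particular rubric:

```python
# A minimal comment bank keyed to rubric criteria (illustrative names).
# Drawing feedback phrases from one shared bank keeps wording consistent
# across papers and across graders.
COMMENT_BANK = {
    "thesis": {
        "strong": "Thesis is specific, arguable, and sustained throughout.",
        "developing": "Thesis is present but too broad to guide the analysis.",
    },
    "evidence": {
        "strong": "Evidence is well chosen and clearly tied to the claim.",
        "developing": "Evidence appears, but its relevance is left implicit.",
    },
}

def comment(criterion: str, level: str) -> str:
    """Return the stock phrase for a criterion at a given level."""
    return COMMENT_BANK[criterion][level]

print(comment("thesis", "developing"))
```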

How to Build a Rubric

Rubrics and assignment prompts are two sides of the same coin. If you’ve already created a prompt, you should have all of the information you need to make a rubric. Of course, it doesn’t always work out that way, and that itself turns out to be an advantage of making rubrics: it’s a great way to test whether your prompt is in fact communicating to students everything they need to know about the assignment they’ll be doing.

So what do students need to know? In general, assignment prompts boil down to a small number of common elements:

  • Purpose
  • Evidence and Analysis
  • Style and Conventions
  • Specific Guidelines
  • Advice on Process

If an assignment prompt is clearly addressing each of these elements, then students know what they’re doing, why they’re doing it, and when/how/for whom they’re doing it. From the standpoint of a rubric, we can see how these elements correspond to the criteria for feedback.

All of these criteria can be weighed and given feedback, and they’re all things that students can be taught and given opportunities to practice. That makes them good criteria for a rubric, and that in turn is why they belong in every assignment prompt.

Which leaves “purpose” and “advice on process.” These elements are, in a sense, the heart and engine of any assignment, but their role in a rubric will differ from assignment to assignment. Here are a couple of ways to think about each.

Purpose

On the one hand, “purpose” is the rationale for how the other elements of an assignment are working, and so feedback on those elements adds up to feedback on the skills students are learning vis-à-vis the overall purpose. In that sense, separately grading whether students have achieved an assignment’s “purpose” can be tricky.

On the other hand, metacognitive components such as journals or cover letters or artist statements are a great way for students to tie work on their assignment to the broader (often future-oriented) reasons why they’ve been doing the assignment. Making this kind of component a small part of the overall grade, e.g., 5% and/or part of “specific guidelines,” can nudge students toward meaningful self-reflection on what they’ve been learning and how it might build toward other assignments or experiences.

Advice on process

As with “purpose,” “advice on process” often amounts to helping students break down an assignment into the elements they’ll get feedback on. In that sense, feedback on those steps is often more informal or aimed at giving students practice with skills or components that will be parts of the bigger assignment.

For the same reasons, the kind of feedback we give students on smaller steps has its own (even if ungraded) rubric. For example, if a prompt asks students to propose a research question as part of the bigger project, they might get feedback on whether it can be answered by evidence, or whether it has a feasible scope, or who the audience for its findings might be. All of those criteria, in turn, could—and ideally would—later be part of the rubric for the graded project itself. Or perhaps students are submitting earlier, smaller components of an assignment for separate grades; or are expected to submit separate components all together at the end as a portfolio, perhaps together with a cover letter or artist statement.

Using Rubrics Effectively

In the same way that rubrics can facilitate the design phase of an assignment, they can also facilitate the teaching and feedback phases, including of course grading. Here are a few ways this can work in a course:

Discuss the rubric ahead of time with your teaching team. Getting on the same page about what students will be doing and how different parts of the assignment fit together is, in effect, laying out what needs to happen in class and in section, both in terms of what students need to learn and practice, and how the coming days or weeks should be sequenced.

Share the rubric with your students ahead of time. For the same reason it’s ideal for course heads to discuss rubrics with their teaching team, it’s ideal for the teaching team to discuss the rubric with students. Not only does the rubric lay out the different skills students will learn during an assignment and which skills are more or less important for that assignment, it also makes the formative feedback they get along the way more legible as practice on elements of the “bigger assignment.” To be sure, this can’t always happen. Rubrics aren’t always up and running at the beginning of an assignment, and sometimes they emerge more inductively during the feedback and grading process, as instructors take stock of what students have actually submitted. In both cases, later is better than never; there’s no need to make the perfect the enemy of the good. Circulating a rubric at the time you return student work can still be a valuable tool to help students see the relationship between the learning objectives of the assignment and the feedback and grade they’ve received.

Discuss the rubric with your teaching team during the grading process. If your assignment has a rubric, it’s important to make sure that everyone who will be grading is able to use the rubric consistently. Most rubrics aren’t exhaustive, and a great way to see how different graders are handling “real-life” scenarios for an assignment is to have the entire team grade a few samples (including examples that seem more representative of an “A” or a “B”) and compare everyone’s approaches. We suggest scheduling a grade-norming session for your teaching staff.
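
One simple way to run such a session is to collect everyone’s scores on the shared samples and look at per-grader averages and per-paper spread. A minimal sketch with invented numbers; the grader names and scores are hypothetical:

```python
# Hypothetical grade-norming data: three graders each score the same
# four sample papers (out of 100). All names and numbers are invented.
from statistics import mean

scores = {
    "grader_a": [92, 84, 71, 88],
    "grader_b": [95, 80, 75, 90],
    "grader_c": [85, 78, 64, 81],
}

# Per-grader averages reveal systematic severity or leniency...
for grader, s in scores.items():
    print(f"{grader}: mean={mean(s):.1f}")

# ...and per-paper spread flags the samples worth discussing as a team.
for i, paper in enumerate(zip(*scores.values()), start=1):
    print(f"paper {i}: range={max(paper) - min(paper)} points")
```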


Eberly Center

Creating and Using Rubrics

A rubric is a scoring tool that explicitly describes the instructor’s performance expectations for an assignment or piece of work. A rubric identifies:

  • criteria: the aspects of performance (e.g., argument, evidence, clarity) that will be assessed
  • descriptors: the characteristics associated with each dimension (e.g., argument is demonstrable and original, evidence is diverse and compelling)
  • performance levels: a rating scale that identifies students’ level of mastery within each criterion  

Rubrics can be used to provide feedback to students on diverse types of assignments, from papers, projects, and oral presentations to artistic performances and group projects.
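
One way to see these three parts working together is as a small data model. Here is a minimal sketch in Python; the criteria and descriptor text are invented for illustration:

```python
# One way to model the three parts of a rubric (criteria, descriptors,
# performance levels) as plain data. All names and text are illustrative.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str                    # aspect of performance, e.g., "argument"
    descriptors: dict[str, str]  # performance level -> characteristics

rubric = [
    Criterion("argument", {
        "exemplary": "demonstrable, original, and sustained",
        "proficient": "demonstrable but conventional",
        "developing": "asserted rather than demonstrated",
    }),
    Criterion("evidence", {
        "exemplary": "diverse and compelling",
        "proficient": "relevant but narrow",
        "developing": "sparse or tangential",
    }),
]

for c in rubric:
    print(c.name, "->", c.descriptors["proficient"])
```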

Benefitting from Rubrics

For instructors, rubrics:

  • reduce the time spent grading by allowing instructors to refer to a substantive description without writing long comments
  • help instructors more clearly identify strengths and weaknesses across an entire class and adjust their instruction appropriately
  • help to ensure consistency across time and across graders
  • reduce the uncertainty which can accompany grading
  • discourage complaints about grades

For students, rubrics help them:

  • understand instructors’ expectations and standards
  • use instructor feedback to improve their performance
  • monitor and assess their progress as they work towards clearly indicated goals
  • recognize their strengths and weaknesses and direct their efforts accordingly

Examples of Rubrics

Here we provide a sample set of rubrics designed by faculty at Carnegie Mellon and other institutions. Although your particular field of study or type of assessment may not be represented, viewing a rubric that is designed for a similar assessment may give you ideas for the kinds of criteria, descriptions, and performance levels you might use in your own rubric.

Paper Assignments

  • Example 1: Philosophy Paper. This rubric was designed for student papers in a range of courses in philosophy (Carnegie Mellon).
  • Example 2: Psychology Assignment. Short, concept-application homework assignment in cognitive psychology (Carnegie Mellon).
  • Example 3: Anthropology Writing Assignments. This rubric was designed for a series of short writing assignments in anthropology (Carnegie Mellon).
  • Example 4: History Research Paper. This rubric was designed for essays and research papers in history (Carnegie Mellon).

Projects

  • Example 1: Capstone Project in Design. This rubric describes the components and standards of performance from the research phase to the final presentation for a senior capstone project in design (Carnegie Mellon).
  • Example 2: Engineering Design Project. This rubric describes performance standards for three aspects of a team project: research and design, communication, and teamwork.

Oral Presentations

  • Example 1: Oral Exam This rubric describes a set of components and standards for assessing performance on an oral exam in an upper-division course in history (Carnegie Mellon).
  • Example 2: Oral Communication This rubric is adapted from Huba and Freed, 2000.
  • Example 3: Group Presentations This rubric describes a set of components and standards for assessing group presentations in history (Carnegie Mellon).

Class Participation/Contributions

  • Example 1: Discussion Class This rubric assesses the quality of student contributions to class discussions. This is appropriate for an undergraduate-level course (Carnegie Mellon).
  • Example 2: Advanced Seminar This rubric is designed for assessing discussion performance in an advanced undergraduate or graduate seminar.

See also " Examples and Tools " section of this site for more rubrics.


Center for Teaching and Learning


Creating and Using Rubrics

Rubrics are both assessment tools for faculty and learning tools for students, and they can ease anxiety about the grading process for both parties. Rubrics lay out specific criteria and performance expectations for an assignment; they help students and instructors stay focused on those expectations and be more confident in their work as a result. Creating rubrics does require a substantial time investment up front, but this process will result in reduced time spent grading or explaining assignment criteria down the road.

Reasons for Using Rubrics

Research indicates a number of benefits of using rubrics:

  • Rubrics can help normalize the work of multiple graders, e.g., across different sections of a single course or in large lecture courses where TAs manage labs or discussion groups.
  • Well-crafted rubrics can reduce the time that faculty spend grading assignments.
  • Timely feedback has a positive impact on the learning process.
  • When coupled with other forms of feedback (e.g., brief, individualized comments), rubrics show students how to improve.
  • By giving students a clear sense of what constitutes different levels of performance, rubrics can make self- and peer-assessments more meaningful and effective.
  • If students complete an assignment with a rubric as a guide, they are better equipped to think critically about their work and to improve it.
  • Rubrics establish, in great detail, what different levels of student work look like. If students have seen an assignment rubric in advance and know that they will be held accountable to it, defending grade decisions can be much easier.

Tips for Creating Effective Rubrics

  • To create performance descriptions for a new rubric, first rank student responses to an assignment from best to mediocre to worst. Read back through the assignments in that order. Record the characteristics that define student work at each of the three levels. Use your notes to craft the performance descriptions for each criteria category of your new rubric.
  • Alternately, start by drafting your high and low performance descriptions for each criteria category, then fill in the mid-range descriptions.
  • Use the language of your assignment prompt in your rubric.
  • Consider rubric language carefully: how do you encapsulate the range of student responses that could realistically fall in a given cell? Liberal use of “and/or” statements can help, e.g., “Introduction and/or conclusion handled well but may leave some points unaddressed”; “Sources may be improperly cited or may be missing.”
  • Choose performance-level labels that fit your assignment and students, for example:
  • Completely Effective, Reasonably Effective, Ineffective
  • Superb, Strong, Acceptable, Weak
  • Compelling, Reasonable, Basic
  • Advanced, Intermediate, Novice
  • Proficient, Not Yet Proficient, Beginning
  • Outstanding, Very Good, Good, Basic, Unsatisfactory
  • Exemplary, Proficient, Competent, Developing, Beginning

Tips for Testing and Revising Rubrics

  • Score sample assignments without a rubric and then with one. Compare the results. Ask a colleague to use your rubric to do the same.
  • Ask a colleague to use your rubric to score student work you've already scored with the rubric and then compare results.
  • Get your colleagues' feedback on the alignment of your rubric's grading criteria with your assignment and course-level learning objectives.
  • Discuss your rubrics with your students and determine what they do and do not like or understand about them.

Tips for Using Rubrics

  • Create a generic rubric template that you can modify for specific assignments.
  • Keep the rubric to one page if at all possible. Give the rubric a descriptive title that clearly links it to the assignment prompt and/or digital grade book.
  • Give the rubric to students in advance (i.e., with the related assignment prompt) and discuss it with them. Explain the purpose of the rubric, and require students to use the rubric for self-assessment and to reflect on process.
  • Allow students to score example work with the rubric before attempting actual peer- or self-review. Discuss with the students how the example work correlates to the competency levels on the rubric.
  • Consider engaging in active-learning, rubric development exercises with your students. Have your students help you identify relevant assignment components or develop drafts of your performance descriptions, etc.
  • When returning work to students, only highlight those portions of the rubric text that are relevant.
  • Couple rubrics with other measures or forms of feedback. Giving brief additional feedback that responds holistically and/or subjectively to student work is a good way to support formative assessment.
  • Include relevant learning objectives on your rubrics and/or related assignment prompts.
  • To document trends in your teaching, keep copies of rubrics that you return to students and review them later on. Analyzing groups of graded rubrics over time can give you a sense of what might be weak in your teaching and what you need to focus on in the future.
  • Canvas has a built-in rubric tool.
  • iRubric can also be used to create rubrics in Canvas (availability varies by department).

Online Resources

  • Rubrics resource page from the Eberly Center at Carnegie Mellon University (includes several discipline-specific examples)
  • Sample Rubrics from the Association for the Assessment of Learning in Higher Education
  • Association of American Colleges and Universities VALUE (Valid Assessment of Learning in Undergraduate Education) Rubrics
  • Holistic Essay-Grading Rubrics at the University of Georgia, Athens
  • Quality Matters Rubric for Assessing University-Level Online and Blended Courses  (Seventh Edition)
  • iRubric Tool and Samples
  • Canvas Guides on Rubrics:
  • Creating a rubric
  • Editing a rubric
  • Managing course rubrics
  • Rubrics in Speedgrader


Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e., what learning objectives does it measure)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks as important as the assignment as a whole?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric considers all the criteria (such as clarity, organization, mechanics, etc.) together in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove a focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: requires more work for instructors writing feedback
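
To make the contrast concrete, here is a minimal sketch of how a single-point rubric row might be represented as data: only the proficient level is described, and open-ended fields hold the individualized comments. The criterion and descriptor text are invented for illustration:

```python
# A sketch of a single-point rubric row: only the "proficient" level is
# described, and open-ended fields hold individualized comments.
# The criterion and descriptor text below are illustrative only.
from dataclasses import dataclass

@dataclass
class SinglePointRow:
    criterion: str
    proficient: str        # the one described performance level
    concerns: str = ""     # where this student's work falls short
    advanced: str = ""     # where it exceeds the descriptor

row = SinglePointRow(
    criterion="organization",
    proficient="Sections follow a logical order with clear transitions.",
)
row.concerns = "Transitions between sections 2 and 3 are abrupt."
print(row)
```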

Step 3 (Optional): Look for templates and examples.

You might Google, “Rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but work through steps 4, 5, and 6 below to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc. for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Test the criteria you have drafted:
  • Can they be observed and measured?
  • Are they important and essential?
  • Are they distinct from other criteria?
  • Are they phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some are more important than others, and how you will weight them.

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels means more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels. (The sketch below shows how ratings on a scale like this can combine with criterion weights into a final score.)
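
To make the arithmetic behind weighted criteria and rating levels concrete, here is a minimal sketch; the criterion names, weights, and ratings are invented for illustration:

```python
# Hypothetical weighted analytic scoring: each criterion carries a weight
# (summing to 1.0) and receives a rating on a 1-5 scale.
weights = {"argument": 0.4, "evidence": 0.4, "mechanics": 0.2}
ratings = {"argument": 4, "evidence": 5, "mechanics": 3}   # 1-5 scale

score = sum(weights[c] * ratings[c] for c in weights)      # weighted mean
percent = score / 5 * 100
print(f"weighted score: {score:.2f}/5 ({percent:.0f}%)")   # 4.20/5 (84%)
```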

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT can be useful for creating a rubric. You will want to engineer the prompt that you provide the AI assistant to ensure you get what you want. For example, you might provide the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.
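
As one illustration of such prompt engineering, the sketch below assembles a rubric-generation prompt from those three ingredients. The assignment text, criteria, and level count are placeholders, and the prompt wording is only a suggestion:

```python
# Assemble a rubric-generation prompt for an AI assistant from the
# assignment description, desired criteria, and number of levels.
# All of the specifics below are placeholders.
assignment = "A 1,500-word persuasive essay on a contemporary policy issue."
criteria = ["thesis", "evidence", "organization", "style"]
n_levels = 4

prompt = (
    f"Create a {n_levels}-level analytic grading rubric for this assignment:\n"
    f"{assignment}\n"
    f"Use these criteria: {', '.join(criteria)}.\n"
    "Write observable, measurable descriptors in parallel language."
)
print(prompt)  # paste into the AI tool, then revise the draft it returns
```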

Building a rubric from scratch

For a single-point rubric , describe what would be considered “proficient,” i.e. B-level work, and provide that description. You might also include suggestions for students outside of the actual rubric about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs. absence, complete vs. incomplete, many vs. none, major vs. minor, consistent vs. inconsistent, always vs. never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric , do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric
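
If you draft in a spreadsheet, a few lines of code can also generate the starting file. A minimal sketch, with illustrative criteria and descriptor text (the result still has to be typed into Moodle by hand):

```python
# Draft a rubric as a CSV that opens as a spreadsheet; the criteria,
# levels, and descriptor text are illustrative placeholders.
import csv

levels = ["Exemplary", "Proficient", "Developing"]
rows = [
    ["Criterion", *levels],
    ["Argument", "Original and sustained", "Clear but conventional",
     "Asserted rather than shown"],
    ["Evidence", "Diverse and compelling", "Relevant but narrow",
     "Sparse or tangential"],
]

with open("rubric_draft.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```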

Step 8: Pilot-test your rubric

Prior to implementing your rubric on a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students . Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” you might describe what makes a resource excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Example of a holistic rubric for a final paper

Single-point rubric

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics .
  • Gonzalez, J. (2014). Know your terms: Holistic, Analytic, and Single-Point Rubrics . Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: tips for designing and using rubrics.
  • Ragupathi, K., Lee, A. (2020). Beyond Fairness and Consistency in Grading: The Role of Rubrics in Higher Education. In: Sanger, C., Gleason, N. (eds) Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.


Responding, Evaluating, Grading

Rubric for a Research Proposal

Matthew Pearson - Writing Across the Curriculum


J Undergrad Neurosci Educ, 15(1), Fall 2016

Using Rubrics as a Scientific Writing Instructional Method in Early Stage Undergraduate Neuroscience Study

Erin B.D. Clabough

1 Biology Department, Hampden-Sydney College, Hampden-Sydney, VA 23943

2 Biology Department, Randolph-Macon College, Ashland, VA 23005

Seth W. Clabough

3 Communication Center/English Department, Randolph-Macon College, Ashland, VA 23005


Scientific writing is an important communication and learning tool in neuroscience, yet it is a skill not adequately cultivated in introductory undergraduate science courses. Proficient, confident scientific writers are produced by providing specific knowledge about the writing process, combined with a clear student understanding about how to think about writing (also known as metacognition). We developed a rubric for evaluating scientific papers and assessed different methods of using the rubric in inquiry-based introductory biology classrooms. Students were either 1) given the rubric alone, 2) given the rubric, but also required to visit a biology subject tutor for paper assistance, or 3) asked to self-grade paper components using the rubric. Students who were required to use a peer tutor had more negative attitudes towards scientific writing, while students who used the rubric alone reported more confidence in their science writing skills by the conclusion of the semester. Overall, students rated the use of an example paper or grading rubric as the most effective ways of teaching scientific writing, while rating peer review as ineffective. Our paper describes a concrete, simple method of infusing scientific writing into inquiry-based science classes, and provides clear avenues to enhance communication and scientific writing skills in entry-level classes through the use of a rubric or example paper, with the goal of producing students capable of performing at a higher level in upper level neuroscience classes and independent research.

Introductory biology courses frequently serve as the foundational course for undergraduates interested in pursuing neuroscience as a career. It is therefore important that neuroscience professors remain aware of the sweeping revisions to undergraduate biology education that continue to be implemented ( Woodin et al., 2009 ; Labov et al., 2010 ; Goldey et al ., 2012 ). Recommendations for these changes are summarized in The American Association for the Advancement of Science’s (AAAS) publication Vision and Change in Undergraduate Biology Education: A Call to Action, which provides a blueprint for massive change in the way that students are introduced to biology ( AAAS, 2009 ). This new perspective encourages a focus on learning and applying the scientific method to a real and present problem that needs to be solved, whereas factual content is deemphasized.

Scientific writing competence is a crucial part of neuroscience education, and is a skill that is partly about process, partly about providing evidence, and lastly about constructing a careful argument. Requiring students to both catalog and reflect on their own work by constructing research papers allows students to experience yet another facet of a scientist’s job description.

As our undergraduate biology classes move away from facts and towards process, we are left with the very real opportunity to teach future neuroscientists how to write up the experiments that they have constructed and run in our classes. As a result, introductory biology classrooms provide an ideal environment for science writing instruction that can serve as the foundation for the writing students will do in upper level neuroscience courses.

Writing as a Teaching Tool

Undergraduate neuroscience faculty should note that writing about science has more benefits than simply honing communication skills or reflecting on information. Previous research shows that the incorporation of writing elements into laboratory content enhances students’ critical thinking abilities ( Quitadamo and Kurtz, 2007 ). Obviously, learning-to-write strategies have been embraced by educators for many years, but writing-to-learn strategies are not as commonly used in the fields of math and science, primarily due to a lack of awareness by science, technology, engineering, and mathematics (STEM) educators about how writing can actually cause learning to occur. In particular, assignments that require the writer to articulate a reasoned argument are a particularly effective way to use writing-to-learn. Advocates of writing-to-learn strategies promote the merging of interpretative methods and rubrics (used so often in the humanities) with the hypothesis testing and experimental design that typically occurs in STEM fields to create a type of hybrid research paradigm ( Reynolds et al., 2012 ), and a more holistic approach.

Making Scientific Writing Competence Part of the Introductory Biology Curriculum

The nature of scientific writing is different from traditional essay or persuasive writing, so providing specialized science writing instruction as early as possible in a young scientist’s career is valuable even at institutions that mandate first year writing competence with a required core curriculum. If general undergraduate biology courses teach students the elements of good scientific writing and how to properly format a paper, future neuroscience students are much better prepared to tackle more difficult scientific content in upper-level courses, and are better able to communicate what they find in their own research. In addition, teaching science writing in a way that appeals to young scientists may help with attrition rates for majors.

Teaching students to proficiently write all sections of a scientific paper also teaches students about the different forms of communication that are essential to both scientists and to engaged citizens ( Bennett, 2008 ). For example, the content of an abstract is similar to a news brief or could serve as a summary to inform a potential research student about what has been happening in the lab. The content of an introduction section justifies the scientific work, which is a key element in a successful grant proposal. Writing a thoughtful discussion shows that the researcher has selected the next logical experiment based on the results. Crafting a discussion that considers how the project fits into the global science community is particularly important for the introductory biology student who is taking the course just to fulfill their lab requirement, and may never sit in another science class again.

What is the Best Way to Teach Scientific Writing?

Given the importance of effective science communication ( Brownell et al., 2013a ), it is surprising that more resources and effort are not channeled toward teaching scientific writing to undergraduate students. There are multiple views on the most effective way to teach writing in a science classroom ( Bennett, 2008 ; Reynolds and Thompson, 2011 ; Reynolds et al., 2012 ). Working in teams is a recommended strategy ( Singh and Mayer, 2014 ) and many methods incorporate classmate peer review to evaluate student writing ( Woodget, 2003 ; Prichard, 2005 ; Blair et al., 2007 ; Hartberg et al., 2008 ). Writing instructional methods that target scientific subjects have a history of success—for example, weaving elements of writing throughout a Neuroimmunology class ( Brownell et al., 2013b ), asking Neurobiology/Cell Biology students to write NSF-style grants ( Itagaki, 2013 ) or using a calibrated peer-review writing-to-learn process in Neuroscience classes ( Prichard, 2005 ).

Methods that emphasize understanding primary scientific literature typically focus on thesis writing ( Reynolds and Thompson, 2011 ), the reading and discussion of landmark published peer-reviewed journal articles as an example of the correct way to write up scientific results ( Hoskins et al., 2011 ; Segura-Totten and Dalman, 2013 ), or require students to actually write or submit their own articles to a peer-reviewed journal to experience the peer-review process first-hand ( Jones et al., 2011 ). These methods typically work well to teach writing to upperclassmen, but may prove unwieldy for use in the general curriculum or for entry-level scientists. Use of a specific paper construction method can effectively help novice writers include required elements and get to a finished project ( O’Connor and Holmquist, 2009 ), but more detailed expectations for content and style will be required for students in an introductory course.

Unfortunately for many undergraduate science writers, the first real attempt at scientific writing often happens during the undergraduate thesis, typically written as a senior, and students are commonly left to learn scientific writing on their own ( O’Connor and Holmquist, 2009 ). It only seems reasonable that teachers should prepare their students to write an effective, culminating thesis well before the capstone coursework and research commences. Previous work showed that integrating science writing into an undergraduate psychology course over a year-long period resulted in improved student writing ability ( Holstein et al., 2015 ). So how can underclassmen be taught scientific writing within a single semester?

Use of Rubrics to Teach Scientific Writing

The use of rubrics in STEM fields is not a new idea, and a grading rubric serves several simultaneously useful functions. First, it clearly communicates assignment requirements and sets uniform standards for student success, while eliminating unintentional bias in the faculty grading process. Next, it can be extremely useful in finding areas that the students still need help on and targeting future instruction accordingly. The rubric can also serve as a tool to create a more effective peer review process, if the instructor chooses to use it in this way. And lastly, the rubric sharpens the teacher’s ideas about what he/she is looking for before the material is taught, possibly making for more effective instruction. A detailed outline can facilitate the writing process ( Frey, 2003 ), and a detailed rubric may function in a similar manner, as it provides a scaffold to write the entire paper.

Previous research shows that rubrics can augment students’ ability to use medical terminology correctly ( Rawson et al., 2005 ) and can improve students’ ability to critically evaluate scientific studies ( Dawn et al., 2011 ). Use of a grading rubric has proven a reliable way to evaluate lab reports in large university settings using graduate teaching assistants across numerous sub-disciplines ( Timmerman et al., 2010 ).

Informal assessment during previous semesters running an inquiry-based classroom revealed that some students with no previous active learning experiences can struggle with the lack of a textbook, the idea that process can be more important than content, and what they perceive as a lack of concrete items to memorize (personal observation, E. Clabough). In response to student feedback, rubrics were developed to provide very concrete methods of grading and assessment for items like oral presentations, lab notebooks, and writing assignments.

When presented with new material, the learning brain seeks out patterns as it processes information. Because a rubric provides structure and pattern to this process, it not only assists students with organizational strategies, but also reflects the way the brain actually learns ( Willis, 2010 ). Use of carefully designed rubrics can increase executive functioning in students, including skills such as organizing, prioritizing, analyzing, comparing/contrasting, and goal setting ( Carter, 2000 ). Requiring students to use the rubrics to make decisions about the material while self-grading may further tap into executive functions during the learning process.

Peer Tutoring to Enhance Science Writing Competence

Peer tutoring places a peer in the role of instructor in a one-on-one setting with a fellow student. The role of the peer tutor is to elucidate concepts, to provide individualized instruction, and to allow the tutee to practice manipulating the subject matter. Numerous studies have established the link between this form of tutoring and improved academic performance for tutees, which is measurable in a variety of subjects including reading, math, social studies and science ( Utley and Monweet, 1997 ; Greenwood et al., 1992 ; Bowman-Perrott et al., 2013 ). The effectiveness of using peer tutoring to teach science writing to undergraduates has been under-examined, and to our knowledge, this is the first study to combine this approach with the use of a grading rubric.

The current experiment explored different ways to teach scientific writing to undergraduate students by incorporating a detailed grading rubric into established inquiry-based undergraduate biology classrooms over the course of a semester. All students were provided with scientific writing rubrics, though some students received additional peer tutoring. We did not directly measure instructional success, but the quality of scientific papers was assessed as a routine part of the course and compared against the attitudes that students had towards science writing in general. Student attitudes about the effectiveness of different ways to teach writing were also measured.

MATERIALS AND METHODS

Course design.

Randolph-Macon College (R-MC) is a small liberal arts college that converted its introductory biology classes into an inquiry-based learning format in 2010. Two semesters of the module-based Integrative Biology are offered and students may take them in any order. The current experiment was performed in these Integrative Biology (BIOL122) classrooms, which were run as a combination lecture/lab course broken into three separate instructional modules over the course of a semester. Short 20–30 minute lectures were interspersed with experiment brainstorming, experiment execution, hands-on class activities, statistics, and paper writing exercises. The three-hour courses met twice weekly throughout the semester, and were taught by the same professor (E. Clabough). Undergraduate students were primarily freshmen and sophomores and the course was open to both biology majors and non-majors.

Students were expected to design, perform, and analyze their own experiments in groups using the provided module organisms. Students were broken into small groups of 3–4 students to work as lab teams. Individual papers were written at the conclusion of each of the three modules. Module 1 explored the molecular biology of energy in mouse mitochondrial isolates. Students assessed if a redox dye could substitute for the enzymes within the mitochondrial membrane, and used a colorimeter to assess whether or not an electron was successfully passed to cytochrome C in the preparations. Module 2 centered on genetics using commercially available alcohol dehydrogenase (ADH) Drosophila mutants. Students used an inebriometer to measure the susceptibility of ADH mutant and wild-type flies to ethanol vapors. Module 3 looked at vertebrate development using a zebrafish fetal alcohol paradigm. Students exposed developing embryos to various ethanol concentrations and measured response variables of their own choosing, including body size, heartbeat, and behavioral measures.

Scientific Writing Experimental Conditions

Scientific writing was taught in chunks to the students as the course progressed ( Table 1 ). Each student was expected to individually write a lab paper at the conclusion of each module in order to communicate that module’s experiments. The Module 1 paper consisted of the title page, methods, results, and references. The Module 2 paper consisted of the title page, introduction, methods and results, discussion, and references. The Module 3 paper was formatted as an entire article, complete with title page, abstract, introduction, methods, results, discussion, and references. Some paper elements, particularly at the beginning of the semester, went through several rough drafts before the final module paper was due.

Timetable for teaching scientific writing. Scientific writing content, format, rubrics, and assignments were introduced using a specific timeline throughout the module-based Integrative Biology course. Three separate scientific papers were assigned based on class experimental results. The rubric had eight distinct components that were utilized as needed throughout the semester. Each rubric component was handed out at the time the students were assigned that particular element of the paper. A summary rubric was also handed out before each final paper.

Sections were randomized to one of three experimental conditions—Rubric Only, Rubric + Tutor or Self-Grade Rubric—using a random number generator. Each condition centered on a different use of the same grading rubric for scientific writing. Since it is not practical to withhold a rubric from one section of a multi-section course, all sections had access to the exact same rubric. The first group (n=16) served as a Rubric Only control group. Individual paper element rubrics were handed out to students when each element was introduced during class, and the instructor went over each rubric in detail for all classes. Students were told to consult the rubrics before turning in their drafts or final papers. In addition, a rubric summarizing the upcoming paper requirements (see Supplementary Material ) was handed out approximately a week before each module paper was due.

The second group, Rubric + Tutor (n=14), received the rubrics and peer tutoring. This group was given rubrics, but was also required to use tutoring services at least one time for each module paper (three times over the course of the semester). Due to the specific formatting and content requirements of a scientific paper, participants were tutored by biology subject tutors rather than the writing center tutors. The three biology tutors were upper-class biology majors, nominated by faculty, and employed by the academic center at R-MC. These tutors demonstrated outstanding competence in their courses of study and had undergone a tutoring training program that is nationally certified by the College Reading and Learning Association (CRLA). In addition, the biology subject tutors had all taken Integrative Biology at R-MC.

Biology subject tutors (2 female and 1 male) had designated weekly hours for drop-ins or appointments, generally in the evenings. At the beginning of the semester, the instructor met with the biology subject tutors and informed them of the experiment, provided them with the grading rubrics and paper due dates, and asked for a log of upcoming student sessions. Ongoing contact was kept between the instructor and the subject tutors throughout the semester.

The third group, Self-Grade rubric (n=14), received the same grading rubrics, but used them in a different way. They were given the relevant rubrics, but instead of having the instructor go over the rubrics, this group was asked to make decisions about whether or not their own assignments fell in line with the rubric requirements during class. Students were asked to grade their own drafts, as well as other students’ drafts throughout the semester. For this peer-review, each student used the rubric to grade two other students’ drafts during class and immediately communicated the grading results one-on-one with the writer.

Many students in this study had previously taken the first semester of Integrative Biology (86% of the students in the Rubric Only section, 92% of the Rubric + Tutor group, and 40% of the Self-Grade Rubric section). These students had exposure to and practice with scientific writing, since students in both semesters are required to write scientific papers, so this difference may alter interpretation of between-groups differences. Students enrolled in the Rubric Only section reported an average self-reported estimated GPA of 2.69 and the class was composed of 84% freshmen. Students in the Rubric + Tutoring section were also mostly freshmen (92%), who reported an average GPA of 2.83, while the Self-Grade Rubric section contained more upperclassmen (60% freshmen), and self-reported an average GPA of 2.46. GPA was not statistically different between groups.

Scientific Writing Evaluation Rubrics and Tutors

Rubrics were designed using a point system for each required paper element (to total approximately 70% of the overall score), and overall paper writing style/format was weighted as approximately 30% of the overall paper grade (see Supplementary Material ). All students were encouraged to use the biology subject tutors as a valuable college resource, although it was only compulsory for students in the Rubric + Tutor group to visit the tutors.

Scientific Writing Attitudes and Perceived Competence Assessment

At the beginning of the semester, all students completed a Likert-based questionnaire ( Likert, 1932 ) which explored their attitudes towards writing in science, as well as how relevant they felt effective writing is to good science. The questionnaire also collected information about how students personally assessed their own competence in writing overall, as well as in science writing, and their perceptions about the effectiveness of different ways to teach scientific writing. The same questionnaires were given again to students during the final week of classes (see Supplementary Material ).

Data Analysis

The writing attitude and perceived competence questionnaire was examined for meaningful change between and within groups to look for differences in the assessment of scientific writing importance or in writer confidence. The mean and SEM were calculated for each Likert style question. After ensuring that the data met the requirements to use a parametric statistic (data were normally distributed, groups had equal variance, there were at least five levels to the ordinal scale, and there were no extreme scores), data were analyzed using ANOVA, followed by t-tests for pairwise comparisons. One pre-assessment question had high variance as measured by standard error, so the Kruskal-Wallis test was used in that instance. The responses were anonymous within each group, so it was not possible to track changes within individual students, but t-tests were also performed to detect differences in each group between the first and last weeks of class.
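
For readers who want to mirror this pipeline, here is a minimal sketch using SciPy with invented Likert responses; the group sizes match the study’s n = 16, 14, and 14, but the numbers themselves are made up:

```python
# Illustrative re-creation of the analysis pipeline with invented data:
# one-way ANOVA across the three conditions, a pairwise t-test, and
# Kruskal-Wallis as the nonparametric alternative.
from scipy import stats

rubric_only  = [4, 3, 5, 4, 4, 3, 5, 4, 2, 4, 5, 3, 4, 4, 3, 4]  # n=16
rubric_tutor = [3, 2, 4, 3, 3, 2, 4, 3, 3, 2, 3, 4, 2, 3]        # n=14
self_grade   = [4, 3, 4, 3, 4, 5, 3, 4, 3, 4, 4, 3, 5, 4]        # n=14

# Omnibus test across the three conditions.
anova = stats.f_oneway(rubric_only, rubric_tutor, self_grade)
print(f"ANOVA: F={anova.statistic:.2f}, p={anova.pvalue:.4f}")

# Pairwise follow-up (unpaired, since responses were anonymous).
tt = stats.ttest_ind(rubric_only, rubric_tutor)
print(f"t-test: t={tt.statistic:.2f}, p={tt.pvalue:.4f}")

# Nonparametric alternative used when a question had high variance.
kw = stats.kruskal(rubric_only, rubric_tutor, self_grade)
print(f"Kruskal-Wallis: H={kw.statistic:.2f}, p={kw.pvalue:.4f}")
```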

Although writing performance was not the primary objective of the study, the rubric was used to grade the scientific reports to determine a paper score for each of the three module papers as a part of the course. Papers for all experimental groups were mixed together for grading by the class instructor, though the instructor was not blind to their identity. Because each module paper required that students demonstrate competency writing new parts of a scientific paper, overall paper scores were calculated across the semester. Papers were worth more points as the semester progressed and more paper sections were added (Paper 1: 50 points, Paper 2: 60 points, Paper 3: 100 points). Differences between groups in overall paper scores were collected (total points accumulated over the three papers) and analyzed using an ANOVA.

Biology Subject Tutor Use

In the Rubric + Tutor group, 78.6% of students visited the tutors, averaging 2.3 visits per student. Tutoring hours and services were advertised to all students as a valuable paper-writing resource, but only 20% of the Self-Grade Rubric class, and none of the Rubric Only class, visited the tutors at any point during the semester. During the study semester, a total of 19 students visited the biology subject tutors a total of 44 times campus-wide, an increase from the prior semester, when just 10 students used the tutors a total of 23 times.

Scientific Writing Rubric Use

Inter-rater reliability was calculated from a random sample of student papers scored by two independent raters with different educational backgrounds (one rater held a Ph.D. in a scientific discipline, the other a Ph.D. in English). Reliability for overall paper scores was high (ICC r = 0.8644; Table 2).

Table 2. Rubric Reliability. The intraclass correlation coefficient (ICC) was calculated to determine rubric reliability. Seven final papers were randomly selected and scored by two independent raters. The ICC provides a measure of agreement, or concordance, between raters, where 1 represents perfect agreement and 0 represents no agreement. ICC values were calculated for the individual paper elements as well as for the overall paper, and were interpreted as follows: 0–0.2 indicates poor agreement, 0.3–0.4 fair, 0.5–0.6 moderate, 0.7–0.8 strong, and 0.8–1.0 near-perfect agreement.
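
The caption does not say which ICC form was used, so as an illustration the sketch below computes the common two-way consistency form, ICC(3,1), from ANOVA mean squares for an n-papers-by-k-raters score matrix. The seven-paper scores are hypothetical.

```python
# Two-way consistency ICC(3,1) for n targets (papers) scored by k raters,
# computed from ANOVA mean squares. Paper scores below are hypothetical.
import numpy as np

def icc_consistency(scores: np.ndarray) -> float:
    """ICC(3,1): rows = papers (targets), columns = raters."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between papers
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_total = ((scores - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

papers = np.array([[88, 90], [75, 73], [92, 95], [64, 60],
                   [81, 84], [70, 71], [85, 83]], dtype=float)
print(round(icc_consistency(papers), 3))
```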

The rubrics worked well as a grading tool for the instructor, taking about 10–15 minutes per paper. One student paper was inadvertently shuffled to the bottom of the pile and unknowingly re-graded; remarkably, it received the same 87.5% score on the second grading as it had on the first. Use of the rubric made it easier to discuss papers with individual students when a grade inquiry arose, and it eliminated the need to write extensive comments on each paper. Biology subject tutors reported that they used the rubrics during tutoring sessions but felt they concentrated primarily on grammar and sentence structure with students.

Student Writing Performance

Although writing performance was not the primary focus of this study, no significant difference was found between the Rubric Only, Rubric + Tutor, and Self-Grade Rubric groups in overall paper writing scores, calculated by summing all scientific writing points over the semester (by ANOVA; p = 0.096), nor was there a difference in final paper scores (by ANOVA; p = 0.068).

Attitude Change within Groups

No significant changes were seen within any group between pre- and post-assessment answers on the Scientific Writing Attitudes questionnaire, with one exception: significantly more students in the Rubric Only group disagreed with the statement “I am good at writing in general but not good at science writing” at the end of the semester compared to the beginning (by t-test; p = 0.0431; pre-mean = 3.14 ± 0.275; post-mean = 2.375 ± 0.24, where 1 is strongly disagree and 5 is strongly agree) (Figure 1).

Figure 1. Significantly more students in the Rubric Only group disagreed with the statement “I am good at writing in general but not good at science writing” at the end of the semester compared to the beginning (by t-test; p = 0.0431; pre-mean = 3.14 ± 0.275; post-mean = 2.375 ± 0.24). No other group displayed a significant pre-course vs. post-course difference. Data depict student responses on the Likert questionnaire, where 1 is strongly disagree and 5 is strongly agree.

Attitude Differences between Rubric Groups

Significant differences between the groups were detected in the post-questionnaire answers for several of the writing attitude and perceived competence questions, with the Rubric + Tutor group holding significantly more negative attitudes toward scientific writing on several items. On average, more students in the Rubric + Tutor group agreed with the post-statement “Scientific writing is boring” (by ANOVA; p = 0.016; mean of Rubric Only group 2.25 ± 0.28; mean of Rubric + Tutor group 3.36 ± 0.27; mean of Self-Grade Rubric group 2.43 ± 0.27) (Figure 2). This difference was not detected during the pre-assessment (by ANOVA; p = 0.46).

Figure 2. More students in the Rubric + Tutor group agreed with the post-statement “Scientific writing is boring.” Data depict student responses on the Likert questionnaire administered at the conclusion of the semester, where 1 is strongly disagree and 5 is strongly agree (by ANOVA; p = 0.016; mean of Rubric Only group 2.25 ± SEM 0.28; mean of Rubric + Tutor group 3.36 ± 0.27; mean of Self-Grade Rubric group 2.43 ± 0.27).

On average, more students in the Rubric + Tutor group agreed with the post-statement “I feel like scientific writing is confusing” (by ANOVA; p = 0.021; mean of Rubric Only group 2.69 ± 0.30; mean of Rubric + Tutor group 3.71 ± 0.29; mean of Self-Grade Rubric group 2.71 ± 0.24) (Figure 3). This difference was not detected during the pre-assessment (by ANOVA; p = 0.96).

Figure 3. More students in the Rubric + Tutor group agreed with the post-statement “I feel like scientific writing is confusing.” Data depict student responses on the Likert questionnaire administered at the conclusion of the semester, where 1 is strongly disagree and 5 is strongly agree (by ANOVA; p = 0.021; mean of Rubric Only group 2.69 ± SEM 0.30; mean of Rubric + Tutor group 3.71 ± 0.29; mean of Self-Grade Rubric group 2.71 ± 0.24).

Significantly more students in the Rubric + Tutor group also agreed with the post-statement “I would enjoy science more if I didn’t have to write up the results” (by ANOVA; p = 0.037; mean of Rubric Only group 2.63 ± 0.29; mean of Rubric + Tutor group 3.6 ± 0.29; mean of Self-Grade Rubric group 2.69 ± 0.33) (Figure 4). This difference was not detected during the pre-assessment (by ANOVA; p = 0.79).

Figure 4. More students in the Rubric + Tutor group agreed with the post-statement “I would enjoy science more if I didn’t have to write up the results.” Data depict student responses on the Likert questionnaire administered at the conclusion of the semester, where 1 is strongly disagree and 5 is strongly agree (by ANOVA; p = 0.037; mean of Rubric Only group 2.63 ± 0.29; mean of Rubric + Tutor group 3.6 ± 0.29; mean of Self-Grade Rubric group 2.69 ± 0.33).

Student Perception of Teaching Tools

The questionnaire also assessed how biology students judged the effectiveness of tools commonly used to teach writing. Students agreed or disagreed with the effectiveness of six methods: working on drafts one-on-one with someone, modeling a paper after an example paper, watching someone else construct a paper from scratch, looking at a detailed grading rubric, participating in small group writing workshops, and listening to a lecture about how to place the experimental elements into the paper. No significant differences were found between any group's pre- and post-semester assessment responses.

When the post-semester assessment responses from all classes were pooled (n = 44), students rated the effectiveness of the scientific writing teaching methods very differently (by ANOVA; p < 0.0001; using an example paper 4.17 ± 0.12; using a detailed rubric 3.98 ± 0.16; listening to a lecture about constructing science papers 3.8 ± 0.99; one-on-one assistance 3.78 ± 0.4; participating in small group workshops 3.63 ± 0.2; watching someone else construct a paper from scratch 3.24 ± 0.17; data shown are means ± SEM, where 1 is strongly disagree with effectiveness and 5 is strongly agree) (Figure 5).

Figure 5. Post-semester assessment showed that students thought the most effective ways to teach scientific writing were 1) using an example paper or 2) using a detailed rubric. Students thought that 1) watching someone else construct a paper from scratch or 2) participating in small group writing workshops were the least effective ways to teach scientific writing (by ANOVA; p < 0.0001; using an example paper 4.17 ± 0.12; using a detailed rubric 3.98 ± 0.16; listening to a lecture about constructing science papers 3.8 ± 0.99; one-on-one assistance 3.78 ± 0.4; participating in small group workshops 3.63 ± 0.2; watching someone else construct a paper from scratch 3.24 ± 0.17; n = 44). Data depict the means ± SEM of student responses on the Likert questionnaire administered at the conclusion of the semester, where 1 is strongly disagree and 5 is strongly agree.

Students rated using an example paper as significantly more effective than listening to a lecture about how to place experimental design elements into a paper (by t-test; p < 0.01), more effective than one-on-one assistance on paper drafts (by t-test, p = 0.02), more effective than participating in small group workshops (by t-test, p < 0.0001), and more effective than watching someone construct a paper from scratch (by t-test, p < 0.0001).

Students rated the use of a rubric as significantly more effective than watching someone construct a paper from scratch (p < 0.001), and more effective than participating in small group workshops (p < 0.0001).

Students also rated participating in small group workshops as less effective than one-on-one assistance on paper drafts (p = 0.02), and less effective than listening to a lecture about paper construction (p = 0.05). In fact, students rated participating in small group workshops as significantly less effective than nearly every other method.

Mean final course grades were not significantly different between the classes, nor did course or instructor evaluation scores differ. The mean class grade for the Rubric Only section was 85.9%, the mean evaluation score for course structure was 4.0 (out of 5), and the mean instructor effectiveness score was 4.43 (out of 5). The mean class grade for the Rubric + Tutor section was 83.7%, the mean evaluation score for course structure was 4.25 (out of 5), and the mean instructor effectiveness score was 4.33 (out of 5). The mean class grade for the Self-Grade Rubric section was 77.9%, the mean evaluation score for course structure was 4.07 (out of 5), and the mean instructor effectiveness score was 4.27 (out of 5).

Discussion

Scientific writing falls under the umbrella of “Ability to Communicate and Collaborate with Other Disciplines,” one of six core competencies in undergraduate biology education (AAAS, 2009). Scientific writing is a skill that can be applied to the discipline of biological practice and is also a key measure of biological literacy. AAAS focus groups involving 231 undergraduates reported that students request more opportunities to develop communication skills, such as writing assignments in class or specific seminars on scientific writing (AAAS, 2009). In 2004, approximately 51% of undergraduate educators who attended past American Society for Microbiology Conferences for Undergraduate Educators (ASMCUE) reported that they introduced more group learning and writing components after attending an ASMCUE conference targeting biology education reform (AAAS, 2009).

Additionally, as we noted in the introduction, scientific writing is an important part of undergraduate neuroscience education because it gives students an opportunity to use writing-to-learn strategies: merging interpretive methods and rubrics with the hypothesis testing and experimental design typical of STEM fields creates a kind of hybrid research paradigm (Reynolds et al., 2012) and a more holistic approach.

As a growing number of schools embrace CURE curricula, instructors will increasingly need to address how to have their students effectively communicate the results of the experiments they perform. Scientific writing is the natural extension of a complete scientific project, requiring students to think clearly about process, argument, and evidence-based conclusions. These competencies are linked to lifelong skills, including critical thinking and perhaps executive functioning.

Undergraduate students in our biology classes believe that the most effective ways to teach scientific writing are providing an example paper, providing a rubric, or giving effective lectures. Interestingly, these are all relatively “hands-off” approaches to learning, suggesting either that students crave more structure in this type of inquiry-based course or that their past experiences with one-on-one tutoring or small-group writing workshops were not ideal. It would be interesting to see whether these attitudes persist in a more traditional lecture classroom format.

Peer Tutoring

Despite boosted confidence, the students who were required to use a peer tutor found scientific writing more boring and less enjoyable than students who were not required to visit a tutor. Peer tutoring, particularly in writing, has a long history of improving paper performance, with mostly positive subjective feedback from students. A student's experience with a peer tutor certainly depends on the tutor's willingness to help and competency in the subject matter, but even with a willing and competent tutor, students may resent what they perceive as an extra assignment (visiting the tutor). Previous studies show an added benefit of self-reported enhanced writing ability in the tutors themselves (Topping, 1996; Roscoe and Chi, 2007), a finding also reflected in informal post-experiment feedback from our tutors.

Tutoring services are a staple offering of most colleges and universities, but tutor training can be relatively general in nature. Tutoring centers could consider developing working relationships between individual science departments and their subject tutors. Departmental faculty members can take a more active role by offering tutor training sessions, instructing tutors about specific ways to support students, and possibly following up with their own assessments to track tutoring outcomes.

Rubrics, Example Papers, and Effective Lectures

We find that undergraduate students in our inquiry-based biology classrooms believe that rubric use is a very effective way to teach science writing. We therefore propose that undergraduate neuroscience faculty consider whether rubrics may fit the needs of beginning science students (and future students in upper-level neuroscience courses) better than the more commonly used peer-review instructional methods. Rubrics are a particularly logical fit for inquiry-based writing instruction: they provide needed structure, they clearly communicate standards for success in the classroom, and students think they are effective teaching tools. Rubrics also remain an important tool across disciplines at all college levels.

Most professors have rubrics they use to assist with their own grading, but many do not share these rubrics with students during the writing process. This is similar to withholding the driver’s manual from a driver’s-ed student who is left to learn by observation or by practicing in the parking lot. Use of the rubric may give students an element of control otherwise missing from an assignment. Prior research shows that learners who are not in a position of power demonstrate poor task performance but do better when they have control over their own learning (Dickinson, 1995; Smith et al., 2008). Although we did not directly compare rubric use with non-rubric use, the perception of control during learning may itself be valuable, as rigorous use of the rubric allows students essentially to pre-determine the grade they will receive on each paper.

Nothing is wrong with teaching students the way they want to be taught, but more research is needed to compare teaching methods. Students stated a preference for “effective lectures” to teach scientific writing, but the characteristics of these “effective lectures” need to be further elucidated. Exposing groups of students to various lecture styles and then administering a writing assessment would allow evaluation of writing performance and let students weigh in on what makes an “effective lecture.” Studies comparing the use of example papers, very specific rubrics, and effective lectures, as well as combinations of the three, would be helpful. It would also be useful to track the responses of students who go on to focus their studies on neuroscience, to see whether their views deviate from or adhere to the findings for the group as a whole.

Despite the frequent use of peer review and tutoring in writing workshops and on classroom rough drafts, we did not find that peer review boosted student perception of writing competence. Students prefer to hold the keys to classroom success in their own hands: a printed rubric or model paper is, in their eyes, more valuable than listening to or talking about writing.

Supplementary Information

Acknowledgments

The authors would like to thank members of the Randolph-Macon Department of Biology, including Jim Foster, Charles Gowan, Grace Lim-Fong, Melanie Gubbels-Bupp, and Ryan Woodcock for sharing their Integrative Biology vision, as well as the Higgins Academic Center, Josh Albert, Megan Jackson, and Alyssa Warren for tutoring support.

References

  • AAAS (American Association for the Advancement of Science). Vision and change in undergraduate biology education: a view for the 21st century. 2009 [accessed 19 February 2014]. http://visionandchange.org/finalreport/
  • Bennett P Jr. Using rubrics to teach science writing. Essays on Teaching Excellence: Toward the Best in the Academy. 2008;20(8).
  • Blair B, Cline G, Bowen W. NSF-style peer review for teaching undergraduate grant-writing. Am Biol Teach. 2007;69:34–37.
  • Bowman-Perrott L, Davis H, Vannest K, Williams L, Greenwood C, Parker R. Academic benefits of peer tutoring: a meta-analytic review of single case research. School Psych Rev. 2013;42:39–55.
  • Brownell SE, Price JV, Steinman L. Science communication to the general public: why we need to teach undergraduate and graduate students this skill as part of their formal scientific training. J Undergrad Neurosci Educ. 2013a;12:E6–E10.
  • Brownell SE, Price JV, Steinman L. A writing-intensive course improves biology undergraduates’ perception and confidence of their abilities to read scientific literature and communicate science. Adv Physiol Educ. 2013b;37:70–79.
  • Carter C. Images in neuroscience. Cognition: executive function. Am J Psychiatry. 2000;157:3.
  • Dawn S, Dominguez KD, Troutman WG, Bond R, Cone C. Instructional scaffolding to improve students’ skills in evaluating clinical literature. Am J Pharm Educ. 2011;75:62.
  • Dickinson L. Autonomy and motivation: a literature review. System. 1995;23:165–174.
  • Frey PA. Guidelines for writing research papers. Biochem Mol Biol Educ. 2003;31:237–241.
  • Goldey ES, Abercrombie CL, Ivy TM, Kusher DI, Moeller JF, Rayner DA, Smith CF, Spivey NW. Biological inquiry: a new course and assessment plan in response to the call to transform undergraduate biology. CBE Life Sci Educ. 2012;11:353–363.
  • Greenwood CR, Terry B, Arreaga-Mayer C, Finney R. The class-wide peer tutoring program: implementation factors moderating students’ achievement. J Appl Behav Anal. 1992;25:101–116.
  • Hartberg Y, Gunersel A, Simpson N, Balester V. Development of student writing in biochemistry using calibrated peer review. Journal of Scholarship of Teaching and Learning. 2008;8:29–44.
  • Holstein SE, Mickley Steinmetz KR, Miles JD. Teaching science writing in an introductory lab course. J Undergrad Neurosci Educ. 2015;13:A101–A109.
  • Hoskins SG, Lopatto D, Stevens LM. The C.R.E.A.T.E. approach to primary literature shifts undergraduates’ self-assessed ability to read and analyze journal articles, attitudes about science, and epistemological beliefs. CBE Life Sci Educ. 2011;10:368–378.
  • Itagaki H. The use of mock NSF-type grant proposals and blind peer review as the capstone assignment in upper-level neurobiology and cell biology courses. J Undergrad Neurosci Educ. 2013;12:A75–A84.
  • Jones LS, Allen L, Cronise K, Juneja N, Kohn R, McClellan K, Miller A, Nazir A, Patel A, Sweitzer SM, Vickery E, Walton A, Young R. Incorporating scientific publishing into an undergraduate neuroscience course: a case study using IMPULSE. J Undergrad Neurosci Educ. 2011;9:A84–A91.
  • Labov JB, Reid AH, Yamamoto KR. Integrated biology and undergraduate science education: a new biology education for the twenty-first century? CBE Life Sci Educ. 2010;9:10–16.
  • Likert R. A technique for the measurement of attitudes. Arch Psychol. 1932;22:5–55.
  • O’Connor TR, Holmquist GP. Algorithm for writing a scientific manuscript. Biochem Mol Biol Educ. 2009;37:344–348.
  • Prichard JR. Writing to learn: an evaluation of the calibrated peer review program in two neuroscience courses. J Undergrad Neurosci Educ. 2005;4:A34–A39.
  • Quitadamo IJ, Kurtz MJ. Learning to improve: using writing to increase critical thinking performance in general education biology. CBE Life Sci Educ. 2007;6:140–154.
  • Rawson RE, Quinlan KM, Cooper BJ, Fewtrell C, Matlow JR. Writing-skills development in the health professions. Teach Learn Med. 2005;17:233–238.
  • Reynolds JA, Thompson RJ Jr. Want to improve undergraduate thesis writing? Engage students and their faculty readers in scientific peer review. CBE Life Sci Educ. 2011;10:209–215.
  • Reynolds JA, Thaiss C, Katkin W, Thompson RJ Jr. Writing-to-learn in undergraduate science education: a community-based, conceptually driven approach. CBE Life Sci Educ. 2012;11:17–25.
  • Roscoe RD, Chi MTH. Understanding tutor learning: knowledge-building and knowledge-telling in peer tutors’ explanations and questions. Rev Educ Res. 2007;77:534–574.
  • Segura-Totten M, Dalman NE. The CREATE method does not result in greater gains in critical thinking than a more traditional method of analyzing the primary literature. J Microbiol Biol Educ. 2013;14:166–175.
  • Singh V, Mayer P. Scientific writing: strategies and tools for students and advisors. Biochem Mol Biol Educ. 2014;42:405–413.
  • Smith PK, Jostmann NB, Galinsky AD, van Dijk WW. Lacking power impairs executive functions. Psychol Sci. 2008;19:441–447.
  • Timmerman B, Strickland DC, Johnson RL, Payne JR. Development of a ‘universal’ rubric for assessing undergraduates’ scientific reasoning skills using scientific writing. Assess Eval High Educ. 2010;36:509–547.
  • Topping KJ. The effectiveness of peer tutoring in further and higher education: a typology and review of the literature. Higher Education. 1996;32:321–345.
  • Utley CA, Mortweet SL. Peer-mediated instruction and interventions. Focus Except Child. 1997;29:1–23.
  • Willis J. Rubrics as a doorway to achievable challenge. Johns Hopkins School of Education: New Horizons for Learning. 2010:8.
  • Woodget BW. Teaching undergraduate analytical science with the process model. Anal Chem. 2003;75:307A–310A.
  • Woodin T, Smith D, Allen D. Transforming undergraduate biology education for all students: an action plan for the twenty-first century. CBE Life Sci Educ. 2009;8:271–273.

Analytical Rubric

from John Bean, Engaging Ideas

Scoring Guide for Essays

Quality of Ideas: ____ points

Range and depth of argument; logic of argument; quality of research or original thought; appropriate sense of complexity of the topic; appropriate awareness of opposing views.

Organization & Development: ____ points

Effective title; clarity of thesis statement; logical and clear arrangement of ideas; effective use of transitions; unity and coherence of paragraphs; good development of ideas through supporting detail and evidence.

Clarity & Style: ____ points

Ease of readability; appropriate voice, tone, and style for the assignment; clarity of sentence structure; gracefulness of sentence structure; appropriate variety and maturity of sentence structure.

Sentence Structure & Mechanics: ____ points

Grammatically correct sentences; absence of comma splices, run-ons, and fragments; absence of usage and grammatical errors; accurate spelling; careful proofreading; attractive and appropriate manuscript form.
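
As a quick illustration of how an analytic rubric like this turns into a grade, here is a sketch encoding the four categories as a scoring structure. The point allocations are placeholders, since Bean's guide deliberately leaves them blank for the instructor to set.

```python
# Sketch: Bean's four-category analytic rubric as a scoring structure.
# The point allocations below are placeholders; Bean's original leaves
# them blank for the instructor to fill in per assignment.

RUBRIC = {
    "Quality of Ideas": 40,
    "Organization & Development": 30,
    "Clarity & Style": 20,
    "Sentence Structure & Mechanics": 10,
}

def grade(earned: dict[str, float]) -> float:
    """Sum category scores, capping each at its allocated points."""
    return sum(min(earned.get(cat, 0), pts) for cat, pts in RUBRIC.items())

essay = {"Quality of Ideas": 34, "Organization & Development": 25,
         "Clarity & Style": 17, "Sentence Structure & Mechanics": 9}
print(grade(essay))  # 85 out of 100
```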

