Bright Minds College · Leslie Nichols
[Hero image: a wide-angle view of a large undergraduate lecture hall, partly filled with students taking notes; aisle stairs descend toward a podium.]
The course many of them are taking is the same course almost all of them are taking.
Lab Notes · Foundation · Essay 02

Why a 1,000-student gateway course has unusual obligations.

Curriculum decisions in a small upper-division elective and curriculum decisions in a regional pre-health gateway course are not the same kind of decision. The first affects a handful of students for one semester. The second is a structural feature of the healthcare workforce ten years downstream. Both deserve care; only one deserves caution.

Leslie Nichols, M.S. Lab coordinator & instructor · ~8 min read

Almost every undergraduate science department teaches at least one course that is qualitatively different from the others on its schedule. It is large. It enrolls students from many majors. It is listed as a prerequisite for a long list of professional programs. And the grade students earn in it — not just whether they pass, but how they perform — functions, in practice, as one of the earliest filters in their professional trajectory.

Anatomy and physiology, in our region and in many others, is one such course. It is the gate through which prospective nurses, physician assistants, paramedics, athletic trainers, dental hygienists, and a meaningful fraction of medical-school applicants must pass. The enrollment in any given term is not 30 students; it is closer to a thousand. And the curriculum committee that meets to decide what this course will and will not require is making a decision that reaches considerably further than the next semester's syllabus.

That fact, by itself, is not an argument for any particular curriculum choice. But it is an argument for treating curriculum changes in this course differently from curriculum changes in courses that do not carry the same structural weight. The rest of this essay tries to name what that difference looks like in practice.

Two kinds of academic decision

The taxonomy worth keeping in mind is simple. There are contained decisions and structural ones. A contained decision affects a small number of students for a single semester, in a course with no required-prerequisite relationship to anything downstream. If the decision turns out to be wrong, it can be reversed without harm: the next cohort gets the revised version, the affected students experienced one term of something less than ideal, and life goes on.

A structural decision is different in kind. The course is large, required, and tightly coupled to admissions and licensure decisions in a half-dozen adjacent programs. A change made in good faith but on thin evidence can take three to five years to show up as a measurable downstream effect — and by the time it does, two full cohorts have already moved through and into clinical training. The decision is not more important than its contained counterpart on a per-student basis. It is, however, much harder to undo.

The argument here is not that structural decisions deserve special ceremony. It is that they deserve a higher evidentiary bar before they are made — the same bar a clinical guideline committee would apply to a change in standard of care, for the same reason: the cost of the wrong answer is paid by people who were not in the room when it was decided.

Figure 1 · Downstream coupling of one gateway course
Flow diagram (rendered here as a list): one gateway A&P course, ~1,000 students / term, feeding five downstream professional pathways:
  · Nursing program: ~280 admits / yr
  · PA program: ~50 admits / yr
  · Paramedic / EMS: ~120 certifications / yr
  · Allied health: ~200 admits / yr
  · Medical-school applicants: ~80 applicants / yr
Approximate regional figures; exact numbers vary by year and institution. The point is structural, not numeric: the course sits upstream of every prospective health-professions career in its catchment area.

What downstream coupling means in practice

Imagine a single chain of events that begins on the first day of a gateway A&P course. The student earns a grade. That grade contributes to a science GPA. The science GPA crosses, or fails to cross, the cutoff a nursing program publishes. If it crosses, the student is admitted; if it does not, the student waits a year, or changes plans, or leaves the field. Once admitted, the student progresses through the nursing curriculum. At the end, the student sits the NCLEX-RN licensure exam. Pass, and they are a nurse. Fail, and they retake until they pass or run out of attempts.

Each link in that chain has been studied, sometimes for decades. The relationship between pre-nursing science GPA and later NCLEX performance is not zero. It is also not the only thing that predicts NCLEX performance, and the relationship varies in size from study to study. Recent systematic reviews place pre-nursing science GPA among the most consistent academic predictors of first-attempt NCLEX-RN success, with effect sizes that are modest individually but accumulate across cohorts.1

The point of citing this is not that harder courses produce better nurses — that is an oversimplification, and it is not what the literature says. The point is narrower: the gateway course is measurably part of a chain that ends at the bedside. Decisions that change what the course requires of its students change, in measurable ways, the composition of the people who reach the bedside several years later. That is a fact about the system, not a value judgment. It is the fact a curriculum committee is obliged to consider.
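The cutoff mechanism is easy to underestimate in the abstract, so here is a deliberately toy sketch in Python. Every number in it is invented for illustration; none of the figures come from the programs or the regional data discussed in this essay. It shows only the structural point: a small, uniform shift in course grades concentrates its effect on the students clustered near a published GPA cutoff.

```python
import random

random.seed(42)

def admitted(cohort_gpas, cutoff=3.0):
    """Count students whose science GPA clears a published admissions cutoff."""
    return sum(1 for g in cohort_gpas if g >= cutoff)

# Toy cohort: 1,000 students with science GPAs roughly normal around 3.1.
# These parameters are made up for the sketch, not drawn from real data.
cohort = [min(4.0, max(0.0, random.gauss(3.1, 0.4))) for _ in range(1000)]

baseline = admitted(cohort)

# Suppose a curriculum change nudges the average course grade enough to
# shift each student's science GPA by 0.1. Most students are unaffected
# by the cutoff either way; the change lands on those sitting near 3.0.
shifted = admitted([min(4.0, g + 0.1) for g in cohort])

print(baseline, shifted, shifted - baseline)
```

With almost any plausible grade distribution, the difference between the two counts is carried entirely by students near the cutoff, which is why a seemingly minor grading change in a 1,000-student course behaves like a structural decision rather than a contained one.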

The accessibility argument, taken seriously

A genuine and important concern from colleagues advocating for reduced rigor in gateway courses is access. High attrition in gateway STEM courses disproportionately affects first-generation students, students from historically underrepresented backgrounds, and students with weaker high-school preparation. The data on this are unambiguous, and the concern is not hypothetical.2 Anyone who teaches in a gateway course has watched it happen.

The harder argument, and the one supported by the more recent empirical work, is that lowering the intellectual demand of the course is not the only way to address the equity gap, and on the available evidence is not the most effective way. A growing literature on high-structure course design — frequent low-stakes assessment, structured pre-class preparation, guided in-class practice, explicit metacognitive support — shows that the equity gap can be closed substantially without reducing what the course teaches. Eddy and Hogan demonstrated that the gap between historically underserved students and their peers narrowed by roughly half when course structure increased, while overall course rigor remained constant.3 The Theobald et al. meta-analysis in PNAS extended this finding across the active-learning literature.4

The implication is not that rigor and equity are in tension. The implication is that the standard framing of the choice — rigorous-and-exclusionary versus accessible-and-easy — is a false dichotomy that the literature stopped supporting more than a decade ago. The genuine choice is well-structured-and-rigorous versus poorly-structured-and-rigorous, and the latter is the one that produces the gap.


What an unusually loaded course owes its students

A course that sits where this one sits owes the students in it three things, in roughly this order:

  1. Predictive honesty. Students entering the course should be told plainly what it predicts and what it does not. The grade is part of an admissions chain that they have not been told about, and it is not a kindness to leave them unaware. The first-day syllabus is the right place for this conversation, not the advising appointment three semesters later.
  2. Calibrated rigor. The level of demand should match the demands of the downstream programs — neither inflated for ceremonial reasons, nor deflated under pressure to improve pass rates. Calibration requires actually looking at what the downstream programs and the licensure exam ask students to do, and working backward.
  3. Structural support. If the demand is high because the downstream demands are high, the course owes students scaffolding proportional to that demand: tutoring availability, study-skills instruction, frequent low-stakes assessment, and explicit guidance on how to study a subject most of them have never encountered. The high-structure literature is unambiguous that this combination — high demand plus high support — is what closes the equity gap without lowering the bar.3

What such a course owes the institution

Two further obligations, both procedural:

  1. Curricular conservatism by default. Changes to the course should be hypothesis-tested, not declared. That means pilot sections rather than wholesale rollouts; longitudinal outcome measurement rather than end-of-term satisfaction surveys; and reversibility built into the change from the beginning, in case the outcomes warrant it. The 2015 National Academies report on undergraduate science instruction makes a specific recommendation along these lines for high-stakes courses.5
  2. Transparent reporting. The course should publish its outcomes — pass rates, downstream program admission rates, NCLEX or board-pass rates of its alumni — on a regular cycle, so its decisions are evaluable by people other than the people making them. Programs that report outcomes well tend to make better curriculum decisions, for unsurprising reasons.

What this is not an argument for

The argument above is not a defense of the status quo. It is not an argument that the gateway course should never be revised. It is not an argument that a lab-heavy design is always right, or that reducing class size is the only honest reform, or that pedagogical experimentation should be discouraged. Several of those changes may be exactly what a particular gateway course needs, and some have stronger evidence behind them than the status quo does.

What it is an argument for is a higher evidentiary bar — the same bar a clinical guideline committee would apply to a change in standard of care, the same bar a board-prep course would apply to a change in test-item development. Asking for that standard is not asking for protection from change. It is asking for change to clear the standard the downstream consequences warrant.

A practical implication

For any program with a course like this, one specific suggestion: formally classify it as a structural-decision course in the curriculum-review process, distinct from the contained-decision courses that share its catalog page. Apply additional review steps to changes in this category. Require evidence-grade justification — pilot data, longitudinal outcomes, or peer-reviewed precedent — before approval, in the same way a board-prep course would require it.

The cost of doing this is small, and visible: an extra committee step and a documentation requirement. The cost of not doing it is larger, but invisible until later: it is borne by patients ten years from now, by students who do not yet know they will be filtered out, and by the institutions that will eventually be asked to explain why their pass rates drifted in a direction they did not intend.

References & further reading

  1. For a recent representative review of the predictors of first-attempt NCLEX-RN success, see McCarthy, M. L., Harris, D., & Tracz, S. (2014). “Academic and nursing aptitude and the NCLEX-RN in baccalaureate programs.” Journal of Nursing Education, 53(3), 151–159. doi:10.3928/01484834-20140220-01. See also annual data briefs from the National Council of State Boards of Nursing (NCSBN), which report aggregate first-attempt pass rates and predictor relationships at scale: ncsbn.org/exams/exam-statistics-and-publications.
  2. On gateway-course attrition and its disproportionate effect on historically underserved students, see Seymour, E., & Hunter, A.-B. (eds.) (2019). Talking About Leaving Revisited: Persistence, Relocation, and Loss in Undergraduate STEM Education. Springer. doi:10.1007/978-3-030-25304-2. The follow-up to the foundational 1997 study, with twenty-plus years of additional data confirming the pattern.
  3. Eddy, S. L., & Hogan, K. A. (2014). “Getting under the hood: how and for whom does increasing course structure work?” CBE—Life Sciences Education, 13(3), 453–468. doi:10.1187/cbe.14-03-0050. The canonical study showing that increased course structure roughly halves the achievement gap for historically underserved students without lowering content demands.
  4. Theobald, E. J., Hill, M. J., Tran, E., Agrawal, S., Arroyo, E. N., Behling, S., et al. (2020). “Active learning narrows achievement gaps for underrepresented students in undergraduate science, technology, engineering, and math.” Proceedings of the National Academy of Sciences, 117(12), 6476–6483. doi:10.1073/pnas.1916903117. The largest synthesis to date of how active-learning techniques interact with equity outcomes in undergraduate STEM.
  5. National Research Council. (2015). Reaching Students: What Research Says About Effective Instruction in Undergraduate Science and Engineering. Washington, DC: National Academies Press. doi:10.17226/18687. The standard institutional reference for evidence-based instructional decision-making in undergraduate science programs.

Drafted May 2026.