[Image: two pairs of student hands working together at a microscope and a tray of slides, a hand-drawn observation notebook open between them; the lecture hall empty and in soft focus through a doorway behind.]
Some things are only learnable in the room where they happen.
Lab Notes · Foundation · Essay 01

What the bench teaches that the lecture cannot.

A standing question in any large undergraduate science program is whether bench-based instruction earns the time it costs. The answer depends on what we believe the bench uniquely teaches — and whether those things matter for the careers our students are pursuing. Both halves of that question can be answered honestly.

Leslie Nichols, M.S. · Lab coordinator & instructor · ~9 min read

The case against bench-based instruction is rarely made with hostility. It is made, almost always, in the language of stewardship: lab time is expensive, lab time competes with lecture content, lab time produces assessment artifacts that are harder to score reliably than a multiple-choice exam. Each of those observations is true. None of them is, by itself, an argument for replacing the bench — only an argument for being honest about what the bench is doing in exchange for what it costs.

The right question, then, is not whether lab work is expensive. It is whether anything else can teach the things lab work demonstrably teaches. If the answer is yes, the cost question is real and reform is overdue. If the answer is no, then the cost is a price we are paying for a kind of learning that has no substitute, and the conversation has to move on to how to deliver that learning more efficiently — not whether to deliver it at all.

What follows is an inventory: four kinds of learning that the science-of-learning literature has documented as bench-dependent, meaning not merely better at the bench than in the lecture hall, but measurably difficult to produce anywhere else. Each item is uncontroversial on its own. Taken together, they answer the question.

1. Tactile and procedural memory

The motor system encodes procedural knowledge in a fundamentally different way than the declarative system encodes facts. The two rely on different neural substrates, consolidate on different timescales, and respond to different kinds of practice. A student who has read about a dissection technique and a student who has performed one know different things in the most literal sense: they have invested in different memory systems, and the systems are not interchangeable.1

The clinical-skills literature has spent four decades documenting the consequences. Surgical residents who studied a procedure on video without practicing it in the wet lab consistently performed worse on subsequent live cases than residents who practiced first, even when the video group scored equally well on knowledge tests beforehand.2 The same pattern shows up in simulation-based medical education more broadly: the meta-analytic effect of deliberate hands-on practice on clinical performance is large, robust, and not reducible to "just give them more lectures."3

What this looks like in undergraduate A&P specifically is familiar to anyone who has graded a lab practical. The student who can name the brachial plexus from a diagram, fluently and quickly, often cannot trace it on a specimen. The knowledge is there. The motor-perceptual program for finding it is not. They are not the same thing, and one cannot substitute for the other.

Figure 1 · Two memory systems, two instruments

Declarative knowledge: facts, names, definitions
  • Consolidated via reading & recall
  • Reliable on multiple-choice instruments
  • Hippocampus / neocortex pathway
  → Where lecture excels

Procedural knowledge: sequenced motor & perceptual actions
  • Consolidated via repetition with feedback
  • Performance-testable; weakly recognized on paper
  • Basal ganglia / cerebellum pathway
  → Where the bench is required

The two systems are complementary, not redundant. A curriculum that asks one to substitute for the other is making a category error, not an efficiency choice.

2. Decision-making under genuine uncertainty

A textbook problem has a known answer. A bench problem — a slide that doesn't quite match the reference image, an instrument reading that doesn't agree with theory, a tissue specimen that looks like neither of the two examples in the textbook — has whatever answer the student talks themselves into. Lecture cannot produce this condition because the answer is, by design, already in the back of the book.

The cognitive-science literature distinguishes well-structured problems (clear givens, clear goal state, defined solution path) from ill-structured ones (ambiguous givens, contested goals, multiple defensible paths). Students reliably master the first long before the second; the gap is large, persistent, and documented across domains from physics to medicine to engineering.4 Bench work is one of the few undergraduate experiences that puts students in front of an honestly ill-structured problem and requires them to act anyway.

Why this matters for pre-health students is straightforward and worth stating plainly. The clinical-judgment literature in nursing — Benner's From Novice to Expert, Tanner's clinical-judgment model, the entire transition-to-practice research program — converges on a single finding: new graduates struggle most not with what they know, but with what to do when the patient does not match the textbook.5 The bench is where that capacity begins.

3. The find-the-thing gap

Naming a structure on a labeled diagram and locating that same structure on an unlabeled specimen are different cognitive tasks with different developmental trajectories. The first is a recall-recognition task; the second is what perceptual-learning researchers call a structure extraction task — the visual system has to learn which features matter and which do not, and that learning happens almost entirely through repeated, varied exposure to the real thing.6

The radiology-education literature has shown this experimentally for decades. Residents who study labeled images alone improve measurably less, and across a measurably narrower range of presentations, than residents who study unlabeled images with structured feedback — even when the lecture content and the testing schedule are otherwise identical.7 The anatomical-sciences-education literature has reproduced the finding repeatedly in undergraduate settings: students who learn structures on cadaveric or model-based specimens outperform students who learn the same structures on diagrams when both groups are tested on novel specimens.8

The downstream consequences are concrete. The nurse who can name an artery on a chart but cannot find a pulse. The medical student who recognizes pathology on a labeled slide but misses it on a clinic-day biopsy. The dental hygienist who can identify a root surface in a textbook image but not in a patient's mouth. These are not failures of declarative knowledge. They are failures of perceptual training, and the bench is where that training happens.

4. Social calibration on a shared task

Pair work at a bench is the original interprofessional rehearsal. Two students agreeing on what they're seeing, disagreeing about a measurement, and resolving the disagreement in real time over a shared physical specimen is the microsocial substrate of every clinical team they will ever join. A lecture hall, by design, suppresses this. A small-group discussion section approximates it. The bench is where it actually happens.

The collaborative-learning literature in undergraduate STEM is unusually robust here. The Springer, Stanne, and Donovan meta-analysis of 1999 found significant positive effects of small-group learning on achievement, persistence, and attitudes toward science across 39 studies and tens of thousands of students.9 The Freeman et al. 2014 PNAS meta-analysis — 225 studies, the largest synthesis of its kind — found that active-learning sections in undergraduate STEM produced examination scores roughly half a standard deviation higher than traditional-lecture sections, with failure rates under traditional lecture about 55% higher than under active learning.10
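
The two framings of that failure-rate result are easy to conflate, so the arithmetic is worth writing out. Using the headline averages Freeman et al. report (failure rates of roughly 33.8% under traditional lecturing versus 21.8% under active learning):

\[
\frac{0.338}{0.218} \approx 1.55,
\qquad
1 - \frac{0.218}{0.338} \approx 0.35 .
\]

Lecture failure rates are about 55% higher than active-learning rates; equivalently, active learning cuts the failure rate by roughly a third. Both statements describe the same data; only the baseline changes.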

Health-professions accreditors have noticed. The Interprofessional Education Collaborative core competencies, now adopted across nursing, medical, pharmacy, dental, and allied-health accreditation, explicitly require the kind of team-based competency that pair-work at a bench begins to build.11 The lab is one of the few places in undergraduate science where students get the prerequisite: extended, low-stakes practice at coordinating with another person around a shared physical referent.

The point is not that lecture is broken. The point is that lecture and bench teach different things, and a curriculum that treats them as substitutes is making a category error.

What this inventory does not say

The argument above is not that lab time is sacred or that no bench activity is ever wasted. Bench time is genuinely expensive. Some lab activities consume that time without serving any of the four learning categories above — rote memorization of structure names, for instance, can usually be moved to lecture or to a worksheet without measurable loss. A program that audits its lab activities honestly will almost always find some that should be cut, restructured, or moved.

The argument is more specific: when a bench activity serves one or more of the four learning categories, the lecture hall and the take-home worksheet are not substitutes for it. They are different instruments measuring different things. Cutting the activity to recover the time is a real choice with real consequences, and those consequences are observable in the downstream programs that receive our students.

A practical implication

For any program weighing reductions to its laboratory component, three suggestions, offered as professional counsel rather than advocacy:

  1. Audit each lab activity against the four learning categories above; a minimal sketch of such an audit follows this list. Cut anything that doesn't serve at least one. Defend anything that serves more than one.
  2. Develop assessments that demonstrate bench-dependent learning — lab practicals, OSCE-style stations, structured performance demonstrations. Without them, the value of the lab is invisible to the people reviewing the budget.
  3. Treat lecture and lab as complementary, not interchangeable, in curriculum-mapping conversations. The mapping document should make explicit which learning category each activity serves; substitution proposals should be required to name the category they are preserving and the category they are giving up.
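
For programs that want to make step 1 mechanical rather than rhetorical, the bookkeeping is simple enough to sketch in code. What follows is a minimal, hypothetical illustration (the activity names, tags, and verdict rules are invented examples, not a real course inventory) of what it means to make the category mapping explicit:

```python
# A minimal sketch of the audit in step 1 — not a real curriculum tool.
# All activity names and category tags below are hypothetical examples.

from dataclasses import dataclass, field

# The four bench-dependent learning categories from the essay.
CATEGORIES = {
    "procedural",   # 1. tactile and procedural memory
    "uncertainty",  # 2. decision-making under genuine uncertainty
    "perceptual",   # 3. structure extraction on real specimens
    "social",       # 4. social calibration on a shared task
}

@dataclass
class LabActivity:
    name: str
    categories: set[str] = field(default_factory=set)

def audit(activities: list[LabActivity]) -> None:
    """Apply the cut/keep/defend rule from step 1 to each activity."""
    for a in activities:
        unknown = a.categories - CATEGORIES
        if unknown:
            raise ValueError(f"{a.name}: unknown categories {unknown}")
        if not a.categories:
            verdict = "cut, or move to lecture/worksheet"
        elif len(a.categories) > 1:
            verdict = "defend (serves multiple categories)"
        else:
            verdict = "keep (serves one category)"
        print(f"{a.name:35s} {sorted(a.categories)!s:40s} -> {verdict}")

if __name__ == "__main__":
    audit([
        LabActivity("Brachial plexus dissection", {"procedural", "perceptual"}),
        LabActivity("Unknown tissue slide ID (paired)", {"uncertainty", "perceptual", "social"}),
        LabActivity("Memorize bone-marking names", set()),  # rote recall: lecture covers this
    ])
```

The code is trivial by design; the value is the data structure. Once every activity carries an explicit category set, a substitution proposal can be required to state which categories it preserves and which it gives up, which is exactly what suggestion 3 asks the mapping document to make visible.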

The cost question is real. The replacement question is the one that usually gets answered without enough evidence. The literature on what the bench teaches has been accumulating for decades. It deserves a seat at the table when the budget conversations happen.

References & further reading

  1. Squire, L. R. (2004). “Memory systems of the brain: a brief history and current perspective.” Neurobiology of Learning and Memory, 82(3), 171–177. doi:10.1016/j.nlm.2004.06.005. The canonical short reference for the declarative / procedural distinction. For an instructionally oriented treatment, see also Anderson, J. R. (1996), “ACT: A simple theory of complex cognition,” American Psychologist, 51(4), 355–365.
  2. Reznick, R. K., & MacRae, H. (2006). “Teaching surgical skills — changes in the wind.” New England Journal of Medicine, 355(25), 2664–2669. doi:10.1056/NEJMra054785. A foundational review of why simulation and supervised practice cannot be replaced by reading or video for procedural skill acquisition.
  3. McGaghie, W. C., Issenberg, S. B., Cohen, E. R., Barsuk, J. H., & Wayne, D. B. (2011). “Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence.” Academic Medicine, 86(6), 706–711. doi:10.1097/ACM.0b013e318217e119. See also Issenberg, S. B., et al. (2005), “Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review,” Medical Teacher, 27(1), 10–28.
  4. Jonassen, D. H. (2000). “Toward a design theory of problem solving.” Educational Technology Research and Development, 48(4), 63–85. doi:10.1007/BF02300500. The standard taxonomy of well-structured vs. ill-structured problems and the implications for instruction.
  5. Benner, P. (1984). From Novice to Expert: Excellence and Power in Clinical Nursing Practice. Menlo Park, CA: Addison-Wesley. Tanner, C. A. (2006), “Thinking like a nurse: a research-based model of clinical judgment in nursing,” Journal of Nursing Education, 45(6), 204–211. doi:10.3928/01484834-20060601-04.
  6. Kellman, P. J., & Massey, C. M. (2013). “Perceptual learning, cognition, and expertise.” In B. H. Ross (ed.), Psychology of Learning and Motivation, vol. 58, pp. 117–165. Academic Press. doi:10.1016/B978-0-12-407237-4.00004-9. The foundational treatment of perceptual learning modules and why expert vision cannot be acquired from labeled-image study alone.
  7. Krupinski, E. A. (2010). “Current perspectives in medical image perception.” Attention, Perception, & Psychophysics, 72(5), 1205–1217. doi:10.3758/APP.72.5.1205. A review of the radiology-perception literature on how expert visual identification develops and what training conditions support it.
  8. Wilhelmsson, N., Dahlgren, L. O., Hult, H., Scheja, M., Lonka, K., & Josephson, A. (2010). “The anatomy of learning anatomy.” Advances in Health Sciences Education, 15(2), 153–165. doi:10.1007/s10459-009-9171-5. Representative of a substantial literature in Anatomical Sciences Education on specimen-based vs. diagram-based learning outcomes.
  9. Springer, L., Stanne, M. E., & Donovan, S. S. (1999). “Effects of small-group learning on undergraduates in science, mathematics, engineering, and technology: a meta-analysis.” Review of Educational Research, 69(1), 21–51. doi:10.3102/00346543069001021.
  10. Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). “Active learning increases student performance in science, engineering, and mathematics.” Proceedings of the National Academy of Sciences, 111(23), 8410–8415. doi:10.1073/pnas.1319030111.
  11. Interprofessional Education Collaborative. (2023). IPEC Core Competencies for Interprofessional Collaborative Practice: Version 3. Washington, DC: Interprofessional Education Collaborative. ipecollaborative.org/ipec-core-competencies. Adopted across nursing, medicine, pharmacy, dentistry, and allied-health accreditation as the framework for interprofessional team competency.

Drafted May 2026.