>> HOW do you measure knowledge? And when you decide how, how do you scale it?
This is what makes the problem intractable. Measuring knowledge takes time, lots of time, from a skilled person. That does not scale.
Since we need (want) scale we necessarily have to use (ever weaker) proxies for measurement. And if there's one thing we do know, you get exactly what you measure for.
Hence, the system is not broken - it's working exactly as intended. It's not "fixable" because there's nothing to fix (at this scale.)
Real learning happens either because a) the student is soaking up everything they possibly can using every resource offered or
b) they've left college and are fortunate enough to be in a workplace where there are more knows than know-nots, and they take every opportunity to soak it in like a sponge.
College does not prepare people for the working world (and never will). It is operating exactly as it is designed to do.
So, the Leibniz argument? Our current system for educating citizens of all ages is already the best it can be, and any change or even reflection upon it is a waste of time.
You can't educate someone who is not ready to be educated. Those that get the most out of college are those that put the most in. This was true 1000 years ago, and is true now.
Yes, this system is the best [1] because access is open to all (which historically it wasn't). So those who want to go, can, and those who want to learn, can.
What probably needs to change is the understanding of what college is for. It's not to give you an education; it is to give you the opportunity to take an education for yourself.
[1] for some definition of best. Not all schools are created equal, nor all subjects, scale is in play here as well.
I somewhat agree about it being a chance for students to take an education for themselves, but there is also the issue of an institution offering a limited view of a subject like computer science. For example, some time ago, I'd estimate that mainstream OOP was taught everywhere, while there was almost no place teaching FP (this is slowly changing now). Even if you took every opportunity you had, you might not have had a single teacher or lecturer who was familiar with it. You could only learn it on your own, and for that you would not need the institution at all.
Teaching quality is not the same in all places. Teachers and lecturers are not the same everywhere.
Indeed, not all schools, and not all subjects, are created equal. And your education is not limited to the specific subjects, or competencies, of the school you happen to be at.
> any change or even reflection upon it is a waste of time.
That’s a bit extreme; I interpreted their view as: it’s hard to fix because of intractable issues, but that doesn’t mean we can’t make marginal improvements. Radical upheavals and revamps are sketchy.
In the case of testing, it very much can scale. Tests need to be based on long-form questions that test comprehensive knowledge. Open book, open notes, and, hell, even open collaboration up to some limit.
If a test is already graded on partial credit, which in the field of engineering at least most are, then it's no harder to grade than an equivalent test that has fewer but longer questions.
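To make the grading-workload point concrete, partial credit is essentially a per-criterion rubric: the effort is proportional to the number of rubric items, not the number of questions. A minimal sketch (the rubric items and weights here are made up for illustration):

```python
# Hypothetical partial-credit rubric: each item has a maximum point value,
# and a grader records points earned per item. Total grading effort scales
# with the number of rubric items, whether they sit in ten short questions
# or three long ones.

def grade(rubric, earned):
    """Sum earned points per rubric item, capped at each item's maximum."""
    return sum(min(earned.get(item, 0), max_pts)
               for item, max_pts in rubric.items())

# One long-form question, graded on four criteria (illustrative values).
rubric = {"setup": 2, "method": 4, "computation": 3, "interpretation": 1}
score = grade(rubric, {"setup": 2, "method": 3, "computation": 1})
print(score)  # 6 out of a possible 10
```

The same four rubric items could just as well be spread across four short questions; the grader touches the same number of criteria either way, which is the sense in which long-form partial-credit exams are "no harder to grade."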
This obviously doesn't translate to multiple-choice tests where there is no partial credit, but in engineering at least, those don't really exist outside of first year and maybe one or two second-year classes. And honestly, every intuition tells me that the classes I remember using no-partial-credit multiple choice shouldn't have been doing so in the first place.
Maths classes like algebra, precalc, calculus, statistics, and linear algebra should by no means be using no-partial-credit exams. That defeats the entire purpose of those classes, which is to teach techniques rather than any particular raw knowledge.
Same for the introductory hard sciences like chemistry and physics.
And for the ability to handle those more "bespoke" exams, we really need to be asking the question of why certain students are taking certain classes. Many programs have you take a class knowing that only maybe 30% of that will be relevant to your degree.
Instead of funneling all students through a standard "maths education" class, programs might be better served by offering an "X degree's maths 1-3", or even simply by breaking maths classes into smaller units within a semester: you're scheduled with teacher X for one specific field up to week A, then teacher Y for another unrelated maths field up to week B, and teacher Z until the end of the semester. In-major classes need not do this, but general pre-req classes could benefit from being shortened and split up through the semester into succinct fields of knowledge, so that maths or physics departments aren't unnecessarily burdened by students who will never once apply the knowledge taught in that class.
-------------
The solution to testing students in a way that they can't cheat is simply to design tests that require students to apply their knowledge as if in the real world. No artificial handicaps; at most, checks should be made for obviously plagiarized solutions. If that's not a viable testing mechanism, it's probably worth asking why, and considering reworking the course or program.
The solution to students not wanting to absorb knowledge is to stop forcing students to learn topics and techniques they'll never use just because some X<25% of them will. Instead, split up courses into smaller chunks that can be picked and chosen when building degree tracks.
---------------
Edit: I forgot to include this, but the above is largely based on my experiences, not just my own as a student but as a tutor for countless peers and juniors during my time at university, and as a student academics officer directly responsible for monitoring and supporting the academic success of ~300 students in an organisation I was part of. It largely mirrors discussions I've had with teaching staff, and it always seems to boil down to "the administration isn't willing to support this" or some other reason rooted in misplaced incentives at the administrative and operational level (such as researchers being forced to teach courses and refusing to do anything above, or often even at, the bare minimum for the courses they are teaching).
> Tests need to be based on long form questions that test comprehensive knowledge. Open book, Open notes, and hell even open-collaboration up to some limit.
Coursework is already along these lines, no?
> The solution to testing students in a way that they can't cheat is to simply design tests that require students to apply their knowledge as if in the real world.
How would this apply to a course in real analysis, say?
University education generally isn't intended to be vocational.
It is, but exams are not, and if the intent of exams is to test knowledge, they should be in a format that is applicable to the real world and that can't easily be cheated. Also, for what it's worth, for essentially all of the courses I took in university, unless they were explicitly project-based classes, exams were the overwhelming majority of the grade (often ~75-90%).
What this meant in practice was that exams that were closed-book, closed-notes often had averages in the 30s or 40s, with everyone curved upwards at the end of the day, while open-book exams had averages in the 60s-80s, and students who could apply their knowledge passed while students who couldn't didn't. I can't recall a single course with the latter style of exams where I passed without knowing the material or failed while knowing it. With the former, however, I personally experienced both and witnessed numerous other students go through the same.
> How would this apply to a course in real analysis, say?
Sorry if I wasn't clear, but when I said "as if in the real world" I was referring specifically to students having access to the same resources they would have in the real world (i.e. reasonably flexible time constraints and access to texts, resources, and tools), not necessarily that the questions needed to be structured as "in your field you'd use this like this" questions.
Unit testing is also frequently very artificial and disconnected from production use of a codebase. Nevertheless, there is a great deal of value in checking whether things you wrote actually do have the effects you intended.
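To illustrate that analogy: a unit test exercises code in a deliberately artificial setting, far from production use, yet still catches a real class of mistakes. A minimal sketch (the `slugify` function and its cases are invented for illustration):

```python
import unittest

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Nobody calls slugify with "Hello World" in production, just as no
    # exam question mirrors a real work task exactly. The test is still
    # valuable: it checks that what was written has the intended effect.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_repeated_spaces(self):
        self.assertEqual(slugify("A  B"), "a-b")

if __name__ == "__main__":
    unittest.main()
```

The artificiality of the inputs is beside the point, which is roughly the argument being made for artificial-but-well-designed exam questions.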