I'm not so sure. In most fields, we certainly discover new things (concepts, relationships, laws). But we can usually describe those new things by analogy to existing things, because—given enough abstraction and analogy—they have recognizable "shapes." There are existing mental tools in our civilization that fit the concepts, and let us toss them around and look at them from every angle.
Math is where we invent language to refer to new forms of abstraction themselves: novel possible shapes for our thoughts to take when we think about other things. You can't talk about a new shape by analogy to existing shapes. Nor can you abstract an abstraction in a way that gives you anything more familiar. (Instead, you'll usually get more novel, ontologically primitive abstractions, like going from monads to arrows, or going from numbers to fields to rings.)
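To make the numbers-to-fields-to-rings example concrete (these are the standard textbook definitions, nothing specific to this argument): each step in the chain keeps fewer axioms than the one before it, which is exactly why abstracting gives you something more primitive rather than more familiar.

```latex
% \mathbb{Q} is a specific number system; a field abstracts its axioms;
% a ring then drops some of those axioms, becoming *more* general:
%   Field (F, +, \times): both operations associative and commutative,
%     distributive, with identities, and inverses for every nonzero element.
%   Ring (R, +, \times): multiplicative inverses are no longer required
%     (and, for non-commutative rings, neither is commutativity of \times).
\mathbb{Q} \rightsquigarrow \text{fields} \rightsquigarrow \text{rings},
\qquad \text{axioms}(\text{ring}) \subsetneq \text{axioms}(\text{field}).
```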
Sometimes disciplines like physics will find a concept that we don't have any mental tools in our toolkit for yet. Then we build some. But we still refer to the process of doing that as doing mathematics—and then we apply that new mathematics back in the problem domain to talk about the new concept.
(For a good example: the formalization of quantum theory in physics required the creation of infinite-dimensional analysis in mathematics. Physicists did most of that work, but the work itself was still mathematics, not physics.)
---
Now, other fields do still have a similar problem to mathematics, of historical momentum carrying forward old "things" (again: concepts, relationships, laws) when there are better, clearer "things" that could be used in their place. But when we're not working with pure abstractions—ways of thinking—we can make the effort to compare and contrast old and new "things", and decide that some might be more edifying than others.
It is possible for an especially-gifted physics teacher to write a very accessible physics textbook, because they need only pick all the clearest "things" to demonstrate. That teacher will still be stuck in a given paradigm—a way of thinking, a belief in the worth of some "things" over others, popular in the culture of their discipline at the time and place they worked. But they might be able to (barely) rise above it, if they think hard about the history of their discipline and the paradigms it has gone through in the past, compare-and-contrast those, and synthesize something that isn't quite just the paradigm they're immersed in.
Mathematics is uniquely problematic because it is entirely paradigm. It is a tree of paradigms—each new abstraction only making sense assuming the paradigm it was created in, and then becoming the paradigm for further abstractions still. Every mathematician, all the time, is trying to discover what a particular paradigm—their specialization—can be twisted to accomplish. Not one of us has the brain power to know the total space of things that one of these ways of thinking can be used to express—the problem domains the tool is applicable to—in order to know which tools show more or less promise at being "powerful." We know what we've discovered so far, but we have such an infinitesimal idea of the "space of all possible abstractions" that we could be totally missing some of the best, and using ones that are barely satisfactory.
Point a hypercomputer AI at "solving physics", and it'll spit out a description of the universe that will certainly have more "things" in it than we know about today—but which will still also contain a subset of the "things" we do have. (The ones that best "carve nature at its joints," presumably.) Those "things" that get carried over will, of course, be defined much more precisely; their concept-boundaries will be adjusted to include or exclude edge-cases we weren't aware of. But they'll be the same "things."
Point a hypercomputer AI at "solving mathematics"—that is, at giving us the most powerful abstractions possible to solve problems with—and the result might be entirely unlike anything we've invented so far. It might not bear any resemblance to current mathematics, beyond a shared fondness for sets. (For a silly hypothetical: it might turn out that there's a better abstraction for thinking about relationships between things than "functions", and that all the rest of mathematics and physics and whatever other problem domains simplify greatly if we think about the relationships in them in terms of that abstraction, instead of trying to specify relationships using the "function" abstraction.)
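A toy sketch of that hypothetical (every name here is invented for illustration): representing a relationship as a bare set of pairs, rather than as a function, naturally expresses one-to-many connections that a single function would have to flatten down to one output—and such relations still compose.

```python
# A relation as a plain set of (input, output) pairs -- hypothetical toy example.
divides = {(a, b) for a in range(1, 10) for b in range(1, 10) if b % a == 0}

# One-to-many: 3 relates to several numbers at once...
images_of_3 = {b for (a, b) in divides if a == 3}  # {3, 6, 9}

# ...so no single function f(3) -> b captures it; a function would have to
# pick one output. The relation keeps all of them, and still composes:
def compose(r, s):
    """Relational composition: (a, c) whenever (a, b) in r and (b, c) in s."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

divides_twice = compose(divides, divides)
```

This isn't a claim that relations *are* the better abstraction the essay imagines—only an existence proof that "function" is one representational choice among several, with real expressive consequences.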