I think what you're getting at has some truth to it. For myself, I would generalize your point as: any given formula/expression is embedded in some context/environment. If one continues to "step back" or go one level meta (even repeatedly), you'll find that at some point you, the reader/interpreter, need to fill in some implicit or unreified context. We could call these things (shared) assumptions or axioms. As you say, if I use a function `f` to form a term in a formula written in first-order logic, the definition of `f` is implied because it is expressed "somewhere else". The question then is "where is that expressed?". Since the definition of `f` is assumed from the point of view (semantics) of first-order logic, the definition of `f` could be interpreted as "variable" at a "higher" level.
Going to your parentheses point, I agree that they indicate scope (i.e., the context/environment), and in that sense I interpret parentheses here as a kind of meta function, akin to how macros are "compile-time" functions rather than "run-time" functions.
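To make the macro analogy concrete, here is a minimal Python sketch. Python has no real macro system, so a function that rewrites the AST before evaluation stands in for a "compile-time" function; the names `double` and `double_literals` are my own invention for illustration.

```python
import ast

# "Run-time" function: operates on values.
def double(x):
    return x * 2

# "Compile-time" (macro-like) function: operates on the code itself,
# rewriting every integer literal n into n * 2 before execution.
def double_literals(source):
    tree = ast.parse(source, mode="eval")

    class Doubler(ast.NodeTransformer):
        def visit_Constant(self, node):
            if isinstance(node.value, int):
                return ast.BinOp(left=node, op=ast.Mult(),
                                 right=ast.Constant(value=2))
            return node

    new_tree = ast.fix_missing_locations(Doubler().visit(tree))
    return eval(compile(new_tree, "<macro>", "eval"))

print(double(3))               # run-time: 6
print(double_literals("1 + 2"))  # rewritten to (1*2) + (2*2): 6
```

The point of the analogy: `double` sees only a value, while `double_literals` sees the expression's structure before any evaluation happens, just as parentheses shape structure before meaning is assigned.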
I think first-order logic and, relatedly, most (all?) logic programming languages are missing a facility for "reasoning" about terms themselves. The reflexive answer to this is types, but types are largely wielded as a developer tool, not as a means of expressing meaning. Adding constraints to logic programming is perhaps some of the way there, and has the benefit (or at least the hope) that type-like ideas could be expressed compositionally as logical formulas rather than as declarations provided by the language.
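As a toy illustration of what "terms as data" buys you (my own miniature encoding in Python, not any existing logic language): when terms are plain data structures, a "type" like `nat` can be written as an ordinary goal over the term itself, rather than a declaration supplied by the language.

```python
# Variables are uppercase strings; compound terms are tuples.

def walk(term, subst):
    """Chase a variable through the substitution to its binding."""
    while isinstance(term, str) and term[:1].isupper() and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a[:1].isupper():
        return {**subst, a: b}
    if isinstance(b, str) and b[:1].isupper():
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# A "type-like" idea written as an ordinary goal over the term itself:
# nat(T) holds when T is built only from 'z' and ('s', _) cells.
def nat(term, subst):
    term = walk(term, subst)
    if term == "z":
        return subst
    if isinstance(term, tuple) and len(term) == 2 and term[0] == "s":
        return nat(term[1], subst)
    return None

s = unify("X", ("s", ("s", "z")), {})
print(nat("X", s) is not None)  # True: X is bound to a natural number
```

Because `nat` is just another goal, it composes with other formulas the way a constraint would, which is roughly the hope expressed above.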
My own pursuit of this subject is to see whether these "levels", as you hint at, can be traversed and integrated into one logical formula language and notation. This all implies some kind of homoiconic metaprogramming, but that is easier said than done.
Indeed, I feel like I'm reaching towards a generalized understanding of context.
It seems to me that unless logical languages are restricted from reasoning about themselves, they become either incomplete or, worse, inconsistent.

I have a sensation that, on the computer science side of things, unless "reasoning about terms themselves" is somehow handled (i.e., restricted), one can end up with undecidable systems, or worse...

But I still have questions about complexity, so I cannot really understand how completeness and soundness relate to decidability.
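A rough Python sketch of the diagonal argument behind that undecidability worry (entirely my own illustration; `halts` stands for any claimed decision procedure, not a real one):

```python
def diagonalize(halts):
    """Given any claimed halting-decider, build a program it misjudges."""
    def g():
        if halts(g):
            while True:   # decider said "g halts", so loop forever
                pass
        # decider said "g diverges", so halt immediately
    return g

# Whatever a candidate decider answers about g, it is wrong.
g = diagonalize(lambda f: False)  # this decider claims g diverges...
g()                               # ...but g returns immediately
print("decider refuted")
```

This is the sense in which letting programs "reason about" other programs (or themselves) as terms forces either a restriction or an undecidable system; the incompleteness/inconsistency trade-off in logic has the same self-referential flavor.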