A monoid is a mathematical structure with an associative law of composition and an identity element. It can be proven that if an element of a monoid has an inverse, then the inverse is unique:
Assume y and z are two inverse elements of x, and e is the identity element of the monoid.
y = y • e = y • (x • z) = (y • x) • z = e • z = z
This is an algebraic proof that there cannot be more than one inverse element of x.
We considered two variables, y and z, and assumed they were both inverses of x.
It followed, by substituting definitions and applying the monoid axioms, that y = z.
That is, any two elements that qualify as an inverse of x are equal to each other.
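The proof above can also be observed concretely by brute force. Here is a minimal sketch in Python, using a small finite monoid of my own choosing (the integers mod 6 under multiplication, with identity 1) to check that each element has at most one two-sided inverse:

```python
# A small finite monoid: integers mod 6 under multiplication, identity 1.
# (The example monoid and all names are my own choices for illustration.)
elements = range(6)
e = 1

def op(a, b):
    return (a * b) % 6

def inverses(x):
    """All two-sided inverses of x: elements y with op(x, y) == op(y, x) == e."""
    return [y for y in elements if op(x, y) == e and op(y, x) == e]

# Every element has at most one inverse, exactly as the algebraic proof predicts.
for x in elements:
    assert len(inverses(x)) <= 1

print({x: inverses(x) for x in elements})
```

Of course, a finite check like this is only evidence for one structure; the algebraic proof is what guarantees the result for every monoid at once.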
I understand this proof logically, but I find it to be more of a symbolic “trick” than a deep way to represent what uniqueness means.
When you study different formal systems which are logically equivalent to one another, you find that two sentences which technically express the same fact nonetheless don’t appear to be “saying the same thing”, because they portray the world from a different perspective. (As an aside, it would be fascinating if we could formalize the difference between those two things.)
My question is: to what extent does the form of logic (the way we have defined and structured a logical system) predispose us to choose that “idiom” as the way to prove “uniqueness”, when there could be other ways to conceptualize what uniqueness is?
The only insight I can offer so far is this.
It is very common to interpret “P(c)”, where P is the predicate “red” and c is a constant denoting a particular cup, as “Cup c is red.”
My problem might just be that I haven’t studied the meaning of variables in logic enough yet.
If a variable is not quantified over, it is called free.
An interpretation can satisfy a formula or fail to satisfy it. Whether it does depends exclusively on the choice of a structure and on the assignment of the free variables.
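This dependence on both inputs can be made concrete with a small sketch. The code below (all names are illustrative) evaluates the formula Doctor(x), which has x free, and shows that flipping either the assignment or the structure flips the verdict:

```python
# Satisfaction of a formula with a free variable depends on two inputs: the
# structure (here, which elements of the domain satisfy the predicate) and
# the assignment of the free variables. All names below are illustrative.
structure_doctors = {"Trisha"}   # interpretation of the predicate Doctor

def satisfies(doctors, assignment):
    """Evaluate the formula Doctor(x) in a structure, under an assignment of x."""
    return assignment["x"] in doctors

print(satisfies(structure_doctors, {"x": "Trisha"}))  # True
print(satisfies(structure_doctors, {"x": "Lee"}))     # False
# Changing the structure, with the same assignment, also flips the verdict:
print(satisfies(set(), {"x": "Trisha"}))              # False
```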
I personally find the following sentence to be a useful example to analyze:
Trisha is the only doctor.
Which conventionally would be formalized by a sentence of the form,
Trisha is a doctor, and for anyone else who is a doctor, that person is also Trisha.
I agree with Mauro that this might be a misinterpretation of what “Doctor(Trisha) and for all x, Doctor(x) implies x = Trisha” is saying. But I think this implies that a common way of explaining the meaning of logical sentences is inaccurate, and we need to be more precise.
x is a symbol called a “variable”. An assignment specifies which element of the domain x is mapped to.
There is an aspect of first-order logic I haven’t seen discussed much yet: arbitrary assignments. I feel this needs to be made more precise. Is an arbitrary assignment an assignment whose details simply haven’t been disclosed to the human reading the sentence? Or is it the lack of an assignment?
I will come back and add more. But now the question is more about how to formally evaluate the proposition “Trisha is the only doctor”.
For example, on one standard definition, a structure satisfies a formula when the formula is true under every assignment of its free variables.
I think what I was getting at is this: to say “for all x such that x is a doctor, x is Trisha” is awkward when you think about how it is formally evaluated. In principle, the variable x must pass over every element of the domain, and it is asserted that Doctor() holds of none of them except Trisha.
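That evaluation procedure can be written out directly. Here is a minimal sketch (the domain, the interpretation of Doctor, and the function name are all illustrative assumptions) that checks Doctor(Trisha) ∧ ∀x (Doctor(x) → x = Trisha) by literally passing x over every element of the domain:

```python
# A tiny model: a domain of people, an interpretation of the predicate Doctor
# as a subset of the domain, and an interpretation of the constant "Trisha".
# (All names here are illustrative.)
domain = {"Trisha", "Omar", "Lee"}
doctor = {"Trisha"}
trisha = "Trisha"

def only_doctor(domain, doctor, trisha):
    """Evaluate Doctor(Trisha) and (for all x, Doctor(x) implies x = Trisha)."""
    base = trisha in doctor
    # The variable x passes over every element of the domain in turn.
    universal = all((x not in doctor) or (x == trisha) for x in domain)
    return base and universal

print(only_doctor(domain, doctor, trisha))             # True in this model
print(only_doctor(domain, {"Trisha", "Lee"}, trisha))  # False: Lee is also a doctor
```

Written this way, the "awkwardness" is visible: uniqueness is checked negatively, by confirming that every other element fails the predicate, rather than by any positive grasp of there being "exactly one".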
I realize one mistake I made was thinking of “Trisha” more as a predicate than strictly as an identifier for an element of the domain; but I still think the monoid example gives us food for thought, and I am confident there are other presentations of “uniqueness” than the common one I mentioned.
“Uniqueness” sounds like a meta-property. Perhaps second-order logic has a way to express “P(c) uniquely”, such as “Unique(P, c)”. In fact, I know there is a “unique existential quantifier” like: “∃!x P(x)”.
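It is worth noting, though, that ∃! is standardly treated as a mere abbreviation rather than a new primitive; a common expansion is:

```latex
\exists!\, x\, P(x) \;\equiv\; \exists x\, \bigl( P(x) \land \forall y\, ( P(y) \rightarrow y = x ) \bigr)
```

The right-hand side is exactly the Trisha idiom again, so the unique existential quantifier packages that pattern rather than offering a genuinely different conception of uniqueness.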