
In infinitary logic (there's an SEP entry about it), you can have infinitely long conjunctions and disjunctions. But imagine that different logics are like different video games. Usually, to my knowledge, an infinitary logic isn't like a game put out by a competitor of classical logic. It's more like an infinitely pricey DLC upgrade/content boost for classical logic, made either by the same company (albeit probably under a new name) or by an indirect inheritor of the original company's content license for this game (or: they handed out the license for free, as widely as they could, so we have a lot of weirdly "official" material produced by fans of Russell and Frege, so to speak).

So then, if we collapsed the representation of one of the two fundamental variables in an infinitary logic (the size-of-conjunctions/disjunctions variable) into a NAND-representation: would that make any of the kinds of "reasoning" done in that sphere ("hyper-reasoning" might be a fairer name for it, if not one to proclaim too loudly either) more "efficient"? Is NAND-reasoning always somehow more efficient, either perpetually or cumulatively, than having to sort between AND-reasoning and OR-reasoning? Or, with respect to infinitary logic, does the core premise of such systems make them so inherently complex already that it doesn't matter whether you try to "simplify" the signature by reducing one of the key variables from two patterns to one?

What, if any, are the special properties of infinitely long NAND sentences in this kind of context? I feel like there must be something to it: if you take Frege's talk of a singular universal fact at "face value," it always seems to "look like" a "great conjunctive fact" (as it is put, more or less, in the SEP entry on the cosmological argument for God); but couldn't the One True Fact be a huge NAND-sequence instead? And since I have read that NOR has a similar (or equivalent, I'm not 100% sure) "power," is there a difference between a universal NOR-factual closure and the default representation in infinitary logic? (But wouldn't the duality of NOR and NAND, such as it is, just recapitulate the same theme here? Even if we "know" we can "use just NAND or NOR, as we please," aren't we going to keep on using AND and OR just the same?)

  • An attempt to resolve the differing opinions in response to this question has been ported to math.stackexchange.com/questions/4942215/…, in response to user11867's offer to provide a marked-up version of the response.
    – J D
    Commented Jul 5 at 23:11

2 Answers


You ask two questions:

Is there any major benefit to using NAND in infinitary logic?

and

Is NAND-reasoning always somehow more efficient, either perpetually or cumulatively, than having to sort between AND- and OR-reasoning?

It is a fact that NAND gates can be used to build entire ICs (spiceworks.com), since they are, logically speaking, functionally complete. Intuitively, one might expect some sort of efficiency for manufacturers in building circuits entirely out of NANDs rather than out of an assortment of gates. The question, then, is what the nature of that parsimony would be, if any.

The answer lies in the notion of functional completeness and its conception as an ontological primitive of logical operations. From WP's article on the Sheffer stroke:

NAND does not possess any of the following five properties, each of which is required to be absent from, and the absence of all of which is sufficient for, at least one member of a set of functionally complete operators: truth-preservation, falsity-preservation, linearity, monotonicity, self-duality. (An operator is truth- (or falsity-)preserving if its value is truth (falsity) whenever all of its arguments are truth (falsity).) Therefore {NAND} is a functionally complete set... This can also be realized as follows: All three elements of the functionally complete set {AND, OR, NOT} can be constructed using only NAND. Thus the set {NAND} must be functionally complete as well.

Thus, what we can take from this jargon is that, using NAND alone, we could build all possible Boolean truth tables with a single operator. That's more parsimonious than using two or more operators. In the world of IC construction, that might mean the compositional units of ICs are less varied and therefore easier to construct, since all parts of bigger units of computation can be laid out on the medium without regard to the ordering of constituents, something that, in an IC of millions of gates, can be computationally difficult to optimize.
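To make the functional-completeness point concrete, here is a minimal Python sketch (my own illustration, not taken from either source) that defines NOT, AND and OR from a single NAND helper and checks all four input combinations:

    # Minimal sketch: NOT, AND and OR built from NAND alone, checked exhaustively.
    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    def not_(a: bool) -> bool:
        return nand(a, a)                        # NOT a  =  a NAND a

    def and_(a: bool, b: bool) -> bool:
        return nand(nand(a, b), nand(a, b))      # a AND b  =  (a NAND b) NAND (a NAND b)

    def or_(a: bool, b: bool) -> bool:
        return nand(nand(a, a), nand(b, b))      # a OR b  =  (a NAND a) NAND (b NAND b)

    for a in (False, True):
        for b in (False, True):
            assert not_(a) == (not a)
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)
    print("NOT, AND and OR all recovered from NAND alone")

Since {AND, OR, NOT} is itself functionally complete, anything expressible with those three operators is expressible with NAND alone, which is exactly the sense in which {NAND} is functionally complete.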

However, in practice, NAND gates are not the only gates used in the construction of CPUs. It turns out that having a simpler set of logical primitives isn't the same as having the most efficient hardware. CPU construction includes the goal of minimizing the number of transistors used, and a NAND gate can be more complex than other types of circuits. From 'Is CPU really based only on NAND gates?' (Quora):

In summary, while NAND gates are important building blocks in CPU design, modern CPUs use a variety of logic gates implemented using transistors to optimize performance and efficiency. The design of a CPU involves a complex interplay of various factors, and designers employ a range of techniques to achieve the desired balance of performance, power consumption, and area efficiency.

Might it still hold that fewer primitives are better in applications of logic where transistors aren't involved? With regard to a higher-level understanding of parsimony in an infinitary logic, a single primitive also suggests that processing the logic requires a simpler recursive procedure, which has complexity implications for efficiency. For instance, one might ground a second-order construction ∃X(Xa & Xb) in an infinite sequence of conjunctions and disjunctions:

∃X(X0a & X0b) || ∃X(X1a & X1b) || ∃X(X2a & X2b) || ...

whose operators can be simplified to:

∃X((X0a ↑ X0b) ↑ (X0a ↑ X0b)) || ∃X((X1a ↑ X1b) ↑ (X1a ↑ X1b)) || ∃X((X2a ↑ X2b) ↑ (X2a ↑ X2b)) ||...

which increases the number of calculations even as it reduces the variety of operations. So, from the perspective of the parsimony of a formal system, again, there doesn't seem to be an advantage in using NAND as opposed to alternating between ANDs and ORs in an expression.
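As a rough, self-contained illustration of that blow-up (my own sketch; the formula encoding and helper names are made up for the example), one can rewrite a finite AND/OR/NOT formula into NAND-only form and count the operators before and after:

    # Sketch: rewrite an AND/OR/NOT formula into NAND-only form and compare sizes.
    # A formula is ('atom', name), ('not', f), ('and', f, g) or ('or', f, g).
    def to_nand(f):
        if f[0] == 'atom':
            return f
        if f[0] == 'not':                        # NOT g  ->  g NAND g
            g = to_nand(f[1])
            return ('nand', g, g)
        if f[0] == 'and':                        # g AND h  ->  (g NAND h) NAND (g NAND h)
            g, h = to_nand(f[1]), to_nand(f[2])
            return ('nand', ('nand', g, h), ('nand', g, h))
        if f[0] == 'or':                         # g OR h  ->  (g NAND g) NAND (h NAND h)
            g, h = to_nand(f[1]), to_nand(f[2])
            return ('nand', ('nand', g, g), ('nand', h, h))

    def count_ops(f):
        return 0 if f[0] == 'atom' else 1 + sum(count_ops(g) for g in f[1:])

    conj = ('and', ('atom', 'X0a'), ('atom', 'X0b'))   # one conjunct from above
    print(count_ops(conj), count_ops(to_nand(conj)))   # prints: 1 3

Each & or || disappears only at the cost of two or three NANDs and a duplicated subformula, so the rewritten expression is longer, just as in the displayed sentence above.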

And that symbolic inefficiency shows up in the symbolic logic of computation that uses Boolean operators as well. In fact, the instruction sets of most modern processors don't include a NAND instruction at all. See 'Why is there no nand instruction in modern CPUs?' (EESE) for more on why that is. This reflects the idea that the computer engineer building physical circuits and the instruction-set designer work with different notions of efficiency. See 'What is the point of converting everything to NAND/NOR and how do you do it right?' (EESE) for more detail.

So, there doesn't appear to be a benefit to reducing these sorts of operations to a single operator. It increases the number of transistors; it increases the length of logical expressions; and even a NAND instruction itself is typically forgone in a processor's instruction set, since it would waste the namespace of instructions.


We can construct the set of infinitary NAND sentences in a way that is analogous to the construction presented here. If we do this, then every sentence will be equivalent to an infinitary NAND sentence. One benefit we gain from this is the following. Imagine we have a property that is preserved under equivalence. This means that if a sentence has the property, and that sentence is equivalent to another sentence, then the other sentence also has the property. Now suppose we want to prove that every sentence has this property. Given the above fact about NAND sentences, we can prove this by proving the following:

  1. Every atomic sentence has this property.
  2. Given a countable collection of sentences with this property, the resulting NAND sentence has this property.

This is an improvement in efficiency. The usual proof would require that Step 2 be broken into two substeps: first we prove the infinitary conjunction has the property, and then we prove the negation of that conjunction has the property. Using the NAND sentences, we never have to prove the infinitary conjunction has the property. Instead, we get that part for free.
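In LaTeX-style notation (my own paraphrase of the scheme just described, writing ↑ for the infinitary NAND and P for the equivalence-respecting property), the proof principle would read roughly:

    % Sketch of the NAND-only induction scheme described above (my notation).
    \[
      \Big( \forall \varphi \text{ atomic}:\; P(\varphi) \Big)
      \;\wedge\;
      \Big( \forall (\varphi_i)_{i<\omega}:\;
            \big( \forall i:\; P(\varphi_i) \big)
            \rightarrow P\big( {\uparrow}_{i<\omega}\, \varphi_i \big) \Big)
      \;\Longrightarrow\;
      \forall \varphi:\; P(\varphi)
    \]

In the usual presentation the second clause splits into one case for the infinitary conjunction and one for negation; collapsing both into the single ↑ case is the saving described above.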

I am not sure how important this improvement in efficiency is. To a hypothetical automated system, it might matter. But any real automated system will be dealing with finitary conjunctions. To a human being trying to prove things, I suspect this benefit would not be very important. Our intuition seems to be grounded in thinking about AND and NOT separately, so in most cases, we would probably prove (2) by proving the AND and NOT cases separately, which would defeat the purpose of using the NAND sentences in the first place.

If you ask this question on the Mathematics StackExchange, where answers can include LaTeX markup, I can elaborate further on my answer.

  • I'm not going to downvote or retaliate or anything like that, but as our answers are in opposition, I'm curious to learn from this response, as it prima facie makes a rather reasonable claim. For certain, you are alleging that it simplifies the proof structure by eliminating a step, which certainly is an efficiency in terms of the abstract formal nature of the proof, regardless of the implications in a physical system of computation. I'll save Kristian Berry the hassle and port it over to MathSE to the best of my ability and ping you both. I'd love to be wrong in my response if I can learn.
    – J D
    Commented Jul 5 at 22:44
  • @KristianBerry, and as usual, your questions are to be celebrated! :D
    – J D
    Commented Jul 5 at 22:46
