29 events
Oct 3, 2019 at 20:35 comment added maaartinus @EricLippert You're welcome. These strange rules follow how most CPUs work.
Oct 3, 2019 at 20:19 comment added Eric Lippert @maaartinus: I had no idea! Thanks for letting me know.
Oct 3, 2019 at 20:18 comment added maaartinus @EricLippert "<< and >> and requires the operand to be non negative and then this crazy design is copied into Java, C#, ..." ... not exactly. At least in Java, both operands may be negative and the result is well-defined (and things like x >> -1 are used to extract the sign bit).
Oct 3, 2019 at 11:07 comment added Peter Cordes @EricLippert: the only problem / absurdity / thing that looks wrong is early on, when you suggest that lazy vs. eager is the difference instead of bitwise vs. logical. As you later say, both properties are essential to understanding the operators. Perhaps phrasing like "as well as bitwise vs. boolean, the other key difference is eager vs. lazy. This difference also informs which operators are needed." (That 2nd sentence of my suggestion is weak, but I like the first one, which introduces eager vs. lazy without suggesting that bitwise vs. boolean isn't important or worth thinking about.)
Oct 3, 2019 at 10:59 comment added Peter Cordes @EricLippert: It would sometimes be nifty if modern HW had a shift instruction that took a signed shift count. IDK if it would make O(1) hardware implementations (barrel shifters) significantly larger; they already usually support rotates where any bit can end up anywhere. But sometimes it's handy that the hardware masks (x86) or saturates (ARM) the shift count when it's outside the 0..31 range. Perhaps if C had chosen differently, HW would have followed.
Oct 1, 2019 at 20:48 comment added Steve @EricLippert, indeed, it is surprising how terribly long a legacy some design decisions can have, exerting a dead hand of influence even when the circumstances that determined the original decisions have long since disappeared! I would guess that what some see as paying the cost of compromise is what others see as reaping the benefits of integration - including, often, integration merely with existing popular understandings or expectations (of the meaning of programming statements and syntax, of the exact behaviour of operations, etc.).
Oct 1, 2019 at 19:20 comment added Eric Lippert @Steve: But it is downright vexing that our language design today is still so heavily influenced by the small details of the instruction set of a machine from the 1970s.
Oct 1, 2019 at 19:19 comment added Eric Lippert @Steve: But upon more reflection you see that, oh, the original PDP 11 did not have this instruction, and in fact it had shift right by one and shift left by one, and now you realize that the authors of the compiler probably did not want to generate the code that chose which instruction to use at runtime based on the sign of the operand; the author of the code probably knows which direction it is going to shift, and this saves on code size, back when code size was important. So once again we see that design is a process of compromises.
Oct 1, 2019 at 19:14 comment added Eric Lippert @Steve: Well I'm glad it seemed less absurd as time went on. :) I agree that it often comes down to details of assembly language, but in fact there are some true oddities here. For example, the PDP 11 assembly language's shift instruction takes an operand that ranges from -32 to +31, so one naturally assumes that a sensible language would provide an operator x shift y where y can be anywhere in that range. But no, C unnecessarily gives us two operators << and >> and requires the operand to be non-negative, and then this crazy design is copied into Java, C#, C++...
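For illustration, here is a C sketch of what a single shift-by-signed-count operation could look like; the helper name shift is made up, and the sign test is needed precisely because C leaves shifting by a negative count undefined:

    #include <stdio.h>

    /* Hypothetical single shift operation with a signed count, in the spirit
       of a PDP-11-style shift instruction: non-negative counts shift left,
       negative counts shift right. C instead exposes two operators, << and >>,
       each of which requires a non-negative count. */
    static unsigned shift(unsigned x, int count) {
        if (count >= 0)
            return x << count;   /* left shift for non-negative counts */
        return x >> -count;      /* right shift for negative counts */
    }

    int main(void) {
        printf("%u\n", shift(1u, 4));    /* prints 16 */
        printf("%u\n", shift(16u, -4));  /* prints 1 */
        return 0;
    }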
Oct 1, 2019 at 18:54 comment added Steve ...You invite us to forget about the logical/bitwise distinction and instead consider the eager/lazy distinction. But both seem relevant, and not all combinations are implemented. The unary operations do not have a lazy version, because the concept is inapplicable. The bitwise AND and OR operations do not have a lazy version, probably because they map directly to assembly instructions which operate a word-at-a-time, and there is no prospect of efficiency improvements to justify separate lazy versions. (2/2).
Oct 1, 2019 at 18:46 comment added Steve @EricLippert, I only thought your argument absurd at first. As I say in my comment following some reflection, there are really two separate qualities to these operators: first, whether they are eager or lazy, and second, whether they are logical or bitwise. One could theoretically have a lazy bitwise-AND which does not evaluate the second operand at all when the first is zero, since the bitwise result must then be zero regardless of the second operand - but no such operation is implemented... (1/2)
Oct 1, 2019 at 16:40 comment added Eric Lippert @senderle: Ah, I see what you're saying. So in the context of C, the semantics of your proposed x && f() on ints x and f() are the semantics of x == 0 ? 0 : (x & f()). That does make some sense, but I agree that it is likely too special-purpose to make it into a mainstream language.
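A small C sketch of those semantics; the LAZY_BITAND macro and the function f are hypothetical, and exist only to show when the right-hand side gets evaluated:

    #include <stdio.h>

    /* Hypothetical "lazy bitwise AND": skip evaluating the right-hand side when
       the left-hand side is already zero, since 0 & anything is 0. No C-family
       language provides such an operator; this macro only illustrates the
       x == 0 ? 0 : (x & f()) semantics described above. (It also evaluates its
       first argument twice, which a real operator would not.) */
    #define LAZY_BITAND(lhs, rhs_expr) ((lhs) == 0 ? 0 : ((lhs) & (rhs_expr)))

    static int f(void) {
        puts("f() was evaluated");
        return 12;
    }

    int main(void) {
        printf("%d\n", LAZY_BITAND(0, f()));   /* prints 0; f() is never called */
        printf("%d\n", LAZY_BITAND(10, f()));  /* calls f(); prints 8 (1010 & 1100) */
        return 0;
    }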
Oct 1, 2019 at 16:32 comment added senderle To be clear, I'm thinking in general, not in the C, etc. context specifically. But suppose your final step lazily reduces an array of booleans to a single boolean using and. You could lazily zip together two boolean arrays, lazily generate a single array using element-wise and, and perform your lazy reduction on the result. If the first value of the first array is False, then no values in the second boolean array will ever need to be evaluated. Probably not practical in most cases, of course.
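A rough C sketch of that idea, assuming the second array's elements are produced on demand by a function; the names all_and and second are made up for illustration:

    #include <stdio.h>
    #include <stdbool.h>

    /* Element i of the second "array", produced on demand so we can observe
       which elements are ever evaluated. Purely illustrative. */
    static bool second(int i) {
        printf("evaluating second[%d]\n", i);
        return i % 2 == 0;
    }

    /* Reduce the element-wise AND of `first` and the lazily produced `second`
       down to a single boolean. As soon as an element of `first` is false,
       neither the matching element of `second` nor anything after it is needed. */
    static bool all_and(const bool *first, int n) {
        for (int i = 0; i < n; i++) {
            if (!first[i])
                return false;    /* short-circuit: second(i) never evaluated */
            if (!second(i))
                return false;
        }
        return true;
    }

    int main(void) {
        bool first[] = { false, true, true };
        printf("result: %d\n", all_and(first, 3));  /* second() is never called */
        return 0;
    }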
Oct 1, 2019 at 14:57 comment added Eric Lippert @senderle please say more! I am intrigued by your idea but not following your train of thought
Oct 1, 2019 at 14:26 comment added senderle This is an interesting take. But I can still think of ways to conceptualize a lazy Boolean operation on packed arrays of Booleans. It's just that the laziness would run first along the packed arrays, rather than the sequence of operations.
Oct 1, 2019 at 14:03 comment added user101289 I suspect that there may be a disconnect between people who think of "logical operator" as meaning "cast each argument to a boolean first" (which seems a bit odd, since what you really want is a boolean type) vs. those who think "C is a scripting language for assembly, and so of course I sometimes want to treat bits as numbers and sometimes as booleans".
Oct 1, 2019 at 13:13 comment added Eric Lippert @Steve: If the answer seems absurd then I have made a poorly expressed argument somewhere, and we ought not to rely on an argument from authority. Can you say more about what seems absurd about it?
Oct 1, 2019 at 13:12 comment added Eric Lippert @ZizyArcher: As I noted in the comment above, the decision to omit a bool type in C has knock-on effects. We need both ! and ~ because one means "treat an int as a single Boolean" and one means "treat an int as a packed array of Booleans". If you have separate bool and int types then you can have just one operator, which in my opinion would have been the better design, but we're almost 50 years late on that one. C# preserves this design for familiarity.
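For illustration, a minimal C example of those two interpretations (the value 6 is arbitrary):

    #include <stdio.h>

    int main(void) {
        int x = 6;                 /* binary ...0110: non-zero, so "true" */

        /* Logical NOT: treats x as a single Boolean; any non-zero value maps to 0. */
        printf("!x = %d\n", !x);   /* prints 0 */

        /* Bitwise NOT: treats x as a packed array of bits and flips each one. */
        printf("~x = %d\n", ~x);   /* prints -7 on a two's-complement machine */
        return 0;
    }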
Oct 1, 2019 at 13:08 comment added Eric Lippert @user2357112: C# was designed to fix many of the oddities of C and C++, while preserving as much familiarity as possible. C's odd behaviour in this regard stems from the lack of a bool type in the language; once you add that, you can come up with a more consistent operator design, as C# did.
Oct 1, 2019 at 9:56 comment added Steve @ZizyArcher, because the bitwise-NOT inverts the bits (so applied to a non-zero operand it still yields a non-zero result, unless all the bits were set or the type is a boolean), whereas the logical-NOT yields zero whenever the operand is non-zero (i.e. whenever any bits are set).
Oct 1, 2019 at 9:50 comment added Steve @user2357112, I must admit, I thought the explanation quite absurd until I saw who had written it, and then I paused for breath. It is true, as Eric says, that it is best to see && as the 'lazy logical', but there is no corresponding 'eager logical' (except for boolean types, where there is inherently no logical/bitwise distinction), so we have only eager bitwise and lazy logical. So Eric has touched on our problem: laziness and logicality (or bitwisity?) are two unrelated qualities of operators, and & differs from && on both counts.
Oct 1, 2019 at 9:28 comment added Zizy Archer Assuming 'eager' vs 'lazy' is the main distinction, why even have both ! and ~? A single one would be completely sufficient and lead to no confusion.
Oct 1, 2019 at 2:37 comment added user2357112 For example, in C and C++, 1 & 2 has a completely different result from 1 && 2.
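For illustration, a minimal C snippet showing the two results:

    #include <stdio.h>

    int main(void) {
        /* Bitwise AND: 01 & 10 share no set bits, so the result is 0. */
        printf("1 & 2  = %d\n", 1 & 2);   /* prints 0 */

        /* Logical AND: both operands are non-zero ("true"), so the result is 1. */
        printf("1 && 2 = %d\n", 1 && 2);  /* prints 1 */
        return 0;
    }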
Oct 1, 2019 at 2:04 comment added Eric Lippert @user2357112: We designed C# very carefully so that & and && have logically consistent behaviour for non-Boolean types; what "other languages" did you have in mind?
Oct 1, 2019 at 1:51 comment added user2357112 I don't think this is a good way to think of & and &&. While eagerness is one of the differences between & and &&, & behaves completely differently from an eager version of &&, particularly in languages where && supports types other than a dedicated boolean type.
Sep 30, 2019 at 19:06 history edited Eric Lippert CC BY-SA 4.0
added 638 characters in body
Sep 30, 2019 at 18:49 comment added Martin Maat Thank you. This is the real eye opener for me. Too bad I cannot accept two answers.
Sep 30, 2019 at 18:47 vote accept Martin Maat
Sep 30, 2019 at 18:17 history answered Eric Lippert CC BY-SA 4.0