  • The parser is unable to disambiguate the two situations, so the language designers must do so. Commented Sep 30, 2019 at 14:03
  • @Steve: Indeed, there are many similar problems already in C and C-like languages. When the parser sees (t)+1, is that an addition of (t) and 1, or is it a cast of +1 to the type t? C++'s design had to solve the problem of how to lex templates containing >> correctly. And so on. (Both cases are sketched in the first example below.) Commented Sep 30, 2019 at 18:25
  • @user2357112 I think the point is that it's fine for the tokenizer to blindly take && as a single && token rather than as two & tokens, because the a & (&b) interpretation isn't a reasonable thing to write; no human would mean that and be surprised by the compiler treating it as a && b. By contrast, both !(!a) and !!a are plausible things for a human to mean, so it's a bad idea for the compiler to resolve that ambiguity with an arbitrary tokenization-level rule (see the second sketch below).
    – Ben
    Commented Oct 1, 2019 at 5:12
  • !! is not only possible/reasonable to write, but the canonical "convert to boolean" idiom. Commented Oct 1, 2019 at 13:50
  • I think dan04 is referring to the ambiguity of --a vs -(-a), both of which are syntactically valid but have different semantics (see the third sketch below).
    – Ruslan
    Commented Oct 1, 2019 at 21:08
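
A minimal C++ sketch of the two ambiguities mentioned in the second comment above; the identifiers (t, add_demo, nested) are hypothetical placeholders, not from the original discussion:

    #include <vector>

    // Ambiguity 1: is "(t)+1" an addition or a cast? The answer depends
    // on what `t` names, which the parser only learns from declarations.
    int t = 5;              // t is a variable here...
    int add_demo() {
        return (t) + 1;     // ...so this is addition, yielding 6.
                            // If t named a type, (t)+1 would parse as
                            // (t)(+1): a cast applied to unary +1.
    }

    // Ambiguity 2: nested templates. Under pure maximal munch, the ">>"
    // below lexes as a single right-shift token, so pre-C++11 code needed
    // a space: std::vector<std::vector<int> >. C++11 added a special rule
    // allowing the parser to split ">>" in this position.
    std::vector<std::vector<int>> nested;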
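A second sketch, showing why maximal munch is harmless for && but meaningful for ! (variable names made up for illustration):

    #include <cstdio>

    int main() {
        int a = 2, b = 0;

        // Maximal munch: "a&&b" always lexes as a && b (logical AND),
        // never as a & (&b). The discarded reading would not even
        // compile here (it mixes int with int*), so nothing is lost.
        int and_result = a && b;   // 0

        // There is no "!!" token, so !!a lexes as two ! tokens: !(!a).
        // This is the canonical convert-to-boolean idiom: any nonzero
        // value becomes 1, and zero stays 0.
        int normalized = !!a;      // 1

        std::printf("%d %d\n", and_result, normalized);
    }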
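Finally, a third sketch putting Ruslan's --a vs -(-a) point in runnable form (again with hypothetical names):

    #include <cstdio>

    int main() {
        int a = 5;
        int pre = --a;     // maximal munch: "--" is one token, so this is
                           // predecrement; a becomes 4 and pre == 4.
        a = 5;
        int neg = -(-a);   // double negation needs parentheses or a space
                           // ("- -a"); a stays 5 and neg == 5.
        std::printf("%d %d\n", pre, neg);
    }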