
Java has the @Override annotation. This annotation, when applied to a method, says that the method is intended to override a superclass method. If the annotation is applied to a method whose signature does not match any method in the superclass, the Java compiler reports an error.
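A minimal Java sketch of that behavior (the class and method names are made up for illustration):

```java
class Animal {
    void speak() {
        System.out.println("...");
    }
}

class Dog extends Animal {
    @Override                    // OK: Animal declares speak() with this signature
    void speak() {
        System.out.println("Woof");
    }

    // @Override
    // void speek() { }          // compile-time error: overrides nothing in Animal
}
```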

When Kotlin came along, @Override was promoted from an annotation to the override keyword.1 Conversely, Java's volatile keyword, which marks a variable as uncached2, has its counterpart in Kotlin's @Volatile annotation. The same applies to synchronized, transient, and I think a few more. Additionally, synchronized blocks (not methods) aren't even a core part of Kotlin; they are provided as a function taking a lambda (which looks very much like block syntax in Kotlin).
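A small Kotlin sketch of those points (the classes are made up for illustration):

```kotlin
open class Base {
    open fun greet() = "hello"
}

class Derived : Base() {
    override fun greet() = "hi"    // 'override' is a keyword, not an annotation
}

class Counter {
    @Volatile                      // Kotlin's counterpart to Java's volatile keyword
    var running = true

    private val lock = Any()
    var count = 0

    fun increment() {
        synchronized(lock) {       // synchronized here is a stdlib function taking a lambda
            count++
        }
    }
}
```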

Another notable example is Python. When dealing with class-based programming, a lot of Python's features are accessed through annotations3. For example, static methods are marked with @classmethod4, whereas Java uses the static keyword and Kotlin uses a companion object, both syntactic features. A similar situation occurs with @dataclass, with the Kotlin and Java equivalents being data class and record, respectively. To get abstract classes and enums in Python, you have to extend a class found in the standard library, while both are core syntactic features in Java and Kotlin.
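To make the contrast concrete, here is a small Python sketch (the names are illustrative) of how these features arrive through decorators and standard-library base classes rather than dedicated syntax:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import Enum


class Shape(ABC):                  # abstract classes come from the abc module...
    @abstractmethod                # ...and abstract methods from a decorator
    def area(self) -> float: ...

    @classmethod
    def describe(cls) -> str:      # class-level method via a decorator, not a keyword
        return f"a {cls.__name__}"


@dataclass                         # roughly comparable to Kotlin's data class / Java's record
class Point:
    x: float
    y: float


class Color(Enum):                 # enums are a standard-library class, not core syntax
    RED = 1
    GREEN = 2
```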

This kind of trade-off, where something ends up either as a core syntax feature or as part of the standard library, intrigues me. I imagine Rust could have introduced a deriving keyword or used the colon syntax (as it already does with traits) instead of the derive macro (which is similar enough to annotations). What considerations must be taken into account when deciding which features go into the standard library and which become dedicated syntactic features?


1As a matter of personal preference, I prefer the annotation.

2As far as I can tell.

3Actually called decorators, but I'll call 'em annotations for the sake of not confusing people.

4Yes, I know about @staticmethod, but @classmethod is closer to the Java/Kotlin equivalents.


3 Answers


In Java, the @Override annotation was introduced when the language had already been in use for a while, so the designers had to maintain backwards compatibility (i.e. keep old code working without it). This meant that they couldn't afford to introduce a new keyword (which could collide with previously valid identifiers), and also had to make the override specification optional, so an annotation was pretty much a forced choice.

In Python, the philosophy is to implement as much as possible in the form of library features, not language ones. So instead of making static and class methods a special case, they've introduced the concept of descriptor objects, and have implemented staticmethod, classmethod and property in the standard library based on it.
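For instance, here is a heavily simplified, pure-Python sketch of how classmethod can be built on the descriptor protocol (the real implementation is in C, but the mechanism is the same; the class and method names are illustrative):

```python
class my_classmethod:
    """Simplified sketch of classmethod using the descriptor protocol."""

    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        if objtype is None:
            objtype = type(obj)
        # Bind the wrapped function to the class rather than the instance.
        def bound(*args, **kwargs):
            return self.func(objtype, *args, **kwargs)
        return bound


class Config:
    default_name = "config"

    @my_classmethod
    def create(cls):
        return f"created from {cls.default_name}"


print(Config.create())      # created from config
print(Config().create())    # also works when called on an instance
```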

In general, I'd say these represent the two most common motivations affecting this choice:

  • An established language cannot really afford to introduce new keywords (yes, soft keywords are another potential solution), so it is often forced to fall back on attributes.
  • If the language aims to minimize the core built-in part, it makes sense to provide keywords only for features that cannot be implemented otherwise.
  • On the other hand, if there's a feature that could be implemented in the standard library but is deemed common enough to deserve more convenient syntax, a keyword can be introduced as syntactic sugar (like what Python 3.5 did with async and await instead of relying on generator-based coroutines).

I'll answer from my perspective as a language designer.

tl;dr: it is impossible to generalize design decisions.

Generalizing Design Decisions

In the mind of every language designer, and indeed of the designer of anything that has ever existed, is a set of constraints they have to design by.

The best designers make these constraints explicit in something like design documents, but often, these constraints are implicit, only existing in the mind of the designer.

Even then, these constraints might be brought up to justify decisions after the fact (see choice-supportive bias, also known as rationalization), but again, the best designers consider constraints before making a decision, not after.

@abel1502 has an excellent answer laying out the concrete constraints that those specific languages had, whether before or after the fact. Unfortunately, that answer only addresses concrete cases. This is because generalizing design decisions can only take place in the context of understanding the design process and design factors.

So to answer why some design is a certain way, you can only answer the question for that design.

But once you know the design process and enumerate the factors, you can figure out the answers for other designs more easily.

Concrete Examples

With that said, I will talk about a few concrete examples.

C and C++ have added a few keywords since C89 and whatever the equivalent C++ standard is. However, they also needed backwards compatibility, and they don't really have annotations, per se.

Fortunately, the designers thought ahead and gave themselves an out: identifiers that begin with an underscore followed by a capital letter (or a second underscore) are reserved for the implementation, so user code is not permitted to use them.

So the new keywords all begin with an underscore and a capital letter (_Generic, _Static_assert, etc.).

So C and C++ do have annotation-like things now, added as specific keywords. One example is the _Noreturn keyword.

But wait: C and C++ have macros, and they added a static_assert macro, which does not begin with an underscore.

This is still "backwards compatible" for several reasons:

  • You have to #include (import) the assert.h header to get that macro.
  • You only get that macro if your toolchain targets a newer version of C. If it targets an old version, the version you probably wrote your code for, you won't get that macro.

So in essence, C has a reserved namespace for new keywords that maintains backwards compatibility, along with the equivalent of Rust editions and a requirement that you ask for the new stuff via an include.
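A small C11 sketch of that mechanism (the asserted conditions are arbitrary):

```c
#include <assert.h>       /* you must ask for the friendlier spelling of static_assert... */
#include <stdlib.h>
#include <stdnoreturn.h>  /* ...and likewise for 'noreturn' */

/* The raw keywords live in the reserved underscore namespace. */
_Static_assert(sizeof(int) >= 2, "int is too small");

/* <assert.h> (C11 and later) provides static_assert as a macro for _Static_assert. */
static_assert(sizeof(long) >= sizeof(int), "long narrower than int");

/* <stdnoreturn.h> provides noreturn as a macro for the _Noreturn keyword. */
noreturn void fail(void)
{
    exit(1);
}

int main(void)
{
    return 0;
}
```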

My language has the same sort of thing. You cannot have two or more consecutive underscores in an identifier (they are reserved for name mangling), and keywords are associated with packages rather than built into the language.

Sure, if I were to add keywords to the base standard library, that would break code, because the standard library is essentially imported the same way as a Python-style from std import *.

But I can always add keywords to subpackages instead, which you would have to directly import and then use the package name as a prefix, meaning that you would have to choose to do so.

In other words, if you make the user do work to get your new stuff, that is sufficient to maintain backwards compatibility.

Another example is Zig, where "annotations" are neither keywords nor annotations but built-in functions, and mostly correspond to LLVM-specific extensions. These came about because of implementation concerns.

Design Factors for This Case

So now we have a partial list of design factors relating to annotations versus keywords:

  • Backwards compatibility.
  • Language versions.
  • Making the user do work to get the new stuff.
  • Implementations.

Use this list to start judging this type of design decision, and I think you'll easily identify most or all of the constraints that mattered to the language designer.

  • Nitpick: C++ handled it similarly to but differently from C. There are no C++ keywords starting with underscore, and static_assert is a keyword, not a macro. Also, C++ does have attributes. – Pablo H, Aug 22, 2023
  • @PabloH thank you for the information! – Aug 23, 2023

Override is a good example. Depending on how your language works, override might be required to be a keyword, because the compiler needs that information before annotations can be processed. Volatile is also a good example, because it exists traditionally and stems from a time when annotations did not exist. But processing of volatile happens very late in compilation, so it could be turned into an attribute today. Popular languages that have attributes today often did not have them when their first versions were published.
