37

I am refactoring a PHP OOP legacy website.

I am so tempted to start using 'final' on classes to "make it explicit that the class is currently not extended by anything". This might save lots of time if I come to a class and I am wondering if I can rename/delete/modify a protected property or method. If I really want to extend a class I can just remove the final keyword to unlock it for extending.

I.e. if I come to a class that has no child classes, I can record that knowledge by marking the class as final. The next time I come to it I would not have to re-search the codebase to see if it has children, thus saving time during refactoring.
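For illustration, the kind of thing I mean (class and member names here are just made up):

    // No child classes exist today, so record that fact:
    final class InvoiceRenderer
    {
        // While the class stays final, no subclass can be relying on this,
        // so it is safe to rename, delete or modify.
        protected $template;

        protected function renderLine(array $line)
        {
            // ...
        }
    }

    // Later, if I really do want a subclass, I just drop 'final':
    // class HtmlInvoiceRenderer extends InvoiceRenderer { ... }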

It all seems like a sensible time-saving idea... but I have often read that classes should only be made 'final' on rare/special occasions.

Maybe it screws up Mock object creation or has other side effects that I am not thinking of.

What am I missing?

5
  • 3
    If you have full control of your code, then it is not terrible, I suppose, but a bit on the OCD side. What if you miss a class (e.g. it has no children but it isn't final)? It is better to let a tool scan the code and tell you whether some class has children. As soon as you package this as a library and give it to someone else, dealing with 'final' becomes a pain.
    – Job
    Commented Jul 2, 2011 at 13:58
  • For example, MbUnit is an open framework for .Net unit testing, and MsTest ain't (surprise surprise). As a result, you HAVE to shell out 5k to MSFT just because you want to run some tests the MSFT way. Other test runners cannot run a test dll that is built with MsTest - all because of the final classes that MSFT used.
    – Job
    Commented Jul 7, 2011 at 4:24
  • 3
    Some blog post about the topic: Final Classes: Open for Extension, Closed for Inheritance (May 2014; by Mathias Verraes)
    – hakre
    Commented May 18, 2014 at 16:16
  • Nice article. It speaks my thoughts about this. I did not know about those annotations so that was handy. Cheers.
    – JW01
    Commented May 21, 2014 at 19:54
  • "PHP 5 introduces the final keyword, which prevents child classes from overriding a method by prefixing the definition with final. If the class itself is being defined final then it cannot be extended." php.net/manual/en/language.oop5.final.php It is not bad at all.
    – Black
    Commented Sep 24, 2019 at 7:19

7 Answers

72

I have often read that classes should only be made 'final' on rare/special occasions.

Whoever wrote that is wrong. Use final liberally; there’s nothing wrong with that. It documents that a class wasn’t designed with inheritance in mind, and this is usually true for all classes by default: designing a class that can be meaningfully inherited from takes more than just removing a final specifier; it takes a lot of care.

So using final by default is by no means bad. In fact, amongst OOP experts it’s widely agreed that final should be the default, e.g. Jon Skeet:

Classes should be sealed by default in C#

Or Joshua Bloch:

Design and document for inheritance or else prohibit it [Effective Java, 3rd Ed, Item 19]

Or Scott Meyers [More Effective C++, Item 33].

Which is why modern OO languages such as Kotlin have final-by-default classes.

You wrote:

Maybe it screws up Mock object creation …

And this is indeed a caveat, but you can always fall back on interfaces if you need to mock your classes. This is certainly superior to making all classes open to inheritance just for the purpose of mocking.
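A minimal sketch of that approach, assuming PHPUnit (the names here are made up):

    interface MailerInterface
    {
        public function send($recipient, $subject, $body);
    }

    final class SmtpMailer implements MailerInterface
    {
        public function send($recipient, $subject, $body)
        {
            // real SMTP work here
        }
    }

    // In a PHPUnit test case, mock the interface rather than the final class:
    // $mailer = $this->createMock(MailerInterface::class);
    // $mailer->expects($this->once())->method('send');

Callers are type-hinted against MailerInterface, so the final class never needs to be opened up just to be replaced in a test.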

12
  • 4
    +1: Since it screws up mocking, consider creating interfaces or abstract classes for each "final" class.
    – Spoike
    Commented Jul 14, 2011 at 11:49
  • 1
    You can consider the Java API itself and how often you will find the usage of the 'final' keyword. In fact, it is not used very often. It is used in cases where performance might get bad due to the dynamic binding taking place when "virtual" methods are called (in Java all methods are virtual if they are not declared 'final' or 'private'), or when introducing custom subclasses of a Java API class might raise consistency issues, such as subclassing 'java.lang.String'. A Java String is a value object by definition and must not change its value later on. A subclass could break this rule.
    – Jonny Dee
    Commented Jul 16, 2011 at 20:46
  • 8
    @Jonny Most of the Java API is badly designed. Remember that the bulk of it is more than ten years old and many best practices have been developed in the meantime. But don’t take my word for it: the Java engineers themselves say so, first and foremost Josh Bloch who has written in detail about this, e.g. in Effective Java. Rest assured that if the Java API were developed today, it would look very different, and final would play a much bigger role. Commented Jul 17, 2011 at 10:07
  • 4
    I'm still not convinced that using 'final' more often has become a best practice. In my opinion, 'final' should really be used only if performance requirements dictate it, or if you must ensure consistency cannot be broken by subclasses. In all other cases necessary documentation should go into, well, the documentation (which needs to be written anyway), and not into the source code as keywords that enforce certain usage patterns. To me, that's kind of playing god. Using annotations would be a better solution. One could add a '@final' annotation and the compiler could issue a warning.
    – Jonny Dee
    Commented Jul 17, 2011 at 17:47
  • 1
    "Jon Skeet" link is dead. Probably it was moved to this SO question. Commented Aug 18, 2016 at 5:49
9

If you want to leave a note to yourself that a class has no sub-classes, then by all means do so and use a comment; that's what they are for. The "final" keyword is not a comment, and using language keywords just to signal something to you (and only you would ever know what it means) is a bad idea.

edited by original author to add: I completely disagree with this now. I cannot even create a model of my mental state 11 years ago that would explain why I would say this. I think this answer, and my comments defending it below, are ridiculous. The accepted answer is right.

11
  • 22
    This is completely false! “ using language keywords just to signal something to you […] is a bad idea.” – On the contrary, that is exactly what language keywords are there for. Your tentative caveat, “and only you would ever know what it means”, is of course correct but doesn’t apply here; the OP is using final as expected. There’s nothing wrong with that. And using a language feature to enforce a constraint is always superior to using a comment. Commented Jul 14, 2011 at 6:51
  • 9
    @Konrad: Except final should denote "No subclass of this class should ever be created" (for legal reasons or something), not "this class currently has no children, so I'm still safe to mess with its protected members". The intent of final is the very antithesis of "freely editable", and a final class shouldn't even have any protected members!
    – SF.
    Commented Jul 14, 2011 at 7:10
  • 4
    @Konrad Rudolph It is not minuscule at all. If other people later want to extend his classes, there is a big difference between a comment that says "not extended" and a keyword that says "may not extend".
    – Jeremy
    Commented Jul 14, 2011 at 13:39
  • 3
    @Jeremy I’m not conflating keywords. The keyword final means, “this class is not to be extended [for now].” Nothing more, nothing less. Whether PHP was designed with this philosophy in mind is irrelevant: it has the final keyword, after all. Secondly, arguing from PHP’s design is bound to fail, given how patchworky and just overall badly PHP is designed. Commented Jul 14, 2011 at 14:38
  • 3
    If you find in the future that you want to extend your final class after all, you can look at the class design, decide whether it is safe to extend, fix it if it isn't, then remove the final keyword. In effect, you refactor the class from final to non-final. Then you can extend it. Seems reasonable to me.
    – David K
    Commented Jun 23, 2014 at 13:20
6

There is a nice article about "When to declare classes final". A few quotes from it:

TL;DR: Make your classes always final, if they implement an interface, and no other public methods are defined

Why do I have to use final?

  1. Preventing massive inheritance chain of doom
  2. Encouraging composition
  3. Force the developer to think about user public API
  4. Force the developer to shrink an object's public API
  5. A final class can always be made extensible
  6. extends breaks encapsulation
  7. You don't need that flexibility
  8. You are free to change the code

When to avoid final:

Final classes only work effectively under following assumptions:

  1. There is an abstraction (interface) that the final class implements
  2. All of the public API of the final class is part of that interface

If one of these two pre-conditions is missing, then you will likely reach a point in time when you will make the class extensible, as your code is not truly relying on abstractions.
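As a rough sketch of a class that satisfies both pre-conditions (hypothetical names):

    interface RateCalculator
    {
        public function rateFor($amount);
    }

    // All of the public API lives in the interface, so callers depend on the
    // abstraction and the implementation can safely remain final.
    final class FlatRateCalculator implements RateCalculator
    {
        public function rateFor($amount)
        {
            return $amount * 0.05;
        }
    }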

P.S. Thanks to @ocramius for great reading!

5

"final" for a class means: You want a subclass? Go ahead, delete the "final", subclass as much as you like, but don't complain to me if it doesn't work. You're on your own.

When a class can be subclassed, its behavior that others rely upon must be described in abstract terms that subclasses obey. Callers must be written to expect some variability. Documentation must be written carefully; you can't tell people "look at the source code" because the source code isn't there yet. That's all effort. If I don't expect that a class is subclassed, it's unnecessary effort. "final" clearly says that this effort hasn't been made and gives fair warning.

2

Using 'final' takes away freedom of others that want to use your code.

If the code you write is only for you and will never be released to the public or a customer then you can do with your code what you want, of course. Otherwise, you prevent others from building upon your code. Too often have I had to work with an API that would have been easy to extend for my needs, but then I was hindered by 'final'.

Also, there is often code that would be better made protected rather than private. Sure, private means "encapsulation" and hides things considered to be implementation details. But as an API programmer I might as well document the fact that method xyz is considered an implementation detail and, thus, may be changed/deleted in a future version. So everyone who relies on such code in spite of the warning is doing so at his own risk. But he can actually do it, reuse (hopefully already tested) code, and come up with a solution faster.
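Something along these lines, where the warning lives in a docblock (the @internal tag is just one convention, recognized by some documentation tools; the names are made up):

    class ReportBuilder
    {
        /**
         * @internal Implementation detail - may change or disappear in a
         *           future version. Override at your own risk.
         */
        protected function formatRow(array $row)
        {
            return implode(';', $row);
        }
    }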

Of course, if the API implementation is open source one can just remove the 'final' or make methods 'protected', but then you have changed the code and need to track your changes in the form of patches.

However, if the implementation is closed source you are left with finding a workaround or, in the worst case, with switching to another API with fewer restrictions on customization/extension.

Note that I don't think 'final' or 'private' are evil, but I think they are just used too often because the programmer didn't think about his code in terms of code reuse and extension.

6
  • 2
    "Inheritance is not code reuse".
    – quant_dev
    Commented Jul 2, 2011 at 21:00
  • 2
    Ok, if you insist on a perfect definition you're right. Inheritance expresses an 'is-a' relationship and it is this fact which allows for polymorphism and provides a way of extending an API. But in practice, inheritance is very very often used to reuse code. This is especially the case with multiple inheritance. And as soon as you call code of a super class you are actually reusing code.
    – Jonny Dee
    Commented Jul 2, 2011 at 21:33
  • @quant_dev: Reusability in OOP means first of all polymorphism. And inheritance is a critical enabler of polymorphic behaviour (sharing the interface).
    – Falcon
    Commented Jul 14, 2011 at 7:30
  • @Falcon Sharing interface != sharing code. At least not in the usual sense.
    – quant_dev
    Commented Jul 18, 2011 at 7:52
  • 4
    @quant_dev: You get reusability in the OOP sense wrong. It means that a single method can operate on many datatypes, thanks to interface sharing. That's a major part of OOP code reuse. And inheritance automatically lets us share interfaces, thus enhancing code reusability greatly. That's why we talk about reusability in OOP. Just creating libraries with routines is nearly as old as programming itself and can be done with almost every language; it's common. Code reuse in OOP is more than that.
    – Falcon
    Commented Jul 18, 2011 at 8:32
0

One thing you may not have thought of is the fact that ANY change to a class means that it has to undergo new QA testing.

Do not mark things as final unless you really, really mean it.

7
  • Thanks. Could you explain that further. Do you mean that if I mark a class as final (a change) then I just have to re-test it?
    – JW01
    Commented Jul 2, 2011 at 12:18
  • 5
    I would say this more strongly. Do not mark things as final unless an extension cannot be made to work because of the design of the class.
    – S.Lott
    Commented Jul 2, 2011 at 12:18
  • Thanks S.Lott - I am trying to understand how bad use of final affects the code/project. Have you experienced any horror stories that have led you to be so dogmatic about the sparing use of final? Is this first-hand experience?
    – JW01
    Commented Jul 2, 2011 at 12:27
  • 2
    @JW01: A final class has one primary use case. You have polymorphic classes that you do not want extended because a subclass might break polymorphism. Don't use final unless you must prevent the creation of subclasses. Other than that, it's useless.
    – S.Lott
    Commented Jul 2, 2011 at 12:36
  • @JW01, in a production setting you may not be ALLOWED to remove a final, because this is not the code the customer tested and approved. You may, however, be allowed to add extra code (for testing) but you may not be able to do what you want to because you cannot extend your class.
    – user1249
    Commented Jul 2, 2011 at 12:40
0

I would like to add some nuance to the answer provided by Konrad Rudolph. While his response is certainly true in most contexts, it does not mention the situation where the code may be used as a dependency of another project.

The project that depends on this code may need to modify the behavior of the final class for a use case that was not anticipated by the original creator of the class (it is impossible to anticipate all situations), and marking the class as final could be quite problematic.

Let's imagine a scenario: I have coded a Symfony library that allows sending Batman-themed emails. The template is customizable, as it is often the case with most Symfony libraries because of its native template overriding mechanism.

I have created a service that contains a method sendBatmanEmail(BatmanEmailDto $emailDto). To use this service, the caller must initialize a BatmanEmailDto, which offers two setters, setSubject and setText. The sendBatmanEmail method does its job (at least 300 lines of code!), then passes the DTO to the template for rendering. The DTO is marked final.
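Roughly, the shape of the API described above (heavily simplified; the real sendBatmanEmail body is of course much longer):

    final class BatmanEmailDto
    {
        private $subject;
        private $text;

        public function setSubject($subject) { $this->subject = $subject; }
        public function setText($text)       { $this->text = $text; }

        public function getSubject() { return $this->subject; }
        public function getText()    { return $this->text; }
    }

    class BatmanEmailService
    {
        public function sendBatmanEmail(BatmanEmailDto $emailDto)
        {
            // ... at least 300 lines of work, then the DTO goes to the template
        }
    }

    // A depending project cannot do this while the DTO is final:
    // class BatmanEmailWithImageDto extends BatmanEmailDto { ... }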

In most cases, it works fine with a subject and a text! Some slight modifications are possible via the Symfony template overriding engine, which is the icing on the cake.

But if tomorrow a project wants to have two distinct texts in the template, or add an image, manipulate a richer entity, etc., the use of the library will be completely called into question, since it will be impossible to extend the BatmanEmailDto and pass more data to the template. The sendBatmanEmail method will need to be rewritten entirely (or maybe entirely copy/pasted? Good luck with the maintenance!)

All of this is to say that if you are developing a library and not a final project, please avoid using final classes. It looks nice on paper, but for future dependencies of your project, it rarely seems like a good idea.

And I forgot to mention the case where you want to replace some class via dependency injection configuration, which is a very common type of usage in Magento, for example (and is very maintainable if used with caution, when and only when it is useful).

PS: if some developer is not clever enough to understand that your class is not made to be extended, he will probably not be clever enough to NOT remove your final keyword either.

1
  • "if you are developing a library and not a final project, please avoid using final classes." I dont think this is good advice. Even for libraries with certain extension points, it is a good idea to default to final and only open classes/interfaces up explicitly for those extension points.
    – sfiss
    Commented May 3, 2023 at 13:55
