1

With generative AI, you can just ask it to explain the code to you. In my experience, it is reliable and accurate - which cannot always be said about manual comments. I am specifically referring to comments on signatures - not necessarily inline comments, TODOs, or things of that nature.

Not writing and maintaining comments would (I think) save time. A good generative AI for code isn't always free, but I suspect it will be built into IDEs sooner rather than later. Without comments on signatures there is no IntelliSense/preview when you hover over a function, but I suspect that too will be added to IDEs soon.

My question: I make an effort to always comment the signature of anything public or unintuitive. But I am now considering stopping comments (except for exceptional cases). Is this a bad idea?

An example:

public bool IsEven(int number)
{
    return number % 2 == 0;
}

My comments:

/// <summary>
/// Checks whether the number is even
/// </summary>
/// <param name="number">The number to check</param>
/// <returns>Whether the number is even</returns>

Generative AI (JetBrains) comments: [screenshot of the generated comments omitted]

5
  • 106
    Your comment in this example is not obsolete because you found an AI which can explain this method. Your comment was obsolete right from the start, because the function signature is self-explanatory. Try an AI with a non-trivial function in the middle of a large program, one which calls several other non-trivial functions written by you or another team, where the function names aren't that self-explanatory (at least not without knowing the context).
    – Doc Brown
    Commented Jun 29 at 5:29
  • 10
    A big problem with this idea is that you’d be approximating having someone who didn’t write the code comment it after the fact. That’s not ideal in the best case, because they’d have to guess at intent. If there are bugs (I didn’t implement exactly what I intended), the retroactive commenter is likely to give the wrong description. All the more so when the commenter is an AI rather than a human, since humans reason with logic; LLMs parrot human language with zero understanding.
    – bob
    Commented Jun 29 at 21:02
  • 10
    "In my experience..." Which makes it essential that you tell us what your experience is. How many years, in what field, on what kinds of projects? This genuinely isn't to say "oh, we've been doing this longer, so you can't be right". If you're correct then that's great - but my experience is that that I can't see it working well for anything I've been involved with (various C and C++ embedded software over 30 years).
    – Graham
    Commented Jun 29 at 22:55
  • 6
    Have you tried it with a less trivial example? Can it actually comment anything substantial? Commented Jun 30 at 8:22
  • I trust generative AI to provide an accurate summary even less than I trust a programmer to keep comments up-to-date.
    – chepner
    Commented 17 hours ago

12 Answers

105

"With generative AI, you can just ask it to explain the code to you"

And it will explain to me what the code does. Which I already know, because I can see the code myself with my own eyes. What I need comments to do is tell me things I need to know that cannot be ascertained simply by reading the code; generative AI cannot do this any more than I can.
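A minimal sketch of the kind of comment meant here (the method, the scale factor, and the datasheet are all invented for illustration): the comment records facts from outside the code, which neither a reader nor a generative AI could recover from the body alone.

/// <summary>
/// Returns the sensor reading in millivolts. The hardware reports raw
/// ADC counts; the 0.125 scale factor comes from the vendor's datasheet
/// (rev. B). Nothing in this method body could tell you that.
/// </summary>
public double ReadSensorMillivolts(int rawCounts)
{
    return rawCounts * 0.125; // scale factor per datasheet, not derivable from the code
}

Ask an AI to explain this method and it can tell you it multiplies by 0.125; where that number comes from, and when it must change, lives nowhere but in the comment.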

27
  • 13
    The claim that you know what the code does, because you can see it is a bit naive. Especially when some sophisticated math or algorithms are involved.
    – freakish
    Commented Jun 29 at 8:58
  • 17
    A great practical example of something you don't see with your own two eyes is an approximation for another function. The other function simply isn't in the code, so AI can't see it. The best one can do with AI in that case is hope that that particular approximation is extremely common and that other human beings commented their code so it could copy the comments for you.
    – Cort Ammon
    Commented Jun 29 at 17:15
  • 10
    @freakish but if the code can't be understood that easily, what're the odds that an AI is going to explain it properly? And how will you be sure that it is?
    – Erik
    Commented Jun 29 at 21:05
  • 10
    @freakish You are kind of making Moschops’s point for him. Anyone can read code. What it means is non-obvious for both humans and AI.
    – Dúthomhas
    Commented Jun 29 at 22:16
  • 20
    @freakish there's a difference between static code analysis (which is built to know and very specific in what it does) and AI language models (which are built to generate reasonable sounding BS, and generally have no relation to reality unless the case is really trivial). If you "believe what AI tells you" in the current era, you should really apply more critical thinking, because any semblance of truth is more accidental than intentional.
    – Erik
    Commented Jun 30 at 7:53
53

Three things:

  1. If AI could sufficiently document code, it should be done once at the time the code is authored, not many times every time someone forgets what the code does. This is primarily an efficiency concern. But, it also allows the author an opportunity to validate or edit the comment to align with their own understanding.
  2. The explanation generated in your example is harder to read than the actual code. But this is the nature of LLMs: they do with many words what you can often do better with few (or none).
  3. The best comments don't usually need to explain "what" code does, but "why" it is as it is. E.g., look at this:
constructor() {
  this._graphqlParse = someLib.parse;
}

It looks benign enough. We're just using the parse method from "someLib". What if we want to swap that out later? Is it OK to do that? If I don't have some in-depth knowledge of "someLib", I have no reason to think I shouldn't swap it out for another library.

But, what if it had this comment on it?

constructor() {
  // someLib's parse method safely escapes template literal params for us
  // so we don't have to worry about injection attacks.
  this._graphqlParse = someLib.parse;
}

How I think about that tiny little assignment and library dependency is now completely different. I now know that if I go hunting for a faster library or a library that emits metrics or whatever, I need to find one that also sanitizes strings.

In the original, undocumented code, "what" the code does is obvious. "Why" I've done it the way I've done it was not. The "why" matters. And an LLM can only "guess" (same as you¹) at the original author's intentions.


1. An LLM doesn't really guess like a human does. It's basically a giant statistical auto-completion engine. But, I think you know what I mean here. The LLM doesn't "know" what the author's intentions are.

11
  • 7
    "an LLM can only guess" – I know what you mean, but I feel this statement is misleading and feeds into the widespread misunderstanding about what LLMs can and cannot do. An LLM is essentially spicy autocomplete. It doesn't understand anything, it has no intelligence, and it also can't "guess". Unless someone has already written this same code with a well-written comment, and that code was part of the training set of the LLM, there is practically no chance an LLM will be able to generate a coherent comment. Commented Jun 29 at 20:17
  • 6
    Hmm ... I'm a little skeptical that this is "misleading", TBH. But, I'll add a footnote. You can let me know if you think it's helpful.
    – svidgen
    Commented Jun 29 at 20:37
  • 1
    Except the AI will do a much poorer job at guessing than the collective you. Never trust AI to author whatever an expert can’t or won’t thoroughly review, comments included.
    – bob
    Commented Jun 29 at 21:07
  • Regarding 1) you might want to argue why. Aside from money/resources spent, there's also the fact that you cannot predict what an AI used by someone else will spit out when they ask for documentation/a summary. Feel free to incorporate this if you agree. Commented Jun 29 at 21:08
  • 2
    @Basilevs this isn't really a pro-vs-con AI discussion but a specific one: does this particular process (writing comments) become obsolete through this particular approach of using AI? That being said, you argue in your comments mostly that if the project is so well maintained that the AI can get to the bottom of everything, then the AI will generate good enough output - which begs the question of why you still need the AI to put out anything, if the project already has all the right information in the right places... (and for the detailed "why"s, Jira tickets wouldn't be the right place) Commented 2 days ago
26

That function is a great example of why not all functions need comments. Comments (and function docs) are to explain things that aren't immediately obvious. For example, this (pretend) hand-written documentation is useful:

/**
* Determines if invoices for this product require the 
* <a href="http://mycompany.confluence.com/wingbat">wingbat</a> 
* payment pre-processor. If not, they can be sent RAW to accounts.
*/
public boolean needsWingbatPreProcessor(ProductIdentifier productIdentifier) {
     // hats come from ACME hat and their payment provider can't handle raw invoices
     if(productIdentifier.hasHatIncluded()){
         return true; 
     }
     // turns out that BOT products have an undocumented bug that requires the preprocessor, see JIRA-89
     if(productIdentifier.getIdentifier().startsWith("BOT-")){
         return true; 
     }
     return false;
} 

For reference I gave an AI my example method and asked it to add javadocs and comments. This is what I got back:

/**
 * Determines if a wingbat pre-processor is needed based on the product identifier.
 * A wingbat pre-processor is needed if:
 * - The product identifier indicates a hat is included.
 * - The product identifier starts with "BOT-".
 *
 * @param productIdentifier The product identifier to check.
 * @return {@code true} if a wingbat pre-processor is needed, {@code false} otherwise.
 */
public boolean needsWingbatPreProcessor(ProductIdentifier productIdentifier) {
    // Check if the product identifier indicates a hat is included
    if (productIdentifier.hasHatIncluded()) {
        return true;
    }

    // Check if the product identifier starts with "BOT-"
    if (productIdentifier.getIdentifier().startsWith("BOT-")) {
        return true;
    }

    // If neither condition is met, return false
    return false;
}

The hand-written docs are clearly more useful, giving a link to an article on what wingbat is (which I'm pretending is an internal tool) and explaining the weird cases in this function. On the other hand, the AI one just describes what the code does.

11
  • 13
    The AI also forgot to mention the JIRA ticket
    – Bergi
    Commented Jun 29 at 19:21
    Thus it is not a fair comparison. For the task to be done correctly, the AI has to have access to the task statement, the Git history, and the workspace.
    – Basilevs
    Commented 2 days ago
  • 4
    @Basilevs how is that not a fair comparison? Until we have much better AI trained on local data (which is very, very expensive), that is what an AI can do. Sure, some day better AIs may be available. But even in that case, the AI that wrote the code should add the comment explaining why it did what it did, not have another AI guessing at it. The point is that comments are "I was there, I saw what happened, here is what I saw". Docs and comments written after the fact (either by a human or by AI) are usually much less useful Commented 2 days ago
  • 3
    @Basilevs I think you are talking about something different than in the question. The question is about not maintaining comments and documentation at the time of writing but having them written "on demand" perhaps years later Commented 2 days ago
  • 5
    I have never seen either issue tracking or Git history have sufficient detail to include why the author did a weird thing on a particular line, or the specific reasons for a function; usually you only find that stuff out as you are writing. Sure, you could massively inflate the history to put that information in. But why? Why not put it in the far more sensible position it is currently put in, in a structured way that is much more easily accessible? Commented 2 days ago
8

In many cases, the most useful comments are those which don't describe how something works, but those that describe what alternatives were considered, and whether they were found to be inferior or might be worth further exploration. Such information won't be present in code itself, and examination of the code won't be able to create it.
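As a sketch of what such a comment might look like (the scenario and figures are invented for illustration):

// We tried a binary search here first; with our typical list sizes
// (under ~20 items) a plain linear scan benchmarked faster and is
// simpler. Revisit if the lists grow.
public static int FindIndex(List<int> sortedIds, int id)
{
    for (int i = 0; i < sortedIds.Count; i++)
        if (sortedIds[i] == id) return i;
    return -1;
}

No examination of the loop itself could recover the fact that an alternative was tried, measured, and rejected.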

Generative AI is a nice parlor trick, and for some quick graphics tasks like thumbnail generation it may actually be useful, but the results shouldn't be trusted unless vetted. If one uses results without vetting, generative AI may yield some slight benefits if one doesn't mind the more significant losses when it turns out to be just plain wrong. If one vets the results sufficiently to avoid consequences for mistakes, the effort spent vetting could have been spent as well simply doing the original task.

7

I think the fundamental problem here is how many managers think of source code as an instruction manual on both the art of development and on how they do business.

So if they have a piece of code, however pathetically trivial, they follow the logic that there is, within the code, both the information necessary to produce a new developer capable of writing that code or similar code, and an explanation of how their business works (in a rich, meaningful way) and why the code was designed that way in the first place.

Their view on AI is also typically informed by the hype that it is somehow "intelligent" in the way that developers are reckoned to be, and that AI is as intelligent as the most intelligent developers out there.

It is from these ludicrous opinions that they proceed to think that AI can successfully explain a piece of code and answer arbitrary questions about it, in a way that an average developer cannot quickly do for themselves or in a way that eliminates the need for a skilled and familiar developer on their staff.

Now code commentary has only one function. It is to introduce design information, readable by the developer, which would not otherwise be recorded in the code at all (because it is not necessary for computer execution, it is only relevant to the developer engaged in design or re-design).

Sometimes commentary pulls together information that seems to be present latently in the code once you know to look for it, but the problem is that it is spread diffusely and with no visible sign of the connection - the purpose of the commentary is still to introduce new information that is not in the code itself.

Perhaps occasionally, it is to draw the attention of the developer to a specific pitfall which would otherwise get hidden amongst the thickets of code.

So understanding that the purpose of comments is to record things that the code otherwise doesn't record, there is absolutely no reason to think AI will make code commentary redundant, unless your commentary was already unnecessary.

Because an AI, in analysing the code, certainly cannot extract more from it than a skilled developer who is concentrating on reading it. And by waffling about the code, it certainly does not help to highlight pitfalls or important features either, but simply buries them in a different haystack.

Those who study software history will find that there is hype every few years about techniques which are supposedly "intelligent" and will eliminate the need for skilled developer labour. Yet there are more developers than ever before.

The curse of studying history, of course, is to be doomed to stand by and see the same mistakes made all over again.

30
  • "Because an AI, in analysing the code, certainly cannot extract more from it than a skilled developer" That is not clear at all. Maybe it is so now, but it will likely happen that AI exceeds us in code understanding and description. And perhaps everything else. And besides, skilled developers are ultra expensive.
    – freakish
    Commented Jun 29 at 9:24
  • 7
    @freakish, firstly you seem to be speculating now about an "intelligent" technology that simply doesn't exist (even in theory or prototype). I don't see why you think it would be appropriate to be effectively discussing fiction from Star Trek as if it's actually here. And secondly you haven't grasped the point that the code does not contain all the relevant explanatory information. When dealing with legacy code, a significant part of my role can involve meeting other staff (to interrogate them for explanatory information which they possess). (1/3)
    – Steve
    Commented Jun 29 at 10:22
  • 3
    Another common outcome of analysing legacy code is writing off portions of code that nobody understands and therefore cannot be adapted to any new purpose because nobody knows what existing purpose it currently fulfils (if any). A lot of legacy code in smaller businesses is nonsense anyway - the product of incoherent thinking in the first place, from staff who threw down the gauntlet and left years ago. (2/3)
    – Steve
    Commented Jun 29 at 10:22
  • 3
    Developers of ordinary business applications would not use Rust - you'd use languages like Rust for developing hardware or computer operating systems (in other words, only a small minority of developers in specialist employment). There is a wide array of languages that have no manual memory management and therefore none of those risks. (3/3)
    – Steve
    Commented Jun 29 at 10:22
  • 4
    @freakish, (reflecting your edited comment...), I accept developers are not perfect, but (as we know) so-called AI is also far from perfect with its lies, nonsense "hallucinations", empty blather, and inexplicable workings. And a great many of the imperfections in developer activity are due to incoherent business policies or excessive design complexity which makes it difficult for an ordinary intellect to analyse the correctness of the correspondence between the code and the intended purposes of management. Youth and inexperience also plays a part, when businesses hire inappropriately.
    – Steve
    Commented Jun 29 at 10:45
5

Yes, comments are obsolete. But not because of generative AI. Because your function is just IsEven and that's enough.

On the question of AI, we should note it's not explaining the function. It's searching millions of lines of code for similar functions, their comments, and the documentation for the operations, finding the similarities, and then generating similar-looking text with your variable and function names substituted in.

try

public bool CheckEven(int number)
{
    return (number++) % 2 != 0;
}

...In other words, the method returns true if number is odd and false if number is even....

https://deepai.org/chat

13
  • 2
    In the first paragraph you seem to generalize that comments are obsolete, but your argument is specific to the OP's toy example. Please disambiguate whether you mean that comments in general are obsolete, or that comments for simple self-explanatory functions are obsolete
    – Christophe
    Commented Jun 29 at 8:46
    I mean more generally, but I don't want to get into the "how many comments should you have" argument here. We can just look at the type of comment the OP is talking about and see that it's not helpful
    – Ewan
    Commented Jun 29 at 8:56
    There are plenty of bad AIs out there. I think the question implies that a somewhat good specialized tool would be used.
    – Basilevs
    Commented Jun 29 at 10:58
  • 8
    Interesting. Also interesting is to see how sensitive this LLM is to the names that you use in your code. I tried a random piece of code, and it was giving me a quite impressive deduction over a very partial piece of code. I then renamed all variables and functions to use minimal abbreviations in another human language, and the LLM was incapable of giving any meaning except commenting on the expressions (for both experiments I started a new anonymous browsing session to avoid any history bias). Of course, for this simple example it still works, yet it gives slightly different explanations.
    – Christophe
    Commented Jun 29 at 14:54
  • 2
    Is this meant to imply that the LLM's answer is incorrect? Despite the name, CheckEven indeed returns true if and only if number is odd.
    – Alex Jones
    Commented Jun 29 at 22:56
5

Comments on method signatures have been "obsolete" for decades provided the method signature conveys enough information for the human to understand what the method does at a high level. This isn't a new development because of AI or LLMs.

In September 2022, I would have stated that there is no point in adding comments to a method signature if the method name, return type, parameter names, and parameter types are already meaningful. Fast-forward to today, and AI chat bots are a household name. My recommendation still stands. Comments in this narrow use case are unnecessary now, and they were unnecessary then. The method signature comments state the obvious. So obvious, in fact, that a machine learning algorithm can predict the most likely string of words that a human would use to describe it, because many humans have written similar code and comments that were used to train the model.

This is not a criticism of large language models. Instead, it is good to be mindful of the limitations of this technology. LLMs are a tool like any other. You need to understand how to use them for them to be effective. Rather than say "Boo AI! Yay humans!", here are some guidelines for determining when to offload this to an LLM:

  • If the method signature is obvious at first glance to a human, it doesn't need comments, nor does the human need AI to explain anything. To be honest, this is old advice that predates LLMs by many, many years. I believe your code example falls into this category (a sketch contrasting obvious and non-obvious signatures follows at the end of this answer).

  • If an LLM gives an uncommented method a good summary, then don't invest time in adding comments. This assumes other developers have access to this kind of tool. Beware that different LLMs can give different output. Do each of the developers working on this codebase have access to the same LLM? If not, consider commenting the method anyways.

  • When an LLM cannot accurately describe the method, and neither can a human, then the human had better write some comments to explain this to everyone. None of us know what the &$#@ this code is doing — carbon-based or silicon-based intelligence.

    • Once the human has written good comments, it would be nice to incorporate this information into the corpus of data used to train the LLM so it has a better chance of summarizing similar-looking code in the future.
  • And finally, remember that the limitations of LLMs are evolving quickly. Better training data or algorithms can change the decision to offload some of this cognitive work to artificial intelligence.

Code comments are a tool, just like IDEs and large language models. The onus is on the human to understand the use cases and limitations of each tool before using it. I think declaring code comments obsolete is too broad to properly capture the nuance of communication between engineers. More likely, current LLMs will augment our existing tools rather than replace them. Comments are not obsolete, but the use cases for comments might change because of the introduction of a new tool.
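To make the first guideline concrete, here is a small C# sketch (the names and the tolerance value are hypothetical): the first signature needs no comment, while the second does, because the signature alone cannot carry the tolerance decision.

// Obvious from the signature alone; a summary would only restate it:
public static bool IsEven(int number) => number % 2 == 0;

// Not obvious: the signature cannot say what "nearly equal" means,
// so a comment has to carry that decision.
/// <summary>
/// True when the two readings differ by less than 0.5 °C, the
/// tolerance our (hypothetical) calibration process allows.
/// </summary>
public static bool AreNearlyEqual(double celsiusA, double celsiusB) =>
    Math.Abs(celsiusA - celsiusB) < 0.5;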

4
  • 1
    "If the method signature is obvious at first glance to a human, it doesn't need comments, nor does the human need AI to explain anything" Just because new developers may look at this answer, I'd like to suggest a slight departure from this. If one does not have the experience nor discipline to really determine if a signature is obvious, it may be wise to put the comments on the signature anyways. In my personal experience, I've seen units and frames dropped because a developer momentarily forgot that they weren't obvious.
    – Cort Ammon
    Commented Jun 29 at 17:19
  • 2
    I think that is a fair critique. Experience matters. I will try to update my answer later without falling into a black hole on this one. My counter-argument would be that inexperienced developers should either spend time analyzing code or summarize it with an LLM until they become more comfortable. I was hoping to convey that this decision exists on a spectrum, rather than a black-and-white canvas. Commented Jun 29 at 17:30
  • 1
    @GregBurghardt, if novice developers never read code and spend all their time having AI summarise it in natural language, then I struggle to understand how they will ever develop the skills necessary to read the code! One of the key competencies of a developer is learning to be slavishly precise with details (since the computer will perform no further intelligent interpretation of its instructions) - in my experience, the absence of this ability is one of the characteristics of those who want to code but cannot! So I agree with you that code should not be tailored to complete novices.
    – Steve
    Commented 2 days ago
  • 1
    "Do each of the developers working on this codebase have access to the same LLM?" Tools change but code lives for years. Even if every dev has access to the same LLM right now will that hold four years from now on? Commented 23 hours ago
5

Comments should not explain what the code does. They should explain why the code exists. A.I. will never explain what the original coder's intent was. We don’t need A.I. for this. We need coders who know what comments are for. Tell me why you wrote this. Tell me why I should ever call it. How it works is really not the business of comments.

I should be able to refactor this code so that it uses a bit mask to do the same thing without touching a comment. If refactoring breaks the comments then those are bad comments.
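As a sketch using the question's own example (the queue rationale is invented for illustration), a "why" comment survives exactly that refactor untouched:

/// <summary>
/// Parity check used by the batch splitter: even-numbered records
/// go to the secondary queue (see the splitter design notes).
/// </summary>
public bool IsEven(int number)
{
    // Refactored from number % 2 == 0 to a bit mask; the comment above
    // still holds because it never described the mechanics.
    return (number & 1) == 0;
}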

5
  • 1
    Documentation often explains what the code does. The intent is to allow clients to consume the API without reading the implementation. This is the very idea of an INTERFACE - isolation of implementation from clients. Of course, explaining the reasoning is also important when it is not obvious. How often do you read the source of the standard library of your preferred language? What about its documentation?
    – Basilevs
    Commented 2 days ago
  • 2
    @Basilevs bad documentation explains what the code does. Good documentation explains what you can expect. What the code does IS the implementation. What I can expect IS the interface. I don't need to know the implementation to consume the API. All I need from the implementation is to be faithful to the API. Commented 2 days ago
  • I agree that inline comments should not explain how the code works with one exception: if you are required to do something really odd (like instantiate an object that is never used) in order for things to work, there should be a comment so it's clear that it wasn't a mistake.
    – JimmyJames
    Commented yesterday
  • @JimmyJames yes every good rule has an exception. Arcane kludges could make some discussion of what is happening permissible. But in those cases also explaining WHY?!? becomes even more critical. Commented yesterday
    @candied_orange Maybe I misunderstand what you mean by 'why' and 'how'. I guess I generally don't want to see any comments in the body of a method unless there's something really interesting to explain. Comments like that tend to be useless redundancies or wrong. I generally delete them if I am refactoring. Perhaps ironically, gen-AI is contributing to more such comments because that's one way to tell GH Copilot to write code. But in that case they might be actually useful, because if the generated code is wrong, you won't otherwise know what the intention was.
    – JimmyJames
    Commented yesterday
3

There are already plenty of answers that address the core issue: whether an AI can generate a comment well enough without further supervision, and whether the "reader-side" generation strategy, rather than an "author-side" supervised variant of using AI to generate comments, is a good one.

However, there are also a few additional indirect limitations to where this approach can work based on outside requirements and economic factors.

For many projects applying an AI to the source code is simply not an option. Because:

  • It is or will be expensive (using the most powerful AI model); it is currently expensive even for the AI model providers, who just hope for a later return on investment, and it is questionable whether AI tailored to software generation will be able to finance itself with advertising the way search engines do
  • It cannot easily be self-hosted (simple models, yes; the really powerful ones are proprietary, along with some of the tooling)
  • Code might need to be editable offline: not every place has an internet connection, not every system is connected to the internet, and AI systems are typically web-based; even if only a temporary issue, it slows down the development flow if a comment/functionality description is not available when you need it
  • If the model is not hosted by the company, they might - e.g. for security or privacy reasons - not want to apply the AI to all their data, including their issue-tracking systems: both the actual code and the issues can be important strategic assets that the company wants to protect as best it can. External service providers, potentially even hosted in foreign countries, are not exactly trustworthy enough for many companies/projects, and tickets could also contain customer data that should not leave the company premises.
  • Many accessible AI models do not learn from your data in general; they just include it in a particular query evaluation. The ones that keep learning are more prone to being misled, more expensive, and would either mix your data with external data (if an external service) or be even more expensive to run

Perhaps AI will become massively cheaper to run and everyone will have it running locally on their machine. That would alleviate most of the external factors mentioned here, but currently that is not in sight. For example, this article from Business Insider stipulates that 1) tech companies plan on spending over $1 trillion on artificial intelligence, 2) the return on investment may take a long time and be disappointing, and 3) some experts said AI might not perform well enough to justify its exorbitant cost.

And on the practical side, AI surely might change the way we program, but either it gets so good that it can program on its own, or it needs human supervision - which then also holds for writing comments - and will remain a tool for writing code and comments by humans for humans.

3

The documentation of a function/method/class/interface should not describe what it does, but its contract: what it commits to do, and what it doesn't commit to do, not how it does it. You have the source code for that!

AI can tell you what the function does and how it does it. Only the people who designed it can tell you what they promise it will do (a sketch of such contract-style documentation follows after the list below).

Remember that:

  • The implementation may change. What works today in cases not in the contract may not work in the future.
  • Most functions are quite a bit more complex than checking whether an integer is even, and will most often call other functions, both in the same package and in other ones, as well as system functions. Some of those may in turn vary in what they do (in time, or on different platforms). Do you want a description that tells you "this function calls this if X and that if Y"? Or do you want it to tell you what the overall result is?
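A sketch of contract-style documentation in C# (the class, the figures, and the guarantees are invented for illustration): it states what callers may rely on, and explicitly what they may not, rather than how the body works.

public sealed class Order
{
    public decimal WeightKg { get; init; }
}

public static class Shipping
{
    /// <summary>
    /// Returns the shipping cost in the store's base currency.
    /// Contract: the result is non-negative and deterministic for the
    /// same order. Not promised: any particular rate table; callers
    /// must not hard-code the 4.99 base fee, which may change per release.
    /// </summary>
    public static decimal GetShippingCost(Order order)
    {
        return 4.99m + 0.50m * order.WeightKg; // today's rates, deliberately outside the contract
    }
}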
2

Maybe soon an AI will write your comments, and yet another AI will read the generated comments to digest them for someone else. I bet that if the generated comments are as fluffy as in your example, the latter will become increasingly popular.

Meanwhile, we have all known since Clean Code that comments are not supposed to waste our time on obvious explanations (here, 11 lines of blabla to explain 1 line of code). The focus should be on what's not so easy to grasp in the code, and in that case comments are an investment in the future with little overhead. At least you're sure that the vital minimum is there.

1

It is not a bad idea if your code is self-explanatory.

Comments are a nice tool to express concepts that have meaning at a higher level of abstraction than the source code. They are pointless if they just rephrase what the code already "says".

But... AI support is welcome when the coder still has to be trained, and the inferences that are obvious to the seasoned developer would cost her (the apprentice) real effort to make (at the price of disturbing her flow).

Even so, the apprentice can benefit from reading about things (in the comments) that... well, she does not fully understand yet, but one day she will.

A common practice is to limit the context (meaning, the subject matter) of what can or should be said in comments. Usually we talk about the software itself, not about the design, and not about the method by which we chose one design over another.

This practice is common when/where the same source code must be shared among different developers and other roles. In that case, the source is not a good place in which to discuss design methods.

When the source is handled by just one developer, things change, and comments also become a reasoning tool (even if an informal one).

That is not to say comments can only be informal; they can also be formal.

