91

Preface, TL;DR

This question is about the implied tradeoff between the speed of development and the quality of code.

I am looking for methodologies which can be followed in order to optimize development speed, even at the expense of code-quality and the accumulation of technical debt.

I am not looking for ways of communicating the importance of code quality to those in charge. Please assume everyone understands its importance, has chosen to optimize for development speed anyway, and is correct in that decision.

I have learned a lot about how to optimize for quality, but nothing about optimizing for speed of development, and am looking for references.

My personal experience has taught me that quick and clean go together. That is not what I see around me, which makes me believe I am missing something fundamental.


Code in real life isn't good

I keep encountering code during my work that is less than optimal. By that I don't mean simple improvements could be made, I mean:

  • Lack of encapsulation ("what" mixed with "how" on several levels)
  • Multitude of duplicated conditionals over different modules
  • Under-abstraction
  • Wrong naming
  • Lack of tests
  • Any other poison you can come up with

This is way beyond things that a short refactoring session can remedy.

I see this over and over across many software engineers, across several seniority levels, and across several (startup) companies.

When talking to colleagues outside the office about it, the vibe is "The code is always bad. You have to learn to deal with it" (which I am trying to do now). When talking gently with colleagues from the office, they are either not aware of the problems (mostly juniors or non software majors), or they excuse it by "we had to get it done fast".

I am beginning to believe that if so many seniors claim they optimized development speed at the expense of code quality, they have some secret method of getting it done, repeatedly, which I somehow miss. Otherwise, how did it get to be like this? I mean, this has to be done deliberately, right? No one wants to tie both their shoes together, though sometimes they have to. Even then, there has to be a correct and a wrong way to do it.

I believe this is code in real life. I read a lot about good code: for example, the Gang of Four's Design Patterns, Clean Code, lectures, and SOLID.

I observe that many consider these practices nice to have, but they are not followed in practice, especially under business pressure, or when people don't code out of passion and don't extend their knowledge or pick up patterns from more skilled programmers.

This tendency of code not being perfect will probably be amplified along my own career path, which leans toward algorithm development and away from pure software engineering.


What do I do today?

When I write code "from zero", it is very easy for me to implement the good-code principles, and I tend to be faster as I write good code.

I find my own code much easier to follow: I think less and type more, introduce fewer bugs, and can explain to myself and others what is going on, because it is organized. I feel this makes me faster, since most coding time goes into thinking and into writing tests, both of which good code minimizes and encapsulates.

When handling existing, good code, there is a slight learning curve, but then I can work with it quite easily and not worry that I might break things. I can be fairly sure I am on the road to being done, explain what's going on, and estimate how long it will take.

When handling existing bad code, I tend to:

  • Not know what I am going to break with my changes.
  • Not know how the code is structured, and have to read the whole thing before writing, which is virtually impossible due to wrong naming, no encapsulation, and no external documentation. Asking people generally results in "oh, yeah, there's also that case", if they are even still around.
  • Start to build encapsulations myself, just to know that when I'm done, I am really done, and that I didn't break anything. Also this allows me to not break my own changes elsewhere.
  • This is a really slow process, relative to the person who wrote the code, who knows all its pitfalls by heart.
  • This is much slower than the boss expects.
  • I usually can't give a time estimate for my work prior to going into such code. Many times, I only find out about the code quality after having spent some time in it.

When being asked to code "quick and dirty", I have no idea what to do in practice. I can code quick and clean from zero or slow and clean.

I want a tool in my arsenal: the ability to trade clean for quick, and to choose when to use it.


This topic seems clear to everyone, but no one actually talks about it.

Bosses and colleagues, mostly at startups, all say once in a while "quick and dirty" or "do it faster; we will repay the technical debt later", or "just get it done; deal with it later".

I never hear people talk technically or methodically about how, in practice, to favor quick and dirty over clean and slow. As I view it, quick and clean go together when "actually working code" is implied.

How can I get all three done together, in new code, and in existing code?

  1. Deliver faster than when writing clean code (which implies doing something dirty)
  2. Write working, tested, explainable code (sleep well, not worrying that production servers just broke because of me)
  3. Work regular hours

I understand how to write good code. I want to find out about methodologies of being faster at the expense of quality.

I am looking for methodology, rules of thumb, and book or lecture references.

  • 5
    You speak of "existing bad code", but bad code comes in different flavours, which seems to be the source of your confusion. There's a world of difference, for instance, between code that is not tested automatically, and well-tested code that would be refactored to an architecture that better fits current needs if the team had time to do so. Commented Apr 8, 2022 at 14:15
  • 14
    In the headline you are asking how to write quick and dirty code, but later you ask how to be "Writing working, tested, explainable code [Sleeping well not worrying production servers just broke because of me]"? So which one is it? Maybe the issue of your sleep needs to be addressed before everything else
    – Helena
    Commented Apr 10, 2022 at 9:19
  • 7
    The code in the Gang of Four's design patterns book is not “good code”, never claimed to be good code, and its authors have spent a good deal of time explaining that. It's simply code that works. That's only needed in languages that don't provide simpler ways of doing the same thing. Mindlessly following patterns does not create good code. It only creates recognizable code. Having a name for it doesn't make it good. Commented Apr 10, 2022 at 19:35
  • 3
    @gulzar in that case, the best way to write code quickly is to practice. Having written lots of code allows you to recognize what is needed for testing, abstraction, etc., and what is the actual meat of the code, which by then you also know where to put, without having to build all the scaffolding first. So, this comes with experience. Commented Apr 10, 2022 at 22:04
  • 2
    @candied_orange : What makes "good code", then? How do you know if your code is at the quality level the OP seems to achieve near-effortlessly, instead of racking yourself for hours, weeks, or months in analysis paralysis, quibbling about rules and theories on how to make it, continually re-kneading the code because you always seem to come up with contradictions in the rules in seemingly basic scenarios? How do you get a solid, trustworthy answer on what "good code" is, so you can know whether you're making it? How did the OP manage to learn this instead of spending years in rethink hell? Commented Apr 11, 2022 at 20:09

15 Answers

81

Use the 80:20 rule (Pareto principle) and a "TODO" notation.

Or, as the saying goes, "80% of the task is completed by 20% of the coding".

Roughly 80% of your code can probably be written cleanly and quickly. Do that. Say you need to add functionality, and a class or module already exists for it: just add it in the obvious place. Follow the existing naming conventions. Add basic unit tests. Build on what's already there.

The other ~20% of your code requires more thought. You aren't sure how to test some obscure edge case. You can't decide where the added functionality belongs: should it be in a subclass, use dependency injection, or some fancy Gang of Four pattern? As your mind wanders, you start to research a new tech buzzword and spend hours on Stack Overflow.

STOP.

  1. Just do something quick and dirty.
  2. Add a TODO comment.
  3. Come back to it in a couple of days.

It's amazing how a few days of background thought can clarify a vexing problem. Chances are, you have figured out a better, cleaner way to do it. If you haven't, that's when you engage the help of a fellow programmer, a mentor, or you bring it up at the standup, code review, etc.
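As a concrete sketch of the pattern (names and formats here are hypothetical, not from the answer): handle the 80% case cleanly, take the quick-and-dirty shortcut on the hard part, and leave a TODO breadcrumb to come back to.

```python
def parse_price(raw: str) -> float:
    """Parse a price string like "$1,234.56" into a float."""
    # The 80% case: obvious, clean, quickly written.
    cleaned = raw.strip().lstrip("$").replace(",", "")
    # TODO: this breaks on European formats like "1.234,56".
    # Quick and dirty for now; revisit in a couple of days, or raise it
    # at standup / code review if no cleaner approach has surfaced.
    return float(cleaned)
```

The TODO keeps the shortcut visible and searchable, instead of silently becoming permanent.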

P.S. If only 20% of your coding is "easy", then you or your code base have a problem...

  • 29
    The "TODO comment" can be a great accelerator if you use it not only for "obscure edge cases" but really any piece of functionality that you can get away with postponing because no one's pinging you on Slack asking about it right NOW
    – Alex R
    Commented Apr 9, 2022 at 7:22
  • 4
    I second the TODO notation. Other good options are FIXME for known issues (e.g. "FIXME this code breaks if this edge case occurs") and XXX for "dirty" code (e.g. "XXX this is inefficient because xyz, but it gets the job done"). I also sometimes use NOTE for warnings or explanations to future devs, though I've only sometimes seen syntax highlighters recognize this one. Leave breadcrumbs for resolving the technical debt you make.
    – Drake P
    Commented Apr 10, 2022 at 23:42
  • 10
    I haven't seen a project where the TODO comments are actually fixed later Commented Apr 11, 2022 at 8:37
  • 18
    This leads to 10+ year old TODOs scattered throughout the codebase.
    – xehpuk
    Commented Apr 11, 2022 at 9:43
  • 8
    @LauriHarpf: actually, the only TODO comments you find in an old codebase are the ones no one cares to fix. The ones which were fixed are usually deleted from the code, so you don't find them except in the VCS.
    – Doc Brown
    Commented Apr 11, 2022 at 15:08
50

Whether one works clean or dirty is more a question of developer attitude and ability, and the same holds for coding speed; it is rarely a deliberate decision.

Of course, there are devs who appear to work overly slowly because they tend to be overly clean. But I have never seen a dev in my life who was really quick because their code was dirty. I have met devs who believed they were quick, but whose dirty code came back to haunt them the first time a tester or user tried to work with their mess. These devs may have got the code out the door quickly, just to get an even quicker phone call from a person who stumbled over the first few errors and made it clear their work wasn't finished.

Hence the idea of

a tool in my arsenal, to be able to code quick at the expense of dirty, to be able to choose when to use it

is IMHO flawed. The sweet spot for being quick is not "dirty", the sweet spot is found by

  • being clean enough, but not excessively clean

  • investing enough time in proofreading and testing, because saving time by skipping these steps never works

  • knowing when to stop adding unnecessary abstractions

  • knowing which requirements have to be solved now, and not worrying about requirements which may or may not arise within the next ten years, which one actually cannot foresee.

  • Often, you can't even foresee requirements which may (or may not) arise next year, let alone in the next ten years.
    – Zakk
    Commented Apr 9, 2022 at 22:39
  • 2
    Really, the knowledge of which requirements are "May or may not arise within the next ten months" is a more likely ballpark for most developers (Especially junior or intermediate, and Agile tends to make this harder for everyone, given the flexibility to change future requirements) - but the important aspect is realizing that as you shorten your forward thinking of requirement views, your code may end up becoming dirtier as a result. Commented Apr 10, 2022 at 1:45
  • 7
    @AlexanderThe1st: your sentence "Agile tends to make this harder" sounds like a pretty huge misunderstanding of cause and effect. And my point above is about wasting time for "hypothetical requirements" which never arise, this was about losing coding speed, not about "clean vs. dirty". In fact, some devs tend to justify overengineered solutions by calling their code "extra clean", but missing that they did just waste time for things which cannot be foreseen.
    – Doc Brown
    Commented Apr 10, 2022 at 6:04
  • 2
    @AlexanderThe1st Agile just isn't the fastest way to do development when compared with doing the exact same thing in a waterfall way. The reason why agile is "faster" is because waterfall wastes a lot of time continuing to build stuff even after it's been realized that it was a bad plan. The problem arises when people don't give proper respect to this--when they expect the agile developer to deliver in one week the same amount of project as the waterfall developer delivers in one week, agile is just slower on that time scale. Commented Apr 15, 2022 at 22:57
  • 1
    Quick and dirty can be faster for the first implementation, but it almost invariably adds cost to any follow-up work on the same code, even if none of the problems are visible to users. Like copy/pasting and changing one copy instead of refactoring to handle both: for the first instances it's fast, but once you need to change something for all cases, it's no longer fast and you'll probably introduce more bugs. Commented Jun 23, 2022 at 14:29
31

I am looking for methodologies which can be followed in order to optimize speed, even at the expense of code-quality, and the accumulation of technical debt.

Sure, here's the list of all the stuff that's forbidden to do in our team. All of these will improve initial development speed at the cost of other factors:

  1. Don't test edge cases. Reduces the time spent on testing.

    Drawback: Increases the time spent on customer support, debugging and bugfixes. Also reduces customer satisfaction.

  2. Copy & Paste. You need a method that's similar to one that's already been written? Instead of generalizing the old method or factoring out stuff that you can re-use, just copy and paste the whole method and make your modifications. Don't bother to remove dead code.

    Drawback: More technical debt. Bugfixes and later modifications will need to be done multiple times.

  3. Mix your layers. UI layer, business logic, data access layer? Forget it, just put the message box where you need it at the moment.

    Drawback: Moving to a different UI framework or a different database backend will be impossible without a complete rewrite. Forget about being able to write unit tests.

  4. Don't think about good names. I spend a lot of time finding good clear names for methods, variables and classes. Just call them x, helper and var42 and spend that time coding instead of thinking.

    Drawback: People having to modify your code will hate you. With a vengeance.

  5. Don't write documentation or comments. It was hard to write, it should be hard to read.

    Drawback: Technical debt. Increases the time required to read and understand the code in the future.
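To make point 3 concrete, here is a deliberately bad, hypothetical sketch of what that shortcut looks like: the business rule and the presentation are tangled in one function, which is quick to write now and painful to change later.

```python
def apply_discount(order_total: float) -> float:
    # Mixed layers: the discount rule (business logic) and the customer
    # message (UI) live in one place. Fast today; tomorrow this cannot be
    # reused behind a web UI, an API, or a batch job, and every unit test
    # drags the print statement along with it.
    if order_total > 100:
        print("Congrats, you get 10% off!")  # UI message inside the logic layer
        return order_total * 0.9
    return order_total
```

The clean version would return the discounted total and leave the message to whatever layer actually talks to the user.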

  • 11
    @Gulzar: From personal experience in writing both production-quality and single-use throwaway code: Yes, they do make you faster - initially. As you correctly point out, dirty coding can come back to haunt you much sooner than you'd expect. And, obviously, the long-term cost usually outweighs the short-term gains by a large factor. In fact, the only legitimate use case I see for these techniques would be small, time-critical emergency hotfixes (system is down and customer loses $$$ per hour), but even then I'd try to replace it with a clean solution as soon as possible.
    – Heinzi
    Commented Apr 9, 2022 at 11:19
  • 8
    1. But don't attempt to obtain 100% code coverage either. You won't get it. There is a sweet spot for testing; write enough tests to demonstrate that the code works, and no more than that. Commented Apr 9, 2022 at 12:02
  • 6
    2. Copy/pasting code and modifying it is absolutely the right thing to do in many cases. DRY can be taken to absurd extremes that waste time, make the code more complex and difficult to read, and do nothing to improve the overall architecture. Commented Apr 9, 2022 at 12:07
  • 2
    4. It depends. If you're in a small local scope, x, count or var are perfectly good names for variables, and if you are writing your code correctly, you should be in a small local scope much of the time. Commented Apr 9, 2022 at 12:11
  • 5
    5. Your code should be easy enough to read so that you don't need comments most of the time. Comments should be reserved for explaining why, not how, and should be used primarily to point out relationships between classes and methods (i.e. architectural considerations). Commented Apr 9, 2022 at 12:13
11

There is no technical guidance, best practice or "secret sauce" to code "quick and dirty." There certainly is no tool available to ... I guess I'm not sure what a tool could do in this case.

For a moment, think back to code you have written with few time constraints. I've found I stumble through the code base haphazardly until I find a path. I frequently undo my work at this point, because I was doing little more than probing the code base and learning the execution paths.

It's like trying to navigate a maze the first time. You take a whole bunch of wrong turns before you see the bigger picture. Then you go back through the maze a second time taking purposeful turns. You make it through much more cleanly, having learned from your previous stumblings.

The same holds true for writing code. I don't write clean code. I write terrible, messy code that solves the problem. I refactor that code into clean code (or at least "clean" by my definition). I do several iterations of "get it working" followed by "clean it up" as I discover more nuances to the problem.

When writing "quick and dirty" code, I still have a period where I stumble through the code haphazardly. I just go through fewer "clean it up" phases, depending on time constraints. Quick and dirty code means you favor "get it working" over "clean it up." It does not mean you should ship defective code. That slows you down.

Ultimately "quick and dirty" means you ship working code as fast as possible without being fussy about design patterns, code organization or architecture. You solve the problem balancing the time constraints right now with the time required to code the perfect design.

  • The iteration method does make sense. However, going haphazardly through the code is, by definition, unmethodical. I can't go into a 3-day mission this way and be sure to be done on time.
    – Gulzar
    Commented Apr 8, 2022 at 14:56
  • 3
    @Gulzar: this remark of Greg was clearly about existing, unknown, bad code. And yes, you cannot "go into a 3 day mission in that situation and be sure to be done on time." - research work is never predictable.
    – Doc Brown
    Commented Apr 8, 2022 at 15:08
7

TL;DR: just read the second paragraph.

Quick and dirty coding is hopefully mostly applied to an existing code base. These days there are too many tools and techniques and methodologies for people to deliberately cut too many corners on new code.

That's not to say there aren't easy ways to still be quick writing new code. Unit tests are unnecessary: I know my code works, I just wrote it. Edge cases rarely happen, so don't worry too much about handling them until someone finds one. Error recovery isn't worth worrying about, because you can never make the error handling completely clean, so why bother trying? This works particularly well in an environment with developers and sustainers: I can get my code done quickly and let the support staff deal with the minor inconveniences of my mistakes and oversights. That way I can get on to developing the next feature.

I like Greg's comment about iterative design. I do that - most of us do. Quick and dirty just means you stop after the first iteration. Or maybe the second, if you're keen. :-) He's right, though, about defects - you can't knowingly ship those. Dirty should never mean broken, or even careless. It's about some intelligent risk-taking. Am I likely to find more defects by rereading or rewriting another section of this code, or by spending another day testing? Probably not - so ship it.

Development strategies like this are usually driven by management attitude. I don't need quality, I need you to ship something. Get the feature to the customer, we'll deal with issues when he finds them. If we can't get the feature done he'll go somewhere else to get it. Designers by and large don't want to ship garbage, or be known for shipping garbage, but pragmatism often wins.

Quick and dirty is almost inevitable in a large old code base. In my first job I worked on a code base that was started in about 1974, and I joined in 1985. By the time I stopped working on it in about 2015 it had reached about 3 million lines of source in a proprietary language, touched by literally thousands of designers. Want to bet how clean that was? Oh, and the language had no unit tests; everything was tested manually. Our compiler used to fail if you let a procedure reach 250 lines of code. We had to stop that because it became a nightmare if two people refactored it in parallel. The last time I checked, the largest procedure had passed 600 lines of code. One procedure! In a base like that there was no such thing as doing it clean. You got in, figured out the minimum you needed to get a fix done, and got out. Development of new features was done, as much as practical, by copying whatever bits you needed from the existing base, because you knew that stuff worked, without needing to understand it completely. The last people in here did it that way, must be good, let's do it that way. You definitely wanted to borrow the internal design patterns, because after 40 years they must be good!

This is not an intent. It's a fact of life. Hardly anyone gets to truly write from scratch all the time. Someone else starts something, and you get to build on it. Make new features or enhancements. Make it look nicer. And since the purpose of a company is to make money it is usually pragmatic to decide that you can take some risks with quality in order to deliver more, faster. I make measurable money selling your next feature. The cost of doing that business is that I have to hire some number of people to fix the bugs, and I take some backlash now and then when a bug causes a noticeable widespread issue. But you know what? Widespread bugs are the ones that are likely easy enough to detect in dev, or whatever level of verification you have, alpha trial if you bother, beta trial. So the ones that a customer sees are usually the ones that are only seen by one or two people at a time, and that's manageable. So just do it quick and don't worry too much about clean. We can clean it up in the field.

Now sooner or later some things will start to haunt you. My code base was built in 16 bit words, upgraded to 24 bits, and then to 32. Some of the fun things that the early designers were forced to do because you couldn't tell an integer from a pointer made for nightmares. The games we played with overlaid structures and shared memory would make you shiver.

Fortunately I had a boss with some brains, so I was able to convince him that I could have a full time job consulting for everyone (sustainers, developers, verification, product management, field support) and in my spare time I'd clean up the code base of some of the more dangerous practices. I was allowed to continue that for 15 years. I think if the boss hadn't let me do that we might not have made it the 15 years.

One of my first cleanups was to remap our memory space in a meaningful way, through a trivial change. My field support manager proclaimed after that release hit the field that the instances of data corruption were down 97% for one big distributor in the first 6 months of use. How's that for technical debt? My second biggest thing was removing about 200KLOC that supported obsolete hardware and features. That's technical debt! (Quick and dirty - I integrated with only minimal testing. Everyone agreed it was impractical for me to test that broad a change in a meaningful way. I made sure it loaded and I could do basic things and then merged and sent it to Regression testing. I tracked 5 defects overall, two caused by mistakes that were clearly mine, the other three attributable to some of our more challenging practices of coding by obfuscation. 1 defect per 40KLOC of untested change seems pretty good to me.)

(EDIT insertion: I have to give huge props to the original designers of this real-time system code. Think about it: A code base begun on a 16 bit architecture with maybe 32KB of memory on a proprietary byte-coded CPU and no OS in 1974 and still running with that same core software architecture 40 years later on 32 bits, with close to 1GB of memory, a commercial Wind River OS and the language cross compiled to standard VxWorks opcodes. The latest CPUs were probably 1000 times as fast as the original or more. The impressiveness of that fundamental core is incredible. Clearly the original development wasn't dirty!)

That's the biggest thing about doing things quick and dirty. Sooner or later you may have to pay the piper. Something is going to get so out of control that you actually need to do something about it. Sometimes that can be done by one person periodically identifying a critical area with too many defects, or can be done by a developer who decides they just can't make the next feature modification in that plate of spaghetti. But in many cases those occasional interruptions are cheaper than constantly trying to make it perfect in the first place.

  • That's really an interesting read, thanks! Still, I would really like to take the pragmatic approach, and don't see how from this. You contradict yourself in the 1st paragraph already: "I know it works... edge cases are not important". Let's leave that. How can you do ANY change in such large code with no order? Do you have to understand all of it? What if you can't?
    – Gulzar
    Commented Apr 9, 2022 at 9:44
  • @Gulzar "Do you have to understand all of it?" -- the OP clearly stated that that's not the case. See "You got in, figured out the minimum you needed to get a fix done, and got out." or "Development of new features was done, as much as practical, by copying whatever bits you needed from the existing base, because you knew that stuff worked, without needing to understand it completely."
    – ciamej
    Commented Apr 9, 2022 at 21:17
  • @Gulzar I think you have an obsession about avoiding all errors, while "quick and dirty" is all about making errors ;) maybe not on purpose, but surely by admitting the possibility that you are not sure what you are doing, but still going forward.
    – ciamej
    Commented Apr 9, 2022 at 21:20
  • 2
    @Gulzar If you want to learn to be quick and dirty don't look for books on that topic, just practice. Take some open source code that you've never seen before; think of some new feature you'd like to add; give yourself a constant amount of time, e.g. 1 hour; and then you go -- do all it takes to implement that feature in one hour. Even if it won't be perfect, and there is going to be a high chance of the software crashing don't mind it, just make it work most of the time. Then, you can take another 1 hour iteration to improve that stuff a bit...
    – ciamej
    Commented Apr 9, 2022 at 21:24
  • @Gulzar the two points you claim are contradictory are just two examples of how you avoid perfection. The first is the attitude that test cases are redundant because I know I write perfect code. The second is independent of that - that I don't need to worry too much about the edge cases because they rarely happen. Both assumptions save a lot of time when you just want to get the code done quickly. Savings can be huge with that. My new guy found a one line fix the other day then had to create a 40 line test case to verify old and new functionality. What a waste!
    – Sinc
    Commented Apr 10, 2022 at 21:35
4

In my experience, it isn't really possible to sacrifice quality for speed for new code going into production. Customers are weird about wanting the code to work. If you try to rush the initial coding, you just end up paying more on rework.

However, for code not going into production, you can do things like skip boundary cases, tests, and error handling. This is useful sometimes for things like learning how a certain library works.

The other quality trade off I see way too often is in maintaining existing code. You have ingrained dependencies on some unsupported technology or an architecture that no longer matches your needs. It would take a few weeks to remove that dependency, but you can hack a workaround in a day or two. That's the sort of choice people are usually talking about when they discuss taking on technical debt. It's more about architectural choices than specific coding techniques.

  • True, but your company may have a contract "deliver by end of August 2022, or pay a million dollar penalty". So your goal is delivering on that day, no matter what. And then you fix the bugs. So "skip boundary cases, tests, and error handling" makes total development cost larger, but may avoid that huge penalty.
    – gnasher729
    Commented Aug 12, 2022 at 10:32
4

In practice, it's called Agile development.

Your first stop is the Agile Manifesto (including the Twelve Principles of Agile Software Development).

From an engineering standpoint, this revolves around getting working (albeit minimally featured) software to the actual users/customers as quickly as possible. How exactly this is done depends somewhat on the concrete methodology you use, but think in terms of Minimum Viable Products, quick cycles (e.g. 1-2 week sprints), tools and processes optimized to be able to actually do it quickly (i.e., CI/CD, heavy test coverage etc.) and so on.

From a more social standpoint, the manifesto also has several salient points about how to work together with the different roles (customers, etc.) surrounding modern software development (i.e., avoid developing in an ivory tower) and about how to handle change (i.e., welcome it with open arms). Some of them might not sound so important, but my experience over the years shows that whenever there are problems in a project, and you've done your root-cause analysis, you can usually point at the principle in the manifesto that would have avoided the problem, had it been taken more seriously.

Most important of all, in my opinion and experience: have some form of test-driven development (be it TDD, BDD, or any other variant) for quick development. Yes, in the beginning it takes a bit of getting used to, and it may take a sprint or two to get the test infrastructure in place. But as soon as you have it, you are free to implement as dirty as you wish, since your tests will make sure that it's as clean as necessary, and you will then also be very comfortable refactoring later.
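A minimal sketch of that workflow, with hypothetical names and prices: the test is written first and acts as the safety net, so the first implementation underneath is free to start dirty and be refactored later without fear.

```python
# The test comes first and pins down the required behavior.
def test_shipping_cost():
    assert shipping_cost(2) == 5.0
    assert shipping_cost(12) == 12.0
    assert shipping_cost(2, express=True) == 15.0
    assert shipping_cost(12, express=True) == 25.0

# First, dirty cut: nested ifs and magic numbers. Ugly, but the green
# test means it can be restructured at any time with confidence.
def shipping_cost(weight_kg, express=False):
    if express:
        if weight_kg > 10:
            return 25.0
        return 15.0
    if weight_kg > 10:
        return 12.0
    return 5.0

test_shipping_cost()  # passes; now refactor at leisure
```

The dirty version ships today; the test suite is what makes "clean it up later" an actual option rather than a promise.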

Obviously, there are many other aspects to modern, quick software development - this area is much more extensive and complicated than it was 20 years ago! One final source I want to give you is the Twelve-Factor App: a list of best practices that also indirectly leads to being able to develop more quickly and safely than just winging it.

1
  • Agile doesn't stipulate that your code quality has to be compromised (dirty). You can do minimum viable product with quality code, and keep the code quality up by refactoring as required when you are adding or revising features.
    – rooby
    Commented Feb 23, 2023 at 23:50
2

If:

  1. Your boss thinks you are writing code "quick enough", and
  2. You are happy with the quality of your code

then don't worry about it, just keep doing what you're doing.

1
  • 1
When having to work with existing bad code, he doesn't think so, which is what led me to ask this. I am not familiar with a way of being faster while working regular hours.
    – Gulzar
    Commented Apr 8, 2022 at 14:09
2

Respectfully, your question might better be asked in two parts because before you can write code (part 2, writing) you must understand the existing context for it (part 1, reading/thinking).

As you point out, starting from scratch is easy. There is minimal context to ramp up on.

However, the opposite is true for an existing code base. You need to gain enough context to be productive. In practice, the lower the quality of this code base the longer this will usually take and the more likely that you will miss important considerations along the way. This effect is one of the main (economic) costs of low quality code.

Other answers address techniques for optimizing writing code speed (part 2) and there's no need to rehash them.

The challenge we all face is how to optimize for "ramping up" speed (part 1). This tends to be more individualistic and often comes down to a variation on how best to study. This varies (a lot) by person and often includes factors such as:

  • Allow sufficient time to minimize perceived time pressures
  • Avoid interruptions
  • Match scope of inquiry to (perceived) scope of concern (only learn what you need)
  • Get enough sleep & calories
  • Take appropriate breaks (varies by learning style)
  • Where they conflict, use techniques that you know work for you over "best practices"

Good luck!

2

I was going to answer points one by one, until I realized one major issue kept coming up. (I've also left uncovered e.g. ETAs. I can't help with that; no-one can.)

Main point:

Every coder has a default writing style. They can deviate from it in the moment with effort, and if that happens often enough, their default changes. Nobody starts out with encapsulation, SOLID, etc. in their style; you learned those skills, and that's how they became your default. But you need to unlearn some of it, including when you refactor. For example, when you meet unfamiliar existing code and start adding encapsulation: don't do that. Make the minimal changes needed to understand what you need for your current task - or, at a push, to clarify, though even that isn't the team's current priority - not to make the final result the cleanest comprehensible option. If a line made you think "what does this do?... Oh, that's what", you can change it (if the pause was long enough).

Oh, one more thing:

When handling existing bad code, I tend to:

  • Not know what I am going to break with my changes

If you don't understand the code well enough to know a change is safe, you're in good company. Test, test, test. No, I don't mean write new unit tests: that's slow. Either existing tests will cover it, or your team doesn't rely much on them. What I mean is run some code, then see if anything goes wrong; and when (not if) it does, make whatever changes you realize are necessary. The fact you'll feel you fixed it when you're done is the best unwritten counterpart to a unit test a quick, dirty coder is going to get.

1

Starting with:

Not know what I am going to break with my changes.

Ideally you'd want a very high level of confidence about what you're about to break; to favor "quick and dirty", a lower, but still reasonable, level of confidence may be acceptable. Instead of reading every line of code until you fully understand all of it, you could try:

Trust your toolchain : IDE / editor.

Just about any modern text editor is going to have "find all" instead of just "find", and many of the ones meant for code will have something like "find all within project", or "find all within workspace". Worst case scenario, you fall back on something like grep in the source directory. If the list of occurrences of the name is small, you can start by focusing on just the code around those uses of the name, making notes as you go regarding program flow around it.

Some IDEs can do additional validation. They might automatically find the bits they think you need to refactor, and let you approve/reject each one. If you're targeting an embedded chip, for example, they might include code generation tools for peripheral configuration, and validate the configuration for you, telling you if you tried to use an impossible configuration.

Trust your toolchain : build after each minor change to check for errors

The language may have constructs which can help. Depending on your code base, an incremental build with a minor change may be (much) faster than trying to read over the code yourself. In c++, for example, for the part of the code you need to modify, you can start to find out what will break by changing any of the following (one at a time):

  • make one previously non-const method or reference/pointer parameter const, so you can see which other calls expected a non-const reference/pointer; you can then examine which ones actually modify the object (versus nobody having considered const correctness) and which ones don't need to affect your reasoning about its state, limiting how much you need to know about what you're changing
  • make a previously public (or protected) data member private (tells you something about what might break if you change the data type, or want to enforce preconditions or postconditions?)
  • change a pointer parameter to a reference parameter (tells you something about what might break if nullptr isn't allowed)
  • make an integral type with macro defined values into an enum class (tells you : who might break if I add or remove possible values; most compilers will also give you better warnings about unhandled cases in switch statements)
  • where other constraints are meant to apply to a type (for example as indicated via comments on function parameters or return values), make a new type that enforces those constraints. For example, if a floating point input or output is not allowed to be NaN or infinity, you could make a thin wrapper class that throws an exception (or uses an alternate default value) if constructed with one of those values. Initially the converting constructor and conversion operator can be explicit so you can see where in the code those assumptions were being made, but if you're satisfied that they're ok, or just want to try it and see if it breaks, you can make them implicit instead.
  • enforce your assumptions via compile-time assertions when possible. Recall that even in Test-Driven Development, "failing to compile" counts as a failing unit test, and static assertions can be the simplest and fastest tests to add. For example, //TODO: fix this for platforms where float is too large to be copied into 32 bits becomes static_assert(sizeof(float)*CHAR_BIT <= 32u, "ASSUMPTION VIOLATION: float cannot be stored in 32 bits"); If placed near the relevant code, it becomes easier to reason about that code, too ("what happens if float is too large?" becomes "we know the float is small enough if this compiles").

Trust your toolchain : tactically enable/disable warnings.

Many compilers have different warning levels that can hint at code that is likely either already broken, or likely to break in the face of any changes; in new code, false positives are less likely, so all warnings can be enabled all the time, but in legacy code, this can bury useful warnings about new code in a mountain of warnings on old code that aren't likely to be a problem (or may not be fixable).

The last time I built anything for Windows CE, for example, the number of warnings Microsoft's compiler issued for their own system headers dwarfed the warnings in even the legacy code, let alone my changes. Fortunately, compilers also tend to have some (typically non-portable) means to conditionally disable warnings, maybe one individual type of warning, or all warnings temporarily around included headers (like system headers), or even completely disabled except fully enabled around your new code.

Do be aware that warnings are only emitted for the files being compiled, so you may need a clean build to see relevant warnings.

Trust your toolchain : sanitizers / static analysis.

Some of these can pretty dramatically increase build times (I've seen this for example with gcc and libasan), so you might consider setting up both a "sanitized debug" build target (uses these tools) and a "rapid debug" build target (does not). This lets you switch between a slower build that checks more possible sources of bugs for you, and one that won't make you wait so long when your changes are minimal.

Trust history: Review change logs for mentions of the thing you are trying to modify

If you're lucky, the legacy code will already be under version control. If you're luckier, the commit messages there will provide useful information about what had to change and why--similar to engineering change control--rather than just being snapshots at a particular time.

My personal experience has been that even legacy code outside of version control usually at least has some sort of change log; a text file in the project, comments at the top of the main file, etc. If you can find a previous change that involved the same part of the program, you've got a head start on where to look. If the change log is only a summary and doesn't describe which parts had to change, you may be able to find an older version (developers of programs not kept in version control sometimes retain copies of older source versions) and use a diff tool to find the changes.


Next, onto:

Not know how code is structured and have to read the whole thing prior to writing

Once again, instead of waiting for full understanding of everything, for "quick and dirty" we're going to try to reach a lower, but acceptable, level of confidence. Specifically, instead of reading the entire code base, we're going to target what to read first by making educated guesses.

Educated guesses: Familiarize yourself with domain jargon and abbreviations

The names that are unclear to you may become obvious if you know the appropriate jargon.

"FFT" for example is just three letters, but in signal processing code, it most likely means "Fast Fourier Transform", and a function called FFT(...) is therefore probably what you might have named "calculate_fast_fourier_transform(...)". It is even more likely to be an FFT if it is being used near names like LPF (low-pass filter), FIR (finite impulse response), etc. You might have already known those ones, but the less familiar you are with a problem domain, the less likely you are to be familiar with the relevant jargon.

In my experience, employers will be happy that you want to learn more about their industry and may at your request provide some resources for you to do so; perhaps trade magazines, site visits, textbooks, introductory classes, etc.

Educated guesses: Familiarize yourself with common idioms in your language

Languages tend to have idioms besides the usual OOP runtime polymorphism and design patterns.

In c++, for example, compile-time polymorphism might look weird if you aren't used to templates. Functional languages will look different, too.

This extends to short names, too. In c++, for example, if you see "itr", that's most likely an object that satisfies one of the iterator categories or concepts, whereas in another language it might be an object that implements an IEnumerator interface.

Educated guesses: Identify (possibly undocumented) patterns in the names

Most programmers develop habits in naming, even if it is not found in a style guide and/or is not always followed. Do all caps mean something different than all lower case? Does camel case mean something different than lower case with underscores? Is there a common prefix or postfix for globals, members, type names? Do some names resemble names from something else (another programming language's standard functions, mathematical set theory, etc)?

Even names that aren't immediately obvious may have patterns that let you make educated guesses.

Leverage your toolchain: navigation and readability

This one isn't about guessing, and you might already know these, but make sure you are making use of features your editor provides. Modern editors usually have things to make reading easier, via context menus or hotkeys, like:

  • context-sensitive "go to implementation", "go to declaration", etc.
  • backward and forward navigation to/from prior cursor positions or the context-sensitive jumps
  • splitting large files into two or more views with independent seek positions
  • iterative macro expansion

Then, we come to:

Start to build encapsulations myself

Others have already suggested various forms of "write less". So, how and why can we reach an acceptable, but less than "clean", level of encapsulation?

Familiarize yourself with the reasoning behind the rules

I don't mean a generic "it will save time in the long run". Actual, specific kinds of changes it makes faster to implement. Actual, specific types of defects that are prevented.

For example, consider the answers about how to iterate an enum in c++:

Some of the answers suggest putting the enum values into a container (vector, initializer_list, set) and iterating the container. Some suggest making a custom iterator type. Some suggest, for enums with contiguous values, using sentinel values in the enum itself and incrementing the underlying value.

The "sentinel" version is simplest and fastest to write, but less robust against change in some specific ways:

  • It is easier to miss updating a loop somewhere when adding a new value, making the iteration miss a valid value.
  • If you remove a value, leaving a "hole" in the interval, the iteration will attempt to use an invalid value.
  • If you write the underlying value to a file somewhere for serialization, an added value could replace a sentinel value, meaning the new code may interpret old files incorrectly.

The other methods reduce the potential future maintenance burden for some or all of those issues, but take more time today to implement and/or are more complicated and/or have performance impacts and/or make any code that wants to iterate dependent on the container type (suppose you started with std::set before realizing boost::flat_set would have been better).

Generally the clean approach is more robust against unexpected change, and thus makes a reasonable default choice, but how likely is it that the values will change for the specific enum, in the specific use case, of a specific program?

Understanding the specific problems the rules are meant to prevent is crucial to deciding when you can break them with little risk of suffering significant consequences.

Make use of business logic for simplifying assumptions

If you are picking unit systems to display road vehicle speed, for example, you can limit yourself to km/hr and mi/hr, using a simple flag to track which one should be displayed (you could use something like boost::units to do the actual calculations).

A more robust design could allow for multiple unit systems, perhaps letting the user make up any unit system they wanted, having an adapter pattern for changing them between internal representations and display, some way to uniquely identify each unit system that is encapsulated to avoid leaking its internal representation, etc.

But, if the business case is to display road vehicle travel speed, that additional abstraction and complexity doesn't provide any additional value; no application other than silly jokes would want a road vehicle speed readout in decameters per day or millimeters per microsecond. So, we make the simplifying assumption: exactly two unit systems, which will not change. A boolean flag would be adequate, though if you really wanted, a two-value enumeration with more descriptive names (but no container or iterator, please) would be appropriate.

That case is familiar to those of us who drive, but there are other opportunities in other business cases that may not be immediately obvious to the casual observer. So, get to know the business use of your software.

Beyond asking your employer generally for resources to gain industry knowledge, technical sales people -- specifically, sales people who have spent time doing the actual work in the industry ("gotten their hands dirty"), not just a salesperson with "technical" in their title -- can be a good option. In addition to familiarity with customer needs, they (usually) have a different perspective from your developer colleagues, and are less likely to have any stake in the politics of code development.

Leverage cost to identify what can (currently) be left out

Keeping with the speed unit example, if you'd been building some industrial process control device instead of an automobile, you might start with in/min (USA) and cm/min (others) initially instead of mi/hr and km/hr. In the case of industrial process controls, it is more likely that you'd encounter a customer who wants another option, say cm/s, or m/min, but do they want it badly enough to wait extra time if you already had the other two working?

In my experience, people--including customers, clients, managers, salespeople, and even other software engineers--will ask for anything and everything they think is free (to them). So, make sure they know it isn't free for them. On teams with formalized schedules--true for both waterfall and Agile--the time cost comes out as a result of the schedule, but in less formal circumstances you might have to just point out the time (and any other) costs yourself. If the perceived business value doesn't exceed that cost, the customer / manager / salesperson may cut the unnecessary work for you.

Cut time on application code before you cut time on libraries

Since we're trying to choose specific tradeoffs that provide more benefits than costs, it is good to remember that the benefits and costs for code in a shared library are not necessarily the same as the benefits and costs for code in an application.

Firstly, the application itself is a kind of "encapsulation", in that changes to objects and interfaces wholly contained within just that application (as opposed to output data, shared static libraries, or library APIs) shouldn't break other applications, limiting the costs of breaking things.

Secondly, time spent on code wholly contained in just an application can only benefit that application, limiting the benefit of not breaking things, whereas code in a library has the chance to benefit multiple applications.

As an example, the non-virtual interface (NVI) pattern has some nice benefits, and for a reusable library, I'd be inclined to use it myself. But for application code, changing the interface functions later from virtual to non-virtual ones that call virtual implementations may not be as painful a change as it would be in a reusable library, because it probably doesn't break as many things, so one of the benefits of NVI is reduced. For an unambiguous interface that has no "steps" to perform, and for which you don't even expect to need debug / logging hooks, another expected benefit of NVI is less. So, the cost/benefit comparison of NVI vs a simpler pure abstract implementation may come out differently for a library than for application code, depending on the specific class involved.

In some cases, composition may be preferable to inheritance, even in a language like c++ that allows multiple, virtual, and public vs private inheritance. And, again, if you're writing re-usable library code, it may well make sense; maybe even template your class on the composed type so you can substitute in different but similar types (I appreciate, for example, that boost::flat_map is designed that way). It may also make sense to stick with composition if the existing code is already written following data-oriented design.

But if you're solving one specific problem in one specific part of one specific application by wrapping an existing class in a new type that enforces additional constraints, private inheritance may get it done faster than composition. "using private_base_class::function_name;" on the member functions that are not concerned with the new constraints is less code to write than writing boilerplate wrappers for a dozen functions and their overloads; less boilerplate to scan during code reviews; and fewer lines of insignificant code to distract new developers looking for the bits that are interesting. Maybe you still pick composition, but do so having considered the tradeoffs in that specific situation.

Consider compile-time abstractions instead of run-time abstractions

Particularly for application code, maybe also for static library code, but maybe not for shared library code, unless it is wholly contained within the library.

In c++ for example, instead of writing your own fancy gang-of-four iterator to abstract a container implementation, consider whether you can't just use a type alias to refer to the iterator of the container implementation you're using today. If the container does change in the future, the alias definition can change to match, letting the compiler do the work of changing it for you when you rebuild. You'll have more to recompile, and other objects will still know about that implementation detail, but you won't have to edit much source code yourself, and the existing iterators are already written, tested, and compatible with the already written and tested standard library algorithms; that's a lot of code you don't have to write.

Not everything needs an interface

Not everything needs to be abstracted behind a reference or pointer to an interface.

For example, if some function in your code takes screen pixel coordinates, it's ok for it to be public knowledge that those are integers, and it is ok to use them as a value type. If you work with screen pixel positions / counts, having two integers is a natural part of that interface. You can put them in a std::pair<std::uint_least16_t, std::uint_least16_t>, or your own simple structure, but you don't need to hide the fact that there are integers in there, nor which kind of integer they are. You don't necessarily need to take 32 bits of numeric data and hide it behind a potentially 64-bit pointer that needs a vtable lookup to get the actual data you already know you need.

Can I promise that there will never be a screen more than 2^16-1 pixels wide? Of course not; we've seen how that kind of hubris played out with IPv4, 32-bit timestamps, etc. But we already scale pixel buffers that are different sizes than the display resolution, so the specific case of screen sizes increasing is not likely to be as significant a problem as those examples.

This can also be true for objects more complicated than "plain data". Writing, testing, and maintaining interfaces for every class, whether or not it has multiple implementations, is overkill for application code.

Consider simplicity over abstraction if they would be at odds

I've seen non-robotic (that is, non-autonomous) industrial machinery that runs on what was, printed out, less than half a page of BASIC (including comments); basically a simple state machine built on gotos. It was not robust. Initially, for example, the software de-bouncing was very limited because it relied on filtering in the sensor hardware; after the obsolete original sensor was replaced with one having a noisier signal, the state machine had to be changed to add more robust bounce transitions.

But with just half a page needed to understand the entire program, that program was easier to maintain than some more robust programs that were larger and more complicated. I wasn't there when that program was first written, but at half a page I'd bet it was pretty darn quick to write, too.

Some larger, complicated programs aren't ever going to fit half a page, and benefit from encapsulation and abstraction. But, it would have been a mistake to add a bunch of abstraction to that half page program to make it more robust, because it was already so simple that it would have been a net drain to do so.

Don't fall victim to "Not Invented Here" syndrome

Is there a tested, reputable, appropriately-licensed library that implements most of the functionality you want, only not quite the way you wanted it?

Maybe they use a data structure, algorithm, or API that isn't what you prefer, but unless it is unacceptable for some reason, it may be faster to use it than to write, test, and maintain your own.

You don't have to fix everything right now

Others have already noted that there's room to improve encapsulation in existing code without cleaning up everything, and a small improvement may be enough for the change you are trying to make for the next release. You can fix more on the release after that, if it ever even happens; as the Agile advocates have noted, priorities often change, and by the time a full fix would be prioritized, you might be working somewhere else, or the application may be rewritten from the ground up, or the company may no longer be in business. Until then, consider small improvements:

  • Maybe add const qualifiers around just the thing you're changing if it didn't have them, instead of adding const-correctness to the entire interface.
  • Maybe make the thing you're changing private if it used to be public, instead of making all data members private by default in all objects.
  • Maybe add an RAII wrapper around one resource whose lifetime is changing, if it used to be manually managed, instead of adding RAII to every resource right off the bat.
  • Maybe inject differing concrete behavior with a functor instead of making a whole new abstract interface class with different implementations; existing code can keep taking references to the existing concrete class without caring which functor it was given on construction.
  • Instead of changing lots of tightly-coupled objects, maybe write one adapter/facade/decorator thin shim layer to decouple them from new/changed code, so the existing code can be left as it is (for now).

Finally, we come to:

Writing ... tested ... code

Others have already suggested various forms of "test less"; after all, the quickest way to test is not to. How, and why, can we find an amount of testing that is acceptable, but less than that of "clean" code?

Test based on risk analysis and cost/benefit tradeoff

Tests have a cost--time to write, time to validate, time to perform, time to maintain--and hopefully a benefit in the increased confidence that the code is correct (even in the face of potential changes).

Consider, for example, the following c++ function:

constexpr std::uint64_t wide_sum(std::uint32_t lhs, std::uint32_t rhs) noexcept
{
   return static_cast<std::uint64_t>(lhs) + static_cast<std::uint64_t>(rhs);
}

Now, in c++, if types uint64_t and uint32_t exist (via #include <cstdint>), they are exactly 64 and exactly 32 bits, respectively. The mathematical interval of [0,2^64-1] can always contain the sum of any two values both in the interval [0, 2^32-1]. Both 32 bit values are widened into 64 bits before the summation via static_cast, so the summation is performed in 64 bits and returned as 64 bits, so there is no chance for it to overflow. We trust the built-in operator+ as much as we trust the rest of the compiler. We already have about as high a level of confidence as we can get; if this function compiles at all (maybe there's a typo, maybe the target platform doesn't support those fixed-width integers, maybe we forgot to include cstdint, etc), it's going to be right (barring compiler bugs). Spending any additional time to write a test for it (as opposed to just using "compilation success" as the test) is not going to provide any additional value.

More generally, what and how thoroughly to test comes down to risk assessment. Except in applications with significant hazards or regulatory requirements, you probably don't need to spend time on a risk matrix or exhaustively following a standard for risk assessment, but it's usually not bad to think about, before deciding how thoroughly to test:

  • How bad is it if this code is wrong? (is it likely to cause physical harm? will it cause service infrastructure to fail? is it likely to result in a lawsuit? is it likely to lose sales? is it likely to only be somewhat embarrassing?)
  • How likely is it that the code is wrong (is this a trivial function I can think up an informal proof about, or, is it using an algorithm that is not as well understood, does it have high cyclomatic complexity, does it involve implementation-defined behavior, etc?)
  • What else is done to mitigate possible errors (is there a redundant calculation performed another way, with the results serving as a run-time check on each other? Is the user allowed to override the result themselves if they need? Does the system as a whole put itself into some "known, acceptably safe" state when it detects errors like memory corruption, access violations, etc?)
  • How likely is it that this code will change (did we incur some technical debt with an incomplete implementation we know we'll need to rewrite later? did we make assumptions in which we aren't very confident? is our business case well-understood and our industry stable, or is this a rapidly changing industry, or one still figuring out best practices, etc?)

Some situations may justify having significantly more test code than product code, but don't start out assuming that every situation does; think through whether less would be appropriate.

Find obvious problems quickly : don't be afraid to just try it (safely, in a mock environment)

Don't push questionable code to production, but that doesn't mean you can't safely test questionable code in something closely resembling its environment. Maybe you've got a simulator or an emulator, or can mock up your own test hardware. For example, a program that controls the motion of a plasma cutting torch might be tested without any torch or plasma by putting a marker or pen where the plasma torch would normally be clamped, with paper taped to the work surface so you can see the path it has drawn.

Sure, one quick "does it even look close to right" test doesn't give you much confidence that the code isn't wrong, but if it makes an obvious error quickly, you might have spent five minutes finding that you have a problem in the position feedback loop instead of spending hours or days trying to exhaustively figure out what to test and how. In particular this can be useful if you're making one change to otherwise working software and can compare the output of both versions.

Depending on your risk assessment, you may find a way to adequately test your changes on emulators / mock hardware instead of implementing a whole testing suite in a codebase that didn't previously have one.

2
  • Are you a lecturer or an AI? Commented Jun 24, 2022 at 3:46
  • @jumping_monkey Neither; it seemed Gulzar's frustration with some of the other answers was that those answers lacked specific actionable techniques or methods, which, as those authors noted, is difficult to provide because it is situational. I thought some examples of actions, as well as how to think about tradeoffs in different situations, might be helpful. Commented Jun 24, 2022 at 12:57
1

The "Clean Code" motto, repeated to the point of ridicule, is that:

the only way to go fast, is to go well

Philosophically, "writing clean code" and "rushing to ship something" are polar opposites.

Some options to compromise on each axis, while keeping options open:

rushing to ship something

  • Make it clear that expectations are not realistic

You're going to produce a sub-par product because you're asked to.

Everyone needs to share part of the blame here. (You assume that the conversation has already been had, but in so many cases only one side really understands what it means.)

  • Find ways to ship less

Reduce scope to the bare minimum. Use the acronym "MVP" as much as possible (it subconsciously reminds people of "IPO" in the start-up world, which puts some of them in a better mood ;) )

  • Find ways to ship incrementally

A large part of your "soon-to-be-dirty" code is actually not needed, because the business requirements are not clear. If you can implement 10% of the feature in a quick and dirty way, and make it obvious enough that the feature was misunderstood or not needed, you've spared both yourself and your codebase.

  • Identify the time when you won't be "rushing" any more.

If this is never going to happen, then don't delude yourself too much: you're only ever going to write bad code, and you'll be slow because of it.

"We're rushing because we have a tradeshow next week"

...can be redeemed in two weeks;

"We're rushing because our normal process is to cram a large roadmap on a small team with short milestones"

...is called a "death march" for a reason.

writing "less clean" code

Some strategies:

  • write "knowingly dirty" code with comments

(Suggested by other answers.) Especially document any potential "surprise". It's useful to follow the // TODO(xxx) convention, where xxx is your name, acronym, etc., so that coworkers know to ping you when they hit this code later.

// FIXME(bob) This assumes that function X has been called before. This might
// fail if the user has done Y, but this should not happen in production. 
  • delay the cleanup

(Suggested by other answers.) However, I would add that it's critical to set a date for the cleanup, and maybe add it in the code to automate listing your technical debt:

// TODO(bob, before: 2020-06) There are no tests at all for the edges cases here
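A minimal sketch of "automate listing your technical debt": scan source text for dated markers in the `TODO(name, before: YYYY-MM)` format above and flag the ones past their deadline. The function name and everything beyond the comment format itself are assumptions for illustration.

```python
import re
from datetime import date

# Matches the dated convention above, e.g. TODO(bob, before: 2020-06)
TODO_RE = re.compile(r"TODO\((\w+), before: (\d{4})-(\d{2})\)")

def expired_todos(text, today=None):
    """Return (author, deadline) pairs for dated TODOs past their deadline."""
    today = today or date.today()
    expired = []
    for author, year, month in TODO_RE.findall(text):
        deadline = date(int(year), int(month), 1)
        if deadline <= today:
            expired.append((author, deadline))
    return expired

source = "// TODO(bob, before: 2020-06) There are no tests at all here\n"
print(expired_todos(source, today=date(2022, 1, 1)))
# → [('bob', datetime.date(2020, 6, 1))]
```

Wire something like this into CI or a cron job and the "we'll clean it up later" promise at least produces a visible, nagging list instead of silently rotting.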
  • tolerate some duplication

I have yet to find a situation where not extracting a function as soon as you notice duplication really saved a substantial amount of time.

However, when you notice duplication in other code, it might again make sense to document it next to the code (if only to schedule fixing it):

// DUPE(bob) This is almost the same code as in `SomeOtherModule`
  • aliases and synonyms

Sometimes code is hard to understand simply because the naming is poor. This will be controversial, but if your language or IDE does not support renaming functions and classes, nothing prevents you from creating aliases purely for naming reasons:

// Original code
function xty(z) {
 ... 100 lines of undecipherable code
}

// Your code
function mapStandardUnitToImperialUnits(meters: number) : number {
  return xty(meters)
}

  • If you can devote just a bit more time to cleaning up, "Working Effectively with Legacy Code" has some other techniques (isolating dirty modules, strangling code, etc.), but I'm afraid you're not in a situation to do this at the moment.

outro

In the long run, you know that you're going to be better off writing the "clean enough" code at a sustainable pace. Try your best at doing that.

Hoping this is just a rough patch.

Good luck!

0

I worked as a programmer for 30 years before I retired. Some things I noticed were:

  • there were programmers that produced good (not always perfect) code all the time, worked regular hours and were never a bottleneck.
  • there were programmers that were always working stupid hours on every project they were assigned, and were always a bottleneck.

So what is the difference? I put it down to up-front planning. The good guys challenged the design, fixed the underlying database design, got decisions on what features could be delayed, then wrote code. It is not difficult to write code against a correct design. In my experience bad code need never be written.

It was also obvious that many serious runtime bugs were often the result of bad underlying business design and/or bad code written to work around such bad design.

Your mileage may vary, but I don't ever recall knowingly writing "quick and dirty" code just to get something into production.

3
  • 2
And how did you handle the issues from trying to interface clean code with a codebase full of design and code issues from the bad coders? Or handle situations where challenging a bad design gets you labelled as a troublemaker and the implementation passed over to a bad coder who isn't going to challenge the architect's underlying flaws? Commented Aug 12, 2022 at 10:46
  • 1
Or what about when your managers ask why you aren't completing as many tickets as the coders who complete the original ticket with a quick, dirty, and broken implementation, get that committed and closed, and then spend loads of time in separate bugfix tickets fixing the issues they caused? Commented Aug 12, 2022 at 12:05
  • I would like to take something away from this answer, and that is probably better up-front planning. Could you elaborate on that? How do you approach a large code base with for example not enough encapsulation layers, and plan ahead in that? Usually what I tend to do is create those encapsulations as I go, just to understand what's going on and name it. Understanding it and coding it is almost the same work. But how CAN I plan ahead in such cases? Please elaborate
    – Gulzar
    Commented Aug 13, 2022 at 17:11
-1

This question is founded on a misunderstanding of code quality, or what "good" code is.

"Good" vs "bad" code is very much dependent on context. Code that does the job that it needs to and never needs to be modified is not bad code for your purposes, even if it's virtually unreadable. It becomes bad code only when you need to read it (whether to figure out what it does, or because you've been asked to change what it does).

This is entirely disconnected from things like how much abstraction you're using; I've seen as much (or perhaps even more) code that ended up being bad through the use of too much abstraction as too little.

The "dirty trick" used to code up features or changes faster is not a programming trick but one that's in fact used more often by managers: declare the code to be done without examining whether it really does what you want, perhaps even skipping a reasonably precise definition of what you want.

Edit: From the downvotes and comment, I suppose it wasn't clear from the above that there is only sometimes a "tradeoff between the speed of development and the quality of code." Sometimes the tradeoff goes the other direction, in fact: it's frequently the case that in the long run you develop most quickly by writing high-quality code. (We don't focus on code quality because it's an inherent good; we focus on code quality because it generally saves us time and money if one of our requirements is that things work properly.)

If you're looking for short run productivity, that depends on how you define "done." (As mentioned above, this may or may not include the feature being shown to fully work, the code being left in a state where it's best suited for further development demands, and so on.) Especially if you actually need the feature to work properly, writing bad code and then figuring out what's broken without making the code better may actually take longer than just writing good code in the first place.

As one final note: if you can fairly easily write what you need with less code and fewer abstractions (which will often be faster), and the code is not causing you problems in the long run, yet you consider this "not clean" code, then you have a definition of "clean" code that makes it sometimes less good than "less clean" code.

2
  • 2
    Regardless of the assumptions in the preface and background, this doesn't answer the fundamental question of "how to favor fast over clean in practice"
    – Gulzar
    Commented Apr 11, 2022 at 9:15
@Gulzar I've expanded my answer to try to make it more clear. The TL;DR is: writing "unclean" code is not only no guarantee of getting things done faster, it may in fact slow you down. If you want fast and dirty, just write as fast as you can and write dirty code. It may get you to your endpoint faster or may not, and the "dirty" code may or may not be a disadvantage then or later on.
    – cjs
    Commented Apr 12, 2022 at 9:26
-3

Two important principles that get left out of good programming practices are:

  • KISS, keep it simple, stupid.
  • YAGNI, you ain't gonna need it.

Write code that does what it needs to, and nothing else. Classes don't need to inherit from abstract interfaces. The open/closed principle is irrelevant if nothing will ever inherit from this class. You don't need a factory to create an instance if "new" will do.
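The "no factory if new will do" point, as a hypothetical sketch (the class and file name are made up):

```python
class CsvReport:
    """A concrete class, constructed directly — no interface, no factory."""
    def __init__(self, path):
        self.path = path

# The speculative alternative would be an abstract Report base class plus
# a ReportFactory "in case we ever need other report types" — machinery
# nobody has asked for yet. YAGNI says: until a second report type
# actually exists, just construct the one you have.
report = CsvReport("sales.csv")
assert report.path == "sales.csv"
```

If a second report type does show up later, introducing the abstraction then is usually a small, mechanical refactoring.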

Don't fret about the stuff that the users won't see. If it works, it's good. So what if the variable names turn out to be a bit odd in hindsight?

Go "old school". There is a design pattern for a state machine. There's also a switch statement inside a while loop.
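The "switch statement inside a while loop" state machine, as a minimal sketch (if/elif standing in for switch; the states and tokens are invented for illustration):

```python
def run(tokens):
    """Collect tokens seen between 'start' and 'stop' markers."""
    state = "IDLE"
    collected = []
    for tok in tokens:
        # The "switch": dispatch on the current state.
        if state == "IDLE":
            if tok == "start":
                state = "RUNNING"
        elif state == "RUNNING":
            if tok == "stop":
                state = "IDLE"
            else:
                collected.append(tok)
    return collected

print(run(["noise", "start", "a", "b", "stop", "c"]))
# → ['a', 'b']
```

No State pattern, no transition-table class hierarchy; for two or three states this is shorter, and a reader can see every transition in one screenful.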

Test by using the software. Leave out the test suite. If it works correctly on real data, it works.
