11

There is a dependency jar containing tons of service classes, mostly used to retrieve data from the database, and this jar is shared among several different microservices in one cluster.

There is a big problem with such an architecture: once a new method is added to any of the service classes in the dependency jar, it becomes next to impossible to change that method's signature later, since the method is already used by several microservices. This causes a big maintenance cost on those service classes.

Question: how can we reduce the maintenance cost of the shared service jar? Is there a better alternative for sharing the service dependency jar among the microservices?

  • 9
    The question is ok (and you are not the only one who had issues with such a design), but it is missing information on the organization maintaining this system. Are there individual teams for maintaining each service? Is there a separate team for the shared service? How independently are these teams organized? Or is it all in the hands of one small team, so changes to the system can be propagated easily even when they affect different services? I recommend adding this information by editing the question.
    – Doc Brown
    Commented Jul 1 at 11:23
  • 6
    Obvious solution: export the jar's API as another microservice. (I'm joking, but this is actually a serious criticism of microservice nonsense)
    – pjc50
    Commented Jul 1 at 14:38
  • 4
    This is not a problem specific to microservices. This is something you'll see with any library used by many people. It may be prudent to see how big libraries (like Spring and even Java itself) handle this problem. Commented Jul 1 at 21:06
  • 1
    @pjc50: I believe that is indeed what microservice proponents do (splitting off a new service) - so not everyone thinks that's a joke...
    – sleske
    Commented Jul 2 at 8:15
  • 1
    Without knowing some more specifics, it's hard to speculate on particular solutions. Changing method signatures in a Jar is going to cause this type of situation whether you use microservices or not; it's simply more frequent in microservices.
    – eques
    Commented Jul 3 at 11:23

7 Answers

29

Combine the dependency and the cluster of micro-services into one monolith.

Adopting microservices should be about enabling independent development and deployment of individual parts, reducing interdependencies of work, and solving a problem that can't be solved in any other reasonable way.

It sounds like you aren't getting these benefits, since work on any one service probably often requires changes to that dependency. And since presumably the database is shared between the different microservices, all services will have to have a relatively recent version of the dependency at all times, as an old version would assume a database structure that no longer exists and would not work.

So effectively you have to deploy the various services in near lockstep.

Combining them into a monolith would mean you deploy them exactly in lockstep, with a single command that deploys all of them together, and the possibility to easily run tests and static analysis in advance on each version of the monolith well before deploying it to production.

Changing a method signature at the method declaration and all method usages can then be done in a single commit, and often with an automatic refactoring command in an IDE.

(I don't know the OP's organization, so this answer may or may not be right for them, but I present it as an option to be considered and voted on)

  • This is what I was thinking as well; however, I don't know if combining these services constitutes a monolith. Don't confuse the word "micro" in microservices with "a tiny application." But there is a line where you cross into monolith territory. Commented Jul 1 at 16:21
  • 2
    @GregBurghardt That's true, it might just make a bigger microservice. I think microservice and monolith architectures are two ends of a spectrum. If this doesn't make a monolith then it would at least move the OP's org a little in that direction.
    – bdsl
    Commented Jul 1 at 16:35
  • 1
    It's funny, because my first thought was that this organization tried splitting a monolith into microservices, and bringing it back towards a monolith might be good until they can properly separate things. Commented Jul 1 at 17:04
  • 4
    Rather than a monolith, make it a monorepo - which is a monolith with good management instrumentation that prevents dependency hell, does actual incremental build and deployment, plus a bunch of other good stuff. Really good intro: monorepo.tools
    – Jonathan
    Commented Jul 2 at 8:23
  • 2
    @Jonathan It might be worth expanding your comment into a full answer.
    – bdsl
    Commented Jul 2 at 9:49
18

Exported methods should be part of a stable API, which means that you never change the signature of a method once it is published, unless you create a new major version of the library.

If you use the semantic versioning terminology, addition of new backward compatible functionality, such as adding a new method, increases the minor version number when you publish that version. Clients of your library will normally specify which minimum minor version of a fixed major version they will need, and that's enough. They will continue to work with newer versions, as long as you keep the promise given by the semantic versioning rules.

If you make breaking changes to your library (such as removing previously exported methods, changing signatures, or changing names), you need to create a new major version. Since old clients of your library will not know about this new version, you will need to provide and maintain the latest minor release of the old major version for as long as those clients have not been migrated to the new one. This is normal, and the maintenance overhead for the old version should be minimal (it does not get any functional enhancements; you would just patch critical bugs, creating patch releases).
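
To make the rules above concrete, here is a minimal Java sketch of a backward-compatible evolution; the class, methods, and version numbers are hypothetical and not taken from the question. The published signature stays frozen within the major version, a new overload carries the extension (a minor bump), and the old method is only deprecated and later removed in the next major version.

    // Hypothetical shared service class; names and versions are illustrative only.
    public class CustomerService {

        public record Customer(long id, String status) {}

        /**
         * Published in 1.0.0, so its signature stays frozen for the whole 1.x line.
         * Deprecated in a later 1.x release so it can be removed in 2.0.0 without
         * surprising clients.
         */
        @Deprecated
        public Customer findCustomer(long id) {
            return findCustomer(id, false);   // delegate to the newer overload
        }

        /** Added in 1.1.0: a backward-compatible addition, so only the minor version bumps. */
        public Customer findCustomer(long id, boolean includeArchived) {
            // A real implementation would query the database here.
            return new Customer(id, includeArchived ? "any" : "active");
        }
    }

Clients built against 1.0.0 keep compiling and running against any 1.x release unchanged; only a 2.0.0 release may remove the deprecated overload.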

  • 2
    I guess what I'm seeing is that Semver is dying.
    – Basilevs
    Commented Jul 1 at 14:08
  • 5
    "you never change the signature of a method once it is published" This may be somewhat of a nitpick but I'd rephrase this to not introducing breaking changes. Extending the signature in a way that does not break the old usage (e.g. upgrading an int to a double, or adding an optional param) don't break existing consumers and don't cause issues. Also, while an edge case, critical bugfixes are allowed to make breaking changes if and only if the endpoint was flawed on release to the point of being unworkable (at which point you're not breaking it, since it wasn't working in the first place).
    – Flater
    Commented Jul 1 at 23:54
  • 3
    @Basilevs "Clients tend to then do X" != "The guideline prescribes this as the correct approach". As per semver.org: "Deprecating existing functionality is a normal part of software development and is often required to make forward progress. [..] Before you completely remove the functionality in a new major release there should be at least one minor release that contains the deprecation so that users can smoothly transition to the new API." Service owners should not be bulled into staying with the same major version just because clients don't want to upgrade. [..]
    – Flater
    Commented Jul 2 at 20:41
  • 3
    @Basilevs: I get the feeling you misunderstand semver. Semver does the opposite of forcing the client's hand. A non-semver service which introduces a breaking change would forcefully overwrite the old way into the new one, forcing all clients into immediate action. A semver service would introduce this breaking change on an alternate path, providing the option to stick with the old way for as long as it would not be deprecated, allowing clients to upgrade to the new version at their own pace. [..]
    – Flater
    Commented Jul 2 at 20:55
  • 3
    @Basilevs [..] That is the polar opposite of "semver just forces all clients to take responsibility for integration of a breaking change" as you claim. It seems like you think that applying semver somehow increases the amount of breaking changes that get introduced (as per your mention of "a carefully managed upgrade plan that does not affect them in particular"). This is not the case. Semver does not increase the breakage, it tells you to contain the breakage (that was already going to happen), specifically to avoid having to force the client's hand on a schedule that's not their own.
    – Flater
    Commented Jul 2 at 20:58
8

There really is no easy way to resolve this. The microservices are coupled at a much deeper layer than they should be. Reducing the maintenance cost means properly decoupling these things so that microservices are coupled to each other via web API endpoints or message brokers. You and your organization will need to reorient your perspective on code reuse.

Any change to a monolith requires the whole monolith to be deployed. As a result, it becomes very easy to couple code between modules in a monolith. Even loosely coupled code changes require the whole thing to be packaged, retested, and deployed. Microservices change the economics of coupling, because coordinated deployments of multiple services cost more time and effort, and that is the beating heart of your question. While shared libraries spanning wide swaths of behavior can be viable in a monolith architecture, they make microservices even more complicated.

Instead of reusing code at a lower abstraction, for example sharing business classes or service classes, one entire microservice becomes the minimum unit of behavior that gets reused.

Get rid of the shared library.

This thing is a monolith disguised as a dependency.

The boundary of a microservice is defined by the business functions it serves, and the data for which the microservice should be considered the single source of truth. A microservice should wholly own a business process and the necessary data to perform that business process independently of other microservices. It's ok to copy data to other services, as long as those other services all agree on which one acts as the single source of truth for some particular data.

If one service needs to implement behavior owned by another service, then call the other service. Delegate that behavior to the service that owns that business process. Doing this allows you to remove the monolith service layer that leaves these services tangled and intertwined.
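
As a rough sketch of that delegation (the service names, URL, and endpoint below are invented for illustration and are not from the question): instead of linking an invoicing class out of the shared jar, the ordering service calls the billing service, which owns that business process, over HTTP.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hypothetical client inside the ordering service. Only the HTTP contract is
    // shared; the invoicing code itself lives in (and is deployed with) billing.
    public class InvoiceClient {

        private final HttpClient http = HttpClient.newHttpClient();
        private final String baseUrl;   // e.g. "http://billing-service:8080"

        public InvoiceClient(String baseUrl) {
            this.baseUrl = baseUrl;
        }

        /** Ask the billing service to create an invoice for an order. */
        public String createInvoice(String orderId) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/invoices"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"orderId\":\"" + orderId + "\"}"))
                    .build();
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();     // whatever representation the owning service returns
        }
    }

The trade-off is that the contract between services is now the HTTP API rather than Java method signatures, so it can be versioned and evolved independently of any shared jar.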

Of course, this is a tone-deaf answer if your organization can't or is unwilling to support this separation. If this common dependency cannot be eliminated, then I wouldn't consider the ecosystem to be a microservices architecture. You will be stuck with versioned APIs or with combining these microservices. bdsl mentions this in their answer, but you aren't necessarily making a "monolith microservice". Remember, the main characteristics of a microservice are that it is independently deployable and scalable. If you can combine these services and achieve that level of autonomy, you can still call it a microservice. It just might be a big one, that's all.

It's too bad that microservice has the word "micro" in it. Too often people equate microservices with tiny applications, and that is not the main characteristic. Smaller applications are a side effect of separating main lines of business functions into independently deployable and scalable units. The independently deployable and scalable unit becomes the unit of code reuse within your organization. Deeper levels of reuse often land you precisely where you are right now.

  • 3
    "so that microservices are coupled to each other via web API endpoints or message brokers." - I don't think it matters much whether the services are coupled by web APIs, message formats, SQL schemas, or Java APIs. The real work in decoupling this mess is redefining the service boundaries.
    – Bergi
    Commented Jul 2 at 1:41
4

TL;DR: Change the signature and don't upgrade until ready.

I am, actually, in a similar situation at work, and it's an architecture I chose. In order to protect IP, our codebases are organized in the following way:

  • A "public" codebase, to which every developer gets access to, which define common functionality, communication protocols, client libraries, etc... for all applications.
  • Many "need-to-know" codebases, which are built atop the previous one.

Any change to the API of the public codebase requires a bit of integration work in the other codebases, and that's completely fine by us.

How it works:

  • Dependency by commit, not version. Every release is a potentially breaking change, so no SemVer is used.
  • Eager upgrade of affected applications in case of bugs.
  • Lazy (deferred) upgrade of affected applications in case of functionality changes/improvements; typically deferred until the application actually needs to be changed for other reasons.

It does require commitment, and it does mean that sometimes you'd want to make a quick change to core functionality for an application that hasn't been upgraded in a while, and must first wade through unrelated integration changes.

It's a trade-off we accept so far, in exchange for the productivity we get out of the setup.

2

I think there are two main ways to think about this and how to deal with it.

One is that when you have a shared library in your code base, you really need to think about managing it the way an open-source library would be managed. It makes no sense to me to pretend that this is an unsolvable problem. Think about all the libraries that you use in a given application and all the other applications that use those libraries as well. We don't even know the authors and the other users in the vast majority of these cases. Building a shared library that is used and maintained by your own team has a lot of the same challenges but is a much simpler problem to solve.

No professional developer would seriously suggest writing an entire application without any external dependencies. Suggesting that building and maintaining a shared lib on a much smaller scope is an intractable problem is just as ridiculous. Hans-Martin Mosner's answer has some practical advice relating to this. I've seen teams that cannot seem to move beyond copy-paste-modify approaches, but don't let anyone convince you that is a sophisticated way to go about things.

The other thought is, if you are thinking of microservices as arbitrarily splitting up every endpoint/operation, perhaps consider a more (IMO) reasonable idea of microservices where you divide groups of related capabilities into mini-monoliths. That is, components of your design which operate on the same databases and/or are otherwise inherently intertwined with each other should be thought of as part of a single application, regardless of whether they are deployed independently. There are a lot of nonpragmatic recommendations in microservice literature which treat the benefits of independent deployment as inherently worth the costs. In many cases, this results in a lot of complication for little or no benefit. At this point, I think it's non-controversial to assert that blind adherence to a naive microservices model has resulted in a lot of poorly executed solutions and project failures.

1

Put all your code - services+libraries - in a monorepo

You've hit the classical problems with microservices. A better way would be to manage your project as a monorepo - a single repo, but managed with monorepo instrumentation so you don't get the problems associated with a less-well-managed monolith - dependency hell and others.

I found this site extremely well-written in describing the problem and suggested solutions: https://monorepo.tools

We use NX - one of the proposed monorepo management tools - and it works great.

-1

Is there a better alternative for sharing the service dependency jar among the microservices?

No, there isn't. The current packaging approach is probably package-by-tier, where the entire service tier is packaged into service.jar. Packaging by feature might reduce the maintenance cost of service.jar by splitting it into smaller archives, which improves the release cost, but the improvement gets lost in a new type of cost: administering the version matrix of microservices that use the feature-packaged archives. It is just moving cost around, without improving the overall cost. Packaging by feature might, however, increase development flexibility, which might increase the release frequency, which might increase revenue, thus reducing the percentage of revenue that service-tier maintenance represents, since the overall maintenance cost will barely change.
