318

After I had published my paper, some people asked me to share the software that I had developed. At first I was delighted that my paper had attracted some attention, and I was happy to share not only the binary but also the source code, case studies, etc. But looking at my software now, I feel very embarrassed.

My software is just horrible: the source code is a mess, containing several of my unsuccessful attempts; I have never used design patterns, so duplicated code is everywhere; for simplicity and speed of implementation, I often preferred recursion to loops; and so on.

I'm always under pressure to produce new results, and cleaning up that code would cost me significant effort.

My question is: will sharing this horrible software give people a very negative impression of me? Would it harm my career if the people I share it with are prospective collaborators or employers, given that they work in the same field?


18 Answers

324

Yes, you should.

First, most scientific software is terrible. I'd be very surprised if yours is worse than average: the mere fact you know design patterns and the difference between recursion and loops suggests it's better.

Second, it's unlikely you'll have the incentive or motivation to make it better unless, or until, it's needed by someone else (or you in 6 months). Making it open gives you that incentive.

Potential upsides: possible new collaborators, bugfixes, extensions, publications.

Potential downsides: timesink (maintaining code or fixing problems for other people), getting scooped. I'll be clear: I don't take either of these downsides very seriously.

  • 45
    Most scientific software is NOT terrible. (I've been making a living working with & optimizing it for a couple of decades now, so I think I have an informed opinion.) It just has rather different criteria for goodness: working, getting correct answers in practical time, being extensible to the next theory, etc, rather than conforming to the latest quasi-religious design paradigm or language trend. And for the OP, your "horrible" software might, if cleaned up & commented a bit, be more accessible to other scientists than "good" code.
    – jamesqf
    Commented Jan 23, 2015 at 4:19
  • 64
    No mention of the elephant in the room, reproducibility? Commented Jan 23, 2015 at 10:26
  • 38
    @jamesqf : I exaggerated for effect, but my thinking is this. Most scientific software is short proof-of-principle, throwaway code. It rarely gets released outside of a small group and is, in my experience, poorly written. Most scientific software that lasts at a reasonable scale is not terrible (I've also worked with some on a similar timescale): it can't be to produce multiple publications. But I'm thinking about "all code written by scientists" here.
    – Ian
    Commented Jan 23, 2015 at 10:43
  • 39
    @jamesqf: don't take it personally, but that ranks pretty high on the list of dumb reasons not to use source control (besides, with "normal" VCSs it takes a script of maybe 20 lines to "fix" this "major design flaw"). Commented Jan 24, 2015 at 1:50
  • 28
    @jamesqf Commits (the core of most source control systems) have timestamps. In git for example, create a commit for each modification of your data files, for instance "reran simulation using new code from revision XXX". For each modification of code, make a commit that says "improved code in XXX for reason YYY". Then, instead of having a "last modification date" for your files, you get a nice list of commits, along with when they occurred, exactly what files were added/modified/deleted, and a helpful comment. There is no fix to be done, you simply don't know how to use source control properly.
    – Thomas
    Commented Jan 25, 2015 at 8:23
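The commit-per-change workflow Thomas describes can be sketched roughly as follows (a throwaway demo in a temporary directory; `simulation.py` is an invented stand-in for your real code):

```shell
# A minimal demo of one commit per meaningful change.
cd "$(mktemp -d)"
git init -q
git config user.name demo
git config user.email demo@example.com
echo "result = 41" > simulation.py
git add simulation.py
git commit -q -m "initial simulation code"
echo "result = 42" > simulation.py
git commit -q -am "reran simulation with corrected constant"
git log --oneline --stat    # one entry per change: when, what files, and why
```

Each commit replaces an ad-hoc "last modification date" with a dated, commented record of exactly what changed.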
86

I would clean it up a little and share it. I've released a lot of code over the years, and also not released code for the reasons you give.

Go through it and comment it, at whatever level you can. Leave in "failed attempts" and comment them as such. Say why they failed, and what you tried. This is VERY useful info for people coming after you.

Make a README file that says you are releasing it on request in the hope it helps someone. Say that you know the code is ugly, but you hope it's useful anyway.
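A minimal README along these lines (the wording is only a suggestion) might read:

```
README

This code is released on request, in the hope that it helps someone
reproduce or extend the results of the paper.

Fair warning: this is research code. It is not pretty, it was written
under deadline pressure, and it contains some documented dead ends.
It worked well enough to produce the published results, and I hope it
is useful to you anyway.

No support is promised, but questions and patches are welcome.
```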

Far too many people hold things back because it isn't perfect!

  • 42
    I can't endorse leaving in failed attempts. You should use version control instead. Including brief comments that explain why the initial attempt failed is fine, but including the actual failed code may be actively harmful.
    – David Z
    Commented Jan 23, 2015 at 9:57
  • 6
    @DavidZ On the contrary, show it. If you use version control, people can see previous variations of your work, which is far from useless. But if, as here, you don't use VC, then don't remove failed attempts. Put them in another file with appropriate comments. How harmful could it be?
    – coredump
    Commented Jan 23, 2015 at 10:51
  • 29
    @coredump It can make the entire program virtually incomprehensible. I've seen this happen. If you don't use VC, start using it. The only way I could support a recommendation not to remove failed attempts is if you're forbidden from putting the code in VC for some reason which I can't imagine, and it's essential to see the previous code in order to understand what the current code does (which probably means the current code is bad too, though I admit that exceptions may exist).
    – David Z
    Commented Jan 23, 2015 at 10:56
  • 4
    @DavidZ Sorry, but "Say why they failed, and what you tried" is good advice, IMHO. If your code is messy and/or you are not accustomed to software engineering practices, please leave it as-is and comment as much as you can. Removing useful information could make things virtually incomprehensible. I've seen this happen ;-). Okay, so maybe there is a middle ground between showing all the horrible things that were attempted and leaving useful traces. I think "I would clean it up a little" is also good advice.
    – coredump
    Commented Jan 23, 2015 at 13:21
  • 5
    If the code is an implementation of a research paper, and he tried to implement it in different ways, my assumption is that the solution is non-obvious. In research, someone else might see the working code and think "I could do this better this way", which might be one of the ways the author tried first. We do poorly in CS at sharing our failures, which sometimes leads to lost work. That's my point. Whether it's a good choice in *this* case can't be known without seeing the code, but I know plenty of other profs who share this view. Commented Jan 25, 2015 at 16:08
61

Yes! Especially if your paper is e.g. about a new/improved algorithm that you've implemented, or you do significant non-standard data analysis, or basically anything where reproducing your results means re-implementing your software.

Papers seldom have room to give more than an outline. I know I've spent (= wasted) much too much time trying to implement algorithms from papers that left out critical (but not strictly relevant to the paper) details.

  • 10
    Very true comment about reproducibility, particularly, the second para. Commented Jan 23, 2015 at 10:50
  • @E.P: Yes. I'm sorry, it's my dystypica cropping up again :-)
    – jamesqf
    Commented Jan 24, 2015 at 18:56
53

You think your code is messy? I have seen (and attempted to work with) code that gave me nightmares:

  • Five levels of if True nested, scattered at random places through the code.
  • Create an array of zeroes, convert it to degrees, take the cosine, and back to radians. Then, throw away the result.
  • Software under heavy development whose list of "supported architectures" is so ancient (and they say so themselves) that it would be difficult to get your hands on one of those computers nowadays.
  • Features broken or modified several versions ago, still recommended in the docs.
  • Code that switched from a standard input format to some format of its own. How do you generate it? No one really knows, and the developers hand-wave a response.
  • Releases that don't even compile. (Did you even test it?)
  • GUI menus that you have to access in a specific order. Otherwise, you get a segmentation fault and have to start from the beginning.
  • Hard-coded paths scattered through the code. So you have to sift through several files, finding and changing all the occurrences of /home/someguy/absurd_project/working/ to yours.

And, my personal favourite, a certain program of thousands of lines of code that only used comments to disable random bits of code, except for one:

Here we punch the cards.

Still, no idea what it was doing.

And this is leaving aside the classical good-practice stuff, like one-letter variables all over the code, algorithms not specified anywhere...

If you are concerned about the quality of your code, it probably means you care enough to have made it better than the average. If you wait until the code is clean, it may never get out, and your scientific contributions will be partially lost.

In my opinion, the important things that you should care about, in order, are:

  1. Input and output formats. Use standards when available, make it simple when not. Make using your program as a black box easy.
  2. Commented. Brief descriptions of the functions, quick overview of the algorithm.
  3. Legibility. Using idiomatic code, good variable names...
  4. Structure. This is easier when you know what you want to do, which is usually not the case in research code. Consider refactoring it only if there is interest from the community.

So, release your software whenever you have 1 (2 and part of 3 should come in as you are writing it).
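As a hypothetical sketch of what points 1–3 might look like in practice (the function names, the CSV format, and the moving-average algorithm are all invented for illustration):

```python
import csv

def load_measurements(path):
    """Read a CSV of (time, value) rows into a list of float pairs.

    Point 1: a standard, boring input format (plain CSV), so the
    program is easy to use as a black box.
    """
    with open(path, newline="") as f:
        return [(float(t), float(v)) for t, v in csv.reader(f)]

def smooth(values, window=3):
    """Moving average over a sliding window (point 2: name the algorithm)."""
    # Point 3: idiomatic code and descriptive names instead of one-letter soup.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```

Nothing here is sophisticated; the point is that a reader can run it, feed it data, and follow the algorithm without archaeology.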

  • 6
    +1 but I would also add appropriate error handling to the list of important points (all too often missing from rushed research code). In particular, take special care with any error that could affect the output silently - is that function return value the real number zero or a default-return-on-error zero? (Don't plot those!) Also, errors should be handled, but not over-handled. I've seen naïvely written "bulletproof" code that could silently recover from garbled input data and go on producing output without complaint. A crash may be frustrating, but inaccurate results can be a disaster. Commented Jan 24, 2015 at 11:23
  • 9
    Re your point #3, and someone else's comment about single-letter variable names: in scientific software, you are often more-or-less directly translating math equations to code. If the variables in the equations are single letters, it makes perfect sense to use them as variable names in the code. And, as I admit I should do more often, include LaTeX for the equations in a comment. For the rest, you haven't really lived until you've tried to debug FORTRAN 66 with computed & assigned GOTOs :-)
    – jamesqf
    Commented Jan 24, 2015 at 19:05
  • +1 for the answer from @imsotiredicantsleep. Code that silently fails is difficult to work with. If it's going to generate inaccurate results, make sure it generates a warning or throws an error instead.
    – Contango
    Commented Jan 26, 2015 at 13:02
26

You're asking whether sharing low-quality software would give a bad impression of you. I think that sharing software at all gives a good impression.

  1. As a computer scientist, I like when colleagues make their source code available. It makes me more likely to look deeper into their work, maybe contact them, maybe cite them, because there is one more artifact to interact with (not just the paper, but also the code).

  2. When a paper reports a result that is "proven" by source code, but the source code is not public, I'm often wondering whether the result is real. Looking at the source code (or just the availability of the source code, without ever looking at it) can convince me.

So sharing your source code, horrible or not, would always give me a good impression of you.

Now, if you want to impress even more, it would help ...

... if you react to issues or pull requests on a site like github, that is, when I see that others try to contact you and you react.

... if your code contains a readme file which relates the claims from your paper to the source code. This way, when I read the paper and want to know more, I can use the readme to jump to the appropriate place in the code. Typical phrases from such a readme could be: "The algorithm from Sec. 3.2 of the paper is in file algorithm/newversion/related/secondtry/foo.c" or "To repeat the run with the small dataset described in Sec. 2 of the paper, run "make; make second_step; foo_bar_2 datasets/christmas.dataset". This run takes about 2 days on my laptop."

You might also be interested in Matthew Might's CRAPL (Community Research and Academic Programming License), available at http://matt.might.net/articles/crapl/. It contains this term: "You agree to hold the Author free from shame, embarrassment or ridicule for any hacks, kludges or leaps of faith found within the Program". It is not clear to me whether this "license" has any legal effect, but the intent is clear: release your ugly code, and don't think badly of the ugly code of others.

14

Tangentially related: I will address how to share the software given your concerns (not whether you should share it, which you already have answers for).

Putting the failed attempts in version control effectively means that nobody will ever see them. The way I handle this is to put each attempt in a method, and each failed attempt in a separate method:

def main():
    return get_foobar(2, 10)


def get_foobar(x, y):
    return x ** y


def get_foobar_legacy_1(x, y):
    """
    This attempt did not work for values > 100
    """
    return x + y


def get_foobar_legacy_2(x, y):
    """
    This attempt did not work on Wednesdays in September
    """
    return x - y

As per the comments below, it may be a good idea to put these methods in a separate FailedAttempts or BadIdeas class. This has the nice effect of compartmentalizing the various stages of the process as actually needed. I find that computer programmers often have a knack for knowing when to break logic off into a method and when not to, but computer scientists often do not. This approach helps the computer scientists break logic off into a method when necessary.
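A hypothetical sketch of that separate-class variant (the class and method names are invented for illustration):

```python
class FailedAttempts:
    """Documented dead ends, kept out of the main module.

    Nothing here is called by the current code; it exists so readers can
    see what was tried and why each attempt was abandoned.
    """

    @staticmethod
    def get_foobar_v1(x, y):
        """Failed: did not work for values > 100."""
        return x + y

    @staticmethod
    def get_foobar_v2(x, y):
        """Failed: did not work on Wednesdays in September."""
        return x - y
```

Grouping the dead ends in one place keeps the working code readable while still documenting the failures.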

  • That's not part of any best programming practice. Supposedly unused code should actually be commented out, lest some code keep calling get_foobar_legacy_43. And when it becomes clear it is broken, it should be removed if possible. If understanding some failed attempt is worthwhile for readers of the current version (which happens), you should put it in version control and add a comment pointing to the relevant commit ID — possibly with a permalink. Commented Feb 27, 2016 at 13:40
  • 4
    @Blaisorblade: You are right, if the goal is to develop a well-functioning application then unused code should be removed, either via commenting or by relegating it to the depths of the source control software. However, that is not the goal stated by the OP. The OP needs to have his failures documented. This is the way to do that. Though I do see value in your point, and perhaps each method could be commented out with /* */ block comment syntax. Interestingly, one of the few languages that do not support block comments is Python, the language that I used for pseudo code above.
    – dotancohen
    Commented Feb 27, 2016 at 20:03
  • 2
    @Blaisorblade: An even better solution may be to have a separate file, class, or directory which encompasses the failed attempts, separate from the main code of the application.
    – dotancohen
    Commented Feb 27, 2016 at 20:04
  • Documenting failures is not stated in the question, and I think it's a good idea in few cases (say, for interesting but failed attempts to achieve the paper's contributions). "Leaving in failures" seems to come from another answer—where people had a strong debate: academia.stackexchange.com/a/37373/8966. Commented Feb 28, 2016 at 10:29
11

Of course you should share the source code.

Academically speaking, a software-based result whose code is not readily available is not very valuable: how would other people verify your claims, if needed? Do you expect them to re-implement everything themselves for this purpose? Sharing only binaries is much less valuable, and often leads to nightmares for people trying to run them.

9

I think you should share it. First, do some basic clean-up: remove earlier code that is no longer used, remove commented-out code, comment in a consistent way, and so on. Moreover, if you put some "TODO" notes in the code, others can see that you were out of time and can see your intentions (e.g., "TODO: this should be changed to an enum"). I also think you should share the most important parts of your algorithms. When I share code, I never share the unimportant parts: everyone can handle reading/writing of files, communication between threads, GUI, and so on. But don't share unreadable code; it would make no sense. So, as is often the case, I think the middle way is best. :-)

  • 7
    Cleaning up is good in principle. However, if one waits for a good time to clean up, that time may never come. I'd suggest putting it in a version control repository on GitHub or Bitbucket or similar right away, and cleaning it up as and when you get around to it. Anyone downloading it will mainly be looking at the HEAD, anyway. Commented Jan 23, 2015 at 10:49
7

Talk to some of the professors in your computer science department. See if any of them are looking for a project where students can clean up messy code to make it more presentable.

For the students who revise the code, this can be a good learning experience. What happens when coders program with a results-first mindset – or results only mindset? They get to see that first hand. They also get to apply some of those best practices they've been learning about. And they might be motivated to do an especially good job knowing that other professionals are already interested in seeing the code.

A professor might even make this into a contest, where teams of students all take a crack at revising the software, and the best result is shared with the rest of the world.

If their refactoring efforts flop, you're no further behind than you were. If that's the case, disclaimers are a wonderful thing. Simply share the code, but add a caveat: "It isn't pretty. When I wrote this, I was trying to get my research done – I wasn't thinking it would ever go outside my computer lab. But you're welcome to take a look if you really want to."

  • I like this; I would have loved to have had this chance when I was at Uni. Reading and understanding other people's code is a skill, and it has to be learned.
    – Contango
    Commented Jan 26, 2015 at 12:52
7

Lots of points in favour of publishing the code have been made in the other answers, and I completely agree with them. Since the basic desirability of publishing the code has been covered, I would like to supplement it with a checklist of further points that need to be considered. Many of these issues probably appear in virtually all academic software, so even if you cannot respond "This does not apply to my project." to all of them, you should at least be able to respond "This is a concern, but we can deal with it by ..." before publishing your code:

  • Are you allowed to publish the code?
    • Can you guarantee you only used code fragments that you are allowed to redistribute? Or did you possibly use code from non-open sources that you may use for your own internal software, but that you are not allowed to publish? Can you guarantee all the code that you used is allowed to be published in one complete package? License compatibility is a non-trivial issue.
    • Can you even reliably find out? Did you outsource any parts of your coding work, or integrate unpublished code from elsewhere? For instance, did you supervise any students during their graduation theses or employ any student research assistants, whose work was based upon your research and thus their code was added to your codebase? Did any co-workers contribute code to your codebase? Did they get some of their code from students? Did all of these people involved properly pay attention to licensing issues (if at all they had the knowledge to make an educated judgement about these licensing questions)? Can it even still be determined where which parts of the code originated? Do the people who contributed each part still know? Are they even still "within contact range" for you?
    • Was the code developed during working time funded by third-party funds? If so, do the terms of the funding contract allow you to publish the code, or do they require that software created within the funded project be shared exclusively with the project partners?
  • Do you have sufficient resources (time and otherwise) to spend the effort to clean up the code and its comments in a way that it is still meaningful, but does not provide any information that must not become public?
    • Do you have any comments giving away who worked on the code? Were the people who contributed code officially allowed to work on the respective research, as per their funding? (Software developers are well aware that teamwork and reuse of components are core aspects of software development. Funding agencies, unfortunately, are typically very unaware of this and assume that if developer A is funded from project X and developer B is funded from project Y, A works exclusively on X and B works exclusively on Y, and revealing that, w.l.o.g., A spent only half an hour doing something that ended up in project Y could lead to severe consequences, such as reclaiming parts of the funding.)
    • Does anything in the published data give away any information about the particularities of how the work was done that must not become public? This is especially important if the whole commit history in a VCS is going to become public (or, practically, means that the commit history should never be published), but may also play a role in other situations. For example: Was any work on the code done outside of the officially assigned working times (e.g. during weekends)? Do working times give away that you exceeded your country's legal limit for working hours per day? Do working times give away that you did not adhere to legally required breaks? Do working times give away that people assigned to other projects made contributions? Do working times provide any reason to distrust any of the statements you made otherwise about your working times (e.g. in project success reports that required a detailed assignment of working times to pre-defined work packages with certain maximum allotments)? Does anything give away that you worked in situations where you should not have been working (e.g. during a project meeting)? Does anything give away that you worked in locations where you should not have worked (e.g. from home, when your contract does not allow working from home, e.g. due to insurance-related complications)?
    • Is there any secret information in the code, such as passwords, user account names, or URLs that must not be publicly known (because the servers are not laid out to handle larger amounts of users beyond a small number of select people who were given the URL for the test setup personally)?
  • Is the code usable by anyone else?
    • Will the code run, or does it require extensive configuration efforts? Can you spend the effort required to explain what configuration is necessary?
    • Can the code be compiled? Have you used any unofficial modified or custom-built compilers that are not publicly accessible? If so, does the code add anything beyond what may already be provided as a pseudo-code algorithm in your papers?
    • Does the code require any external resources? Will the code only be useful if it can access servers, libraries, or datasets that you cannot publish along with the code for one reason or another? Can at least a description of these resources be provided, or are their interfaces subject to some kind of an NDA?
    • Does the code make any irreversible changes to systems it runs on? For example, does it automatically change any system configuration (such as overwriting the system search path)? Does it perform any low-level access to hardware components that could, in certain configurations (that you internally avoid in your test setups), cause permanent damage to components? Can you reliably warn users of the code about such possible unwanted side effects?
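A quick, by-no-means-exhaustive sweep can catch some of the mechanical issues above (secrets, hard-coded paths, loose ends) before release. Here is a sketch run against a throwaway src/ tree standing in for your real code:

```shell
# Demo setup: a temporary directory with one invented source file.
cd "$(mktemp -d)"
mkdir src
printf 'DATA_DIR = "/home/someguy/absurd_project/data"\n' > src/paths.py
grep -rn '/home/' src/ || true                       # hard-coded absolute paths
grep -rn -e 'password' -e 'api_key' src/ || true     # leaked credentials
grep -rn -e 'TODO' -e 'FIXME' src/ || true           # loose ends worth flagging
```

The `|| true` keeps the script going when a pattern finds nothing (grep exits non-zero on no match); the licensing and funding questions, of course, cannot be grepped for.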
  • 3
    You consider funding agencies or employers sifting through commit logs to determine legal consequences. That's a clear theoretical concern. So, do you have any evidence of it ever happening? My limited experience with funding agencies, ERC grants in particular, is in fact the opposite, even though that doesn't count. Commented Feb 27, 2016 at 13:32
  • @Blaisorblade: "So, do you have any evidence of it ever happening?" - the motivation for the funder to discover possibilities to reduce their costs seems clear, and the possible repercussions that might be enforced (paying back some of the grant money) are sufficiently severe (losing previously granted money is probably one of the few things that can get uni employees into severe trouble with uni administration) that it seems reasonable not to open up this possible attack point in the first place. Commented Feb 27, 2016 at 17:12
6

Of course. The only way you are going to get better at writing good software is to get feedback (all types). If you're afraid of feedback then you won't really get very far. The three basics to writing great software are practice, practice, and practice.

Now, as to the question of whether it would harm your career if people found out that your software-writing skills aren't top notch: I think that no, on the contrary, they would respect you for your academic integrity, and would look forward to collaborating with you.

5

You could just push it to GitHub and try to maintain it as a project, so that other people who are interested in your work can access your code easily and perhaps help you improve it.

  • +1 - This is the first thing that sprang to my mind. It is a safe place to store one's code, better than having it lurking on a hard disk or USB stick that will die or get lost. In addition, the code is easily maintainable as well as any code changes being tracked, and, as you say, others can collaborate and contribute (provided the right access settings are chosen). Commented May 16, 2018 at 0:05
4

Yes, you should. After all, the Linux kernel source code is quite a mess, and that hasn't prevented many professional developers from studying it and contributing patches and additions to it. Remember also that the Linux kernel is the base of the operating system that runs the fastest and most powerful supercomputers and most devices in the world. P.S.: Linus Torvalds, the guy who created the Linux kernel, has a very profitable and successful career which has not been affected negatively or harmed in any way by the fact that the Linux kernel source code is messy.

4

One reason to share your code that no one has mentioned: you might find someone who is interested in collaborating with you, and who is prepared to spend more time cleaning up the code and making it work on different systems, etc., than on doing the innovative development that you have done.

Lots of people find this kind of work very satisfying and if your code is genuinely useful to them they might be happy to do it. In any case, you might find that getting feedback from people who have tried to use it, but need some kind of help, is a good motivation for you to make it more maintainable/easier to use and understand.

1

Share it if you want to, don't share it if you don't want to. I know this sounds snarky, but I think there is too much pressure nowadays to "share everything", and people will try to make you feel guilty for not sharing, but really you have no obligation to share anything.

  • 3
    Reproducible results are one of the cornerstones of the scientific method. And that requires sharing. Your comment is akin to saying "... but really, scientists have no obligation to adhere to the scientific method."
    – Contango
    Commented Jan 26, 2015 at 12:56
  • 3
    Sure, sharing may be optional outside the scientific community, but it sure is not optional inside the scientific community.
    – Contango
    Commented Jan 26, 2015 at 12:58
  • 2
    @Contango Yeah that's a fair point if releasing the software helps to reproduce the results. Commented Jan 27, 2015 at 23:06
  • @JeffE I didn't share anything, what are you talking about? I find your message cryptic. If you wish to be understood, please be a bit more clear. Commented Jan 29, 2015 at 22:27
  • You shared your opinion, of course.
    – JeffE
    Commented Jan 29, 2015 at 22:38
1

You should definitely share your code.

For organizing things, group related parts of the code into regions, e.g. a region for a failed attempt, with an explanation of why it failed. Also, if you develop in Visual Studio, install the "CodeMaid" extension from the Extension Manager and clean your complete solution. It will remove extra whitespace and unused references, making most of the code look better.

If you develop in C# then share your code with me. I can also help you with sorting things out :)

0

Put up a disclaimer that the code is provided "as is" with no promises of support, etc. And then share the code.

Case study: Turning a cloud of isolated points into a watertight surface is an extremely important practical problem, used everywhere from robotics to computer vision to processing data from 3D sensors like the Microsoft Kinect.

Poisson surface reconstruction is 7 years old and has long stopped being the state of the art for solving this problem. But everybody still uses it to this day. Why? Because the author released the code and it has since been incorporated into a bunch of popular geometry processing libraries. The paper now has over a thousand citations.

0

Yes. You should release your code, probably under the CRAPL license. The goal is to build a better future - and your lousy code will help people do that. A caveat is that you should document how to successfully operate the code well enough for someone to have a decent chance of reproducing any published results.

And, don't worry - one bit of research code I worked on had been developed by 5 postdocs of indifferent programming ability for a series of projects over the course of about 8 years.

The list of global variables (just the names) was roughly 4 pages.

Roughly one third of them were used to set default behavior, changing which functionality actually functioned at a given moment. Another 20% were parallel data structures - meaning that they stored approximately the same data - and therefore functions in the code pulled from the data structures more or less at random. Yes. They were sometimes out of sync. And sometimes needed to be out of sync.

There were roughly 50 undocumented versions, stored in random portions of the group's server - each of which served at least one specific purpose - and only one admin kept those specific purposes in his head. It was more common than not to have people using the 'wrong' version for a given purpose.

The use of incredibly complex recursive procedures to, e.g., write a file was standard. Seriously - a few thousand lines to save image results.

Oh, and the remains of a butchered attempt to solve a memory leak (actually an invisible figure) by never creating a new variable.

Oh, and the database, that lovely database. About half of the data was unusable owing to (a) database design errors (b) data entry errors (in automatic programs). The code to retrieve files from the database was several hundred lines of logic long... The database itself was also kind enough to contain many copies of the same data, much with broken links between tables. Constraints? No. I watched a statistician proceed from disquiet to fear to tears to quitting within a month of being entrusted with the database...

There were somewhere between 0 and 1 ways to operate the software and retrieve correct results at any given instant...

And yes, there were gotos.

Oh, and in an effort to ensure opaque and nondeterministic operation, a series of computations was performed by calling GUI buttons with associated callbacks.

Approximately 90% of any given function was, quite reliably, not relevant to the result or to debugging of the result - being composed, rather, of short-term projects inserted and then never removed. Seriously - I wrote a feature-complete version that actually worked and was 1/10th the size... Significant fractions were copy-pasted functions, many of which differed subtly from each other.

And, no Virginia, there is no documentation. Or descriptive variable names.

Oh, and the undocumented, buggy, dlls and associated libraries - generated using code that no longer existed.

All written in Matlab. In terms of Matlab coding practices, assume that copious use of eval would be the highlight of your day.

Seriously, your code isn't so bad.

That said, if you've done something actually useful, it might be career-enhancing to release a cleaned-up version so that other people will use and cite your library. If you've just done something, then reproduction is probably as far as you'd be well-advised to go.
