8

On one of my web applications, I accidentally left the .git directory readable by the web server for the past few weeks. Index listing was disabled, so visiting website.com/.git returned a 404 error indistinguishable from any other 404, but browsing to website.com/.git/config, for example, would download the file.

What kind of risks does this pose to my application? Is it possible that enough information was exposed for someone to have downloaded the entire application's source code?

  • Did you try a git clone http://website.com/.git? That is the major risk (being able to clone the full repo).
    – VonC
    Commented Aug 24, 2011 at 14:28
  • I'll have to test it on my dev server (along the lines of the sketch below)... I just realized the vulnerability while daydreaming and fixed the bug ASAP. Commented Aug 24, 2011 at 14:45
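
A quick, safe way to reproduce the check is to point git and curl at a staging copy of the site; the hostname and paths below are placeholders:

    # attempt a "dumb" HTTP clone of the exposed directory (staging host, not production)
    git clone http://dev.example.com/.git leaked-copy

    # or just probe whether individual files are served at all
    curl -sI http://dev.example.com/.git/config

If the curl probe returns 200 but the clone fails, the exposure is real yet only piecemeal file recovery is possible (see the second answer below).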

2 Answers

4

Yes, it's possible to download the entire repository contents (including history) – a simple git clone would do it. However, this assumes someone knew that the .git directory existed... it's more likely that nobody even noticed it. You can always check your web server's logs to be sure.
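
For example, something along these lines (the log path is an assumption – adjust it for your web server and distribution):

    # list every request that touched the exposed directory
    grep -F '/.git/' /var/log/lighttpd/access.log

Any hit beyond your own testing means the directory was found.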

  • Except if you have some "auto-testing" folks around who curl $URL/.git/config all day long for every URL they see... :)
    – akira
    Commented Aug 24, 2011 at 14:30
  • With the .git folder in the document root, someone would only have needed to run git clone domainname.com? And it would have worked even with directory listing disabled? Commented Aug 24, 2011 at 14:41
  • @Thomas: Git does not use directory listings, since their format varies greatly between web servers. All the required information is in refs and packed-refs (see the sketch after these comments). Commented Aug 24, 2011 at 15:05
  • Thanks for the advice. I checked my lighttpd access logs and didn't see any requests to the git directory (other than my own panicked checking today). Commented Aug 24, 2011 at 15:19
  • @Willem: I'm curious where in the source code it performs the directory-list parsing and how it deals with the dozen different formats. Commented May 2, 2015 at 23:45
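
To make that comment concrete: when refs are packed, a single request returns the SHA-1 of every branch and tag head (the URL is a placeholder, the output lines are illustrative):

    curl -s http://website.com/.git/packed-refs
    # 1a2b3c... refs/heads/master
    # 4d5e6f... refs/tags/v1.0

With those SHA-1s in hand, an attacker knows exactly which object files to request next.
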
3

A simple git clone on the document root is not the whole story. Cloning an exposed Git repository is not possible from a "dumb" server – which is what an accidentally exposed .git directory served over plain HTTP amounts to – unless git update-server-info has been executed on the server. While some of the metadata is available, obtaining the contents of the .git/objects directory (a.k.a. the juicy stuff) is not always possible. Objects that aren't packed can still be recovered one at a time, but that shouldn't be the case for a working copy/repository deployed on a production server, where the objects normally live in packfiles.
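
For illustration, here is roughly how a single loose object could be pulled out by hand, assuming the repository is exposed at website.com/.git and has a master branch (all names are placeholders):

    # read a ref file to learn the latest commit SHA-1
    sha=$(curl -s http://website.com/.git/refs/heads/master)

    # fetch that loose object into a fresh local repository
    mkdir recovered && cd recovered && git init -q
    curl -s --create-dirs -o ".git/objects/${sha:0:2}/${sha:2}" \
         "http://website.com/.git/objects/${sha:0:2}/${sha:2}"

    # print the commit – it reveals the tree and parent SHA-1s to fetch next
    git cat-file -p "$sha"

Repeating this for every tree and blob reachable from the commit reconstructs the source, but only for objects that exist as loose files.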

It's a different story for a development machine with committed changes that haven't been pushed to a remote. In that case the garbage collector usually hasn't run (unless you invoke git gc yourself), so those objects aren't part of any packfile yet, and anything committed since the last push may be recoverable via HTTP.
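
You can check how much of a repository sits loose versus packed with git count-objects (the numbers below are illustrative):

    git count-objects -v
    # count: 42       <- loose objects, fetchable one file at a time
    # in-pack: 1337   <- objects inside packfiles, unreachable without the pack names

On a development machine with unpushed commits, count is typically non-zero.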

update-server-info basically creates a map of the refs (.git/info/refs) and of the packfiles (.git/objects/info/packs). While .git/packed-refs may substitute for the first, locating the packfiles isn't possible without directory indexing enabled or brute-forcing SHA-1s (which is a bad idea from the start).
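
The effect is easy to see on the server side; run it inside the repository and look at the two files it writes (output lines are illustrative):

    git update-server-info
    cat .git/info/refs            # <sha> <ref> pairs, tab-separated – one per ref
    cat .git/objects/info/packs   # "P pack-<sha>.pack" – one line per packfile

Only once these files exist can a dumb HTTP client discover the pack names and complete a full clone.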

