It depends on how you describe a mathematical object, and what you call "a single" mathematical object.
Contingency in Axioms
The example of zero being a successor of another number is a good one; I use this myself quite frequently to explain some of the mathematics I work on to my non-mathematically-inclined friends. Sometimes it pays to assert that 6 + 1 = 0 (as in arithmetic modulo 7), and sometimes it doesn't.
When do we distinguish one mathematical object from another? If we operate within a formal system, we may choose to adopt the premise that 0 has no predecessors (giving rise to the non-negative integers) or not (giving rise instead to a group, which may be the integers, or perhaps an additive cyclic group). If we suspend any axiom determining which, then both systems are models for the resulting set of axioms. What does contingency mean, apart from having significantly different possible models for the axioms?
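The two choices of axiom can be made concrete in a minimal sketch (my own illustration, not anything from a particular formalism): the successor function on the non-negative integers, where 0 is nobody's successor, versus the successor function on the cyclic group of order 7, where 6 + 1 = 0.

```python
# Two models of "successor": in the non-negative integers, 0 has no
# predecessor; in the cyclic group Z/7Z, succ(6) = 0, so 0 *is* a successor.

def succ_naturals(n: int) -> int:
    """Successor in the non-negative integers: no n satisfies succ(n) == 0."""
    return n + 1

def succ_mod7(n: int) -> int:
    """Successor in the cyclic group Z/7Z: here 6 + 1 = 0."""
    return (n + 1) % 7

# In the first model, 0 is never hit; in the second, n = 6 hits it.
print(any(succ_naturals(n) == 0 for n in range(1000)))  # False
print(succ_mod7(6))                                     # 0
```

Both functions satisfy the axioms that remain once we suspend the axiom "0 has no predecessor"; which one we "mean" is exactly the contingency under discussion.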
Is zero a different object in the two models? If you believe that mathematical objects obtain their 'identity' through their relationships with others, you might say so. But in the axiomatic theory, there is only the one name for the object in the different possible models; as if we could identify Charlemagne-equivalents in two different alternative histories involving different conditions for the collapse of Alexandrian Greece. You might say that perhaps the axiomatic system has only one object, whose properties are underdetermined. But of course, without being able to prove whether or not the system is consistent (or whether or not the model being described is the group of order one), we cannot say whether or not 0 = 1 holds in the axiomatic system; that's a pretty important relationship to be able to ascertain, and yet the best we can do is to suppose that it does not hold (and hope that we're right). So in fact any consistent system is underdetermined in this way. One can then make a good argument that there is one object, some of whose properties are contingent on further axioms.
As a deeper example, one may consider the continuum hypothesis, in the guise of the question: is every infinite subset of the real numbers either countable or equipotent to the real numbers themselves? If you restrict your axioms to the usual suspects of ZF±C (Zermelo–Fraenkel with or without Choice), there simply is no answer. What does one mean by 'contingent', here — do we mean "depending on information that we don't have"? Independence from axiom systems seems like a bit of a hollow way of obtaining that dependence, but it would fit the bill. Of course, we usually mean by 'contingent' something which depends on something beyond our control, such as a random process or even some sort of diabolical adversary. Perhaps you could argue for the contingency of the Continuum Hypothesis if you believe in some manner of One True Set Theory; the truth-value of the Continuum Hypothesis would be definite but unknown to you, and dependent on 'factors' as yet unknown. If you were a formalist who believed that there might in some aesthetic sense be One True Set Theory (or perhaps A Small Handful of True Set Theories), some of these contingent factors might boil down to arbitrary decisions such as those surrounding whether or not Pluto is "a planet".
If we declare that the Continuum Hypothesis is false, do there suddenly exist for us objects whose existence was previously only contingent? Is the power set of the real numbers — an analytically defined construct — a different object depending on whether or not we assert the Continuum Hypothesis to be true? This does not seem a particularly constructive way to consider things; but tastes vary.
Contingency apart from axioms
Consider for example random graph theory. In their seminal paper on the subject, Paul Erdős and Alfréd Rényi (Erdős+Rényi 1960, "On the evolution of random graphs") consider a process where one connects n abstract 'points' or 'vertices' in pairs by edges, selecting up to some number N of pairs to connect, uniformly at random and in a random order. The description of "what is going on" is presented in highly suggestive time-dependent language. (The very idea that there is something "going on", as opposed to something simply being statically the case, is already a hint of this.) The word 'evolution' in the title is meant literally: they speak of the graph changing with time, of connected components "melting" into one another, and so forth.
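The process they describe can be sketched in a few lines (a toy illustration of the model, certainly not their analysis): start from n isolated vertices, add randomly chosen edges one at a time, and watch the components merge.

```python
# A rough sketch of the Erdős–Rényi "evolution": add N randomly chosen
# edges, one per time step, to n isolated vertices, and record how the
# number of connected components falls as components melt together.
import random
from itertools import combinations

def evolve_random_graph(n: int, N: int, seed: int = 0):
    rng = random.Random(seed)
    pairs = list(combinations(range(n), 2))
    rng.shuffle(pairs)                      # a uniformly random edge order
    parent = list(range(n))                 # union-find over the vertices
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    history = []
    for t, (u, v) in enumerate(pairs[:N], start=1):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                 # two components melt into one
        components = len({find(x) for x in range(n)})
        history.append((t, components))
    return history

hist = evolve_random_graph(n=50, N=100)
print(hist[0], hist[-1])
```

The time parameter t is doing real work here: the natural questions ("when does a giant component emerge?") are questions about the process, not about any single static graph.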
Of course, this is no different in principle than how we talk about the position of a random walker with time, or even deterministically about the position of a particle moving smoothly according to Newton's Laws. Some mathematical process is described in which one of the parameters describes time; and then we are inclined to talk about what's true of the particle at particular points in time. Is this a series of facts about the object? Or is it a collection of propositions, of which only one is 'true', depending on 'the time'? Making the distinction is a question of semantics, which can either be helpful or pointless: the leverage mathematicians have gotten out of naïve mathematical Platonism certainly suggests that thinking of things as being absolute and static gives some advantage in certain scenarios, but obviously Erdős and Rényi got more mileage out of thinking of their graphs growing with time, maturing as edges are added.
Something even more provocative happens if we take N = n(n-1)/4 (so that each pair of points is connected with probability 1/2) and let n go to infinity. This is known as "the" infinite random graph; and the reason it is called "the" infinite random graph is that although which pairs of points are connected depends on which edges happen to be chosen, with probability 1 the outcome will be a particular graph up to relabelling of the names of the points. That is, the outcome is just as much the same graph as all circles of radius 1 are in some sense representations of "the same circle". And so it turns out that a random process, infinitely extended, gives rise to what one might call a deterministic outcome. Yet there are other infinite graphs which you could consider which are not the same as "the" infinite random graph: but the probability of realising them by this process is zero, because it would require a conspiracy of infinitely many events, each of which has some probability of failing. Is there anything contingent about the structure of this graph, then? Much is known about it (see the linked article above), but it comes about as if by an inevitable accumulation of accidents. And even though it is "the" infinite random graph, whether or not any particular realisation of it (in the construction process) joins two particular points by an edge is still a random event with probability 1/2. We are speaking essentially of a single mathematical object whose representation is contingent.
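One can glimpse this almost-sure uniqueness at finite scale. The infinite random graph is characterised by an "extension property": for any two finite disjoint sets of vertices U and V, some vertex is adjacent to everything in U and nothing in V, and any two countable graphs with this property are isomorphic. A minimal sketch (my own, with an illustrative check restricted to singleton sets) of how a finite sample with edge probability 1/2 already tends to satisfy it:

```python
# A hedged sketch: sample a finite graph with each edge present with
# probability 1/2, then check the extension property for small U, V:
# is there a vertex adjacent to all of U and none of V?  For fixed small
# U and V this succeeds with probability tending to 1 as n grows.
import random
from itertools import combinations

def sample_graph(n: int, seed: int = 1):
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < 0.5:
            adj[u][v] = adj[v][u] = True
    return adj

def has_witness(adj, U, V):
    """Is some vertex outside U and V adjacent to all of U and none of V?"""
    forbidden = set(U) | set(V)
    return any(
        all(adj[w][u] for u in U) and not any(adj[w][v] for v in V)
        for w in range(len(adj)) if w not in forbidden
    )

adj = sample_graph(200)
# Check the property for all disjoint pairs of singletons among ten vertices.
ok = all(has_witness(adj, [u], [v])
         for u in range(10) for v in range(10) if u != v)
print(ok)
```

The conspiracy-of-failures remark above is visible in the arithmetic: for a single pair U = {u}, V = {v}, each candidate witness fails with probability 3/4 independently, so the chance that all ~200 candidates fail is astronomically small — and in the infinite limit it is zero.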
Summary
Whether there is anything actually contingent going on depends in part on your philosophy of mathematics, on your interpretation of probability or of the idea of 'contingency', or perhaps merely on what the most convenient way to speak about the subject is.