
Simple question - why does the Decimal type define these constants? Why bother?

I'm looking for a reason why this is defined by the language, not possible uses or effects on the compiler. Why put this in there in the first place? The compiler can just as easily inline 0m as it could Decimal.Zero, so I'm not buying it as a compiler shortcut.

  • I do not think these answers adequately explain why these values are there. I'm hearing some effects and usage, but what I'm looking for is WHY this was designed into the language, and why it's not there in Float, for example, or Int32...
    – Jasmine
    Commented Apr 17, 2009 at 18:26
  • They help compilers to generate more compact assemblies. Still used today. Commented Aug 18, 2020 at 21:28

4 Answers

38

Small clarification: they are actually static readonly values, not constants. That's a distinct difference in .NET, because constant values are inlined by the various compilers, which makes it impossible to track their usage in a compiled assembly. Static readonly values, however, are not copied but referenced. This matters for your question because it means their use can be analyzed.
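The difference can be sketched in a few lines of C# (the Limits class and its field names are invented for illustration):

```csharp
using System;

class Limits
{
    public const int MaxConst = 10;              // compilers copy this literal into every call site
    public static readonly int MaxReadonly = 10; // call sites emit a field load (ldsfld) instead
}

class Program
{
    static void Main()
    {
        // In the compiled IL, the first line carries the literal 10 itself,
        // while the second references Limits.MaxReadonly by name -- which is
        // why readonly usage can still be found in a compiled assembly.
        Console.WriteLine(Limits.MaxConst);
        Console.WriteLine(Limits.MaxReadonly);
    }
}
```

If Limits lived in a separate assembly and were recompiled with a new value, callers would pick up the new MaxReadonly automatically, but would keep the old MaxConst until they were themselves recompiled.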

If you use Reflector and dig through the BCL, you'll notice that MinusOne and Zero are only used within the VB runtime. They exist primarily to serve conversions between Decimal and Boolean values. Why MinusOne is used there coincidentally came up on a separate thread just today (link).

Oddly enough, if you look at the Decimal.One value you'll notice it's used nowhere.

As to why they are explicitly defined ... I doubt there is a hard and fast reason. There appears to be no specific performance benefit, only a bit of convenience, attributable to their existence. My guess is that someone added them during the development of the BCL for their own convenience, and they were just never removed.

EDIT

Dug into the const issue a bit more after a comment by @Paleta. The C# definition of Decimal.One uses the const modifier, but it is emitted as a static readonly at the IL level. The C# compiler uses a couple of tricks to make this value virtually indistinguishable from a const (it inlines literals, for example). This shows up in any language that recognizes the trick (VB.NET does, but F# does not).
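If you want to check this yourself, reflection will report the mixed nature of the field — a minimal sketch, assuming the BCL still emits Decimal.One as a static initonly field carrying [DecimalConstant], as described above:

```csharp
using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        FieldInfo one = typeof(decimal).GetField("One");

        // A true CLR constant (a .literal field) would report IsLiteral = true.
        // Decimal.One is instead emitted as static initonly plus a
        // [DecimalConstant] attribute that C# and VB.NET read back as a const.
        Console.WriteLine(one.IsLiteral);
        Console.WriteLine(one.IsInitOnly);
    }
}
```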

  • 1
    It is incorrect that those values are readonly; looking at the .NET Framework Decimal metadata you can see the following: [DecimalConstant(0, 0, 4294967295, 4294967295, 4294967295)] public const decimal MaxValue = 79228162514264337593543950335m; [DecimalConstant(0, 128, 0, 0, 1)] public const decimal MinusOne = -1m;
    – Paleta
    Commented Mar 16, 2011 at 18:29
  • 1
    @Paleta, no, they are readonly. I've verified this by looking at the metadata and the MSDN page for the values. msdn.microsoft.com/en-us/library/system.decimal.one(VS.80).aspx
    – JaredPar
    Commented Mar 16, 2011 at 19:03
  • 1
    I have to disagree with you and your answer, look at the reflected mscorlib.dll code here reflector.webtropy.com/default.aspx/4@0/4@0/DEVDIV_TFS/Dev10/… those are declared as constants, MSDN is incorrect
    – Paleta
    Commented Mar 19, 2013 at 3:57
  • 2
    Well, I just saw that in the IL it is declared as a field static initonly; are all consts in C# translated into static readonly fields? Thanks for the clarification
    – Paleta
    Commented Mar 26, 2013 at 22:52
  • 3
    @Paleta I shared the same confusion as you. I had to sit down for a few minutes and play around with the generated IL and C# source to understand what was going on here. As for your question though: no, the majority of C# constants are emitted as .literal values. It appears that only DateTime and Decimal are emitted in this mixed manner
    – JaredPar
    Commented Mar 26, 2013 at 23:47
25

Some .NET languages do not support decimal literals, and it is more convenient (and faster) in these cases to write Decimal.One instead of new Decimal(1).

Java's BigInteger class has ZERO and ONE as well, for the same reason.
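In C#, which does have decimal literals, all three spellings below produce the same value; the field mainly helps languages without the literal form:

```csharp
using System;

class Program
{
    static void Main()
    {
        decimal a = 1m;               // C# decimal literal
        decimal b = new Decimal(1);   // constructor call from an int
        decimal c = Decimal.One;      // the predefined field

        Console.WriteLine(a == b && b == c); // True
    }
}
```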

  • 1
    Can someone explain this to me? 6 up votes means it's got to be a good answer - but: How does a .Net language that doesn't support decimal as a datatype benefit from having a shared read-only property that returns a decimal and is defined as part of the decimal class?
    – Rob P.
    Commented Apr 14, 2009 at 0:55
  • 9
    He probably meant that if a language doesn't have decimal literals, using a constant would be more efficient than converting an int literal to a decimal. Every .NET language supports the System.Decimal datatype, it's part of the CLR.
    – Niki
    Commented Apr 14, 2009 at 5:05
  • 2
    yeah Niki that is what I wanted to say. You can of course use System.Decimal in all .NET languages, but some support it better (like C# which has a decimal keyword and decimal literals) and some worse. Sorry, English is not my native language...
    – mihi
    Commented Apr 14, 2009 at 13:18
-1

My opinion on it is that they are there to help avoid magic numbers.

Magic numbers are basically anywhere in your code that you have an arbitrary number floating around. For example:

int i = 32;

This is problematic in the sense that nobody can tell why i is getting set to 32, or what 32 signifies, or if it should be 32 at all. It's magical and mysterious.

In a similar vein, I'll often see code that does this

int i = 0;
int z = -1;

Why are they being set to 0 and -1? Is this just coincidence? Do they mean something? Who knows?

While Decimal.One, Decimal.Zero, etc. don't tell you what the values mean in the context of your application (maybe zero means "missing", etc.), they do tell you that the value has been deliberately set, and likely has some meaning.

While not perfect, this is much better than not telling you anything at all :-)
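As a hypothetical sketch of the same idea (the constant names here are invented for illustration):

```csharp
using System;

class Account
{
    // Naming the values records the intent that a bare 0 or -1 would hide.
    static readonly decimal OpeningBalance = Decimal.Zero;
    static readonly decimal MissingReading = Decimal.MinusOne;

    static void Main()
    {
        decimal balance = OpeningBalance;  // deliberately "empty", not a stray 0
        decimal sensor = MissingReading;   // deliberately "no data", not a stray -1
        Console.WriteLine($"{balance} {sensor}");
    }
}
```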

Note: it's not for optimization. Observe this C# code:

public static Decimal d = 0M;
public static Decimal dZero = Decimal.Zero;

When looking at the generated bytecode using ildasm, both options result in identical MSIL. System.Decimal is a value type, so Decimal.Zero is no more "optimal" than just using a literal value.

  • 1
    Your argument hurts my head. They are numbers, they only become magic numbers when you start attaching random meaning to them such as -1 means do the dishes and 1 means bake cakes. Decimal.One is just as magical as 1 but arguably harder to read (but perhaps more optimal). Commented Apr 13, 2009 at 22:29
  • my point was that if someone types Decimal.Zero, they are more likely to have done that deliberately because Zero has some meaning - rather than just arbitrarily setting it to 0 Commented Apr 13, 2009 at 22:32
  • I take issue with saying assigning something to 0 is arbitrary. It makes sense for enumerations with symbolic constants being mapped to numbers but for numbers mapped to numbers seems pretty insane. Sometimes 0 is really... zero. Units on the other hand would be a nice construct. 1km != 1. Commented Apr 13, 2009 at 22:41
  • 8
    Saying Decimal.Zero is just as arbitrary as saying 0.0. If we were talking about something that changes on an operating system level, like "/" vs some library constant that describes the filesystem separator, it would make sense, but Decimal.Zero is always just 0.0.
    – Benson
    Commented Apr 13, 2009 at 22:51
  • 6
    Damn, it was just my opinion. I thought I made that clear :-( Commented Apr 14, 2009 at 4:57
-6

Those 3 values arghhh !!!

I think they may have something to do with what I call trailing 1's.

Say you have this formula:

(x) 1.116666 + (y) = (z) 2.000000

but x and z are rounded to 1.11 and 2.00, and you are asked to calculate (y).

So you may think y = 2.00 - 1.11 = 0.89. Actually y equals 0.88 (2.00 - 1.116666 = 0.883334, which rounds to 0.88), so there is a 0.01 difference.

Depending on the real values of x and y, the result will vary from -0.01 to +0.01. In some cases, when dealing with a bunch of those trailing 1's, to facilitate things you can check whether the trailing value equals Decimal.MinusOne / 100, Decimal.One / 100, or Decimal.Zero / 100 to fix them.

This is how I've made use of them.
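A sketch of the discrepancy being described, assuming x is truncated (not rounded) to two decimal places:

```csharp
using System;

class Program
{
    static void Main()
    {
        decimal x = 1.116666m;
        decimal z = 2.000000m;

        decimal trueY = z - x;                           // 0.883334
        decimal xTruncated = Math.Floor(x * 100) / 100;  // 1.11
        decimal roundedY = z - xTruncated;               // 0.89

        // The residual matches Decimal.One / 100, i.e. 0.01.
        decimal diff = roundedY - Math.Round(trueY, 2);
        Console.WriteLine(diff == Decimal.One / 100);    // True
    }
}
```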

  • 5
    What are you talking about? You're doing bad math with inexact values and then using the division operator to come up with your epsilon? What makes you think that the result will always be off by exactly 0.01, and not (for example) 0.005? If these values represent money, I'd be scared to do business with your application. Commented Aug 2, 2010 at 23:54
    :) I'm not calculating an end result; I have to give the value of (y) in the rounded format. How would you do it?
    – Hassen
    Commented Aug 3, 2010 at 0:15
