For my project, I need to convert dollars to cents, so I wrote the following function to do the conversion safely. The input dollar is a float and the output should be an int.
    def safe_dollar_to_cent(dollar):
        parts = str(dollar).split('.')
        msg = 'success'
        status = 0
        if len(parts) == 1:
            return status, int(parts[0]) * 100, msg
        decimal_part = parts[1]
        if len(decimal_part) > 2:
            decimal_part = decimal_part[:2]
            msg = 'dollar has been truncated: {} -> {}'.format(
                parts[1], decimal_part)
            status = 1
        ret = int(parts[0]) * 100
        multiplier = 10
        for s in decimal_part:
            ret += int(s) * multiplier
            multiplier //= 10  # integer division so the result stays an int
        return status, ret, msg
I am posting here to ask for a more Pythonic way of doing this.
Update: my input is expected to be a float, and the return value should be an int. The reason for this implementation is that I found the following incorrect computation:

    18.90 * 100 = 1889.9999999999998
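As a point of comparison, here is a minimal sketch of two shorter alternatives (the function names are my own, not from the code above). The first assumes the input float is close to a whole number of cents, so rounding recovers the exact value; the second uses `decimal.Decimal` to make the truncation of extra decimals explicit while avoiding binary-float artifacts:

```python
from decimal import Decimal, ROUND_DOWN

def dollar_to_cent(dollar):
    # A float meant to represent whole cents is off by far less than
    # half a cent, so rounding recovers the exact integer cent value.
    return int(round(dollar * 100))

def dollar_to_cent_truncated(dollar):
    # str() gives the shortest decimal representation of the float,
    # so Decimal sees "18.9" rather than the binary approximation.
    cents = (Decimal(str(dollar)) * 100).quantize(
        Decimal('1'), rounding=ROUND_DOWN)  # drop any sub-cent digits
    return int(cents)
```

For example, `dollar_to_cent(18.90)` avoids the `1889.9999999999998` artifact, and `dollar_to_cent_truncated(18.955)` truncates to whole cents the way the original function does.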