
Pfc_n_cst_numerical.of_Decimal_ULong and .of_Decimal_UInt return only 0 or 1

description

According to the documentation, the pfc_n_cst_numerical.of_Decimal_ULong and pfc_n_cst_numerical.of_Decimal_UInt functions should return the decimal representation of their binary-string argument. Instead, they return only 0 or 1, depending on the (valid, non-null) argument.

If you change this part of the code in pfc_n_cst_numerical.of_Decimal_ULong:
lul_decimal = 0
lul_factor  = 0

// Process the binary digit characters from least significant to most (Right to Left).
For li_i = li_numdigits To 1 Step -1
    lul_factor *= 2                         // lul_factor starts at 0, so this leaves it at 0
    If li_i = 1 Then lul_factor = 1         // only the final iteration gets a non-zero factor
    
    If lc_digit[li_i] = '1' Then lul_decimal += lul_factor
Next
to this:
lul_decimal = 0
lul_factor  = 1

// Process the binary digit characters from least significant to most (Right to Left).
For li_i = li_numdigits To 1 Step -1
    If lc_digit[li_i] = '1' Then lul_decimal += lul_factor
    lul_factor *= 2                         // double the place value after each digit
Next
the result is the correct decimal representation of the binary number argument.
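The corrected loop accumulates place values from the least significant digit upward: the factor starts at 1 and doubles after each digit is processed. A minimal sketch of the same algorithm in Python (the function name is mine, for illustration only):

```python
def binary_to_decimal(digits: str) -> int:
    """Convert a binary digit string to its decimal value, mirroring
    the corrected PFC loop: least significant digit first."""
    decimal = 0
    factor = 1
    # Walk the characters right to left, doubling the place value each step.
    for ch in reversed(digits):
        if ch == '1':
            decimal += factor
        factor *= 2
    return decimal
```

For example, `binary_to_decimal("1011")` yields 11, whereas the original PFC code would have returned 1 for the same input.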

Make the same change in pfc_n_cst_numerical.of_Decimal_UInt.

comments

Teacups wrote Oct 18, 2016 at 5:58 PM

The code of the function pfc_n_cst_numerical.of_Decimal_Byte looks identical to the functions above, so it likely needs the same fix.