I just finished adding the last few pieces to make the full suite of expected primitive numeric types work: `i32`, `i64`, `u32`, `u64`, `f32`, and `f64`. I made the choice to limit the degree to which integer and floating point literals can be inferred.
For example:

```
let x: float32 = 0.4;
let y: float32 = x * 0.5;
```
won't pass type checking. This is because any floating point literal that doesn't explicitly have the `f32` or `f64` suffix is assumed to be a `float64`, so `x * 0.5` is `float32 * float64`. This could be fixed by writing the expression as `x * 0.5f32`; while that is more explicit, it is also more verbose. The same issue exists with the integer types.
I think there could be an alternative where an unsuffixed literal like `0.5` takes on a generic `FloatingPoint` type during parsing, then has its concrete type resolved when it is encountered during type checking.
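A minimal sketch of how that deferred resolution could work, written in Python with hypothetical names (this is not the actual implementation): the parser tags an unsuffixed literal with a placeholder type, and the type checker later unifies that placeholder with whatever concrete float type the surrounding context demands.

```python
# Concrete float types in the language (names assumed from the post).
FLOAT32 = "float32"
FLOAT64 = "float64"

# Placeholder assigned to unsuffixed literals like 0.5 at parse time.
FLOAT_LITERAL = "FloatingPoint"


def unify(expected, actual):
    """Resolve a type against the type the surrounding context expects.

    A generic float literal adopts whichever concrete float type the
    context requires; everything else must match exactly.
    """
    if actual == FLOAT_LITERAL and expected in (FLOAT32, FLOAT64):
        return expected  # literal takes on the contextual type
    if expected == actual:
        return expected
    raise TypeError(f"type mismatch: expected {expected}, got {actual}")


# `let y: float32 = x * 0.5` — the literal 0.5 parses as FLOAT_LITERAL,
# and multiplication against a float32 operand resolves it to float32.
print(unify(FLOAT32, FLOAT_LITERAL))  # float32
```

Under this scheme `x * 0.5` type-checks without a suffix, while a genuine mismatch like mixing `float32` and `float64` variables would still be rejected.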