Start with x = 1.0 and keep dividing it by 2 until you get 0.0. If n + 1 is the number of such divisions, then 1/2^n is the smallest positive number representable in the floating-point format of the programming environment of your choice. Similarly, keep halving x = 1.0 until, at stage m + 1, 1.0 + x = 1.0; then 1.0 + 1/2^m is the floating-point number closest to (and above) 1.0. Now start doubling 1.0 until you get +INF (or NaN) at some stage, N + 1: then 2^N (2 - 1/2^m) is the largest floating-point number. Obtain m, n, and N. Discuss your findings: deduce the numbers of bits that your environment uses for the exponent and the significand of a floating-point number.
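The three probing loops above can be sketched as follows, here in Python (whose floats are IEEE 754 double precision); the function names and structure are one possible arrangement, not prescribed by the exercise:

```python
def probe_underflow():
    """Halve 1.0 until the next halving gives 0.0; return n such
    that 1/2**n is the smallest positive representable number."""
    x, n = 1.0, 0
    while x / 2.0 > 0.0:
        x /= 2.0
        n += 1
    return n

def probe_epsilon():
    """Halve x until 1.0 + x == 1.0; return m such that
    1.0 + 1/2**m is the representable number closest to (and
    above) 1.0, i.e. 1/2**m is the machine epsilon."""
    x, m = 1.0, 0
    while 1.0 + x / 2.0 > 1.0:
        x /= 2.0
        m += 1
    return m

def probe_overflow():
    """Double 1.0 until the next doubling overflows to +inf;
    return N such that 2**N * (2 - 1/2**m) is the largest
    finite representable number."""
    x, N = 1.0, 0
    while x * 2.0 != float("inf"):
        x *= 2.0
        N += 1
    return N

print(probe_underflow(), probe_epsilon(), probe_overflow())
```

On an IEEE 754 double-precision environment this prints n = 1074, m = 52, and N = 1023, consistent with a 52-bit fraction field (m = 52) and an 11-bit exponent field (exponents up to 1023, with gradual underflow extending n past 1023 to 1074 via denormals).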