Explain your answers clearly.
Have fun.
Module type                      Propagation delay
=================================================
Single gate (AND, OR, etc.)      1 ns
Multiplexor                      2 ns
ALU or Adder                     10 ns
Registers, PC, Target, etc.      4 ns
Sign extender                    2 ns
Control, ALU Control             5 ns
Memory                           25 ns
Wires                            0 ns
Suppose you're going to run software with the following
instruction frequencies:
Instruction type             Frequency
============================================
Arithmetic and immediate     55%
Branches                     15%
Loads                        20%
Stores                       8%
Jumps                        2%
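To show how an instruction mix like this combines with per-class timings, here is a minimal sketch of a weighted average. The per-class times below are placeholders I invented for illustration, NOT values derived from the delay table; substitute your own datapath results.

```python
# Instruction mix from the table above.
freq = {
    "arithmetic/immediate": 0.55,
    "branch": 0.15,
    "load": 0.20,
    "store": 0.08,
    "jump": 0.02,
}

# Hypothetical per-class execution times in ns -- placeholders only.
time_ns = {
    "arithmetic/immediate": 40.0,
    "branch": 35.0,
    "load": 70.0,
    "store": 60.0,
    "jump": 30.0,
}

# Average time per instruction = sum over classes of frequency * time.
avg_ns = sum(freq[k] * time_ns[k] for k in freq)
print(f"weighted average instruction time: {avg_ns:.2f} ns")
```

The same one-line sum works for average CPI if you swap times for cycle counts.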
In Pascal, it was convenient to write data to a file in a stream of items of identical type. Since some of the data Jack wanted to put in the image files was of real type (the RGB--red, green, blue--values of the colors), he translated everything--character data, integer data, and real data--into reals and wrote the data to the file as a stream of real numbers.
The format of the data files was this: first, an integer between 0 and 9. Next, the name of the image as a list of ASCII characters. Finally, the (very long) list of RGB values, coordinates, etc. Because of the translation to reals I described above, everything was an 8-byte double-precision floating point number. Suppose, for example, that the file started with the integer 5, and the name of the image was "ball". Then the contents of the file would be an 8-byte floating point 5.0 followed by an 8-byte floating point 98.0 (ASCII for 'b'), followed by an 8-byte floating point 97.0 (ASCII for 'a'), and so on.
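The translation described above can be sketched in a few lines (Python here rather than Jack's Pascal, and the names are mine, not his):

```python
# Encode the header of the example file described above: the integer 5,
# then the name "ball", with every item translated to a real number.
name = "ball"
stream = [5.0] + [float(ord(c)) for c in name]
print(stream)  # [5.0, 98.0, 97.0, 108.0, 108.0]
```

Each element of `stream` would then be written to the file as one 8-byte double-precision floating point number.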
After some digging into old documentation, Jack was able to find the double-precision floating point format for the MicroVAX. It consisted of a sign bit, followed by an 8-bit exponent E, followed by a 55-bit mantissa M. The corresponding real number, expressed in the hybrid binary-decimal notation we've used in class, is plus-or-minus 0.1M x 2^(E-128). Note that this format differs from IEEE 754 double precision in several ways: it uses 0.1M as its normalized form instead of 1.M, excess-128 for the exponent instead of excess-1023, and only an 8-bit exponent instead of an 11-bit one.
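To make the format concrete, here is a small sketch (with invented names) that evaluates plus-or-minus 0.1M x 2^(E-128) given a sign bit, the exponent field E, and the mantissa bits M:

```python
def vax_d_value(sign, exponent, mantissa_bits):
    """Value in the format above: +/- 0.1M x 2^(E-128).

    sign: 0 or 1; exponent: the 8-bit excess-128 field E, as an integer;
    mantissa_bits: the mantissa M as a string of '0'/'1' characters
    (trailing zeros may be omitted).
    """
    frac = 0.5  # the fixed leading 1 just after the binary point: "0.1"
    for i, bit in enumerate(mantissa_bits):
        if bit == "1":
            frac += 2.0 ** -(i + 2)  # M's bits start at weight 2^-2
    return (-1.0 if sign else 1.0) * frac * 2.0 ** (exponent - 128)

# The worked example from the file-format description: 5 = 0.101 x 2^3,
# so E = 131 and M begins "01"; 'b' (ASCII 98) = 0.1100010 x 2^7.
print(vax_d_value(0, 131, "01"))      # 5.0
print(vax_d_value(0, 135, "100010"))  # 98.0
```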
At the end of this problem, you will find a hexadecimal dump of the beginning of one of Jack's data files, produced with the Unix command "od -x datafile" on "red", Jack's Indy, followed by an octal dump of the same file ("od datafile"). Note that "od -x" prints out a sequence of 16-bit integers in hexadecimal notation, while "od" prints out a sequence of bytes in octal.
Given that long preamble, answer the following questions.
Dump of 16-bit hexadecimal integers
=================================================
0000000 c041 0000 0000 0000 de43 0000 0000 0000
0000020 c443 0000 0000 0000 d443 0000 0000 0000
0000040 ca43 0000 0000 0000 c643 0000 0000 0000
0000060 e843 0000 0000 0000 0043 0000 0000 0000
0000100 0043 0000 0000 0000 0043 0000 0000 0000
0000120 4443 0000 0000 0000 0000 0000 0000 0000
0000140 0000 0000 0000 0000 0000 0000 0000 0000
Character-by-character dump, in octal.
=========================================================================
0000000 300 101 000 000 000 000 000 000 336 103 000 000 000 000 000 000
0000020 304 103 000 000 000 000 000 000 324 103 000 000 000 000 000 000
0000040 312 103 000 000 000 000 000 000 306 103 000 000 000 000 000 000
0000060 350 103 000 000 000 000 000 000 000 103 000 000 000 000 000 000
0000100 000 103 000 000 000 000 000 000 000 103 000 000 000 000 000 000
0000120 104 103 000 000 000 000 000 000 000 000 000 000 000 000 000 000
0000140 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000
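As a sanity check on how the two dumps line up, here is a short sketch, assuming that "od -x" on the Indy (a big-endian MIPS machine) joins each pair of bytes high byte first:

```python
# First two bytes of the file, as the octal dump prints them.
octal_bytes = ["300", "101"]
vals = [int(b, 8) for b in octal_bytes]  # 0o300 = 0xc0, 0o101 = 0x41
word = (vals[0] << 8) | vals[1]          # big-endian 16-bit grouping
print(f"{word:04x}")                     # c041 -- the first hex-dump word
```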