If you’ve been around computers for any length of time, you’ve heard the
terms reboot or boot up used in connection with either resetting the computer to its initial state or powering it on initially. The term boot is a shortened
version of the term bootstrap, which is itself a reference to the seemingly
impossible task a computer must perform on start-up, namely, “pulling itself
up by its own bootstraps.”
I say “seemingly impossible,” because when a computer is first powered
on there is no program in memory, but programs contain the instructions
that make the computer run. If the processor has no program running when
it’s first powered on, then how does it know where to fetch the first instruc-
tion from?
The solution to this dilemma is that the microprocessor, in its power-on
default state, is hard-wired to fetch that first instruction from a predetermined address in memory. This first instruction, which is loaded into the processor’s
instruction register, is the first line of a program called the BIOS that lives in a special set of storage locations—a small read-only memory (ROM) module
attached to the computer’s motherboard. It’s the job of the BIOS to perform
basic tests of the RAM and peripherals in order to verify that everything is
working properly. Then the boot process can continue.
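The power-on behavior described above can be sketched in a few lines of Python. The reset address and the contents of the ROM here are invented for illustration; real processors hard-wire their own fetch address (often high in the address space, where the BIOS ROM is mapped), and the instruction would be a binary machine word, not a string.

```python
# Toy model of a processor's power-on default state: the program
# counter starts at a fixed, hard-wired address, so the very first
# fetch always finds the BIOS ROM.
RESET_VECTOR = 0xFFF0  # hypothetical hard-wired first-fetch address

# The ROM module is mapped into memory at the reset vector.
memory = {RESET_VECTOR: "first BIOS instruction"}

def power_on():
    program_counter = RESET_VECTOR                   # fixed by the hardware
    instruction_register = memory[program_counter]   # the very first fetch
    return instruction_register

print(power_on())
```

Because the address is fixed by the hardware rather than by any program, the bootstrap paradox dissolves: the first fetch needs no software to tell it where to look.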
At the end of the BIOS program lies a jump instruction, the target of
which is the location of a bootloader program. By using a jump, the BIOS
hands off control of the system to this second program, whose job it is to
search for and load the computer’s operating system from the hard disk.
The operating system (OS) loads and unloads all of the other programs
that run on the computer, so once the OS is up and running the computer
is ready to interact with the user.
Chapter 2
PIPELINED EXECUTION
All of the processor architectures that you’ve looked at
so far are relatively simple, and they reflect the earliest
stages of computer evolution. This chapter will bring
you closer to the modern computing era by introducing
one of the key innovations that underlies the rapid
performance increases that have characterized the past
few decades of microprocessor development: pipelined
execution.
Pipelined execution is a technique that enables microprocessor designers
to increase the speed at which a processor operates, thereby decreasing the
amount of time that the processor takes to execute a program. This chapter
will first introduce the concept of pipelining by means of a factory analogy,
and it will then apply the analogy to microprocessors. You’ll then learn how
to evaluate the benefits of pipelining, before I conclude with a discussion of
the technique’s limitations and costs.
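A back-of-the-envelope model gives a feel for the speedup the chapter will develop in detail. The four-stage breakdown below is an assumption chosen for illustration; the actual number of stages varies by design.

```python
# Rough model of pipelining's benefit. Assumes every instruction
# passes through the same number of equal-length stages.
STAGES = 4          # e.g., fetch, decode, execute, write-back (assumed)
INSTRUCTIONS = 100

# Without pipelining, each instruction must finish all of its stages
# before the next one can begin.
serial_cycles = INSTRUCTIONS * STAGES

# With pipelining, a new instruction enters the pipe every cycle once
# the pipe is full: the first takes STAGES cycles, each later one adds 1.
pipelined_cycles = STAGES + (INSTRUCTIONS - 1)

print(serial_cycles, pipelined_cycles)  # 400 vs. 103
```

Note that no single instruction finishes any faster; the gain comes from overlapping instructions so that, in the best case, one completes every cycle. The limitations discussed at the end of the chapter explain why real pipelines fall short of this ideal.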
NOTE
This chapter’s discussion of pipelined execution focuses solely on the execution of arithmetic instructions. Memory instructions and branch instructions are pipelined using the same fundamental principles as arithmetic instructions, and later chapters will cover the peculiarities of the actual execution process of each of these two types of instruction.
The Lifecycle of an Instruction
In the previous chapter, you learned that a computer repeats three basic
steps over and over again in order to execute a program:
1. Fetch the next instruction from the address stored in the program counter and load that instruction into the instruction register. Increment the program counter.
2. Decode the instruction in the instruction register.
3. Execute the instruction in the instruction register.
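The three steps above can be sketched as a loop in Python. The tuple-based instruction format is invented for illustration; a real processor fetches binary-encoded instructions from memory, and decoding means extracting fields from that encoding.

```python
# A minimal fetch-decode-execute loop. Instructions are modeled as
# tuples rather than binary words, purely for readability.
program = [
    ("add", "A", "B", "C"),  # C = A + B
    ("halt",),
]
registers = {"A": 2, "B": 3, "C": 0}

pc = 0  # program counter
while True:
    instruction = program[pc]  # 1. fetch into the instruction register...
    pc += 1                    #    ...and increment the program counter
    opcode = instruction[0]    # 2. decode
    if opcode == "add":        # 3. execute
        _, src1, src2, dst = instruction
        registers[dst] = registers[src1] + registers[src2]
    elif opcode == "halt":
        break

print(registers["C"])  # -> 5
```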
You should also recall that step 3, the execute step, can itself consist of
multiple sub-steps, depending on the type of instruction being executed
(arithmetic, memory access, or branch). In the case of the arithmetic
instruction add A, B, C, the example we used last time, the three sub-steps
are as follows:
1. Read the contents of registers A and B.
2. Add the contents of A and