Computer Science 2500: Computer Organization
Rensselaer Polytechnic Institute, Spring 2009

Topic Notes: Pipelines

We have all seen and experienced examples of pipelining in our daily lives. The book uses a laundry analogy (Figure 4.25), but any kind of "assembly line" operation is a good example. Consider how much more quickly laundry can be finished if we take advantage of the fact that the washer and dryer (and, in the book's example, the folder and the storer) can all operate in parallel, and each stage can start its work as soon as the previous stage completes. We don't process a single load of laundry any more quickly, but the overlap in successive loads leads to a faster overall completion time.

Similar ideas can be used to create a pipeline of instructions being executed by a processor. MIPS instructions can be executed in phases, and these phases can be used to create such a pipeline. We'll consider a pipeline for MIPS, which is typical of many RISC pipelines. The instructions in our MIPS subset involve five steps:

1. IF: instruction fetch
2. ID: register read and instruction decode
3. EX: execute or calculate address
4. MEM: memory read or write
5. WB: write back register

For our example, we will assume that the register access stages cost 100 time units each, while memory access and ALU operations cost 200. Figure 4.26 in the text shows how long the different instructions in our MIPS subset take given these latencies. Figure 4.27 in the text compares the single-cycle design with a simple pipelined execution. Note that the speed of our pipeline is limited by the speed of the slowest component, so a single instruction now takes as much as 900ps to complete. Note also that the register accesses are strategically arranged so that a register read takes place in the second half of a 200ps slot and a register write takes place in the first half. This will be beneficial later on.
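To make the comparison concrete, here is a rough Python sketch (not from the original notes) that computes single-cycle and pipelined execution times for a short instruction sequence under the latencies assumed above; the stage table, the set of stages each instruction class uses, and the three-lw example program are illustrative assumptions.

    # A rough sketch comparing single-cycle and pipelined execution time,
    # using the latencies assumed above: 100 time units for register access,
    # 200 for memory access and ALU operations.

    STAGE_TIME = {"IF": 200, "ID": 100, "EX": 200, "MEM": 200, "WB": 100}

    # Which of the five steps each instruction class actually needs.
    STAGES_USED = {
        "lw":  ["IF", "ID", "EX", "MEM", "WB"],
        "sw":  ["IF", "ID", "EX", "MEM"],
        "add": ["IF", "ID", "EX", "WB"],
        "beq": ["IF", "ID", "EX"],
    }

    def single_cycle_time(instructions):
        # A single-cycle design needs a clock period long enough for the
        # slowest instruction (lw: 200 + 100 + 200 + 200 + 100 = 800).
        period = max(sum(STAGE_TIME[s] for s in used)
                     for used in STAGES_USED.values())
        return len(instructions) * period

    def pipelined_time(instructions):
        # A pipelined design is limited by the slowest stage (200 here).
        period = max(STAGE_TIME.values())
        # The first instruction occupies all five stages; each later one
        # completes one clock period after the previous one. We count to the
        # end of the last period. (A single instruction's write back actually
        # finishes halfway through its final 200-unit slot, which is where
        # the 900 figure quoted above comes from.)
        return (5 + len(instructions) - 1) * period

    program = ["lw", "lw", "lw"]
    print(single_cycle_time(program))  # 2400
    print(pipelined_time(program))     # 1400

The overlap shows up clearly even for three instructions: the pipelined version does not speed up any single instruction, but the sequence finishes much sooner.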
Figure 4.28 shows a symbolic representation of the execution of an add instruction, indicating which components are used in each step.

The MIPS instruction set was designed with pipelines in mind, so it has features that make pipelining easier:

1. all instructions are the same length, allowing the next IF in the pipeline to proceed immediately
2. there are very few instruction formats, allowing both register read and instruction decode to happen in the ID pipeline stage
3. the limitation of memory access to just the lw and sw instructions allows EX to combine execution and address calculation
4. memory alignment means we always retrieve the entire instruction in a single memory access

A typical RISC system might have a 5-stage pipeline like this. A system with a more complex instruction set may have a 12-18 stage pipeline. Goal: 100-200 stage pipelines to get very significant speedups.

Many architectures now have multiple pipelines as well. The original Pentium had two pipelines, and a smart compiler could keep both pipelines busy, effectively doubling the number of instructions completed in the same number of cycles. If 2 is good, why not 4 or more? Too much duplication of hardware. Another option: just have multiple functional units, rather than duplicating all stages of the pipeline:

[Figure: a pipeline whose shared IF, ID, and WB stages feed several functional units in the execute stage – FPUs, ALUs, a load unit, and a store unit.]

This is especially useful if the execute stage takes longer anyway. This is a superscalar processor.

Hazards

Consider a conditional statement: if (C) S1 else S2. Which branch is more likely? Programmers probably make the "then" part the more likely case, so a compiler might want to set things up to start pipelining S1 after the condition is checked.

How about a while loop or a for loop?

    while (C) S1

Here, C will be false only once in the life of the while loop, so the best assumption is to predict a successful branch (another time around the loop).

The UltraSparc III actually has special branch instructions that a compiler can use when it knows a certain branch is likely to be taken the vast majority of the time.

Some rules of thumb:

1. If a branch leads deeper into the code, assume the branch will fail.
2. Otherwise, assume the branch will be taken.

This gives about an 80% success rate for human-written code; a rough code sketch of these rules appears below. Today's branch prediction techniques in optimizing compilers are more sophisticated and can achieve success rates closer to 98%.

No matter how good our branch prediction is, it will sometimes fail, and we need to be able to make sure instructions can be cancelled. One possibility: allow instructions to do everything but store their result until we're absolutely sure. Another headache: multiple conditional branches in the pipeline.
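The two rules of thumb lend themselves to a tiny static predictor. The following is a hedged Python sketch, not anything from the notes: it interprets "leads deeper into the code" as a branch whose target address is higher than the branch's own address, and the function name and example addresses are made up for illustration.

    # Sketch of the static prediction rules of thumb above: a branch that
    # jumps deeper into the code (forward) is predicted not taken; a branch
    # back to a lower address (e.g. the bottom of a loop) is predicted taken.

    def predict_taken(branch_addr: int, target_addr: int) -> bool:
        """Return True if we statically predict this branch to be taken."""
        if target_addr > branch_addr:
            # Branch leads deeper into the code: assume it will fail.
            return False
        # Backward branch (e.g. back to the top of a while loop): assume taken.
        return True

    # Example: a loop-closing branch at 0x00400020 back to 0x00400008,
    # and a forward branch from 0x00400020 to 0x00400060.
    print(predict_taken(0x00400020, 0x00400008))  # True  (predicted taken)
    print(predict_taken(0x00400020, 0x00400060))  # False (predicted not taken)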
Pipelined Datapath and Control

We will now consider how to construct a data path and control to manage the 5-stage pipeline for our MIPS subset.

In Figure 4.33, we see the single-cycle data path we looked at before, redrawn to show the pipeline phases. For the most part, information flows left-to-right in this diagram. The exceptions (shown in blue) represent hazards:

• WB puts a result back into the register file – this is a data hazard
• MEM may replace the PC with a branch/jump target – a control hazard

Figure 4.34 shows instructions being executed by a pipeline:

• stages are labeled by the components being used in each
• note that the register file is written in the first half of a cycle and read in the second half; this reasonable assumption helps us avoid some potential hazards later on
• in this case, no hazards arise

We will need to add registers to our data path to support pipelining. These registers are shown in Figure 4.35.

• each set of registers holds the values passed between a pair of adjacent stages
• each is large enough to hold the necessary values

The text presents a series of figures showing the active parts of the pipeline during execution (the lw case is also sketched in code at the end of this section):

• The top half of Figure 4.36 shows the IF stage:
  – the instruction from memory is retrieved and stored in the IF/ID pipeline registers
  – PC+4 is computed and stored in the IF/ID pipeline registers
  – the PC is updated with either PC+4 or the result of a branch instruction
• The bottom half of Figure 4.36 shows the ID stage:
  – the instruction stored in the IF/ID pipeline registers is used to retrieve two values from the register file, both of which are sent to the ID/EX pipeline registers
  – the immediate field of the instruction is sign-extended to 32 bits and stored in the ID/EX pipeline registers
  – the PC+4 value is passed along from the IF/ID pipeline registers to the ID/EX pipeline registers for use later
  – we don't need all of these values, but we don't necessarily know yet which ones we need, so we pass them all along
• Figure 4.37 shows the EX stage for a lw instruction:
  – the sign-extended immediate value is added to the base register, both of which come from the ID/EX pipeline registers
  – this sum (the effective address for the memory access) is stored in the EX/MEM pipeline registers
• Figure 4.38 (top) shows the MEM stage for a lw instruction:
  – the effective address stored in the EX/MEM pipeline registers is used to retrieve a value from the data memory
  – this value is stored in the MEM/WB pipeline registers
• Figure 4.38 (bottom) shows the WB stage for a lw instruction:
  – the value retrieved from memory, saved in the MEM/WB pipeline registers, is sent back to the register file for storing
• Figure 4.41 adds extra values to the pipeline registers in recognition of the fact that the register number needs to be retained for the WB stage (if we didn't do this, we would be using the destination register from a different instruction!)
• Figure 4.42 highlights the parts of the datapath that are used for lw
• Figures 4.39 and 4.40 show the completion of a sw instruction:
  – here, we need to remember the value from the register file that is to be stored in memory; it must be passed along during the EX phase from the ID/EX registers to the EX/MEM registers
  – during MEM, we store the value at the location specified by the effective address, both coming from the EX/MEM pipeline registers
  – the WB stage does nothing for sw
• Figure 4.43 shows a sequence of instructions in a pipeline, and Figure 4.45 shows the instructions in execution at the fifth step of this execution sequence

Augmenting this data path to support pipelined control may seem daunting, but it really is not as bad as we'd expect. We can use the same control lines as we did for the single-cycle implementation, but each stage should use the control as set for the instruction it is executing. Control values can be stored in the pipeline registers to make this happen. Figure 4.46 shows the pipelined data path with the control added. We need only store, at each stage, the control signals that are used in that stage or in subsequent stages (Figure 4.50). Figure 4.51 shows the complete pipelined datapath.
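To see how values move between the pipeline registers during the lw walk above, here is a rough Python model. It is a sketch only: dictionaries stand in for the IF/ID, ID/EX, EX/MEM, and MEM/WB registers, and the field names, toy memories, and register numbers are assumptions for illustration, not the actual hardware interfaces.

    # A toy model of one lw instruction, e.g. lw $t0, 8($t1), moving through
    # the four pipeline registers. Register numbers: $t0 = 8, $t1 = 9.

    instr_mem = {0: {"op": "lw", "rs": 9, "rt": 8, "imm": 8}}  # at PC = 0
    reg_file  = {8: 0, 9: 100}     # $t0 = 0, $t1 = 100 (a base address)
    data_mem  = {108: 42}          # word stored at address 100 + 8

    # IF: fetch the instruction and PC+4 into the IF/ID registers.
    pc = 0
    if_id = {"instr": instr_mem[pc], "pc_plus_4": pc + 4}

    # ID: read two registers and sign-extend the immediate into ID/EX.
    instr = if_id["instr"]
    id_ex = {
        "read1": reg_file[instr["rs"]],   # base register value
        "read2": reg_file[instr["rt"]],   # carried along even if unused
        "imm": instr["imm"],              # hardware would sign-extend this
        "rt": instr["rt"],                # destination register number for WB
        "pc_plus_4": if_id["pc_plus_4"],
    }

    # EX: compute the effective address into EX/MEM.
    ex_mem = {"addr": id_ex["read1"] + id_ex["imm"], "rt": id_ex["rt"]}

    # MEM: read data memory into MEM/WB.
    mem_wb = {"data": data_mem[ex_mem["addr"]], "rt": ex_mem["rt"]}

    # WB: write the loaded value back to the destination register.
    reg_file[mem_wb["rt"]] = mem_wb["data"]
    print(reg_file[8])  # 42

Note how the rt field is carried through every register so that WB writes the correct destination; that is exactly the point Figure 4.41 makes.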
Dealing with Hazards

We noticed earlier that our pipelines cannot always operate at full capacity:

• some instructions do not need to use all stages of the pipeline

Forwarding from the MEM/WB pipeline register is selected when:

    if (MEM/WB.RegWrite and (MEM/WB.RegisterRd != 0)
        and not (EX/MEM.RegWrite and (EX/MEM.RegisterRd != 0)
                 and (EX/MEM.RegisterRd = ID/EX.RegisterRs))
        and (MEM/WB.RegisterRd = ID/EX.RegisterRs))
            ForwardA = 01

    if (MEM/WB.RegWrite and (MEM/WB.RegisterRd != 0)
        and not (EX/MEM.RegWrite and (EX/MEM.RegisterRd != 0)
                 and (EX/MEM.RegisterRd = ID/EX.RegisterRt))
        and (MEM/WB.RegisterRd = ID/EX.RegisterRt))
            ForwardB = 01

Figures 4.56 and 4.57 show the data path augmented with additional lines and a forwarding unit that can resolve data hazards.

We next consider an even more unfortunate data hazard, a "load-use" hazard, as shown in Figure 4.58. This one cannot be resolved through forwarding – the value has not yet been retrieved from the data memory by the time it is needed. In this case, we need to stall the pipeline to wait for the value to become available, as shown in Figure 4.59. This is accomplished by adding a hazard detection unit. Using our notation from before, we know a stall is necessary when:

    ID/EX.MemRead and ((ID/EX.RegisterRt = IF/ID.RegisterRs) or
                       (ID/EX.RegisterRt = IF/ID.RegisterRt))

Note that we are detecting this condition as early as possible – when the offending sequence of instructions is in the IF and ID stages. This makes it easier to stall.

The stall is accomplished by inserting a bubble into the pipeline – an instruction that does nothing, a nop, or "no op". This requires that:

• the PC is not updated
• the IF/ID pipeline registers are not updated
• the control entries in the ID/EX pipeline register are loaded with all 0's, which results in the bubble/nop

The data path augmented with a hazard detection unit is shown in Figure 4.60.

Stalls reduce performance – we're spending time executing a nop instead of meaningful instructions. However, they are essential to ensure correct behavior. An optimizing compiler should be aware of the details of the pipeline and can rearrange instructions (in many cases) to avoid the need for a stall at execution time.
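The forwarding and stall conditions above can be collected into a short Python sketch. This is a hedged model rather than the notes' hardware: the dictionary field names mirror the MEM/WB.RegWrite-style notation, and the EX/MEM ("10") forwarding check, which is not among the conditions quoted above, is included as an assumption based on the standard MIPS forwarding unit.

    # Hedged sketch of the forwarding unit and load-use hazard detection.
    # Pipeline registers are modeled as dicts with field names matching the
    # notation above. The EX/MEM ("10") case is an assumed standard check;
    # only the MEM/WB ("01") case appears in the quoted conditions.

    def forward(ex_mem, mem_wb, operand_reg):
        """Return the 2-bit forwarding control for one ALU operand.

        operand_reg is ID/EX.RegisterRs for ForwardA, ID/EX.RegisterRt for
        ForwardB.
        """
        # EX hazard: forward the previous instruction's ALU result from EX/MEM.
        if ex_mem["RegWrite"] and ex_mem["RegisterRd"] != 0 \
                and ex_mem["RegisterRd"] == operand_reg:
            return "10"
        # MEM hazard: forward from MEM/WB (only reached if no EX hazard).
        if mem_wb["RegWrite"] and mem_wb["RegisterRd"] != 0 \
                and mem_wb["RegisterRd"] == operand_reg:
            return "01"
        return "00"  # no forwarding; use the register file value

    def must_stall(id_ex, if_id):
        """Load-use hazard: the instruction in EX is a load whose destination
        is needed by the instruction currently in ID."""
        return id_ex["MemRead"] and id_ex["RegisterRt"] in (
            if_id["RegisterRs"], if_id["RegisterRt"])

    # Example 1: lw $t0, 0($t1) immediately followed by add $t2, $t0, $t3.
    id_ex = {"MemRead": True, "RegisterRt": 8}     # the lw, destination $t0 ($8)
    if_id = {"RegisterRs": 8, "RegisterRt": 11}    # the add, reading $t0 and $t3
    print(must_stall(id_ex, if_id))                # True: insert a bubble

    # Example 2: an add reads $t0 two slots after the lw, so the lw is in WB.
    ex_mem = {"RegWrite": False, "RegisterRd": 0}  # intervening sw writes no register
    mem_wb = {"RegWrite": True,  "RegisterRd": 8}  # the lw writing $t0 ($8)
    print(forward(ex_mem, mem_wb, operand_reg=8))  # "01": forward the loaded value

Checking the EX/MEM source before the MEM/WB source plays the same role as the "and not (EX/MEM.RegWrite ...)" guard in the conditions above: the most recent result wins.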
Control Hazards

Recall that a control hazard occurs when a branch/jump instruction is being executed and subsequent (partially-executed) instructions in the pipeline need to be cancelled, since they never should have been executed. The specifics of our pipeline mean that a conditional branch's outcome is not known until the MEM phase.

Figure 4.61 shows a branch instruction that results in a control hazard – instructions already in the pipeline that are not to be executed are flushed (their control lines are set to 0).

Figure 4.62 shows how we can reduce the cost of a control hazard by adding hardware to determine the result of a conditional branch sooner (during ID):

• The branch target adder is moved to ID.
• An equality checker is added that compares the values coming out of the register file, so it can quickly determine the result of a beq (or bne).

Another technique that works well with short pipelines (such as our 5-stage pipeline) is the use of a branch delay slot:

• Here, we always execute the instruction immediately following a branch or jump.
• This way, a programmer or (hopefully) a compiler can reorder instructions so that some useful work is done in the slot after the branch.
• This is not always possible, so a nop instruction may need to be inserted in some cases.
• However, it is a quite effective way to reduce the cost and frequency of control hazards.

Real implementations of MIPS (among other ISAs) make use of branch delay slots. Figure 4.64 shows some examples of how code can be reordered to take advantage of branch delay slots.

We already discussed some branch prediction techniques. The text discusses some of them in a bit more detail.