Thursday, 17 March 2016

1.1.2
Types of Processor
Reduced instruction set computer (RISC)
Complex instruction set computer (CISC)

RISC
  • Supports only a small number of very simple instructions, each of which can be completed in one clock cycle, making individual instructions fast.
  • This means that individual instructions are executed extremely quickly, but more instructions are needed to complete a given task, so a more complicated task can take longer overall.
  • A side effect of the simpler instruction set is that RISC architectures need a greater number of registers to provide fast access to data while programs are being executed.
  • Mainly used for embedded systems such as washing machines.
  • Can use more RAM to handle intermediate results.
  • Has simpler hardware but more complicated software code.

CISC
  • Supports a large number of complicated instructions, which means a single instruction can take many clock cycles to complete.
  • A single CISC instruction might be made up of a number of smaller RISC-type instructions (see the sketch after this list).
  • Tends to be slightly slower than RISC.
  • Can support a much wider range of operations and addressing modes.
  • When many programs were written in assembly language, CISC processors were very popular because writing programs for them is much easier.
  • Has more complex hardware but more compact and simpler software code.
  • Can use less RAM, as there is no need to store intermediate results.
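
To make the second bullet above concrete, here is a minimal sketch in Python (not real machine code); the memory and registers dictionaries and the load, mult, store and cisc_mult helpers are invented purely for illustration. It models how a single CISC-style "multiply two memory locations" instruction could amount to several simpler RISC-type steps: load, compute, store.

    # Toy model of main memory and a small register file (all names invented).
    memory = {"A": 6, "B": 7}
    registers = {"R1": 0, "R2": 0}

    # RISC-style primitives: each does one simple job, and only LOAD/STORE touch memory.
    def load(reg, addr):        # LOAD reg, addr
        registers[reg] = memory[addr]

    def mult(dest, src):        # MULT dest, src (register to register)
        registers[dest] = registers[dest] * registers[src]

    def store(addr, reg):       # STORE addr, reg
        memory[addr] = registers[reg]

    # CISC-style instruction: one "multiply two memory locations" operation,
    # which internally boils down to several RISC-type steps.
    def cisc_mult(addr1, addr2):
        load("R1", addr1)
        load("R2", addr2)
        mult("R1", "R2")
        store(addr1, "R1")

    cisc_mult("A", "B")
    print(memory["A"])  # 42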
CPU time = (number of instructions) x (average cycles per instruction) x (seconds per cycle)
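
The formula can be illustrated with some made-up numbers (these are not benchmarks, just an example of the trade-off): a RISC chip might need more instructions at one cycle each, while a CISC chip needs fewer instructions that each take several cycles.

    def cpu_time(instructions, avg_cycles_per_instruction, seconds_per_cycle):
        return instructions * avg_cycles_per_instruction * seconds_per_cycle

    # Invented figures for the same task on a 1 GHz clock (1 ns per cycle):
    # the RISC chip runs more instructions, but each takes only one cycle.
    risc = cpu_time(1_500_000, 1, 1e-9)   # 0.0015 seconds
    cisc = cpu_time(1_000_000, 4, 1e-9)   # 0.0040 seconds
    print(risc, cisc)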

RISC lends itself to pipelining, because its fixed-length, single-cycle instructions fit neatly into pipeline stages; the variable-length, multi-cycle instructions of CISC make pipelining much harder to implement.

RISC | CISC
Simple instructions | Complex instructions (often made up of many simpler instructions)
Fewer instructions | A large range of instructions
Fewer addressing modes | Many addressing modes
Only LOAD and STORE instructions can access memory | Many instructions can access memory
Lower energy requirements; can go into "sleep mode" when not actively processing | Higher energy requirements


Multi-core systems:

  • Classic von Neumann architecture uses only a single processor to execute instructions. In order to improve the computing power of processors, it was necessary to increase the physical complexity of the CPU.
  • Traditionally this was done by finding new, ingenious ways of fitting more and more transistors onto the same size of chip.
  • There is even a rule called Moore's Law, which predicted that the number of transistors that could be placed on a chip would double every two years.
Multi-core systems are now very popular: the processor has several cores, allowing multiple programs or threads to be run at once.

Parallel systems

  • One of the most common types of multi-core system is the parallel processor. The chances are that you are using a parallel processing system right now; they tend to be referred to as dual-core or quad-core computers.
  • In parallel processing, two or more processors work together to perform a single task. The task is split into smaller sub-tasks.
  • These sub-tasks are executed simultaneously by all available processors, which hugely decreases the time taken to execute a program. However, software has to be specially written to take advantage of these multi-core systems (see the sketch below).
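
As a rough sketch of that idea, the Python snippet below uses the standard concurrent.futures module to split a task (summing a large list) into sub-tasks that run on separate cores. The function name partial_sum, the chunk size and the worker count are illustrative choices, not anything fixed by the hardware.

    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk):
        # Each core works on its own sub-task independently.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunk_size = 250_000
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

        # One sub-task per worker; on a quad-core machine the four chunks
        # can genuinely be processed at the same time.
        with ProcessPoolExecutor(max_workers=4) as pool:
            partials = list(pool.map(partial_sum, chunks))

        # The partial results still have to be combined at the end.
        print(sum(partials))  # same answer as sum(data)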

Multiple instruction, single data (MISD) - multiple processors (cores) apply different instructions to the same set of data.
Single instruction, multiple data (SIMD) - multiple processors follow the same set of instructions, each working on its own data.
Multiple instruction, multiple data (MIMD) - multiple processors process a number of different instructions simultaneously, each on its own data.
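
The SIMD/MIMD distinction can be pictured with a simple Python analogy (only an analogy: real SIMD and MIMD happen in the processor hardware, not in interpreted loops).

    data = [1, 2, 3, 4]

    # SIMD analogy: one instruction ("double it") applied across many data items.
    simd_result = [x * 2 for x in data]

    # MIMD analogy: different instructions applied to different data items at the same time.
    operations = [lambda x: x * 2, lambda x: x + 10, lambda x: x ** 2, lambda x: -x]
    mimd_result = [op(x) for op, x in zip(operations, data)]

    print(simd_result)  # [2, 4, 6, 8]
    print(mimd_result)  # [2, 12, 9, -4]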

All parallel processing systems act in the same way as a single-core CPU, loading instructions and data from memory and acting accordingly.
However, the different processors in a multi-core system need to communicate continuously with each other, to ensure that if one processor changes a key piece of data, the other processors are aware of the change and can incorporate it into their calculations. There is also a huge amount of additional complexity involved in implementing parallel processing, because once each separate core has completed its own sub-task, the results from all cores need to be combined to form the complete solution to the original problem.
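
Here is a minimal sketch of that coordination problem, assuming Python threads and a lock stand in for cores and hardware synchronisation (the names shared, lock and worker are invented): the lock ensures that when one "core" changes the key piece of shared data, the others see a consistent value, and the combined result is only read once all workers have finished.

    import threading

    shared = {"total": 0}
    lock = threading.Lock()

    def worker(values):
        for v in values:
            # The lock makes sure only one thread changes the shared data at a time,
            # so every other thread always sees a consistent value.
            with lock:
                shared["total"] += v

    threads = [threading.Thread(target=worker, args=(range(1000),)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # The per-thread work is finally combined into one shared result.
    print(shared["total"])  # 4 * sum(range(1000)) = 1998000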

