Thursday, 24 March 2016

Translators

Translators https://www.youtube.com/watch?v=UutE0IAWzBA

A translator takes source code and turns it into machine code. There are three types:

Assemblers
  • Assemblers convert low-level assembly language into machine code, usually as a 1-to-1 conversion: one line of assembly corresponds to one machine code instruction.
  • Assembly instructions are written as short codes called mnemonics.
  • Assembly code is processor specific, so it is not generalisable to other processors.
  • By contrast, both compilers and interpreters translate high-level source code into machine code, and both have a one-to-many relationship: a single line of source code may become many lines of equivalent machine code.
Interpreters
  • Interpreters take one line of high-level source code at a time, convert it into machine code and run it. This allows a program to run immediately, but execution stops at the first line that contains an error.
  • The whole source code is needed every time the program is run, which makes it slower and can create security issues, as anyone running the program can see the source code.
Compilers
  • A compiler, however, takes the whole source code and converts it all into machine code in one go. This can take a long time, and any errors must be fixed before the program will run.
  • Compilers convert source code into object code (close to machine code), which means the source code is not required once the program has been compiled.
  • Compiled code runs faster, but if any changes are made the program has to be recompiled. Compilers are responsible for converting high-level languages.
Intermediate code (object code) is the in-between stage between the compiler and machine code (a bit of both).

Each line of high-level language handled by a compiler or interpreter can turn into dozens of lines of machine code (a one-to-many relationship).
  1. The compiler generates some code.
  2. This code is saved as an executable file.
  3. Another name for this executable file is an "object" file.
  4. When the executable (or object) file is run, the machine code commands that it contains are processed by the CPU.
  5. An example of an executable file is Notepad.exe (in Windows, executable files have this extension).
  6. Because different CPUs have different sets of machine code, the compiler needs to have the target CPU defined before it carries out the translation.
  7. You could, for instance, purchase a commercial C++ compiler that targets the Intel x86 family of processors specifically.
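A quick way to see the one-to-many relationship for yourself - a minimal sketch using Python's built-in dis module, which shows bytecode rather than true machine code, but the principle is the same:

import dis

def average(a, b):
    return (a + b) / 2   # a single line of high-level source code

# dis prints the several lower-level instructions generated for that one line
dis.dis(average)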







Thursday, 17 March 2016



Bubble Sort:
  • Compares items in pairs; if the left item is larger the pair is swapped, then the comparison moves along one position
  • The largest value 'bubbles' to the top (end) of the list on each pass
  • Stops after a pass in which 0 swaps are made
  • Once the list appears sorted there is usually a final pass to check the numbers are in order (the pass that gives 0 swaps)
  • Easy to program but performs badly when the list is in reverse order
  • Has a quadratic order - time complexity of O(n^2) - so can be inefficient for larger lists

Pseudo Code:

Start
    noSwaps = False
    WHILE noSwaps == False
        noSwaps = True
        FOR i = 0 TO length of list - 2
            IF list[i] > list[i+1] THEN
                swap list[i] and list[i+1]
                noSwaps = False
            END IF
        NEXT i
    END WHILE
Stop

Insertion Sort:

  • Splits the list into two parts - a sorted part and an unsorted part - working from left to right
  • Takes a new item from the unsorted part, finds its place in the sorted part and inserts it there
  • Finds the correct place using swaps/shifts
  • Starts with the first element as a sorted list of length one, then takes the next element to the right and finds its place in the sorted list
  • Finishes when all numbers have been added to the ordered list; no final check is needed
  • Good because it allows items to be added and sorted into an array at any time, and is useful for smaller lists (good with memory)
  • Performs well when the list is almost sorted already
  • Time complexity of O(n^2), so can again be slow on big lists
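A minimal runnable sketch of insertion sort in Python (an assumed implementation written to match the description above):

def insertion_sort(values):
    # element 0 starts as the sorted list of length one
    for i in range(1, len(values)):
        current = values[i]              # next item taken from the unsorted part
        j = i - 1
        # shift larger items one place to the right to make room
        while j >= 0 and values[j] > current:
            values[j + 1] = values[j]
            j -= 1
        values[j + 1] = current          # drop the item into its correct place
    return values

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]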

 
Quick Sort:

  • A divide and conquer algorithm
  • During the partition step it divides the array into two groups around a pivot value (the split point), which does not need to be the middle element
  • The ideal pivot is the middle (median) value, but finding it takes time, so a quick choice such as a random element is used instead
  • The partition rearranges the list so that all the numbers to the left of the pivot are less than the pivot and all the numbers to the right are greater, although neither side is yet in order
  • Pointers work inwards from each end of the list, swapping numbers that are on the wrong side of the pivot
  • Another partition step is then applied to each of the groups just made - similar in spirit to a binary search
  • Uses recursion to keep splitting the partitions until they are of size 0 or 1
  • Time complexity of O(n log2(n))
  • Very fast compared to the other two sorts on larger lists, but can be complicated to program
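A minimal sketch of quick sort using recursion. For simplicity this version uses the last element as the pivot rather than a random one (an assumption made purely to keep the code short):

def quick_sort(values):
    # partitions of size 0 or 1 are already sorted - this stops the recursion
    if len(values) <= 1:
        return values
    pivot = values[-1]
    left = [x for x in values[:-1] if x < pivot]     # everything less than the pivot
    right = [x for x in values[:-1] if x >= pivot]   # everything greater than or equal
    return quick_sort(left) + [pivot] + quick_sort(right)

print(quick_sort([7, 3, 9, 1, 5, 8]))   # [1, 3, 5, 7, 8, 9]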
1.1.2
Types of Processor
Reduced instruction set computer (RISC)
Complex instruction set computer (CISC)

                                                                                                                             
RISC
  • Supports only a small number of very simple instructions, each of which can be completed in one clock cycle, making individual instructions fast
  • Individual instructions are executed extremely quickly, but more instructions are needed to complete a given task, so a complicated task may take longer overall
  • A side effect of the simpler instruction set is that RISC architectures need a greater number of registers to provide fast access to data while programs are being executed
  • Mainly used for embedded systems such as washing machines
  • Can use more RAM to handle intermediate results
  • Has simpler hardware but more complicated software code

CISC
  • Supports a large number of complicated instructions, which means a single instruction can take many clock cycles to complete
  • A single CISC instruction might be made up of a number of smaller RISC-type instructions
  • Tends to be slightly slower than RISC
  • Can support a much wider range of actions and addressing modes
  • When many programs were written in assembly language, CISC processors were very popular because writing programs for them is much easier
  • Has more complex hardware but more compact, simpler software code
  • Can use less RAM, as there is less need to store intermediate results
CPU time = (number of instructions) x (average cycles per instruction) x (seconds per cycle)
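A worked example of this formula with made-up figures (all three numbers below are hypothetical, chosen only to show the arithmetic):

instructions = 1_000_000               # hypothetical instruction count
cycles_per_instruction = 1.2           # hypothetical average cycles per instruction
seconds_per_cycle = 1 / 2_000_000_000  # hypothetical 2 GHz clock

cpu_time = instructions * cycles_per_instruction * seconds_per_cycle
print(cpu_time)   # about 0.0006 seconds, i.e. 0.6 ms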

RISC can support pipelining, while CISC generally cannot (its variable-length, multi-cycle instructions make pipelining very difficult).

RISC | CISC
Simple instructions | Complex instructions (often made up of many simpler instructions)
Fewer instructions | A large range of instructions
Fewer addressing modes | Many addressing modes
Only LOAD and STORE instructions can access memory | Many instructions can access memory

RISC processors also have lower energy requirements and can go into a "sleep mode" when not actively processing.


Multi-core systems:

  • Classic von Neumann architecture uses only a single processor to execute instructions. In order to improve the computing power of processors, it was necessary to increase the physical complexity of the CPU
  • Traditionally this was done by finding ingenious new ways of fitting more and more transistors onto the same size of chip
  • There is even a rule called Moore's law, which predicted that the number of transistors that could be placed on a chip would double every two years
Multi-core systems are now very popular: the processor has several cores, allowing multiple programs or threads to be run at once.

Parallel systems

  • One of the most common types of multi-core system is the parallel processor. The chances are that you are using a parallel processing system right now - they tend to be referred to as dual-core or quad-core computers.
  • In parallel processing, two or more processors work together to perform a single task. The task is split into smaller sub-tasks.
  • These sub-tasks are executed simultaneously by all available processors, which hugely decreases the time taken to execute a program; however, the software has to be specially written to take advantage of these multi-core systems.

Multiple instruction, single data (MISD) - multiple processors (cores) carry out different instructions on the same set of data
Single instruction, multiple data (SIMD) - multiple processors follow the same set of instructions, each working on its own data
Multiple instruction, multiple data (MIMD) - multiple processors process a number of different instructions simultaneously, each on its own data

All parallel processing systems act in the same way as a single-core CPU, loading instructions and data from memory and acting accordingly.
However, the different processors in a multi-core system need to communicate continuously with each other to ensure that, if one processor changes a key piece of data, the other processors are aware of the change and can incorporate it into their calculations. There is also a huge amount of additional complexity involved in implementing parallel processing, because once each separate core has completed its own tasks, the results from all the cores need to be combined to form the complete solution to the original problem.
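A minimal sketch of this idea in Python: one task (summing a large list) is split into sub-tasks, the sub-tasks run on separate cores, and the partial results are combined at the end. The chunk layout and core count are illustrative assumptions only:

from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)                 # each core works on its own sub-task

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]       # split the task into 4 sub-tasks
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)  # executed simultaneously on the cores
    print(sum(partials))              # combine the results: 499999500000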


CPU architecture

Components of a CPU

Based upon a von Neumann machine - the stored program approach, where both the data and the program are stored in main memory.


                FDE CYCLE



Fetch - the next instruction is fetched from main memory
Decode - the instruction is interpreted/decoded, and signals are produced to control other internal components (the ALU, for example)
Execute - the instruction is executed

 Registers
A register is a small block of memory, usually a few bytes, which is used as temporary storage for instructions and data as they are being processed.

General purpose registers are used as programs run, to enable calculations to be made, and can be used for any purpose the programmer chooses. Special purpose registers are crucial to how the processor works: values are loaded into and out of these registers during the execution of a program. Some examples of special purpose registers are:
  • program counter (PC)
  • memory address register (MAR) and memory data register (MDR)
  • current instruction register (CIR)
  • accumulator (ACC)
Program counter (PC) - points to the next instruction to be executed
This holds the address of the next instruction to be fetched, decoded and executed. It increments automatically while the current instruction is being decoded, so it always points to the next instruction rather than the current one.

Memory address register (MAR) - holds  location of instruction
This holds the address of the current instruction being fetched, pointing to the relevant location in memory where the required instruction is (at this stage the address is simply copied from the program counter).

Memory data register (MDR) - holds data fetched from main memory (a temporary store)
The MDR can contain both instructions and data. At this stage, an instruction has been fetched and is stored here en route to the current instruction register; the instruction is copied from the memory location pointed to by the MAR.

Current instruction register (CIR) - holds the current instruction while it is decoded and executed
This is used to store the current instruction to be decoded and executed (copied from the MDR). As the instruction in the CIR is being decoded and executed, the next instruction is being fetched into the MDR.

Decoding and executing the instruction
The instruction in the CIR is decoded. As this happens, the PC (program counter) automatically increments.

Control unit (CU) - overall management of the movement of data and instructions
The control unit co-ordinates all of these fetch, decode and execute activities. At each clock pulse it controls the movement of data and instructions between the registers, main memory and the input and output devices. Some instructions may take less time than a single clock cycle, but the next instruction will only start on the next clock pulse.

Status register (SR) - records the status of instructions (e.g. has an overflow occurred?)
This stores a combination of bits used to indicate the result of an instruction. For example, one bit will be set to indicate that an instruction has caused an overflow; another bit will be set to indicate that the instruction produced a negative result. The status register also indicates whether an interrupt has been received.

Arithmetic logic unit (ALU) - carries out the calculations and comparisons that instructions may require
The arithmetic logic unit carries out any arithmetic and logical operations required by the instructions that are executed. Calculations include floating point multiplication and integer division, while logic operations include comparison tests such as greater than or less than.

Accumulator (ACC) - holds the values used in calculations
Any instruction that performs a calculation uses the ACC, and many instructions operate on, or update, the accumulator. If a subtraction instruction is run, it performs the subtraction using the data part of the instruction and stores the result in the ACC.
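A highly simplified model of the fetch-decode-execute cycle, showing how the PC, MAR, MDR, CIR and ACC work together. The three-instruction "architecture" here is invented purely for illustration and is not a real instruction set:

memory = ["LOAD 4", "ADD 5", "HALT", 0, 7, 3]   # a tiny program followed by its data

pc, acc = 0, 0
running = True
while running:
    mar = pc                      # FETCH: the address is copied from the PC to the MAR
    mdr = memory[mar]             #        the instruction is copied into the MDR...
    cir = mdr                     #        ...and then into the CIR
    pc += 1                       # the PC increments while the instruction is decoded
    opcode, *operand = str(cir).split()          # DECODE: split into opcode and operand
    if opcode == "LOAD":          # EXECUTE: act on the decoded instruction
        acc = memory[int(operand[0])]
    elif opcode == "ADD":
        acc = acc + memory[int(operand[0])]
    elif opcode == "HALT":
        running = False

print(acc)   # 7 + 3 = 10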


Buses
A bus is a path down which information can pass. Think of a bus as a set of wires reserved for a particular type of information and you won't go far wrong, although it doesn't have to be wires.
There are three components of the system bus:
- Control bus - carries control signals to help manage the process; it sends commands to the different components of the system
- Address bus - key to communications and works in tandem with the data bus; it carries the location (address) to which the data on the data bus should be delivered
- Data bus - used to carry the data that needs to be transferred from one part of the hardware to another, often to or from memory

Common Architectures

Von Neumann

Many years ago - in fact in 1945, just after the Second World War - two mathematician-scientists independently proposed how to build a more flexible computer. It has a shared memory space for both instructions and data.


Memory

The computer will have memory that can hold both data and also the program processing that data. In modern computers this memory is RAM.

Control Unit

The control unit will manage the process of moving data and programs into and out of memory, and also deals with carrying out (executing) program instructions - one at a time. This includes the idea of a 'register' to hold intermediate values; the 'accumulator' is one such register.
The 'one-at-a-time' phrase means that the von Neumann architecture is a sequential processing machine.
 

Input - Output

This architecture allows for the idea that a person needs to interact with the machine. Whatever values are passed back and forth are stored, once again, in internal registers.

Arithmetic Logic Unit

This part of the architecture is solely involved with carrying out calculations upon the data. All the usual Add, Multiply, Divide and Subtract calculations will be available but also data comparisons such as 'Greater Than', 'Less Than', 'Equal To' will be available.

Bus

Information flows between the various parts of the computer. In a modern computer built to the von Neumann architecture, information passes back and forth along a 'bus'. There are buses to identify locations in memory - an 'address bus' - and there are buses to allow the flow of data and program instructions - a 'data bus'.




Harvard architecture

  • The idea of the Harvard Architecture is to split the memory into two parts.
  •  One part for data and another part for programs.
  • Each part is accessed with a different bus.
  • This means the CPU can be fetching both data and instructions at the same time. There is also less chance of program corruption.
  • Used by RISC processors


Some other examples of CPU architectures include:
  • Embedded CPU architectures, e.g. Intel's 8051 architecture
  • Microcomputer CPU architectures, e.g. MOS Technology's 6502 architecture
  • Mixed-core CPU architectures, e.g. CAS's Loongson
  • Workstation/server CPU architectures, e.g. Oracle's SPARC architecture
 









Thursday, 10 March 2016

LMC task

Task 1:
INP - input the first value
STA FIRST - the input is stored in the memory address FIRST
INP - input the second value
ADD FIRST - Add the value in the memory address FIRST to the second value inputted
OUT - output the result
INP - the third value is inputted
SUB FIRST - Subtract the value in the memory address FIRST from the third value inputted
OUT - output the result
HLT - Finish the program (stops)
FIRST DAT - Declare FIRST as the next available memory address

Task 2:

INP - input the first value 
STA FIRST - Store the value in the memory address FIRST
INP - input a second value
STA SECOND - Store the second value in the memory address SECOND
LDA FIRST - Load the value in the memory address FIRST
OUT - Output this value (found in FIRST)
LDA SECOND - Load the value in the memory address SECOND
OUT - Output this value (found in SECOND)
HLT - Finish the program

FIRST DAT - Declare FIRST as the next available memory address
SECOND DAT - Declare SECOND as the next available memory address (after FIRST)

Thursday, 3 March 2016

1.1.1 structure and function of the processor 
1.1.2 types of processor 

CPU

  • The CPU is the processor
  • It carries out all the mathematical and logical operations necessary to execute the instructions given to it by the user
  • It is one of the most expensive parts of the computer system
  • CPUs can be found in anything from smartphones and tablets to washing machines and microwaves
  • They are extremely complex, both in how they work and in the way they are constructed

Things that can be upgraded:
RAM - this will give the computer more main memory
Graphics card - improves the graphics/video performance
Storage - usually people swap a hard disk drive for an SSD
CPU - purchasing a CPU with more cores; this isn't generally done, due to the cost and the more technical work needed to remove and replace a CPU

Source code is the language that is understandable by humans; processors have no understanding of source code, and it cannot be executed unless it is translated into machine code.
A compiler is the name given to any piece of software that takes source code and converts it into machine code.
Machine code is the binary representation, split into the opcode and the operand (the instruction set).
Opcode - the action the instruction performs
Operand - the location of the data



Machine code instructions 
Each processor architecture has its own unique set of instructions
The compiler must have the target architecture specified before compiling, as the differing architectures are not compatible with each other. Each line of source code can be compiled into many lines of machine code.

Assembly language - uses text or mnemonics to represent machine code in a simplified way

Every instruction that a processor architecture can understand has a unique binary representation, which is known as an opcode.
When an instruction is executed, the opcode is used to determine which instruction to execute (what action it is to perform).


An example of some opcodes and what happens when each is executed:

Opcode  | Assembly mnemonic | Description
000 001 | MOV               | Moves a value to a register
000 010 | ADD               | Adds a value and stores the result in the ACC (accumulator)
000 100 | SUB               | Subtracts a value and stores the result in the ACC (accumulator)
001 000 | MLT               | Multiplies a value and stores the result in the ACC (accumulator)

Each opcode represents a different instruction, and mnemonics are used to make development in assembly language easier, as dealing with raw binary can get very complex.
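A minimal sketch of what a decoder might do with these opcodes. The binary values mirror the illustrative table above; they are not the opcodes of any real processor:

OPCODES = {
    0b000001: "MOV",   # move a value to a register
    0b000010: "ADD",   # add a value, result stored in the ACC
    0b000100: "SUB",   # subtract a value, result stored in the ACC
    0b001000: "MLT",   # multiply a value, result stored in the ACC
}

instruction = 0b0000100011       # opcode in the high 6 bits, operand in the low 4 bits
opcode = instruction >> 4        # extract the action to perform
operand = instruction & 0b1111   # extract the location of the data
print(OPCODES[opcode], operand)  # ADD 3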



Comparison Operators 
==  Equal to
!=  Not equal to
<   Less than
<=  Less than or equal to
>   Greater than
>=  Greater than or equal to

Arithmetic Operators 
+    Addition e.g. x = 6 + 5 gives 11
-    Subtraction e.g. x = 6 - 5 gives 1
*    Multiplication e.g. x = 12 * 2 gives 24
/    Division e.g. x = 12 / 2 gives 6
MOD  Modulus e.g. 12 MOD 5 gives 2
DIV  Quotient e.g. 17 DIV 5 gives 3
^    Exponentiation e.g. 3 ^ 4 gives 81
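The same operators written as a short runnable Python snippet (note that Python itself spells MOD as %, DIV as // and exponentiation as **):

print(6 + 5)      # 11
print(6 - 5)      # 1
print(12 * 2)     # 24
print(12 / 2)     # 6.0
print(12 % 5)     # 2  (MOD - the remainder)
print(17 // 5)    # 3  (DIV - the quotient)
print(3 ** 4)     # 81
print(4 <= 4, 4 != 5)   # True True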








Tuesday, 1 March 2016

Searching algorithms

binary searching algorithm

Start
    key = 4
    get values
    imin = 0
    imax = len(values) - 1
    found = False
    WHILE found == False and imax >= imin:
        mid = (imin + imax) // 2
        IF key == values[mid]:
            found = True
        ELIF key > values[mid]:
            imin = mid + 1
        ELSE:
            imax = mid - 1
    END WHILE
    IF found == True:
        print(key, "found at pos", mid)
    END IF
Stop


Time complexity
 linear search = O(n)
Binary search = O(log(n))

Binary search is more efficient - it divides the remaining search space by 2 each time ("divide and conquer").
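A rough illustration of why that matters: in the worst case a linear search looks at every item, while a binary search needs only about log2(n) comparisons (a small sketch with assumed list sizes):

import math

for n in [10, 1_000, 1_000_000]:
    binary_worst = math.floor(math.log2(n)) + 1   # halving the list each time
    print(f"{n} items: up to {n} comparisons (linear) vs about {binary_worst} (binary)")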

Disadvantages of binary searches
  • The list has to be ordered
  • If the values are in a random order, the only choice is a linear search
  • Binary search algorithms are harder to write (errors are more likely)
  • If the list is short there is no point in using it (it halves the search space each time, so it is best for big lists)
  • If the list changes often, you will need to re-sort it every time
  • If the items you are searching for are near the front of the list, there is little benefit in using binary search








Thursday, 25 February 2016

KARNAUGH MAP

They are used to simplify a Boolean expression. They work in a similar way to a truth table, using patterns within the table in order to simplify the expression. They illustrate a table of possible inputs mapped against the required output.

RULES
- no zeros are allowed inside a group
- no diagonal joining of blocks
- groups should be as large as possible
- every one (1) must be within a block - no loners
- groups must be formed in powers of 2 (1, 2, 4, 8, etc.)
- overlapping is allowed
- wrap-arounds are allowed

(A^B) v (A^¬B) - split it into its two parts and deal with each separately in the map


Filling in the two-variable map for (A^B) v (A^¬B), with A across the columns and B down the rows:

        A=0   A=1
B = 0    0     1
B = 1    0     1

The two 1s form a single group covering A = 1, so the expression simplifies to A.

A three-variable example, with the A and B values across the columns and C down the rows:

        AB=00  AB=01  AB=11  AB=10
C = 0     0      0      1      0
C = 1     0      0      1      1

The 1s represent (A and B and not C), (A and B and C) and (A and not B and C).
Grouping the AB=11 column and the adjacent pair in the C=1 row gives the simplified expression (A and B) or (A and C).
A and B or C an A