Pseudo code for 100 processors (computation time: 1 clock cycle):

void main()
{
    switch (processor_id)
    {
        case 1:   compute element 1;   break;
        case 2:   compute element 2;   break;
        case 3:   compute element 3;   break;
        ...
        case 100: compute element 100; break;
    }
}

Pseudo code for 25 processors (computation time: 4 clock cycles):

void main()
{
    switch (processor_id)
    {
        case 1:  compute elements 1-4;    break;
        case 2:  compute elements 5-8;    break;
        case 3:  compute elements 9-12;   break;
        ...
        case 25: compute elements 97-100; break;
    }
}

Pseudo code for 2 processors (computation time: 50 clock cycles):

void main()
{
    switch (processor_id)
    {
        case 1: compute elements 1-50;   break;
        case 2: compute elements 51-100; break;
    }
}
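To make the same decomposition concrete, here is a minimal C sketch (an illustration added here, not part of the original pseudo code) in which each processor computes one contiguous block of the 100 elements. The names compute_element, N and P, and the way processor_id is obtained (hard-coded below; in practice it would come from the runtime, e.g. an MPI rank) are assumptions for illustration.

#include <stdio.h>

#define N 100   /* total number of elements                */
#define P 2     /* number of processors; try 100, 25, or 2 */

/* Hypothetical per-element work standing in for "compute element i". */
static void compute_element(int i)
{
    printf("computing element %d\n", i);
}

int main(void)
{
    int processor_id = 1;   /* 1-based, as in the pseudo code; normally
                               supplied by the runtime (e.g. an MPI rank) */

    int chunk = N / P;                           /* grain size: elements per processor */
    int first = (processor_id - 1) * chunk + 1;
    int last  = (processor_id == P) ? N : first + chunk - 1;

    /* Each processor handles only its own block, so fewer processors
       means a coarser grain: more elements (and clock cycles) each. */
    for (int i = first; i <= last; i++)
        compute_element(i);

    return 0;
}

With P = 100 each processor handles a single element, matching the 1-clock-cycle case above; with P = 2 each handles 50 elements, matching the 50-cycle case.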


Levels               Grains            Parallelism
Instruction level    Fine              Highest
Loop level           Fine to Medium    Moderate
Sub-routine level    Medium            Moderate
Program level        Coarse            Least
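As a small illustration of the loop level in the table above, the following C sketch (an assumption added for illustration; OpenMP is just one possible mechanism) distributes the iterations of a loop across threads, each iteration being a fine- to medium-grained task.

#include <stdio.h>

#define N 100

int main(void)
{
    double a[N];

    /* Loop-level parallelism: each iteration is an independent task,
       and the OpenMP runtime spreads the iterations over the threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * i;

    printf("a[%d] = %f\n", N - 1, a[N - 1]);
    return 0;
}

Compiled with -fopenmp (gcc/clang) the loop runs in parallel; without it the pragma is ignored and the program still runs serially.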


In parallel computing, the granularity (or grain size) of a task is a measure of the amount of work (or computation) performed by that task.[1]

Another definition of granularity takes into account the communication overhead between multiple processing elements. It defines granularity as the ratio of computation time to communication time, where computation time is the time required to perform the computation of a task and communication time is the time required to exchange data between processors in order to perform that task.[2]

If Tcomp is the computation time and Tcomm is the communication time, then the granularity G of a task can be calculated as:

G = Tcomp / Tcomm

Granularity is usually measured in terms of the number of instructions executed in a particular task.[1] Alternately, granularity can also be specified in terms of the execution time of a program, combining the computation time and communication time.[1]
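As one way to obtain the time-based measure described above, the sketch below (an illustrative assumption using MPI, not something prescribed by the cited sources) times a purely local computation phase and a communication phase separately and reports their ratio as an estimate of G = Tcomp / Tcomm.

#include <stdio.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double local = 0.0, global = 0.0;

    /* Computation phase: local work only, no inter-processor traffic. */
    double t0 = MPI_Wtime();
    for (int i = 0; i < N; i++)
        local += 0.5 * i;
    double t_comp = MPI_Wtime() - t0;

    /* Communication phase: combine the partial results across processors. */
    double t1 = MPI_Wtime();
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double t_comm = MPI_Wtime() - t1;

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("Tcomp = %g s, Tcomm = %g s, G = Tcomp/Tcomm = %g\n",
               t_comp, t_comm, t_comp / t_comm);

    MPI_Finalize();
    return 0;
}

A large G means computation dominates communication (a coarse-grained task), while a small G indicates a fine-grained one.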

  1. Hwang, Kai. Advanced Computer Architecture: Parallelism, Scalability, Programmability (1st ed.). McGraw-Hill Higher Education. ISBN 0070316228.
  2. Kwiatkowski, Jan (9 September 2001). "Evaluation of Parallel Programs by Measurement of Its Granularity". Parallel Processing and Applied Mathematics. Springer Berlin Heidelberg: 145–153. doi:10.1007/3-540-48086-2_16.