Multicore vs Parallel Systems


Both multicore systems and parallel systems concern how many processing units a computer system contains and how those units operate together.

To understand what multicore and parallel systems are, it is vital to first understand what a Central Processing Unit (CPU) is.

A CPU is an electronic circuit, in the form of a chip, that executes instructions. It carries out the logic, control, arithmetic, and input/output operations specified by the instructions that make up a program.

The objective of both kinds of system is to execute more tasks simultaneously, increasing overall system performance.

What is a Multicore System?

As the production of single-core processors comes to a halt, multicore processors are filling the space left behind, bringing new advanced features and better overall performance. Multicore processors have become widely used in applications such as cloud computing, data warehousing, and cyber-physical systems, and they are increasingly popular in environments constrained by weight, power, and size.

A multicore system is defined as a system that has two or more cores (CPUs) working together on the same chip. It is also a type of architecture in which a single physical processor contains the logic of two or more processors, packaged together in a single integrated circuit (as a bundle). Multicore systems allow a computer to perform more tasks while maintaining high system performance.

Multicore Processor Architecture

Number of cores:

  • Multicore processors differ in how many cores they contain; a quad-core processor, for instance, has four cores.
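The number of cores is something software can query directly. As a quick illustration, a short Python sketch can ask the operating system how many cores are available:

```python
import os

# Ask the operating system how many logical cores the processor has.
# On a quad-core machine without simultaneous multithreading this
# would report 4.
cores = os.cpu_count()
print(f"This machine has {cores} logical cores")
```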

Number of core types:

  • Homogeneous (symmetric) cores: in a homogeneous processor, all the cores are of the same type, typically general-purpose central processing units (CPUs) running a single multicore operating system.
  • Heterogeneous (asymmetric) cores: in a heterogeneous processor, there is a mix of core types that can run different operating systems, and the mix can include graphics processing units (GPUs).

Number and level of caches:

  • Every core within the multicore processor has its own cache system, which is not shared with the other cores. These caches are small, fast pools of local memory.

How cores are interconnected:

  • Cores vary in their bus architectures. (A bus is a set of circuits on the motherboard that connects components to the central processing unit (CPU).)

Isolation:

  • Physical isolation: ensures that different cores cannot access the same physical hardware (such as locations in RAM and the caches).
  • Temporal isolation: ensures that the performance and execution of one core does not have a negative effect on another core.

Advantages of Multicore Processors

True Concurrency:

  • The availability of multiple cores allows software to run truly simultaneously, increasing support for software across a wide range of applications.

Reliability and Robustness:

  • The ability to allocate software to a number of different cores increases reliability and robustness (i.e. fault tolerance) by constraining and limiting faults and failures so they cannot spread from one piece of software to another.

Performance:

  • Performance, arguably the most important advantage of multicore processors, increases greatly with the addition of cores. The short distances between cores on an integrated chip allow lower resource-access latency and faster cache access compared with single-core processors.

Isolation:

  • During execution, software on a multicore system is less likely to affect software running on another core than it would be on a single-core system.

Energy efficiency:

  • The use of multiple cores on a single chip allows architects to reduce the number of separate embedded processors a system needs.
  • Because the cores can run at lower clock speeds than one very fast single core, they generate less heat.
  • Less heat generated means less energy spent on cooling the system, which can also extend battery life.

Obsolescence Avoidance:

  • Unlike single-core processors, multicore processors are still advancing: the number of cores continues to increase, which keeps multicore processors in use and in development.

Hardware costs:

  • Because one multicore processor replaces several separate processors, systems contain fewer chips and computerised peripherals, reducing hardware costs.

Disadvantages of Multicore Processors

Concurrency Defects:

  • Within a multicore processing system, cores execute programs simultaneously, creating the potential for:
  • Deadlock: cores waiting indefinitely for each other (or for themselves) to take action.
  • Livelock: similar to deadlock, except the states of the processes involved keep changing without making progress.
  • Starvation: a process being perpetually denied the resources it needs.
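The deadlock risk can be sketched in Python with two threads sharing two locks; acquiring the locks in a consistent order is one standard way to avoid it (the names and workload here are purely illustrative):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    # Both threads acquire the locks in the SAME order (a, then b).
    # If one thread took a-then-b and the other b-then-a, each could
    # end up holding one lock while waiting forever for the other:
    # that is a deadlock.
    with lock_a:
        with lock_b:
            results.append(name)

t1 = threading.Thread(target=worker, args=("core 1",))
t2 = threading.Thread(target=worker, args=("core 2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # both workers finished without deadlocking
```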

Analysis Difficulty:

  • Analysing interference between programs becomes more difficult as the number of cores increases.

Interference:

  • The behaviour of software can be affected when a program executing on one core interferes with a program executing on another core of the same processor. As the number of cores increases, the number of possible interference paths increases with it.

Shared resources:

  • In a multicore system, the cores of the same processor must share both internal and external resources.
  • Internal resources include: the caches, input/output controller, system bus, memory controller, and interconnects.
  • External resources include: input/output devices, networks, and main memory.

Sharing these peripherals means:

  1. Single points of failure can occur.
  2. Interference can happen between two applications running on the same core.

Special programming is required:

  • Multicore systems cannot simply be dropped into place and switched on: operating systems and software require specialised programming to take advantage of multiple cores.

Heat generation:

  • Having many cores operating at the same time increases the amount of heat generated.

What is a Parallel System?

In this day and age, new applications constantly demand faster processors, and many of the commercial applications used today are built using parallel systems.

These systems are able to process large amounts of data in various ways, to achieve a high level of efficiency.

Parallel systems are all about breaking a program's instructions into discrete parts so that they can be executed simultaneously on different CPUs.

Parallel systems are designed to decrease the execution time of programs by partitioning them into fragments and processing those fragments simultaneously; such systems are also known as tightly coupled systems. A parallel system can combine multiple processors, machines, computers, or CPUs into a parallel processing bundle, or a mixture of these.

Parallel System Architecture

As you would expect, parallel systems are more difficult to program than single processors because of the architecture they are built from, which comprises many CPUs rather than one. All the CPUs in the system must be coordinated and synchronised.

The models below have become the most popular basis for programming parallel systems; they typically involve asynchronous processes with a shared memory.

These classifications are referred to as “Flynn’s taxonomy”.

Single instruction stream, single data stream (SISD) (sequential programming)

  • There is no parallelism in either the instruction stream or the data stream: one control unit fetches a single instruction at a time from memory.
  • This stream corresponds to the Von Neumann architecture, as a single uni-core processor executes a single instruction stream, operating on data stored in a single memory.
  • Examples include older uniprocessor machines, such as PCs from the early 2000s.

Single instruction stream, multiple data steams (SIMD)

  • Here a single instruction operates on multiple different data streams at the same time.
  • A single operation is performed on multiple data points simultaneously; these systems exploit data-level parallelism, but not concurrency (i.e. not overlapping time periods). It has generally proved difficult to sustain commercial SIMD-only processors.
  • Examples: most video game consoles since 1998 have included SIMD processors in their architecture.
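Plain Python has no direct SIMD instructions, but the idea of one operation applied to many data items can be sketched (real SIMD hardware would perform all four additions in a single instruction, while Python performs them one after another):

```python
# One "instruction" (add 10) applied across a whole set of data.
# A SIMD unit would carry out all four additions at once;
# this sequential Python version only illustrates the idea.
data = [1, 2, 3, 4]
result = [x + 10 for x in data]
print(result)  # [11, 12, 13, 14]
```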

Multiple instruction streams, single data stream (MISD)

  • Many functional units will execute different operations on the same data stream. This architecture is generally used in fault tolerant environments.
  • An example is the Space Shuttle flight control computer.

Multiple instruction streams, multiple data streams (MIMD)

  • Machines have a number of processors that function independently and asynchronously. At any one time, different processors may be executing different instructions on different pieces of data.
  • Examples of MIMD systems include Intel Xeon Phi, and most parallel computers post 2013.

Difference between Parallel programming and sequential programming:

Parallel programming is the execution of multiple instructions at the same time, whereas sequential programming executes an ordered sequence of instructions, one after the other, until the program is complete.
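This difference can be sketched in Python: the same total is computed sequentially, and then by partitioning the data into fragments summed by worker threads (a simplified illustration; real speed-ups depend on the hardware and the workload):

```python
from concurrent.futures import ThreadPoolExecutor

numbers = list(range(1, 101))

# Sequential: one ordered pass over all the data.
sequential_total = sum(numbers)

# Parallel: partition the data into four fragments, sum each
# fragment in a separate worker, then combine the partial results.
chunks = [numbers[i:i + 25] for i in range(0, 100, 25)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))
parallel_total = sum(partial_sums)

# Both approaches give the same answer; only the execution differs.
print(sequential_total, parallel_total)  # 5050 5050
```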

Advantages of Parallel Systems

Time efficiency:

  • This is arguably the main reason parallel systems are used: they execute code very efficiently. Parallel systems provide a means of concurrency, performing multiple actions at the same time.

Goes beyond the limits of sequential programming:

  • Ultimately, the speed of a sequential program depends on how fast data can be passed through the system hardware, so the hardware itself (for instance, its bandwidth) must be improved to increase execution speed.
  • With parallel programming, however, performance is not capped by a single processor: because instructions execute simultaneously across multiple processors, speed can be increased by adding hardware rather than by waiting for faster hardware.

Resource management:

  • Using computational resources that are available through a network is advantageous when the local resources are too costly to manage.

Disadvantages of Parallel Systems

Complexity:

  • One major disadvantage of parallel systems is the complex architecture they possess. This complexity can result in deadlocks, where cores wait for each other (or for themselves) to execute an instruction, and in non-determinism, where the program cannot predict the outcome of a process because the state of another component is unknown.

Hard to develop a program that can be easily adaptable to existing and future systems:

  • Sequential programs are based on the Von Neumann architecture, which simplifies many of the considerations required when developing a program. Parallel processing, by contrast, is based on multiple programming models that depend on different attributes of parallel systems. This makes it hard to develop a parallel program that suits existing systems and future ones alike.

Code specific, increase learning curve:

  • Because parallel programming allows different models to communicate with each other simultaneously, and each model can have its own programming specifications, code reuse is impeded and the learning curve ultimately increases.

Less mature:

  • The parallel architecture and programming style has not been around as long as other approaches, such as sequential programming. Because parallel programming is still relatively new, there is less documentation and there are fewer lessons learnt.

Types of Parallel systems

Both multicore and parallel systems execute tasks in parallel. Below are the types of parallelism used in industry.

Bit-level parallelism:

  • Bit-level parallelism is a form of parallel computing based on increasing the processor's word size. A larger word size reduces the number of instructions the processor must execute to operate on data larger than one word.
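A rough Python sketch of why word size matters: a processor with only 8-bit words must add two 64-bit numbers one byte at a time, carrying between steps, while a 64-bit processor needs a single add instruction (the function below simulates the byte-by-byte approach; it is illustrative, not a real instruction set):

```python
def add_8bit_chunks(a, b):
    """Add two 64-bit numbers the way an 8-bit processor must:
    one byte at a time, carrying between the eight steps."""
    result, carry = 0, 0
    for i in range(8):
        shift = 8 * i
        byte_sum = ((a >> shift) & 0xFF) + ((b >> shift) & 0xFF) + carry
        result |= (byte_sum & 0xFF) << shift
        carry = byte_sum >> 8
    return result

a, b = 0x0123456789ABCDEF, 0x1111111111111111
# A 64-bit processor computes a + b in ONE instruction; the 8-bit
# simulation needed eight separate add-with-carry steps.
assert add_8bit_chunks(a, b) == a + b
```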

Instruction-level parallelism:

  • Theoretically, a computer program is a series of instructions executed by the processor. Without instruction-level parallelism, a computer can issue at most one instruction per cycle.
  • Independent instructions in a program can be bundled up and executed in parallel without changing the result of the program.
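A tiny sketch of the idea: of the three statements below, the first two are independent, so a processor with instruction-level parallelism could issue them in the same cycle, while the third must wait for both results:

```python
# These two calculations are independent of each other, so a
# superscalar processor could execute them at the same time:
a = 2 * 3
b = 10 - 4
# This one depends on both results above, so it cannot start
# until they have finished:
c = a + b
print(c)  # 12
```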

Task parallelism:

  • Task parallelism is a form of computer parallelism that focuses on distributing tasks, executed simultaneously by threads or processes, across different processors.
  • Pipelining is a common type of task parallelism: a single set of data moves through a series of separate tasks, each of which can execute independently.
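Task parallelism can be sketched in Python with two different tasks submitted to a thread pool at the same time (the two counting functions are illustrative examples, not part of any standard API):

```python
from concurrent.futures import ThreadPoolExecutor

# Two DIFFERENT tasks (not the same operation on split data),
# submitted to run simultaneously on separate workers.
def count_vowels(text):
    return sum(1 for ch in text if ch in "aeiou")

def count_words(text):
    return len(text.split())

text = "parallel systems execute discrete tasks simultaneously"
with ThreadPoolExecutor(max_workers=2) as pool:
    vowels_future = pool.submit(count_vowels, text)
    words_future = pool.submit(count_words, text)
vowels, words = vowels_future.result(), words_future.result()
print(vowels, words)
```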

Super-word level parallelism:

  • This is a vectorisation technique based on loop unrolling and basic-block vectorisation.
  • Loop unrolling is a technique that increases a program's execution speed at the expense of its binary size, an example of a space-time tradeoff.

Pipelining:

  • Pipelining involves modifying a system so that a task is divided into stages through which data items flow; successive items can then be processed in different stages at the same time, with delays introduced only where one stage depends on another's data.
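A software analogy for pipelining, sketched with Python generators: each stage passes items on one at a time, so a later stage can start on the first item while an earlier stage is still producing the rest (in hardware the stages would genuinely overlap in time):

```python
# Stage 1: double each item.
def double(items):
    for x in items:
        yield x * 2

# Stage 2: add one to each doubled item.
def add_one(items):
    for x in items:
        yield x + 1

# Chain the stages into a pipeline and pull the results through.
output = list(add_one(double([1, 2, 3])))
print(output)  # [3, 5, 7]
```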

Summary and Facts

There is always a growing demand to increase the speed at which systems operate, to enhance user experience and efficiency. Thus, architects and scientists are constantly developing and testing new architectures to provide the best output.

Processors have advanced considerably from using single processors to multiple processors, and from using sequential processes to parallel processes.

These advancements have resulted in parallel systems and multicore systems, both of which revolve around the idea of executing instructions at the same time.

In the coming years, these systems will become even more advanced to suit the needs of our computer systems. One advantage of these systems is that they can be built on and improved, unlike single-core processors, which cannot.

That being said, single core processors will still have a use, as not every system requires a parallel one.

What is a Multicore System?

  • A multicore system is defined as a system that has two or more cores working together on the same chip.

Multicore Processor Architecture

  • Number of cores
  • Number of core types
    • Homogeneous (symmetric) cores
    • Heterogeneous (asymmetric) cores
  • Number and level of caches
  • How cores are interconnected
  • Isolation
    • Physical isolation
    • Temporal isolation
  • Multicore systems allow the system to perform more tasks as well as maintaining a high system performance.

Advantages of Multicore Processors

  • True Concurrency
  • Reliability and Robustness
  • Performance
  • Isolation
  • Energy efficiency
  • Obsolescence Avoidance
  • Hardware costs

Disadvantages of Multicore Processors

  • Concurrency Defects
  • Analysis Difficulty
  • Interference
  • Shared resources
  • Special programming is required
  • Heat generation

What is a Parallel System?

Parallel systems are all about breaking down discrete parts of instructions of programs so that they can execute them simultaneously on different CPUs.

A parallel system can deal with multiple processors, machines, computers, or CPUs etc. by forming a parallel processing bundle or a combination of both entities.

Difference between Parallel programming and sequential programming

Parallel programming is the execution of multiple instructions at the same time, whereas sequential programming executes an ordered sequence of instructions, one after the other, until the program is complete.

Parallel System Architecture

  • Single instruction stream, single data stream (SISD)
  • Single instruction stream, multiple data streams (SIMD)
  • Multiple instruction streams, single data stream (MISD)
  • Multiple instruction streams, multiple data streams (MIMD)

Advantages of Parallel Systems

  • Time efficiency
  • Goes beyond the limits of sequential programming
  • Resource management

Disadvantages of Parallel Systems

  • Complexity
  • Hard to develop a program that can be easily adaptable to existing and future systems
  • Code specific, increase learning curve
  • Less mature

Types of Parallel systems

  • Bit-level parallelism
  • Instruction-level parallelism
  • Task parallelism
  • Super-word level parallelism
  • Pipelining

The main difference between multicore and parallel systems?

Both approaches execute programs at the same time, but parallel processing is the broader concept: it refers to running more than one program or task simultaneously, usually with different processing units communicating with each other. These might be multiple CPUs, multiple threads on one core, multiple cores, or multiple machines.

Multicore processing, on the other hand, refers specifically to executing programs across the multiple cores of a single CPU chip.
