












An overview of cache coherence in parallel processing architecture, focusing on the symmetric shared-memory architecture. It discusses multiprocessor cache sharing, the cache coherence problem, and the implications for ordering among multiple processes. The document also includes formal definitions of coherence and consistency.
Based on the memory organization and interconnect strategy, MIMD machines are classified into two groups: centralized shared-memory architectures and distributed-memory architectures.
In a centralized shared-memory design, the processors share the same physical, centralized memory, connected by a bus. The key architectural property of this design is Uniform Memory Access (UMA): the access time to any memory location is the same from every processor.
A distributed-memory machine consists of a number of individual nodes, each containing a processor, some memory, I/O, and an interface to an interconnection network that connects all the nodes.
Distributing the memory provides more aggregate memory bandwidth and lower latency to a node's local memory.
The shared-memory communication model is compatible with SMP hardware, and it offers ease of programming when communication patterns are complex or vary dynamically during execution.
The message-passing communication model, in contrast, makes communication explicit, which is simpler to understand, and it naturally supports sender-initiated communication.
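The contrast between the two models can be sketched with threads in Python. This is an illustrative sketch, not something from the slides: the shared-memory style communicates implicitly through state both threads can load and store, while the message-passing style sends an explicit, sender-initiated message.

```python
import threading
import queue

# Shared-memory style: threads communicate implicitly through
# state that both of them can load and store.
shared = {"value": 0}
lock = threading.Lock()

def sm_writer():
    with lock:
        shared["value"] = 42        # implicit communication via shared state

t = threading.Thread(target=sm_writer)
t.start(); t.join()
sm_result = shared["value"]         # reader simply loads the shared state

# Message-passing style: communication is explicit and sender-initiated.
mailbox = queue.Queue()

def mp_sender():
    mailbox.put(42)                 # explicit send by the producer

t = threading.Thread(target=mp_sender)
t.start(); t.join()
mp_result = mailbox.get()           # explicit receive by the consumer

print(sm_result, mp_result)         # 42 42
```

Note how the message-passing version needs no lock: the explicit send/receive pair is itself the synchronization.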
Caches for multiprocessing in the symmetric shared-memory architecture
A symmetric shared-memory architecture is one in which every processor has the same relationship to the single shared memory.
Small-scale shared-memory machines usually support caching of both private data and shared data.
When shared data are cached, the shared value may be replicated in multiple caches.
This replication reduces access latency and helps satisfy the bandwidth requirements,
but, because each cache applies its own write strategy to loads and stores, the copies of a value held in different caches may become inconsistent. That is,
multiple processors reading the same shared data simultaneously may see conflicting (inconsistent) values. This conflict, or contention, in the caching of shared data is referred to as the cache coherence problem. Informally, we say that a memory system is coherent if any read of a data item returns the most recently written value of that data item.
Note that here the processors P1, P2, and P3 may see old values in their caches, because there are several alternative policies for writing to caches.
For example, with write-back caches, the value written back to memory depends on which cache flushes, or writes back, its value (and when);
that is, the value returned by a read can depend on program order, program issue order, order of completion, and so on.
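A toy simulation makes the problem concrete. The class and names below are illustrative assumptions, not from the text: each processor owns a write-back cache over a single shared memory, a write updates only the writer's cached copy, and memory changes only on flush, so another processor can keep reading a stale value.

```python
# Toy write-back cache over a single shared memory (illustrative only).
class Cache:
    def __init__(self, memory):
        self.memory = memory
        self.copy = None            # locally cached copy; None = not cached

    def read(self, addr):
        if self.copy is None:
            self.copy = self.memory[addr]   # miss: fetch from memory
        return self.copy                    # hit: may return a stale copy

    def write(self, addr, value):
        self.copy = value           # write-back: memory is NOT updated yet

    def flush(self, addr):
        if self.copy is not None:
            self.memory[addr] = self.copy   # value reaches memory only now

memory = {0: 1}
p1, p2 = Cache(memory), Cache(memory)

p2.read(0)            # P2 caches the initial value 1
p1.write(0, 2)        # P1 updates only its own cached copy
stale = p2.read(0)    # P2 still sees 1: the cache coherence problem
p1.flush(0)           # memory sees P1's write only after the flush
print(stale, memory[0])   # 1 2
```

Had P1 flushed before P2's second read, and had P2 missed in its cache, P2 would have seen 2 instead; which value a read returns depends on the flush timing, exactly as described above.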
Now let us discuss what order among multiple processes means.
First, consider a single shared memory with no caches.
Because every operation goes directly to the one memory, such a system imposes a serial, or total, order on the operations to each location.
With this discussion of the cache coherence problem in hand, we can state a formal definition:
A memory system is coherent
if the results of any execution of a program are such that for each location,
it is possible to construct a hypothetical serial order of all operations to the location that is consistent with the results of the execution
In a coherent system:
- the operations issued by any particular process occur in the order issued by that process, and
- the value returned by a read is the value written by the last write to that location in the serial order
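The definition above can be checked mechanically for small traces. The following brute-force sketch is my own illustration, not from the text: it searches for a hypothetical serial order of all operations to one location that respects each process's issue order and makes every read return the last-written value.

```python
from itertools import permutations

# Brute-force check of coherence for one memory location.
# Each operation is (process, kind, value): kind "w" writes value,
# kind "r" is a read that observed value. Returns True if some
# serial order explains the observed execution.
def coherent(ops, initial):
    n = len(ops)
    for order in permutations(range(n)):
        # (a) operations of each process must keep their issue order
        if any(order.index(i) > order.index(j)
               for i in range(n) for j in range(i + 1, n)
               if ops[i][0] == ops[j][0]):
            continue
        # (b) every read must return the last value written before it
        value, legal = initial, True
        for k in order:
            _, kind, v = ops[k]
            if kind == "w":
                value = v
            elif v != value:
                legal = False
                break
        if legal:
            return True
    return False

# P1 writes 2 while P2 reads 1 and then 2: a serial order exists
ok = coherent([("P1", "w", 2), ("P2", "r", 1), ("P2", "r", 2)], initial=1)
# P2 reads 2 and then the old value 1: no serial order explains this
bad = coherent([("P1", "w", 2), ("P2", "r", 2), ("P2", "r", 1)], initial=1)
print(ok, bad)   # True False
```

The exhaustive search is exponential in the number of operations, so it is only a checker of the definition for tiny traces, not a practical verification algorithm.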