









Composition of Relations, Transitive Closure and Warshall's Algorithm
Rasha Eqbal, Indian Institute of Technology Kharagpur
Contents
1 Composition of Relations
2 Transitive Closure
3 Warshall's Algorithm
1 Composition of Relations

§ Theorem 1 Let R be a relation from A to B and S a relation from B to C. Then, for any subset A1 of A, (S ◦ R)(A1) = S(R(A1)).
Proof If an element z ∈ C is in (S ◦ R)(A1), then x(S ◦ R)z for some x ∈ A1. By the definition of composition, this means that xRy and ySz for some y ∈ B. So z ∈ S(y) and y ∈ R(x), which gives z ∈ S(R(x)). Since {x} ⊆ A1, we also have S(R(x)) ⊆ S(R(A1)), and therefore z ∈ S(R(A1)). This shows that (S ◦ R)(A1) ⊆ S(R(A1)).
Conversely, suppose z ∈ S(R(A1)). Then z ∈ S(y) for some y ∈ R(A1), and y ∈ R(x) for some x ∈ A1. This means that xRy and ySz, so by the definition of composition x(S ◦ R)z. Thus z ∈ (S ◦ R)(x). Since {x} ⊆ A1, we have (S ◦ R)(x) ⊆ (S ◦ R)(A1), and hence z ∈ (S ◦ R)(A1). So S(R(A1)) ⊆ (S ◦ R)(A1).
Since each of (S ◦ R)(A1) and S(R(A1)) contains the other, (S ◦ R)(A1) = S(R(A1)). This proves the theorem.
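As a quick sanity check of this identity, here is a small Python sketch; the relations, the subset A1 and the helper names are made up for illustration and are not part of the original notes.

# A minimal sketch: relations stored as sets of ordered pairs.
def compose(R, S):
    """S o R: first apply R (from A to B), then S (from B to C)."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def image(R, A1):
    """R(A1): all elements related to some element of A1."""
    return {y for (x, y) in R if x in A1}

# Toy data (hypothetical, chosen only to exercise the identity).
R = {('a', 1), ('a', 2), ('b', 2)}          # relation from A to B
S = {(1, 'x'), (2, 'y')}                    # relation from B to C
A1 = {'a'}

# (S o R)(A1) should equal S(R(A1)).
assert image(compose(R, S), A1) == image(S, image(R, A1))
print(image(compose(R, S), A1))             # {'x', 'y'}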
† Example 2: Let A = {a, b, c} and let R and S be relations on A whose matrices are:
Find S ◦ R. Solution: We see from the matrices that:
It is easily seen that (a, b) ∉ S ◦ R since, if we had (a, x) ∈ R and (x, b) ∈ S, then from matrix MR we know that x would have to be either a or c; but from matrix MS we know that neither (a, b) nor (c, b) is an element of S. Hence we see that the first row of MS◦R is 1 0 1. Proceeding in a similar manner we get:
Hence the second row of MS◦R is 1 1 1.
Hence the third row of MS◦R is 0 1 1. Therefore the composition matrix is

MS◦R =
1 0 1
1 1 1
0 1 1
Now we shall deduce an important and useful result. Let us consider three sets A = {a1, ..., an}, B = {b1, ..., bp} and C = {c1, ..., cm}, a relation R defined from A to B, and a relation S defined from B to C. The Boolean matrices MR and MS are of sizes n × p and p × m respectively. Let us represent MR as [rij], MS as [sij] and MS◦R as [tij]. Now tij = 1 ⇔ (ai, cj) ∈ S ◦ R, which means that for some k between 1 and p, (ai, bk) ∈ R and (bk, cj) ∈ S, that is, rik = 1 and skj = 1. In other words, tij = 1 if and only if rik = 1 and skj = 1 for some k. This is exactly the condition for the Boolean product MR ⊙ MS to have a 1 in position (i, j). Hence we can say that MS◦R = MR ⊙ MS. In the special case S = R, we have S ◦ R = R², and the matrix of R² is MR ⊙ MR.
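The Boolean product described above is easy to compute directly. The following Python sketch is a minimal illustration (the function name and the sample matrices are hypothetical; the matrices of Example 2 are not reproduced in these notes):

# A minimal sketch of the Boolean product M_R ⊙ M_S.
def bool_product(MR, MS):
    """Entry (i, j) of the result is 1 iff MR[i][k] = 1 and MS[k][j] = 1 for some k."""
    n, p, m = len(MR), len(MS), len(MS[0])
    return [[1 if any(MR[i][k] and MS[k][j] for k in range(p)) else 0
             for j in range(m)]
            for i in range(n)]

# Hypothetical 3 x 3 matrices, used only to demonstrate the call.
MR = [[1, 0, 1],
      [1, 1, 0],
      [0, 1, 0]]
MS = [[0, 0, 1],
      [1, 0, 1],
      [1, 0, 0]]
print(bool_product(MR, MS))   # matrix of S o R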
§ Theorem 2 Let A, B, C and D be sets, R a relation from A to B, S a relation from B to C and T a relation from C to D. Then T ◦ (S ◦ R) = (T ◦ S) ◦ R.
Proof Let the Boolean matrices for the relations R, S and T be MR, MS and MT respectively. As deduced above, the Boolean matrix product represents the matrix of a composition, i.e. MS◦R = MR ⊙ MS. Thus we have:
MT◦(S◦R) = MS◦R ⊙ MT = (MR ⊙ MS) ⊙ MT = MR ⊙ (MS ⊙ MT) = MR ⊙ MT◦S = M(T◦S)◦R,
since the Boolean matrix product is associative. Two relations between the same sets are equal exactly when their matrices are equal, so T ◦ (S ◦ R) = (T ◦ S) ◦ R.
2 Transitive Closure
A relation R is said to be transitive if, whenever (a, b) ∈ R and (b, c) ∈ R, then (a, c) ∈ R. The transitive closure of a relation R is the smallest transitive relation containing R. Suppose that R is a relation defined on a set A and that R is not transitive. Then the transitive closure of R is the connectivity relation R∞. We will now prove this claim.
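For a finite relation stored as a set of ordered pairs, the definition of transitivity can be checked directly, as in this short illustrative Python sketch (names and sample data are not from the notes):

# A minimal sketch: test whether a relation (a set of pairs) is transitive.
def is_transitive(R):
    """True iff (a, b) in R and (b, c) in R always imply (a, c) in R."""
    return all((a, c) in R
               for (a, b) in R
               for (b2, c) in R
               if b == b2)

print(is_transitive({(1, 2), (2, 3), (1, 3)}))   # True
print(is_transitive({(1, 2), (2, 3)}))           # False: (1, 3) is missing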
§ Theorem 4 Let R be a relation on a set A. Then R∞ is the transitive closure of R.
Proof We need to prove that R∞ is transitive and that it is the smallest transitive relation containing R. If a, b ∈ A, then aR∞b if and only if there exists a path in R from a to b. Suppose aR∞b and bR∞c. Then there is a path from a to b in R and a path from b to c in R, so there is also a path from a to c in R: simply start along the path from a to b and then continue along the path from b to c. Hence aR∞c, which proves that R∞ is transitive.
Now let us consider a transitive relation S containing R, i.e. R ⊆ S. Since S is transitive, Sⁿ ⊆ S for all n. (This says that if there is a path of length n from a to b in S, then aSb, which holds because S is transitive.) Now
S∞ = S ∪ S² ∪ S³ ∪ ..., the union of all powers Sⁿ for n ≥ 1.
Hence S∞ ⊆ S. Since R ⊆ S, we have R∞ ⊆ S∞, and as S∞ ⊆ S, it follows that R∞ ⊆ S. This means that R∞ is the smallest of all transitive relations on A that contain R. As R∞ satisfies both properties, it is the transitive closure of R on the set A. This completes the proof.
Note: If we include the identity relation ∆, then R∞ ∪ ∆ is the reachability relation R∗.
† Example 3: Let A = {1, 2, 3, 4}, and let R = {(1, 2), (2, 3), (3, 4), (2, 1)}. Find the transitive closure of R.
Solution: Method 1: Using Digraph
Figure 1: Digraph of R
We can determine R∞ by geometrically computing all paths from the digraph. From vertex 1 we have paths to vertices 1, 2, 3 and 4, so the ordered pairs (1, 1), (1, 2), (1, 3) and (1, 4) belong to R∞. Similarly, from vertex 2 we have paths to vertices 1, 2, 3 and 4, giving (2, 1), (2, 2), (2, 3) and (2, 4); from vertex 3 there is a path only to vertex 4, giving (3, 4); and from vertex 4 there are no outgoing paths at all. Hence
R∞ = {(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (2, 4), (3, 4)}.
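The digraph method corresponds to a reachability search. The sketch below (illustrative Python, not part of the original notes) computes, for each vertex, all vertices reachable by a path of length at least one, and reproduces the set R∞ just listed.

# A minimal sketch: compute R-infinity by following paths (DFS) in the digraph.
def connectivity_relation(A, R):
    """Return {(a, b) : there is a path of length >= 1 from a to b in R}."""
    succ = {a: {b for (x, b) in R if x == a} for a in A}
    result = set()
    for start in A:
        stack, seen = list(succ[start]), set()
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(succ[v])
        result |= {(start, v) for v in seen}
    return result

A = {1, 2, 3, 4}
R = {(1, 2), (2, 3), (3, 4), (2, 1)}
print(sorted(connectivity_relation(A, R)))
# [(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (2, 4), (3, 4)]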
§ Theorem 5 Let A be a set with |A| = n, and let R be a relation on A. Then
R∞ = R ∪ R² ∪ ... ∪ Rⁿ,
i.e. powers of R greater than n need not be considered to compute R∞.
Proof Let a, b ∈ A, and let a, x1, x2, ..., xm, b be a path from a to b in R; i.e. (a, x1), (x1, x2), ..., (xm, b) ∈ R. Now if xi and xj correspond to the same vertex for some i < j, then the path from a to b can be divided into three parts: first, a path from a to xi; second, a path from xi to xj; and lastly, a path from xj to b. The second part forms a closed loop, since xi = xj, so we can eliminate it altogether and join the first and third parts to obtain a shorter path. In a similar manner we can keep eliminating any loops that remain, until we are left with a path a, x1, x2, ..., xk, b in which x1, x2, ..., xk are all distinct. This path is no longer than the original one, and all of its intermediate vertices are distinct.
Figure 2: A closed loop is eliminated to give a shorter path. Here, i = 2 and j = 5.
Now consider the case a ≠ b. Since the total number of elements in the set A is n, and the vertices a, x1, ..., xk, b on the loop-free path are all distinct, the maximum possible path length is n − 1. If a = b, the maximum path length is n (since |A| = n and all vertices on the path other than the repeated endpoint are distinct). 'There is a path from a to b in R' is equivalent to aR∞b. So if aR∞b, i.e. there is a path from a to b in R, the preceding discussion shows that aRᵏb for some k with 1 ≤ k ≤ n, since the maximum possible path length is n. Thus R∞ = R ∪ R² ∪ ... ∪ Rⁿ. Hence the theorem is proved.
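Theorem 5 gives a direct, if inefficient, way to compute R∞: join the Boolean powers of MR up to the n-th. A minimal Python sketch follows (helper names are illustrative; the Boolean product is re-declared so the snippet is self-contained):

# A minimal sketch: transitive closure via M_R v M_R^2 v ... v M_R^n.
def bool_product(X, Y):
    n = len(X)
    return [[1 if any(X[i][k] and Y[k][j] for k in range(n)) else 0
             for j in range(n)] for i in range(n)]

def closure_by_powers(MR):
    """Boolean join of the first n Boolean powers of MR, where n = |A|."""
    n = len(MR)
    result = [row[:] for row in MR]      # M_R
    power = [row[:] for row in MR]
    for _ in range(n - 1):               # add M_R^2, ..., M_R^n
        power = bool_product(power, MR)
        result = [[result[i][j] or power[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# R from Example 3 on A = {1, 2, 3, 4}.
MR = [[0, 1, 0, 0],
      [1, 0, 1, 0],
      [0, 0, 0, 1],
      [0, 0, 0, 0]]
for row in closure_by_powers(MR):
    print(row)                           # matrix of R-infinity from Example 3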
3 Warshall's Algorithm

Warshall's algorithm computes a sequence of n × n matrices W0, W1, ..., Wn, where W0 = MR and Wk is obtained from Wk−1 by adding a 1 in position (i, j) whenever Wk−1 has a 1 in position (i, k) and a 1 in position (k, j). The final matrix Wn is the matrix of R∞.

† Example 4: Find the transitive closure of R defined in Example 3, this time using Warshall's algorithm. Solution:
Here we have n = 4. To find W1, take k = 1. We can see that W0 has 1's in column 1 at location 2, and in row 1 at location 2. Thus W1 has a new 1 at position (2, 2).
For W2, k = 2. W1 has 1's in column 2 at locations 1 and 2, and in row 2 at locations 1, 2 and 3. So the new 1's go to positions (1, 1), (1, 2), (1, 3), (2, 1), (2, 2) and (2, 3) (if not already there).
For W3, k = 3. W2 has 1's in column 3 at locations 1 and 2, and in row 3 at location 4. So new 1's are added at positions (1, 4) and (2, 4).
For W4, k = 4. W3 has 1's in column 4 at locations 1, 2 and 3, but no 1's in row 4. So no new 1's are added, and W4 = W3. This gives us the matrix representation of R∞, which is the same as that obtained in Example 3.
The Algorithm
To find the matrix CLOSURE of the transitive closure of a relation R whose n × n matrix representation is MAT:
WARSHALL
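The pseudocode for WARSHALL is not reproduced in these notes, so the following Python sketch shows the standard Warshall procedure using the names from the text (MAT for the input matrix, CLOSURE for the result); treat it as an illustration rather than a verbatim copy of the notes' algorithm.

# A minimal sketch of Warshall's algorithm on an n x n Boolean matrix.
def warshall(MAT):
    n = len(MAT)
    CLOSURE = [row[:] for row in MAT]        # start with W0 = MAT
    for k in range(n):                       # produce W1, W2, ..., Wn in turn
        for i in range(n):
            for j in range(n):
                # add a 1 at (i, j) if there is a 1 at (i, k) and at (k, j)
                if CLOSURE[i][k] and CLOSURE[k][j]:
                    CLOSURE[i][j] = 1
    return CLOSURE

# Applied to R from Examples 3 and 4, this reproduces W4, the matrix of R-infinity.
MAT = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
for row in warshall(MAT):
    print(row)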
This algorithm involves three nested for loops, each iterating from 1 to n, which gives a time complexity of O(n³). If we were to find the transitive closure using the matrix multiplication method instead, we would get a time complexity of O(n⁴): each Boolean matrix product takes O(n³) time (again three nested loops from 1 to n), and the products are carried out a total of n − 1 times to find the Boolean powers (MR)², (MR)³, ..., (MR)ⁿ, since MR∞ = MR ∨ (MR)² ∨ ... ∨ (MR)ⁿ, where every power is computed with the Boolean product ⊙. So the number of steps involved is about n³(n − 1), giving a time complexity of O(n⁴). Thus Warshall's algorithm is both simpler and more efficient than the matrix multiplication method.
An Application of Warshall's Algorithm
§ Theorem 6 If R and S are equivalence relations on a set A, then the smallest equivalence relation containing both R and S is (R ∪ S)∞.
Proof We know that a relation is reflexive if and only if it contains the identity or equality relation ∆. Since both R and S are reflexive, ∆ ⊆ R and ∆ ⊆ S. This implies that ∆ ⊆ R ∪ S ⊆ (R ∪ S)∞, so (R ∪ S)∞ is reflexive.
R and S are symmetric, so R ∪ S is symmetric: if (a, b) ∈ R ∪ S then (a, b) lies in R or in S, and in either case (b, a) lies in the same relation, hence in R ∪ S. A path from a to b in R ∪ S can therefore be reversed step by step to give a path from b to a, so (R ∪ S)∞ is also symmetric.
The property of transitive closure tells us that, for any relation T, T∞ is the smallest transitive relation containing T. Applying this to R ∪ S, we conclude that (R ∪ S)∞ is the smallest transitive relation containing R ∪ S. Since it is transitive, reflexive and symmetric, (R ∪ S)∞ is an equivalence relation containing both R and S; and any equivalence relation containing both R and S contains R ∪ S and, being transitive, must contain (R ∪ S)∞. Hence (R ∪ S)∞ is the smallest equivalence relation containing both R and S.
Now we compute W4 (k = 4). W3 has 1's at locations 3, 4 and 5 of column 4, and at locations 3, 4 and 5 of row 4. So new 1's need to be added at positions (3, 5) and (5, 3) of W3 to get W4. Thus

W4 =
1 1 0 0 0
1 1 0 0 0
0 0 1 1 1
0 0 1 1 1
0 0 1 1 1

To compute W5 (k = 5), we see that, since W4 has 1's at locations 3, 4 and 5 of column 5, and at locations 3, 4 and 5 of row 5, no new 1's need to be added. So W5 = W4.
∴ (R ∪ S)∞ = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 3), (3, 4), (3, 5), (4, 3), (4, 4), (4, 5), (5, 3), (5, 4), (5, 5)}.
The corresponding partition is {{1, 2}, {3, 4, 5}}.
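To tie the application together, here is a hedged Python sketch that builds R ∪ S, closes it with Warshall's algorithm, and reads off the resulting partition. The specific R and S below are hypothetical stand-ins (the example's actual relations are not reproduced in these notes); they are chosen only so that the final partition matches {{1, 2}, {3, 4, 5}}.

# A minimal sketch: smallest equivalence relation containing R and S.
def warshall(M):
    n = len(M)
    W = [row[:] for row in M]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if W[i][k] and W[k][j]:
                    W[i][j] = 1
    return W

A = [1, 2, 3, 4, 5]
# Hypothetical equivalence relations (not necessarily those of the example):
# R has classes {1, 2}, {3, 4}, {5};  S has classes {1}, {2}, {3}, {4, 5}.
R = {(1, 1), (2, 2), (1, 2), (2, 1), (3, 3), (4, 4), (3, 4), (4, 3), (5, 5)}
S = {(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (4, 5), (5, 4)}

union = R | S
M = [[1 if (a, b) in union else 0 for b in A] for a in A]
closure = warshall(M)   # matrix of (R ∪ S)∞; ∆ is already included since R and S are reflexive

# Read off the partition: the class of A[i] is every b with closure[i][j] = 1.
classes = {frozenset(b for j, b in enumerate(A) if closure[i][j]) for i in range(len(A))}
print([sorted(c) for c in classes])   # [[1, 2], [3, 4, 5]] in some order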