In many design methodologies, area and speed are traded off against each other. This is due to limited routing resources: the more resources used, the slower the operation. Optimizing for minimum area allows the design both to use fewer resources and to keep the sections of the design closer together. This leads to shorter interconnect distances, fewer routing resources used, faster end-to-end signal paths, and even faster and more consistent place-and-route times. Done correctly, floorplanning has no negatives. As a general rule, data-path sections benefit most from floorplanning, whereas random logic, state machines, and other non-structured logic can safely be left to the placer section of the place-and-route software. Data paths are typically the areas of the design where multiple bits are processed in parallel, with each bit modified the same way, perhaps with some influence from adjacent bits.
Partitioning is the process of dividing the chip into small blocks. This is done mainly to separate different functional blocks and also to make placement and routing easier. Partitioning can be done in the RTL design phase, when the design engineer partitions the entire design into sub-blocks and then proceeds to design each module.
This kind of partitioning is commonly referred to as logical partitioning, and it is the first step of the physical design cycle. The virtual route (VR) is the shortest Manhattan distance between two pins. In-placement optimization re-optimizes the logic based on VR; it can perform cell sizing, cell moving, cell bypassing, net splitting, gate duplication, buffer insertion, and area recovery, and it can also downsize cells.
Optimization iterates setup fixing, incremental timing analysis, and congestion-driven placement. Post-placement optimization before clock tree synthesis (CTS) performs netlist optimization with ideal clocks; it can do placement optimization based on global routing, and it redoes high-fanout-net (HFN) synthesis. Post-placement optimization after CTS optimizes timing with the propagated clock and tries to preserve clock skew; the clock is not propagated before CTS. After CTS, hold slack should improve. The clock tree begins at the clock source and ends at the stop pins. There are two types of stop pins: ignore pins and sync pins. If the clock is divided, separate skew analysis is necessary.
Global skew achieves zero skew between two synchronous pins without considering the logic relationship; local skew achieves zero skew between two synchronous pins while considering the logic relationship. Different ways to partition correspond to different circuit implementations. A hypergraph and a partition of it are shown in Figure 1.
The floorplanning problem in chip layout is analogous to floorplanning in building design. A cut, or a sequence of cuts, defines the partition. Although area is the major concern, a good floorplanning algorithm should achieve many other goals as well. An important step in floorplanning is to decide the relative location of each module.
A via is needed in order to interconnect a horizontal and a vertical segment. For integrated circuits there are two prevalent cost functions. Global routing first partitions the routing region into a collection of disjoint rectilinear subregions; this decomposition is carried out by finding a "rough" path (i.e., a sequence of subregions) for each net. The traditional model of detailed routing is the two-layer Manhattan model with reserved layers. Examples of design rules are timing rules. Figure 1 shows the placement corresponding to the circuit, a global routing based on that placement, a detailed routing corresponding to that global routing, the floorplan corresponding to the circuit, and a compacted version of the layout.
Layout rules were discussed in Section 1. The above steps are followed in their entirety in full custom designs; in other layout styles, some steps are simplified or skipped. More recent designs perform multilayer detailed routing and over-the-cell (OTC) routing. In this stage the layout is optimized. In addition, pin grid arrays (PGAs) have a larger number of pins, distributed around the package (see Figure 1).
Packaging has always played an important role in determining overall system speed. A package is essentially a mechanical support for the chip and facilitates connection to the rest of the system; packaging supplies chips with signals and power. One of the earliest packaging techniques was the dual-in-line package (DIP), which has a small number of pins. Fast chips must be supported by an equally fast and reliable packaging technology, and in such high-end systems packaging can limit performance. Multichip module (MCM) technology has been introduced to significantly improve performance by eliminating individual packages: an MCM is a packaging technique that places several semiconductor chips on a common substrate, with the interconnection layers below the chip layer. Compared with single-chip packages or surface-mount packages, this innovation led to major advances in interconnection density at the chip level of packaging, and MCMs have been used in high-performance systems as a replacement for individual packages. An instance of a typical MCM is shown in Figure 1. The primary goal of MCM routing is to meet high-performance requirements; in order to minimize delay, techniques to be described in Chapters 2 and 3 can be employed. Details of the steps taken in full custom designs and in the special cases will be discussed in subsequent chapters. We shall refer to the problems related to the location of modules (i.e., placement and floorplanning); an example is shown in Figure 1. The algorithms used in such tools should be of high quality and efficiency. Big-O notation indicates the growth of a function; various values of n and T(n) are shown in Table 1.
Consider a problem of size n, and let A be an algorithm solving the problem in time T(n); asymptotic notation is used to simplify the analysis of algorithms. There are two reasons for solving a problem. The first is to find a solution that will be used in designing a chip; in this application, such algorithms are of practical importance. The second is to estimate the complexity of the problem; once that is decided, a suitable approach can be chosen, depending on the application. Most problems in this text are computationally hard. An algorithm that does not guarantee an optimal solution is called a heuristic; for such problems, a heuristic is often a good alternative, and such an approach has two advantages. This will further illustrate the natural trade-off between quality and time complexity of algorithms. Algorithms for optimization problems go through a number of steps. The following algorithmic paradigms have been employed in most subproblems arising in VLSI layout. The most naive paradigm is exhaustive search: the idea is to search the entire solution space by considering every possible solution and evaluating the cost of each. The main problem is that such algorithms are very slow; "very slow" does not mean that they take a few hours or a few days to run, and even if they did, because of the size of the problems involved they would remain impractical. In divide and conquer, a problem is partitioned into a collection of subproblems; the subproblems are independent and the partitions are recursive. The partition is usually done in a top-down fashion and the solution is constructed in a bottom-up fashion, and the sizes of the subproblems are typically balanced. Dynamic programming is applied when the subproblems are not independent; each subproblem is solved once. In greedy algorithms, at each step the choice that results in a locally optimal solution is made. These paradigms will be used throughout this book (see an introductory text on the design and analysis of algorithms).
There are also paradigms that are not discussed in this text, and others that will be introduced in later chapters. Simulated annealing is a technique used to solve general optimization problems; the idea originated from observing crystal formation of physical materials. A simulated annealing algorithm examines the configurations (i.e., feasible solutions) of the problem, evaluates the feasible solutions it encounters, and moves from one solution to another; based upon this evaluation, a move is accepted or rejected. This technique is especially useful when the solution space of the problem is not well understood. It is generally hard to claim anything about the running time of such algorithms. At each stage of a genetic algorithm, the solutions to be selected for the next generation are probabilistically selected based on a fitness value; the fitness value is a measure of the competence (i.e., quality) of a solution. To create a new generation, operations such as crossover are applied to the selected solutions. In the mathematical programming approach, the objective function is a minimization or maximization problem subject to a set of constraints; when the objective function and all inequalities are expressed as a linear combination of the involved variables, the system is called a linear program. This is a general and usually inefficient method for solving optimization problems. The described paradigms can be demonstrated using the classic bin packing problem. In an instance of the bin packing problem, there is a set B of bins, each with a fixed integer size b, and a set of items with given sizes. A subset of the items can be packed in one bin as long as the sum of the sizes of the items in the subset is less than or equal to b. The goal is to pack all the items in as few bins as possible. Example 1: in one instance, a solution is to place the items with sizes 1 and 5 in one bin, and so on; three bins are used. There is a tree-structured configuration space.
Next, the algorithmic paradigms are applied to the bin packing problem. For exhaustive search, one approach is to find all possible partitions of the items (how many partitions are there for seven items?). In branch and bound, the aim is to avoid searching the entire space by stopping at a node when an optimal solution cannot be obtained from it. For the greedy paradigm, a possible solution is as follows: place the items one by one into the first bin until it is saturated, then move to the next bin and repeat the procedure. In the first step, the next item has size 1 and can be placed in the first bin; continue this procedure. For divide and conquer, partition the problem into two subproblems. For dynamic programming, one strategy is to solve the problem for the first k items. For the mathematical programming formulation, where the weight w_i is the size of item i, first find a solution using |B| bins, or decide there is no solution.
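The greedy procedure just described, filling the current bin until it saturates and then moving on without revisiting earlier bins, is the classic next-fit heuristic. A minimal sketch (the function name and the sample instance are illustrative assumptions, not the book's example):

```python
def next_fit(items, b):
    """Pack items into bins of capacity b with the next-fit heuristic:
    keep filling the current bin; when an item does not fit, open a new bin
    and never revisit earlier bins."""
    bins = []
    current, used = [], 0
    for size in items:
        if size > b:
            raise ValueError("item larger than bin capacity")
        if used + size > b:      # current bin is saturated for this item
            bins.append(current)
            current, used = [], 0
        current.append(size)
        used += size
    if current:
        bins.append(current)
    return bins

# Hypothetical instance: capacity 6, items 1 and 5 end up sharing a bin.
print(next_fit([1, 5, 2, 4, 3], 6))
```

First fit, which also checks earlier bins before opening a new one, never uses more bins than next fit on the same item ordering.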
In most hierarchical solutions, the problem is divided into subproblems whose solutions do not interact, and that was the case here. For divide and conquer, since neither of the subproblems can fit in one bin, the partitioning continues until each subproblem can fit inside one bin. In the greedy solution, in the next step the next item has size 2 and requires a new bin; finally there are bins containing items 1, ..., and a total of four bins is used. For the decision formulation, the objective is to maximize the sum of the weights w_i of the packed items. Note that the solution presented within each paradigm is not unique, and there are many ways to do exhaustive search; also, not every paradigm is effective for this problem. For a genetic algorithm, first obtain a few initial solutions; at the first level, this improves the solution. For simulated annealing, the probability of moving from one solution to another depends on the difference between the cost of the previous solution and the new solution (i.e., the cost change).
Moving from one solution to another is done in a probabilistic manner; at the beginning of the algorithm one may even accept, with a certain probability, solutions that are worse than the original one. The probability also depends on the stage of the algorithm: in early stages the probability of moving to higher-cost solutions is greater. As in simulated annealing, start with an initial feasible solution, and repeat the task until either a good solution is obtained or time runs out. For the linear programming approach, formulate the problem as finding a feasible solution satisfying the set of constraints; if the solution exists, report it and then exit. For the genetic algorithm, new solutions are created by crossover of existing ones; it is not desirable to have a large population in one generation, because the space grows exponentially. It is also possible to combine more than one paradigm in a given algorithm.
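The acceptance rule described above, accepting worse solutions with a probability that depends on the cost difference and that shrinks as the algorithm progresses, can be sketched generically. The function names, parameters, and toy objective below are illustrative assumptions, not the book's formulation:

```python
import math
import random

def anneal(cost, neighbor, s0, t0=10.0, cooling=0.95, iters=2000, seed=1):
    """Generic simulated-annealing skeleton.  A worse move (delta > 0) is
    accepted with probability exp(-delta / T); T shrinks each iteration,
    so uphill moves become rarer in later stages."""
    rng = random.Random(seed)
    s, c = s0, cost(s0)
    best, best_c = s, c
    t = t0
    for _ in range(iters):
        s2 = neighbor(s, rng)
        c2 = cost(s2)
        delta = c2 - c
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            s, c = s2, c2
            if c < best_c:
                best, best_c = s, c
        t *= cooling
    return best, best_c

# Toy use: minimize f(x) = (x - 3)^2 over the integers by +/-1 moves.
sol, val = anneal(lambda x: (x - 3) ** 2,
                  lambda x, rng: x + rng.choice((-1, 1)),
                  s0=-20)
print(sol, val)
```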
From solutions such as S1, ..., S5 a subset is selected. The branching and bounding procedure continues until an optimal solution is obtained. Various layout environments, such as standard cells and FPGAs, will be discussed. What is the minimum number of bins needed in a one-dimensional bin packing problem, as a function of the sizes of the items and the size of the bins? Analyze the time complexity of your algorithm. State the assumptions you made and justify their realistic nature. Route the two-terminal subcircuits of the circuit (also called a switchbox) shown in Figure E1.
How many transistors were used? Compare top-down chip partitioning with bottom-up clustering.
Find a layer assignment of your layout using the minimum number of layers. Solve the sorting problem (sort a given list of integers) using each of the paradigms discussed in Section 1. Discuss the advantages and disadvantages of using a grid abstraction throughout the layout process.
Chapters 2 and 3 review the top-down approach to the circuit layout problem and elaborate on the various subproblems that arise. Design an optimal algorithm for solving this class of the one-dimensional bin packing problem.
An overview of other approaches to the layout problem is also given. Chapter 5 covers the single-layer layout problem and its application to over-the-cell routing and multichip modules. Explain the significance of the following parameters and decide whether minimum or maximum requirements are expected (see Figure 1).
In certain layout styles, some design rules are given as minimum requirements while others are expressed as maximum requirements; explain why. How many transistors did you use? Which paradigm is most effectively used? The problem of cell generation is discussed in Chapter 6. Find a minimum-area routing of the circuits shown in Figure 1, and find a minimum-area layout of the circuit shown in Figure 1. Assume the value B (i.e., the bin size) is fixed.
How to compact a given layout to obtain a layout with small area is shown in Chapter 7. Consider an instance of the one-dimensional bin packing problem. Given a set of rectangular bins, each of size A x B. Two intervals are disjoint if they do not overlap.
Design a greedy algorithm for finding the maximum number of pairwise disjoint intervals. Analyze the time complexity of both algorithms.
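The standard greedy strategy for this exercise picks intervals in order of earliest right endpoint; after sorting it runs in linear time. A sketch (function name and sample data are mine):

```python
def max_disjoint_intervals(intervals):
    """Greedy interval scheduling: sort by right endpoint, then repeatedly
    take the interval that finishes first among those not overlapping the
    last one chosen.  This is optimal for maximizing the count."""
    chosen = []
    last_end = float("-inf")
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left > last_end:          # disjoint from everything chosen so far
            chosen.append((left, right))
            last_end = right
    return chosen

print(max_disjoint_intervals([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9)]))
```

The sort dominates, so the total time is O(n log n).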
Given integers m and n and a two-terminal net N represented by terminals a and b, design a greedy algorithm and a hierarchical algorithm for solving this problem. Each interval I is given by its endpoints.
Consider a two-dimensional version of the bin packing problem. Find an optimal solution to an instance of the two-dimensional bin packing problem (see the previous exercise), assuming that neither the bins nor the items can be rotated. Design a simulated annealing algorithm for the same problem; minimize the total length and report the resulting length. What restricted and nontrivial classes of the one-dimensional bin packing problem can be optimally solved? Justify your answer. Analyze the time complexity of your algorithms. Why is it necessary to redistribute the pins in MCM design? Describe your algorithm and discuss its performance. Design an algorithm for finding the maximum number of pairwise independent segments. The northwest location is coordinate (0, 0). The output format is shown in Figure CE1.
The input format is as follows. The first line contains the grid size (the first coordinate is the x-coordinate). The second line has the coordinates of the source and the sink. Segments are given one by one, separated by commas. The imaginary horizontal segments should not intersect the vertical segments at their endpoints.
Each segment is specified by its x- and y-coordinates. The goal is to minimize the number of colors r; the minimum value of r is called the chromatic number of the graph. Implement a greedy algorithm for the one-dimensional bin packing problem. The first line of the input contains the size of the bins. In the output, the size of the items should be shown and the empty space should be shaded.
Each bin should be partitioned as dictated by the items inside it; also write the percentage of wasted space, or output an error message indicating that a solution does not exist. Introduce heuristics to improve the result; these heuristics can be of three types. Given a set of vertical segments, two segments are visible if there is an imaginary horizontal line segment intersecting those two segments and no other segments. Design an algorithm for obtaining a coloring of a given graph G. A coloring of G is a mapping of vertices into colors 1 through r.
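One plausible greedy answer to the coloring exercise assigns each vertex the smallest color not used by its already-colored neighbors. This sketch (function name and example graph are assumed) does not guarantee the chromatic number; the result depends on the visiting order:

```python
def greedy_coloring(adj):
    """Color vertices one by one, giving each the smallest color (0, 1, 2, ...)
    not already used by a colored neighbor.  adj: dict vertex -> neighbor list."""
    color = {}
    for v in adj:                    # visiting order affects the result
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# Hypothetical 4-cycle: two colors suffice, and the greedy finds them here.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(greedy_coloring(adj))
```

With at most Delta colors ever excluded per vertex, the greedy never uses more than Delta + 1 colors, where Delta is the maximum degree.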
The next lines contain the sizes of the items. Implement a hierarchical algorithm for the previous computer exercise. Consider a set of vertical segments in the plane; the specifications of two segments are separated by a comma, and the input format is the same as the format given in Figure CE1. In the output, you may either show color labels next to each vertex or actually color the vertices. A maximum independent set (MIS) is an independent set of maximum cardinality.
An independent set V' of G is a subset of vertices that are pairwise independent. Design and implement an algorithm that finds an MIS of a given graph G. An edge is specified by the two vertices it connects, and specifications of edges are separated by commas. The output should show the graph and highlight the selected vertices. An example of input follows.
The set of edges is given as input. Is your algorithm good? Verify your answer. Compare the running time and quality of the two algorithms.
A vertex cover in G is a subset V' of the vertices such that every edge has at least one endpoint in V'. A minimum vertex cover (MVC) is a vertex cover of minimum cardinality. Design a greedy algorithm for obtaining an MVC in a given graph G.
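Since computing an MVC exactly is NP-hard, a common greedy answer is the maximal-matching heuristic, which is guaranteed to be within a factor of two of the minimum. This is an assumed illustration, not the book's own algorithm:

```python
def vertex_cover_2approx(edges):
    """Greedy maximal-matching heuristic: take any edge with both endpoints
    uncovered, add both endpoints to the cover, repeat.  Every optimal cover
    must hit each matched edge, so the result is at most twice the minimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Hypothetical path graph 0-1-2-3; the minimum cover is {1, 2}.
edges = [(0, 1), (1, 2), (2, 3)]
c = vertex_cover_2approx(edges)
print(c)
```

On this path the heuristic returns four vertices against an optimum of two, exactly the worst-case factor.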
Here the fundamental problems of circuit layout are discussed. The circuit layout problem is partitioned into a collection of subproblems, and each subproblem should be solved efficiently to make subsequent steps easy. Partitioning is a fundamental problem in many CAD problems; the objective is to partition the circuit into parts such that the sizes of the components are within prescribed ranges and the number of connections between the components is minimized. There are three major problems related to the placement problem, and there are two major subproblems. The focus is on minimizing the chip area and the total wire length.
This chapter focuses on the placement problem and the next chapter focuses on the routing problem. Partitioning can be applied at many levels, and different partitionings result in different circuit implementations. In order to explain the partitioning problem clearly, some definitions are needed. A hypergraph H(N, L) consists of a set N of vertices and a set L of hyperedges; a graph G(V, E) consists of a set V of vertices and a set E of edges, where each edge corresponds to a pair of distinct vertices (see Figure 2). We also associate a vertex weight function w with the vertices; the vertex weight may indicate the size of the corresponding circuit element. In a multiway partition, B_i is the maximum size of part i and b_i is the minimum size of part i; the B_i's and b_i's are input parameters. In the bipartition problem, the weight of either partition is no more than a times the total weight; the number a is called the balance factor. The bipartition problem can also be used as a basis for heuristics in multiway partitioning, and most techniques that have been proposed are variations of these. One natural way to represent a hyperedge e_a is to put an edge between every pair of distinct modules it connects; one possible solution is to put varying weights on the hyperedges, and these weights should probably depend on the problem itself and the problem instance.
A multiway partition of a hypergraph H is a set of nonempty, disjoint subsets of vertices whose union is N. Since most algorithms work with graphs instead of hypergraphs, a circuit is usually transformed to a graph before subsequent algorithms are applied, and partitioning algorithms are applied to the resulting graph. Let the weight of hyperedge e_a be w(e_a). In order to transform a hypergraph into a graph, the transformation process replaces a hyperedge with a set of edges such that the edge costs closely resemble the original hypergraph when processed in subsequent stages; the weight of the edge between modules M_i and M_j is proportional to the number of hyperedges of the circuit between them. In some cases a minimum spanning tree of the complete graph is obtained. An effective way of representing hyperedges with graph edges is still unknown, and in practice heuristic weightings are used. For the rest of this book, given an unweighted graph G, each edge is assumed to have unit cost. In the iterative improvement procedure outlined next, a cost increase is allowed in the hope that there will be a cost decrease in subsequent steps; after an exchange, the vertices v_a and v_b are locked. An outline of the procedure follows.
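The hyperedge-to-clique transformation described above can be sketched concretely. The weight 2/(k(k-1)) is one common choice from the literature (it makes each hyperedge contribute total weight 1), not necessarily the book's; the function name and netlist are mine:

```python
from itertools import combinations

def hypergraph_to_graph(hyperedges):
    """Clique net model: replace each hyperedge on k modules by the k(k-1)/2
    edges of a clique.  Each clique edge gets weight 2/(k*(k-1)), so every
    hyperedge contributes total weight 1; parallel edges accumulate."""
    weight = {}
    for net in hyperedges:
        k = len(net)
        if k < 2:
            continue
        w = 2.0 / (k * (k - 1))
        for u, v in combinations(sorted(net), 2):
            weight[(u, v)] = weight.get((u, v), 0.0) + w
    return weight

# Hypothetical netlist: a 3-pin net and a 2-pin net sharing the pair (a, b).
print(hypergraph_to_graph([{"a", "b", "c"}, {"a", "b"}]))
```

The pair (a, b) ends up with weight 1 + 1/3, reflecting that it is connected by two nets.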
One such algorithm is the iterative bipartitioning algorithm proposed by Kernighan and Lin. A formal description of the Kernighan-Lin algorithm is as follows. Start with an arbitrary bipartition (V1, V2). Select the pair v_a in V1, v_b in V2 whose exchange makes the largest decrease or smallest increase in cut-cost, then exchange and lock them; this locking prohibits them from taking part in any further exchanges. This process continues until all vertices are locked; then the prefix of exchanges with the best cumulative gain is committed, moving v_a1, ..., v_ak from V1 to V2 and v_b1, ..., v_bk from V2 to V1. The repeat loop usually terminates after several passes. The for-loop is executed O(n) times, and the body of the loop requires O(n^2) time. Kernighan and Lin, in their original paper, gave further heuristic improvements of their algorithm, such as finding a good starting partition, that are aimed at finding better bipartitions. The extension of the Kernighan-Lin algorithm to multiway (i.e., r-way) partitioning starts with an arbitrary partition of the graph into r equal-sized sets. As an example, consider the circuit shown in Figure 2; the usual representation of the circuit as a graph is shown in Figure 2. Net 1 connects modules 1, ...; there are 2 pairs of subsets to consider, and a table of vertex pairs, gains, and cut-costs summarizes the pass. Fiduccia and Mattheyses introduced the following new elements to the Kernighan-Lin algorithm: each vertex is weighted; only a single vertex is moved at a time; and, to choose the next vertex to be moved, a data structure consisting of two pointer arrays is used.
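One KL pass can be sketched as follows. This is a deliberately naive version that recomputes cut-costs from scratch instead of maintaining gain values, so it trades speed for clarity; the names and the example graph are my own, not the book's:

```python
def cut_cost(edges, part1):
    """Number of edges crossing the bipartition (part1, rest)."""
    return sum((u in part1) != (v in part1) for u, v in edges)

def kl_pass(edges, part1, part2):
    """One Kernighan-Lin pass: tentatively swap the best unlocked pair until
    every vertex is locked, then commit only the prefix of swaps with the
    largest cumulative gain (which may be empty)."""
    a = set(part1)
    free_a, free_b = set(part1), set(part2)
    swaps, gains = [], []
    while free_a and free_b:
        base = cut_cost(edges, a)
        best = None
        for u in free_a:
            for v in free_b:
                g = base - cut_cost(edges, (a - {u}) | {v})
                if best is None or g > best[0]:
                    best = (g, u, v)
        g, u, v = best
        a = (a - {u}) | {v}                  # tentative swap
        free_a.discard(u)
        free_b.discard(v)
        swaps.append((u, v))
        gains.append(g)
    prefix, run, best_run = 0, 0, 0          # best prefix of tentative swaps
    for i, g in enumerate(gains, 1):
        run += g
        if run > best_run:
            best_run, prefix = run, i
    for u, v in swaps[:prefix]:              # commit only the good prefix
        part1 = (part1 - {u}) | {v}
        part2 = (part2 - {v}) | {u}
    return part1, part2, best_run

# Hypothetical graph: two triangles joined by one edge, bad starting cut.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
p1, p2 = {0, 1, 3}, {2, 4, 5}
while True:
    p1, p2, gain = kl_pass(edges, p1, p2)
    if gain <= 0:
        break
print(sorted(p1), sorted(p2), cut_cost(edges, p1))
```

Starting from a cut of 5, the first pass swaps vertices 2 and 3 and reaches the optimal cut of 1; the second pass finds no positive prefix and the loop stops.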
In the bipartitioning algorithm to be described next (that of Fiduccia and Mattheyses), vertices are weighted and only a single vertex is moved across the cut in a single move; a move may increase the cut-cost. Let the two partitions be A and B. The algorithm starts with a balanced partition A, B; an initial balanced partition is one with either side of the partition having a total vertex weight of at most W, and it can be obtained by sorting the vertex weights in decreasing order. A move of a vertex across the cut is allowable if it does not violate the balance condition; when no moves are possible, the pass ends. A special data structure is used to select vertices to be moved across the cut to improve the running time; this is the main feature of the algorithm. The data structure consists of two pointer arrays, list A and list B. Indices of the lists correspond to possible positive or negative gains; moving one vertex from one set to the other will change the cost by at most d_max, where d_max is the maximum vertex degree in the hypergraph. All vertices resulting in gain g are stored in entry g of the list, and each pointer in the array list A points to a linear list of unlocked vertices inside A with the corresponding gain; an analogous statement holds for list B. A net-cut model was proposed by Schweikert and Kernighan to handle multiterminal cases; the following statement is the essential part of this model. The running time of the Fiduccia-Mattheyses algorithm is O(t), where t is the number of terminals.
Data structures list A and list B are easily initialized by traversing all hyperedges; note that the summation of the d's over all hyperedges is simply the total number of terminals. For each hyperedge e_a, let numA be the number of modules connected by e_a that are in A; a similar case analysis holds for numB. Note that the cases for numA and numB are not necessarily disjoint, so both must be examined and then actions taken as dictated above. Finding the modules with maximum gain in list A and list B takes constant time; finding such a module is easy if doubly linked pointers are kept from the max gain in list A to the second gain (i.e., the next nonempty entry). Sanchis extended Krishnamurthy's algorithm to handle multiway network partitioning and showed that the technique was especially useful for partitioning a network into a large number of subsets. Another approach, which predefines the partition sizes, is based on a current flow formulation.
To update the gains after a module M moves, repeat the following for each hyperedge e_a that is connected to M. Case 1 applies if all modules of e_a are in A; Case 2 applies if all modules of e_a are in B. Since each hyperedge will be considered twice, once in Case 1 and once in Case 2, and it takes O(d) time to update the gains in each case, the total time complexity of the algorithm is O(ct). Note that the Fiduccia-Mattheyses algorithm chooses arbitrarily between vertices that have equal gain and equal weight, and that if a move cannot be made the vertex is skipped. A further improvement was proposed by Krishnamurthy, who introduced a look-ahead ability into the algorithm: the best candidate among such vertices can be selected with respect to the gains they make possible in later moves. The ratio cut concept can be described as follows. Let c_ij be the cost of an edge connecting node i and node j. The cut-cost ratio is defined as R = (sum of c_ij over i in V1, j in V2) / (|V1| * |V2|), and the objective is to find a cut that generates the minimum ratio among all cuts in the graph; a better partition has a smaller ratio. Earlier work on ratio cut includes an approximation algorithm proposed in [ ].
Like many other partitioning problems, finding a minimum-ratio cut is computationally hard; here the ratio cut algorithm proposed in [ ] will be presented. The ratio cut algorithm consists of three major phases: initialization, iterative shifting, and group swapping. These phases are discussed in more detail below. The ratio gain r_i of a node i is defined as the decrease in ratio if node i were to move from its current subset to the other; the ratio gain could be a negative real number if the node movement increases the ratio value of the cut. Suppose that two seeds s and t have been chosen. Move node i to the other side and lock it, update the ratio gains for the remaining affected and unlocked nodes, and apply the same procedure again; the partition with the best resultant ratio is recorded. Since the ratio of the first cut is lower than that of the second cut, the first cut is preferred.
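The ratio and ratio-gain definitions above can be made concrete; the function names and the toy graph are assumptions for illustration:

```python
def cut_ratio(edges, a, b):
    """Ratio-cut cost R = c(A, B) / (|A| * |B|): crossing edges divided by
    the product of the part sizes, which penalizes very uneven partitions."""
    crossing = sum((u in a and v in b) or (u in b and v in a) for u, v in edges)
    return crossing / (len(a) * len(b))

def ratio_gain(edges, a, b, node):
    """Decrease in ratio if `node` moved to the other side; negative when
    the move makes the ratio worse."""
    if node in a:
        a2, b2 = a - {node}, b | {node}
    else:
        a2, b2 = a | {node}, b - {node}
    return cut_ratio(edges, a, b) - cut_ratio(edges, a2, b2)

# Hypothetical graph: two triangles joined by one edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
a, b = {0, 1, 2}, {3, 4, 5}
print(cut_ratio(edges, a, b))        # 1 crossing edge / (3 * 3) = 1/9
print(ratio_gain(edges, a, b, 2))    # moving 2 across is a negative gain
```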
In the initialization phase, randomly choose a node s; find the longest path starting from s, and call the node at the end of one of the longest paths t. The ratio gain is not defined for the two seeds s and t. The process is as follows: choose a node i in Y whose movement to X will generate the best ratio among all the other competing nodes, and move node i from Y to X. The cut giving the minimum ratio found in the procedure forms the initial partitioning. Once an initial partitioning is made, iterative shifting is applied. Suppose seed s is fixed at the left end. A right-shifting operation is defined as shifting the nodes from s toward t; a left-shifting operation is similarly defined, and it is followed by the right-shifting operations in the opposite direction, as shown in Figure 2. After the iterative shifting is complete, group swapping is utilized to make further improvement: calculate the ratio gain r_i for every node i, and select an unlocked node i with the largest ratio gain from the two subsets. Finding a node with maximum gain is readily done; there is one difference between this algorithm and that of Fiduccia-Mattheyses.
Examine the time complexity of each phase separately. In the initialization step, node t (the node farthest from s) can be obtained by breadth-first search in O(|E|) time. As in the Fiduccia-Mattheyses algorithm, there are two lists; first find a node in list A or a node in list B whose move to the opposite partition results in the best ratio cost. In both cases, both nodes must be examined and the one that results in the best ratio cost should be moved. Repeat these steps until all nodes are locked; if the largest accumulated ratio gain during this process is positive, the corresponding moves are committed. Each round of these steps takes O(|E|) time, so the running time of the ratio-cut algorithm is O(|E|). Early studies link the spectral properties and the partition properties of the graph.
The running time of the ratio-cut algorithm in hypergraphs is O(t). The initialization step takes O(|E|) time, as discussed in the proof for the Fiduccia-Mattheyses algorithm. The next phase is iterative shifting: calculation of the gains (step 1) is done in list A and list B, and each round takes O(|E|) time; then the second round starts, and based on the proof of Theorem 2, these rounds are repeated for a constant number of steps or until no further gain is obtained. The last phase is group swapping; in practice it is sufficient to repeat these steps a constant number of times. The concept of ratio cut has been applied to find a two-way stable partition, and other approaches for finding a ratio cut have been proposed. One of the more interesting approaches is based on the spectral method, and good experimental results have been obtained for both cut-weight and CPU time with this approach; the algorithm proposed in [ ] establishes a connection between graph spectra and ratio cuts and proposes an algorithm for solving the ratio-cut problem based on the underlying spectral properties. Consider the two partitions of a given circuit shown in Figure 2. Since each module can be assigned to only one partition, and since the capacity of partition P1 is bounded by a constant c, note that the aim is to minimize the sum of the edges cut by the partition.
The algorithm proposed in [ ] for solving the problem is based on mathematical programming; the above formulation is a special case of the quadratic Boolean programming problem, and eigenvector decomposition has also been applied. Perhaps the oldest techniques are the constructive methods that follow the greedy paradigm: the goal is to start placing the nodes, one at a time, in one of the two partitions. This approach is limited by the lack of a global view of the connectivity of the entire system. Accepting moves that increase the solution's cost stochastically allows simulated annealing to explore the design space more thoroughly and extricate the solution from a local optimum; the objective function in simulated annealing is analogous to energy in a physical system. This technique usually produces good results at the expense of very long running time; experimental results support this, and other techniques related to this method can be found in [ ]. In order to separate a pair of nodes into two subsets, a minimum cut between them can be computed; although this algorithm can find the optimal solution between any pair of vertices in a network, it has a practical drawback discussed below.
The maximum-flow minimum-cut algorithm was proposed by Ford and Fulkerson; they transformed the minimum cut problem into the maximum flow problem. In practice, it requires the transformation of every multiterminal net into several two-terminal nets first. Clustering, also called the aggregation algorithm, is another approach: neighboring components with strong connections to each seed are merged with that seed, and a heuristic for solving the general problem was proposed in [ ]. The structure of the simulated annealing paradigm will be discussed in a later section.
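The Ford-Fulkerson idea can be sketched with the shortest-augmenting-path (Edmonds-Karp) variant, chosen here for determinism; the graph representation and the sample network are my own assumptions:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along a shortest augmenting path.
    By the max-flow min-cut theorem, the returned value equals the capacity
    of a minimum s-t cut.  cap: dict u -> dict v -> capacity (used in place
    as the residual graph)."""
    for u in list(cap):                      # ensure reverse residual arcs
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        parent = {s: None}                   # BFS for a shortest path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t                      # recover the augmenting path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:                    # update residual capacities
            cap[u][v] -= push
            cap[v][u] += push
        total += push

# Hypothetical network: the minimum cut separating s from t has capacity 5.
cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2, "b": 1}, "b": {"t": 3}}
print(max_flow(cap, "s", "t"))
```

The drawback noted above remains: to cut between two modules connected by multiterminal nets, each such net must first be modeled by two-terminal edges.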