Monday, October 05, 2009

Design procedure of VLSI

Structured design

Structured VLSI design is a modular methodology originated by Carver Mead and Lynn Conway for saving microchip area by minimizing the area of the interconnect fabric. It is achieved by a repetitive arrangement of rectangular macro blocks, which can be interconnected by wiring by abutment. An example is partitioning the layout of an adder into a row of equal bit-slice cells; a rough sketch of this idea follows below. In complex designs this structuring may be achieved by hierarchical nesting.
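As a loose illustration of wiring by abutment, the following Python sketch places identical bit-slice cells edge to edge so that the carry-out pin of one slice lines up with the carry-in pin of the next and no routing channel is needed between them. The cell dimensions and instance names are hypothetical placeholders, not data from any real cell library.

```python
# Hypothetical sketch: placing identical adder bit slices by abutment.
# Cell dimensions and instance names are made-up placeholders.

SLICE_WIDTH = 12.0   # um, width of one bit-slice macro (assumed)
SLICE_HEIGHT = 40.0  # um, height of one bit-slice macro (assumed)

def place_bit_slices(n_bits):
    """Return (name, x, y) placements for an n-bit adder built as a
    row of equal bit-slice cells abutted along the x axis."""
    placements = []
    for i in range(n_bits):
        # Each slice sits flush against the previous one: the carry-out
        # pin on the right edge of slice i lines up with the carry-in
        # pin on the left edge of slice i+1, so no routing channel is
        # needed between them.
        placements.append((f"adder_slice_{i}", i * SLICE_WIDTH, 0.0))
    return placements

if __name__ == "__main__":
    for name, x, y in place_bit_slices(8):
        print(f"{name}: x={x:5.1f} um, y={y:.1f} um")
    # Total macro area is simply n * SLICE_WIDTH * SLICE_HEIGHT,
    # with no extra area spent on inter-slice routing.
```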

Structured VLSI design was popular in the early 1980s but later fell out of favor with the advent of placement and routing tools, which waste considerable area on routing; that waste is tolerated because of the progress of Moore's law. When introducing the hardware description language KARL in the mid-1970s, Reiner Hartenstein coined the term "structured VLSI design" (originally "structured LSI design"), echoing Edsger Dijkstra's structured programming approach, which uses procedure nesting to avoid chaotic "spaghetti" programs.

As microprocessors become more complex due to technology scaling, microprocessor designers have encountered several challenges that force them to think beyond the design plane and look ahead to post-silicon:

Challenges of VLSI

  • Power usage/Heat dissipation – As threshold voltages have ceased to scale with advancing process technology, dynamic power dissipation has not scaled down proportionally. Maintaining logic complexity when scaling the design down means that the power dissipation per unit area goes up. This has given rise to techniques such as dynamic voltage and frequency scaling (DVFS) to minimize overall power; a rough sketch of the underlying switching-power relation appears after this list.
  • Process variation – As lithography pushes closer to the fundamental limits of optics, achieving high accuracy in doping concentrations and etched wire dimensions is becoming more difficult and more prone to variation-induced errors. Designers now have to simulate across multiple fabrication process corners before a chip is certified ready for production; a minimal corner-sweep sketch appears after this list.
  • Stricter design rules – Due to lithography and etch issues with scaling, design rules for layout have become much more stringent. Designers have to keep more of these rules in mind while laying out custom circuits. The overhead of custom design is now reaching a tipping point, with many design houses opting to switch to electronic design automation (EDA) tools to automate their design process.
  • Timing/design closure – As clock frequencies scale up, designers find it more difficult to distribute the clock and maintain low clock skew between high-frequency clocks across the entire chip. This has led to rising interest in multicore and multiprocessor architectures, since an overall speedup can be obtained by lowering the clock frequency and distributing the processing; a simple setup-time budget showing the cost of skew appears after this list.
  • First-pass success – As die sizes shrink (due to scaling) and wafer sizes go up (to lower manufacturing costs), the number of dies per wafer increases, and the complexity of making suitable photomasks goes up rapidly. A mask set for a modern technology can cost several million dollars. This non-recurring expense deters the old iterative philosophy of several "spin cycles" to find errors in silicon and encourages first-pass silicon success. Several design philosophies have been developed to aid this new design flow, including design for manufacturing (DFM), design for test (DFT), and many others.
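The power bullet above rests on the classic switching-power relation P_dyn ≈ α·C·Vdd²·f. The following Python sketch plugs illustrative (assumed, not measured) numbers into that relation to show why dynamic voltage and frequency scaling pays off: voltage enters quadratically and frequency linearly, so a modest DVFS step cuts dynamic power substantially.

```python
# Back-of-the-envelope dynamic power estimate and the effect of DVFS.
# All numbers are illustrative assumptions, not measurements.

def dynamic_power(alpha, c_switched, vdd, freq):
    """Classic switching-power estimate: P = alpha * C * Vdd^2 * f."""
    return alpha * c_switched * vdd ** 2 * freq

# Nominal operating point: 2 GHz at 1.0 V (assumed values).
nominal = dynamic_power(alpha=0.2, c_switched=2e-9, vdd=1.0, freq=2.0e9)
# DVFS operating point: 1.5 GHz at 0.8 V (assumed values).
scaled = dynamic_power(alpha=0.2, c_switched=2e-9, vdd=0.8, freq=1.5e9)

print(f"nominal:    {nominal:.3f} W")
print(f"DVFS point: {scaled:.3f} W ({scaled / nominal:.0%} of nominal)")
# Because Vdd enters quadratically and f only linearly, a modest drop in
# voltage and frequency yields a disproportionately large drop in power.
```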
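For the process-variation bullet, a minimal corner-sweep sketch is shown below. It checks one path delay against the clock period at slow, typical, and fast corners; the derating factors and delay values are invented placeholders, since real numbers come from characterized cell libraries.

```python
# Hypothetical corner sweep: check one path delay against the clock
# period at each fabrication process corner. The derating factors and
# delays are invented placeholders; real values come from characterized
# cell libraries.

NOMINAL_PATH_DELAY_NS = 0.42
CLOCK_PERIOD_NS = 0.50

CORNER_DERATE = {
    "SS (slow nMOS, slow pMOS)": 1.15,
    "TT (typical)":              1.00,
    "FF (fast nMOS, fast pMOS)": 0.88,
}

def passes_timing(corner, derate):
    """Report slack for one corner and return True if timing is met."""
    delay = NOMINAL_PATH_DELAY_NS * derate
    slack = CLOCK_PERIOD_NS - delay
    print(f"{corner}: delay={delay:.3f} ns, slack={slack:+.3f} ns")
    return slack >= 0.0

if __name__ == "__main__":
    results = [passes_timing(c, d) for c, d in CORNER_DERATE.items()]
    print("meets timing at all corners" if all(results)
          else "timing violation at some corner")
```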
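For the timing-closure bullet, the sketch below applies the standard setup constraint T_clk ≥ t_clk-to-q + t_logic + t_setup + t_skew (worst case, with the capture clock arriving early) to show how growing skew directly lowers the achievable clock frequency. All delay numbers are illustrative assumptions.

```python
# Simple setup-time budget showing how clock skew eats into the cycle:
#   T_clk >= t_clk_to_q + t_logic + t_setup + t_skew   (worst-case skew)
# All delay numbers below are illustrative assumptions, in nanoseconds.

def min_clock_period(t_clk_to_q, t_logic, t_setup, t_skew):
    """Smallest clock period that still satisfies the setup constraint."""
    return t_clk_to_q + t_logic + t_setup + t_skew

def max_frequency_ghz(t_clk_to_q, t_logic, t_setup, t_skew):
    return 1.0 / min_clock_period(t_clk_to_q, t_logic, t_setup, t_skew)

# The same logic path, budgeted with 10 ps and then 50 ps of skew:
for skew_ns in (0.010, 0.050):
    f = max_frequency_ghz(t_clk_to_q=0.060, t_logic=0.300,
                          t_setup=0.040, t_skew=skew_ns)
    print(f"skew = {skew_ns * 1000:.0f} ps -> fmax ~ {f:.2f} GHz")
# Larger skew directly lowers the achievable frequency, which is why
# controlling skew across a large die gets harder as clocks speed up.
```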
