8 things you need to know about FPGA design
1. Balance and exchange of area and speed
The area here refers to the amount of FPGA/CPLD logic resources a design consumes. For an FPGA it can be measured by the flip-flops (FF) and look-up tables (LUT) used, and a more general measure is the equivalent gate count occupied by the design. Speed refers to the highest frequency at which the design can run stably on the chip. This frequency is determined by the timing of the design and the clock requirements it must meet; it is closely related to timing characteristics such as pad-to-pad time, clock setup time, clock hold time, and clock-to-output delay.
The two indicators of area and speed run through the whole process of FPGA/CPLD design and are the ultimate criteria for evaluating design quality. They form a unity of opposites: it is unrealistic to demand the smallest area and the highest operating frequency at the same time. A more scientific design goal is to occupy the smallest chip area while meeting the timing requirements of the design (including the required frequency), or, within a specified area, to achieve the largest timing margin and the highest frequency.
These two goals embody the idea of balancing area and speed. The two sides of this trade-off are not of equal importance: meeting the timing and operating-frequency requirements comes first, so when the two conflict, speed takes priority.
In theory, if a design has a large timing margin and can run at a speed much higher than required, the chip area consumed by the whole design can be reduced by reusing functional modules, exchanging the speed advantage for area savings. Conversely, if the timing requirements are very tight and ordinary methods cannot reach the required frequency, area can be exchanged for speed: convert the data stream between serial and parallel, duplicate several copies of the operation modules to work in parallel, and organize the whole design around ping-pong operation and serial-to-parallel conversion.
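As a rough illustration of trading speed for area, the sketch below assumes the clock has enough margin to run at twice the data rate, so a single multiplier (a hypothetical shared_mult module; all names are illustrative) can be time-shared between two channels instead of instantiating two multipliers in parallel.

```verilog
// Hypothetical sketch: trading speed for area by time-sharing one multiplier.
// If the clock can run at 2x the data rate, a single multiplier can serve
// two channels instead of two parallel multipliers.
module shared_mult #(parameter W = 16) (
    input  wire           clk,
    input  wire           sel,          // alternates 0/1 at 2x the data rate
    input  wire [W-1:0]   a0, b0,       // channel 0 operands
    input  wire [W-1:0]   a1, b1,       // channel 1 operands
    output reg  [2*W-1:0] p0, p1        // channel results
);
    wire [W-1:0]   a = sel ? a1 : a0;   // operand multiplexing
    wire [W-1:0]   b = sel ? b1 : b0;
    wire [2*W-1:0] p = a * b;           // the single shared multiplier

    always @(posedge clk) begin
        if (sel) p1 <= p;               // demultiplex the results
        else     p0 <= p;
    end
endmodule
```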
2. Hardware principles
The hardware principle mainly concerns HDL coding: Verilog is an abstraction of hardware expressed in a C-like syntax. Its essential function is to describe hardware, and its final implementation is an actual circuit inside the chip. Therefore, the ultimate criterion for judging a piece of HDL code is the performance, in both area and speed, of the hardware circuit it describes and implements.
Evaluating the coding level of a design means judging how fluently and sensibly the intended hardware is translated into HDL code. The final performance of a design depends far more on the efficiency and soundness of the hardware implementation scheme conceived by the design engineer (the HDL code is only one expression of the hardware design). It is a mistake for beginners to pursue neat, short code for its own sake; doing so runs counter to the nature of HDL.
The correct coding method is, first of all, to have a good grasp of the hardware circuit to be realized, to be very clear about its structure and connections, and then to express it with appropriate HDL statements. In addition, Verilog, as an HDL, is hierarchical: system level, algorithm level, register transfer level, logic level, gate level, switch level. Building a priority tree consumes a lot of combinational logic, so use case instead of if...else wherever possible.
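The sketch below shows the same 4-way selection written both ways (module and signal names are made up). When the conditions are mutually exclusive, many synthesizers will optimize the if-else chain into the same multiplexer, but in general the if-else form describes a priority chain while the case form describes a parallel multiplexer.

```verilog
// Hypothetical 4-way select written two ways. The if-else chain describes a
// priority structure; the case statement describes a parallel multiplexer.
module select4 (
    input  wire [1:0] sel,
    input  wire [7:0] d0, d1, d2, d3,
    output reg  [7:0] y_priority,
    output reg  [7:0] y_parallel
);
    // Priority tree: d0 has highest priority, each branch gates the next.
    always @* begin
        if      (sel == 2'd0) y_priority = d0;
        else if (sel == 2'd1) y_priority = d1;
        else if (sel == 2'd2) y_priority = d2;
        else                  y_priority = d3;
    end

    // Parallel multiplexer: all branches are mutually exclusive.
    always @* begin
        case (sel)
            2'd0:    y_parallel = d0;
            2'd1:    y_parallel = d1;
            2'd2:    y_parallel = d2;
            default: y_parallel = d3;
        endcase
    end
endmodule
```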
3. System Principles
The system principle has two levels of meaning. At the higher level, for the hardware system, it covers how modules and tasks are partitioned on a board, which algorithms and functions are suitable for implementation in the FPGA and which in a DSP/CPU, as well as FPGA device sizing, data interface design, and so on.
Within the FPGA design itself, there must be a reasonable overall arrangement at the macro level for issues such as clock domains, module reuse, constraints, area, and speed. Optimization of modules at the system level is the most important. Generally speaking, functional modules with high real-time requirements and high frequency are suitable for FPGA implementation. Compared with a CPLD, an FPGA is better suited to larger-scale, higher-frequency, more register-intensive designs. When designing with FPGA/CPLD, you should have a deep understanding of the various underlying hardware resources inside the chip and the design resources available.
For example, FPGAs generally have abundant flip-flop resources, while CPLDs have more abundant combinational logic resources. An FPGA/CPLD is generally composed of low-level programmable hardware units, block RAM, routing resources, configurable I/O units, and clock resources. The low-level programmable hardware unit generally consists of flip-flops and look-up tables. Xilinx calls its basic programmable unit a slice, which consists of two FFs and two LUTs; Altera calls its basic unit an LE, which consists of one FF and one LUT. Using on-chip RAM, common unit modules such as single-port RAM, dual-port RAM, synchronous/asynchronous FIFO, ROM, and CAM can be realized.
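As one example of using on-chip RAM resources, the following is a minimal sketch of a simple dual-port RAM coded in a style that synthesis tools can usually map onto block RAM; the exact inference template varies by vendor, and the module and port names here are only illustrative.

```verilog
// Minimal sketch of a simple dual-port RAM (one write port, one read port)
// written so that synthesis can usually infer on-chip block RAM.
module sdp_ram #(parameter DW = 8, AW = 10) (
    input  wire          wr_clk,
    input  wire          wr_en,
    input  wire [AW-1:0] wr_addr,
    input  wire [DW-1:0] wr_data,
    input  wire          rd_clk,
    input  wire [AW-1:0] rd_addr,
    output reg  [DW-1:0] rd_data
);
    reg [DW-1:0] mem [0:(1<<AW)-1];

    always @(posedge wr_clk)
        if (wr_en) mem[wr_addr] <= wr_data;

    always @(posedge rd_clk)
        rd_data <= mem[rd_addr];        // registered (synchronous) read
endmodule
```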
4. Synchronous Design Principles
The logic core of an asynchronous circuit is implemented with combinational logic, for example asynchronous FIFO/RAM read and write signals, address decoding, and similar circuits. The main signals and output signals of such a circuit do not depend on any clock and are not generated by flip-flops driven by a clock. The biggest disadvantage of asynchronous sequential circuits is that they are prone to glitches, which show up both in post-place-and-route simulation and when the actual signals are observed with a logic analyzer. The core logic of a synchronous sequential circuit is implemented with flip-flops, and its main signals and output signals are generated by flip-flops driven on a clock edge. Synchronous sequential circuits avoid glitches very well: no glitches appear in post-place-and-route simulation or when the actual working signals are sampled with a logic analyzer.
Do synchronous sequential circuits necessarily use more resources than asynchronous circuits? From the perspective of pure ASIC design, a D flip-flop takes about seven gates to implement while a 2-input NAND gate takes one, so in general a synchronous sequential circuit occupies a larger area than an asynchronous circuit. (It is different in FPGA/CPLD, mainly because of how the unit blocks are counted.)
How is delay realized in a synchronous sequential circuit? The usual way to generate delay in an asynchronous circuit is to insert buffers, two-level NAND gates, and the like; this method of adjusting delay is not suitable for synchronous timing design. It should first be made clear that the delay control syntax in HDL is a behavioral-level description, commonly used for simulation testbench stimulus; it is ignored by circuit synthesis and cannot produce the delay.
The delay of a synchronous sequential circuit is generally accomplished through timing control; in other words, the delay is designed as circuit logic. For relatively large delays and special timing requirements, a counter driven by a high-speed clock is generally used, and the delay is controlled by the count value. For relatively small delays, the signal can be passed through one or more D flip-flops (“hitting a beat”), which not only delays the signal by clock cycles but also completes the initial synchronization of the signal to the clock; this is used when sampling input signals and to increase timing margin.
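A minimal sketch of both delay techniques is shown below, with made-up signal names: a small two-cycle delay by registering the signal twice, and a larger delay counted off by a counter (here 1000 cycles, chosen arbitrarily).

```verilog
// Sketch of synchronous delay techniques: register chain for small delays,
// counter for large delays.
module sync_delay (
    input  wire clk,
    input  wire rst_n,
    input  wire din,
    output reg  din_d2,        // din delayed by two clock cycles
    output reg  long_pulse     // asserted 1000 cycles after reset release
);
    reg       din_d1;
    reg [9:0] cnt;

    // Small delay: "hit" the signal through two D flip-flops.
    always @(posedge clk) begin
        din_d1 <= din;
        din_d2 <= din_d1;
    end

    // Large delay: count clock cycles instead of inserting buffers.
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            cnt        <= 10'd0;
            long_pulse <= 1'b0;
        end else if (cnt == 10'd999) begin
            long_pulse <= 1'b1;
        end else begin
            cnt <= cnt + 10'd1;
        end
    end
endmodule
```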
How is the clock of a synchronous sequential circuit generated? The quality and stability of the clock directly determine the performance of the circuit. Synchronous sequential circuits require synchronization of the input signal. If the input data arrives at the same frequency as the processing clock of this stage, and setup and hold requirements are met, the main clock of this stage can directly sample the input data into a register, completing the synchronization of the input data. If the input data is asynchronous to the processing clock of this stage, especially when the frequencies do not match, the input data must be sampled twice by the processing clock to complete the synchronization.
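The “sample twice” rule is sketched below for a single-bit control signal crossing into the processing clock domain; multi-bit buses generally need a FIFO or a handshake instead. The ASYNC_REG attribute is a tool-dependent hint and may be omitted or spelled differently depending on the vendor.

```verilog
// Minimal sketch of double sampling: an asynchronous input passes through
// two flip-flops in the destination clock domain before it is used.
module double_sample (
    input  wire clk,        // processing clock of this stage
    input  wire async_in,   // input from an unrelated clock domain
    output wire sync_out
);
    (* ASYNC_REG = "TRUE" *) reg meta, stable;  // attribute is tool-dependent

    always @(posedge clk) begin
        meta   <= async_in;   // first sample may go metastable
        stable <= meta;       // second sample resolves it with high probability
    end

    assign sync_out = stable;
endmodule
```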
If a signal is declared as the reg type, must it be synthesized into a register and form a synchronous sequential circuit? The answer is no. The two most commonly used data types in Verilog are wire and reg. Generally speaking, a wire represents data driven through combinational logic or a net, while a signal declared as reg is not necessarily implemented as a register.
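The sketch below illustrates the point: both outputs are declared reg, but only the one assigned on a clock edge becomes a flip-flop; the one assigned in a combinational always block synthesizes to gates. Names are illustrative.

```verilog
// A reg assigned in a combinational always block synthesizes to gates or a
// multiplexer, not to a flip-flop; only the clocked assignment infers one.
module reg_is_not_always_a_register (
    input  wire       clk,
    input  wire       sel,
    input  wire [3:0] a, b,
    output reg  [3:0] comb_out,  // reg type, but pure combinational logic
    output reg  [3:0] ff_out     // reg type, implemented as flip-flops
);
    always @* begin              // no clock: combinational
        if (sel) comb_out = a;
        else     comb_out = b;
    end

    always @(posedge clk)        // clock edge: registered
        ff_out <= comb_out;
endmodule
```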
5. Ping-pong operation
“Ping-pong operation” is a processing technique often used in data flow control. Its processing flow is as follows: the input data stream is distributed alternately, in equal time slices, to two data buffers through an “input data selection unit”. The data buffer can be any storage module; the most commonly used storage units are dual-port RAM (DPRAM), single-port RAM (SPRAM), FIFO, and so on.
In the first buffering cycle, the input data stream is buffered into “data buffer module 1”. In the second buffering cycle, the “input data selection unit” switches and the input data stream is buffered into “data buffer module 2”; at the same time, the first-cycle data stored in “data buffer module 1” is selected by the “output data selection unit” and sent to the “data stream operation processing module” for processing. In the third buffering cycle, the “input data selection unit” switches again, buffering the input data stream into “data buffer module 1”, while the second-cycle data stored in “data buffer module 2” is switched through the “output data selection unit” and sent to the “data stream operation processing module” for processing. The cycle then repeats.
The biggest feature of a typical ping-pong operation is that the “input data selection unit” and the “output data selection unit” switch in step with each other, in rhythm. Taking the ping-pong operation module as a whole and looking at the data from both ends, the input data stream and output data stream are both continuous, without any pause, so the structure is very suitable for pipelined processing of a data stream. A sketch of the control logic is given below.
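This is a hedged sketch of the ping-pong control described above, using two small register arrays in place of real DPRAM/SPRAM/FIFO blocks; every buffering cycle the write path and read path swap buffers. All module and signal names are hypothetical.

```verilog
// Ping-pong buffer sketch: two buffers that alternate between the write
// path and the read path every BUF_LEN samples.
module pingpong #(parameter DW = 8, AW = 4) (
    input  wire          clk,
    input  wire          rst_n,
    input  wire [DW-1:0] din,
    output reg  [DW-1:0] dout       // feeds the operation processing module
);
    localparam BUF_LEN = 1 << AW;   // samples per buffering cycle

    reg [DW-1:0] buf0 [0:BUF_LEN-1];
    reg [DW-1:0] buf1 [0:BUF_LEN-1];
    reg [AW-1:0] addr;
    reg          pp;                // 0: write buf0 / read buf1, 1: the reverse

    // Buffering-cycle control: swap the two buffers every BUF_LEN samples.
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            addr <= {AW{1'b0}};
            pp   <= 1'b0;
        end else begin
            addr <= addr + 1'b1;
            if (addr == BUF_LEN-1)
                pp <= ~pp;
        end
    end

    // Write side plays the "input data selection unit",
    // read side plays the "output data selection unit".
    always @(posedge clk) begin
        if (pp) begin
            buf1[addr] <= din;
            dout       <= buf0[addr];
        end else begin
            buf0[addr] <= din;
            dout       <= buf1[addr];
        end
    end
endmodule
```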
Therefore, ping-pong operation is often used in pipelined algorithms to achieve seamless buffering and processing of data. Its second advantage is that it saves buffer space. For example, in a WCDMA baseband application, a frame is composed of 15 time slots, and sometimes the data of a whole frame must be delayed by one time slot before processing.
A straightforward buffer would then need to hold one frame of data: assuming a data rate of 3.84 Mb/s and a frame length of 10 ms, the buffer length would be 38400 bits. With ping-pong operation, only two buffers of one time slot each need to be defined. While data is written into one RAM, data is read from the other and sent to the processing unit. Each RAM then needs a capacity of only 2560 bits, 5120 bits for the two together.
In addition, clever use of ping-pong operation can achieve the effect of processing a high-speed data stream with low-speed modules. As shown in Figure 2, the data buffer modules use dual-port RAM, and a stage of data preprocessing is introduced after each DPRAM. This preprocessing can be whatever the data requires, for example despreading, descrambling, and de-rotation of the input data stream in a WCDMA design. Assuming the input data stream at port A runs at 100 Mbps and the buffering period of the ping-pong operation is 10 ms, then each DPRAM receives data only half of the time, so the preprocessing module behind it only has to sustain an equivalent rate of 50 Mbps: the high-speed data stream is handled by lower-speed modules.
6. Serial-parallel conversion design skills
Serial-to-parallel conversion is an important skill in FPGA design. It is a common method of data stream processing and a direct embodiment of the idea of exchanging area and speed. There are many ways to realize serial-to-parallel conversion; depending on the required data ordering and quantity, registers, RAM, and so on can be chosen for the implementation.
In the ping-pong example above, the serial-to-parallel conversion of the data stream is realized through DPRAM, and because DPRAM is used, the data buffer can be made very large. For designs with a smaller amount of data, the conversion can be completed with registers. Unless there is a special requirement, synchronous timing design should be used to complete the conversion between serial and parallel.
For example, for serial-to-parallel conversion with the data arranged most-significant bit first, the following coding can be used: prl_temp <= {prl_temp, srl_in}; where prl_temp is the parallel output buffer register and srl_in is the serial data input. A serial-to-parallel conversion with a prescribed ordering can be realized with a case statement, and a complex one can be implemented with a state machine. The methods are relatively simple and need no further description here.
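Expanding the one-line snippet above into a complete module, the sketch below performs MSB-first serial-to-parallel conversion with a shift register and flags a valid parallel word once every WIDTH input bits; the module name and the valid-flag convention are illustrative.

```verilog
// MSB-first serial-to-parallel conversion using a shift register.
module s2p #(parameter WIDTH = 8) (
    input  wire             clk,
    input  wire             rst_n,
    input  wire             srl_in,      // serial input, MSB first
    output reg  [WIDTH-1:0] prl_out,     // parallel output word
    output reg              prl_valid    // high for one cycle per word
);
    reg [WIDTH-1:0] prl_temp;            // shift buffer, as in the snippet above
    reg [3:0]       bit_cnt;             // wide enough for WIDTH up to 16

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            prl_temp  <= {WIDTH{1'b0}};
            bit_cnt   <= 4'd0;
            prl_valid <= 1'b0;
        end else begin
            prl_temp <= {prl_temp[WIDTH-2:0], srl_in};   // prl_temp <= {prl_temp, srl_in}
            if (bit_cnt == WIDTH-1) begin
                bit_cnt   <= 4'd0;
                prl_out   <= {prl_temp[WIDTH-2:0], srl_in};
                prl_valid <= 1'b1;
            end else begin
                bit_cnt   <= bit_cnt + 4'd1;
                prl_valid <= 1'b0;
            end
        end
    end
endmodule
```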
7. Design idea of pipeline operation
It should first be made clear that the pipelining described here refers to a design idea for processing flow and sequential operation, not the “pipelining” used to optimize timing inside FPGA and ASIC designs. Pipelining is a common design method in high-speed design. If the processing flow of a design can be divided into several steps, and the entire data processing is “single-flow”, that is, with no feedback or iteration, so that the output of each step is the input of the next, the pipeline design method can be considered to increase the operating frequency of the system.
The structure of a pipeline design is shown in the figure. Its basic form connects n appropriately divided operation steps in a single flow direction. The biggest feature and requirement of pipeline operation is that the processing of the data flow in each step is continuous in time. If each operation step is simplified to passing through a D flip-flop (that is, registering the data for one beat), then the pipeline behaves like a shift register bank, with the data stream flowing through the D flip-flops in turn to complete each step of the operation. A sketch of this structure is given below.
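A minimal sketch under these assumptions: a sum of four operands split into two register-separated adder stages, so on each clock cycle a new set of inputs enters the pipe while earlier sets move one step forward. Module and signal names are made up.

```verilog
// Two-stage single-flow-direction pipeline: partial sums in stage 1,
// final sum in stage 2, with registers between the stages.
module pipe_sum4 #(parameter W = 8) (
    input  wire         clk,
    input  wire [W-1:0] a, b, c, d,
    output reg  [W+1:0] sum
);
    reg [W:0] s_ab, s_cd;          // stage 1 results

    always @(posedge clk) begin
        // Stage 1: two partial sums computed in parallel
        s_ab <= a + b;
        s_cd <= c + d;
        // Stage 2: combine the registered stage-1 results
        sum  <= s_ab + s_cd;
    end
endmodule
```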
A key to pipeline design lies in arranging the timing of the whole design properly, which requires a reasonable division of each operation step. If the operation time of the preceding stage is exactly equal to that of the following stage, the design is simplest: the output of the preceding stage can be fed directly into the input of the following stage. If the operation time of the preceding stage is greater than that of the following stage, the output data of the preceding stage must be buffered appropriately before it can be fed into the input of the following stage.
If the operation time of the preceding stage is less than that of the following stage, the data flow must be split through duplicated logic, or the data must be stored in the preceding stage and processed afterwards; otherwise the following stage will overflow. Pipelined processing is frequently used in WCDMA designs, for example in the RAKE receiver, the searcher, and preamble acquisition. The reason pipelining reaches a higher frequency is that the processing modules are replicated, which is another concrete manifestation of exchanging area for speed.
8. Synchronization method of data interface
Synchronization of data interfaces is a common problem in FPGA/CPLD design, as well as a key and difficult point; much design instability comes down to data interface synchronization. In the schematic design stage, some engineers manually add BUFTs or inverters to adjust the data delay, so that the clock of the current module meets the setup and hold time requirements for the data coming from the previous module.
To obtain stable sampling, some engineers generate many clock signals offset by 90 degrees, sometimes clocking the data on the positive edge and sometimes on the negative edge, to adjust the sampling point. Neither approach is advisable, because once the chip is replaced or the design is migrated to another chip family, the sampling scheme must be redesigned. Moreover, both methods leave insufficient margin in the implemented circuit; once external conditions change (for example, the temperature rises), the sampling timing may be thrown completely off, paralyzing the circuit.
When the input and output delays (chip-to-chip, PCB routing, delays of interface driver components, and so on) cannot be measured, or may change, how is data synchronization completed? For unpredictable or variable data delays, a synchronization mechanism must be established; a synchronization enable or synchronization indication signal can be used. In addition, data synchronization can also be achieved by passing the data through a RAM or FIFO. A sketch of the indication-signal approach is given below.
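The indication-signal idea is sketched here under simple assumptions: the source domain holds the data steady and toggles a flag, and the destination domain double-samples the flag, detects the toggle, and only then registers the data; successive transfers must be spaced several destination clock cycles apart. For continuous data streams, an asynchronous FIFO is the usual choice. All names are hypothetical.

```verilog
// Synchronization-indication sketch: data is held stable in the source
// domain while a toggling flag tells the destination domain when to sample.
module sync_by_indication #(parameter W = 8) (
    // source clock domain
    input  wire         src_clk,
    input  wire         src_valid,   // one-cycle pulse: new data available
    input  wire [W-1:0] src_data,
    // destination clock domain
    input  wire         dst_clk,
    output reg          dst_valid,
    output reg  [W-1:0] dst_data
);
    reg         src_flag = 1'b0;
    reg [W-1:0] src_hold;
    reg         f_meta, f_sync, f_prev;

    always @(posedge src_clk)
        if (src_valid) begin
            src_hold <= src_data;    // hold data stable across the crossing
            src_flag <= ~src_flag;   // toggle the indication signal
        end

    always @(posedge dst_clk) begin
        f_meta <= src_flag;          // double sampling of the flag
        f_sync <= f_meta;
        f_prev <= f_sync;
        dst_valid <= (f_sync != f_prev);
        if (f_sync != f_prev)
            dst_data <= src_hold;    // safe: src_hold has been stable for a while
    end
endmodule
```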
Do constraints need to be added when designing data interface synchronization? It is recommended to add appropriate constraints; for high-speed designs in particular, be sure to add the corresponding period, setup, and hold constraints. The constraints serve two purposes here: to raise the working frequency of the design so that the interface data synchronization requirements are met, and to obtain a correct timing analysis report.
Haoxinshengic is a professional FPGA and IC chip supplier in China with more than 15 years of experience in this field. If you need chips or other electronic components, please contact us. We offer cost-effective in-stock chip supply and look forward to cooperating with you.
If you want to know more about FPGAs or want to purchase related chip products, please contact our senior technical experts and we will answer your questions as soon as possible.