CN116090382A - Time sequence report generation method and device - Google Patents

Time sequence report generation method and device Download PDF

Info

Publication number
CN116090382A
CN116090382A (application number CN202310350907.XA)
Authority
CN
China
Prior art keywords
time sequence
path
timing
data
thread group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310350907.XA
Other languages
Chinese (zh)
Other versions
CN116090382B (en)
Inventor
杜泽杰
冯春阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hongxin Micro Nano Technology Co ltd
Original Assignee
Shenzhen Hongxin Micro Nano Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Hongxin Micro Nano Technology Co ltd filed Critical Shenzhen Hongxin Micro Nano Technology Co ltd
Priority to CN202310350907.XA priority Critical patent/CN116090382B/en
Publication of CN116090382A publication Critical patent/CN116090382A/en
Application granted granted Critical
Publication of CN116090382B publication Critical patent/CN116090382B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/30 Circuit design
    • G06F 30/32 Circuit design at the digital level
    • G06F 30/33 Design verification, e.g. functional simulation or model checking
    • G06F 30/3308 Design verification, e.g. functional simulation or model checking using simulation
    • G06F 30/3312 Timing analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/16 File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G06F 16/164 File meta data generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/17 Details of further file system functions
    • G06F 16/172 Caching, prefetching or hoarding of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5022 Mechanisms to release resources
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application provides a timing report generation method and device. The method includes: obtaining a timing report generation request for a target logic circuit, the request including a target path type; fetching, from a first cache area, at least one pre-cached first timing path of the target path type for the target logic circuit; sequentially fetching, from a second cache area, the pre-cached timing data of the at least one first timing path, where the timing data of each first timing path includes the signal arrival times at the inputs and outputs of the logic cells on that path; and generating a timing report of the at least one first timing path from the sequentially fetched timing data. The first cache area and the second cache area avoid blocking, make full use of computer resources, reduce the time consumed to generate the timing report, and improve timing report generation efficiency.

Description

Time sequence report generation method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for generating a timing report.
Background
Static timing analysis (STA) is used to analyze and verify the timing performance of a gate-level circuit: it analyzes the maximum delay of the circuit to ensure that setup time constraints are met at a given frequency, and analyzes the minimum delay of the circuit to ensure that hold time constraints are met.
A chip design must pass static timing analysis, otherwise the chip will most likely fail to work properly. After static timing analysis is completed, the analysis result is evaluated by inspecting the timing report. At present, a timing report is usually generated by searching for a timing path, fully expanding that path, generating the report of the path based on the timing path, writing the report of the path to a file or outputting it to the screen, and repeating this process until all required timing reports have been generated.
However, as chip scale grows, and especially for large-scale integrated circuit designs, the circuit contains more and more timing paths, and generating timing reports in this way takes a long time.
Disclosure of Invention
In view of this, the embodiments of the present application provide a timing report generation method and device, to address the long time required to generate timing reports.
In a first aspect, an embodiment of the present application provides a timing report generating method, including:
obtaining a timing report generation request of a target logic circuit, wherein the timing report generation request includes: a target path type;
according to the target path type, at least one pre-cached first timing path aiming at the target path type of the target logic circuit is taken out from a first cache area;
sequentially taking out the pre-cached time sequence data of at least one first time sequence path from the second cache area, wherein the time sequence data of each first time sequence path comprises the signal arrival time corresponding to the input and output ends of the logic units on each first time sequence path;
and generating a time sequence report of the at least one first time sequence path according to the time sequence data of the at least one first time sequence path which are sequentially fetched.
In an optional embodiment, before fetching, from the first cache area according to the target path type, the at least one pre-cached first timing path of the target path type for the target logic circuit, the method further includes:
adopting a first thread group to perform path search of the target path type on the target logic circuit to obtain the first timing path aiming at the target logic circuit, and caching the first timing path into the first cache area;
The method further comprises the steps of:
adopting the first thread group to perform path search of the target path type on the target logic circuit to obtain a second time sequence path aiming at the target logic circuit, and caching the second time sequence path into the first cache area;
and continuing to search the path of the target path type for the target logic circuit by adopting the first thread group until all time sequence paths of the target path type are searched, and sequentially storing all time sequence paths into the first cache area.
In an optional embodiment, before the employing the first thread group to continue the path search of the target logic circuit for the target path type, the method further includes:
if the preset data queue corresponding to the time sequence report is not empty, determining the memory address of the junk data in the preset data queue, wherein the junk data is generated in the process of generating the time sequence report;
and adopting the first thread group to clean the garbage data according to the memory address.
In an optional embodiment, before the sequentially fetching the pre-buffered time series data of the at least one first time series path from the second buffer area, the method further includes:
And sequentially generating time sequence data of the at least one first time sequence path by adopting a second thread group, and storing the time sequence data of the at least one first time sequence path into the second cache area.
In an alternative embodiment, if there are a plurality of first timing paths, the generating the timing report of the at least one first timing path according to the sequentially fetched timing data of the at least one first timing path includes:
the second thread group is adopted, time sequence data of one first time sequence path in the plurality of sequentially taken first time sequence paths is written into a preset file, and processing resources of the data writing action are released;
if the processing resources of the data writing-out action are not occupied by other thread groups, adopting the second thread group to write out the time sequence data of another first time sequence path in the plurality of first time sequence paths to the preset file, releasing the processing resources of the data writing-out action until the time sequence data of the plurality of first time sequence paths are all written out to the preset file, and taking the written-out preset file as a time sequence report of the plurality of first time sequence paths, wherein the another first time sequence path is the next path of the one first time sequence path.
In an optional embodiment, the sequentially retrieving pre-buffered time series data of the at least one first time series path from the second buffer area includes:
taking out time sequence data of a first time sequence path from the second cache area by adopting a second thread group;
and continuing to take out another first timing path from the second cache area until the timing data of the at least one first timing path is taken out from the second cache area by adopting the second thread group, wherein the another first timing path is the next path of the one first timing path.
In an alternative embodiment, said employing said second thread group to continue fetching another first timing path from said second cache region includes:
judging whether the time sequence data of the other first time sequence path in the second buffer area meets the preset data integrity condition or not by adopting the second thread group;
and if the time sequence data of the other first time sequence path meets the preset data integrity condition, adopting the second thread group to continuously fetch the time sequence data of the other first time sequence path from the second cache area.
In an alternative embodiment, the method further comprises:
obtaining the waiting time length of the second thread group when the first time sequence path is taken out of the first cache area;
and if the waiting time length reaches a preset time length threshold value, adding at least one thread in the second thread group to the first thread group.
In an alternative embodiment, the method further comprises:
acquiring the quantity of the residual time sequence data in the second buffer area at intervals of preset time length;
and if the number of remaining timing data items satisfies a preset number for a preset number of times, adding at least one thread of the first thread group to the second thread group.
In a second aspect, an embodiment of the present application further provides a timing report generating apparatus, including:
an obtaining module, configured to obtain a timing report generation request of a target logic circuit, where the timing report generation request includes: a target path type;
a fetching module, configured to fetch, from a first cache area, at least one first timing path of the target path type for the target logic circuit, where the first timing path is cached in advance according to the target path type;
The extraction module is further configured to sequentially extract, from the second buffer area, pre-buffered time-series data of the at least one first time-series path, where the time-series data of each first time-series path includes a signal arrival time corresponding to an input/output end of the logic unit on each first time-series path;
and the generating module is used for generating a time sequence report of the at least one first time sequence path according to the time sequence data of the at least one first time sequence path which are sequentially fetched.
In an alternative embodiment, the apparatus further comprises:
the processing module is used for searching the path of the target path type for the target logic circuit by adopting a first thread group to obtain the first timing path aiming at the target logic circuit, and caching the first timing path into the first cache area;
the processing module is further configured to perform path search of the target path type on the target logic circuit by using the first thread group, obtain a second timing path for the target logic circuit, and cache the second timing path in the first cache area;
and the processing module is further configured to continue to perform path search for the target path type on the target logic circuit by using the first thread group until all time sequence paths of the target path type are searched, and store all time sequence paths in the first cache area in sequence.
In an alternative embodiment, the apparatus further comprises:
the determining module is used for determining the memory address of the junk data in the preset data queue if the preset data queue corresponding to the time sequence report is not empty, wherein the junk data is generated in the process of generating the time sequence report;
and the processing module is also used for adopting the first thread group and clearing the garbage data according to the memory address.
In an alternative embodiment, the generating module is further configured to:
and sequentially generating time sequence data of the at least one first time sequence path by adopting a second thread group, and storing the time sequence data of the at least one first time sequence path into the second cache area.
In an alternative embodiment, if there are a plurality of first timing paths, the generating module is specifically configured to:
the second thread group is adopted, time sequence data of one first time sequence path in the plurality of sequentially taken first time sequence paths is written into a preset file, and processing resources of the data writing action are released;
if the processing resources of the data writing-out action are not occupied by other thread groups, adopting the second thread group to write out the time sequence data of another first time sequence path in the plurality of first time sequence paths to the preset file, releasing the processing resources of the data writing-out action until the time sequence data of the plurality of first time sequence paths are all written out to the preset file, and taking the written-out preset file as a time sequence report of the plurality of first time sequence paths, wherein the another first time sequence path is the next path of the one first time sequence path.
In an alternative embodiment, the extraction module is specifically configured to:
taking out time sequence data of a first time sequence path from the second cache area by adopting a second thread group;
and continuing to take out another first timing path from the second cache area until the timing data of the at least one first timing path is taken out from the second cache area by adopting the second thread group, wherein the another first timing path is the next path of the one first timing path.
In an alternative embodiment, the extraction module is specifically configured to:
judging whether the time sequence data of the other first time sequence path in the second buffer area meets the preset data integrity condition or not by adopting the second thread group;
and if the time sequence data of the other first time sequence path meets the preset data integrity condition, adopting the second thread group to continuously fetch the time sequence data of the other first time sequence path from the second cache area.
In an alternative embodiment, the obtaining module is further configured to:
obtaining the waiting time length of the second thread group when the first time sequence path is taken out of the first cache area;
The processing module is further configured to add at least one thread in the second thread group to the first thread group if the waiting duration reaches a preset duration threshold.
In an alternative embodiment, the obtaining module is further configured to:
acquiring the quantity of the residual time sequence data in the second buffer area at intervals of preset time length;
the processing module is further configured to add at least one thread of the first thread group to the second thread group if the number of remaining timing data items satisfies a preset number for a preset number of times.
In a third aspect, an embodiment of the present application further provides an electronic device, including: the timing report generating device comprises a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, and when the electronic device runs, the processor and the memory are communicated through the bus, and the processor executes the machine-readable instructions to execute the timing report generating method according to any one of the first aspect.
In a fourth aspect, embodiments of the present application further provide a computer readable storage medium having a computer program stored thereon, the computer program when executed by a processor performing the timing report generating method according to any one of the first aspects.
The application provides a timing report generation method and device. The method includes: obtaining a timing report generation request for a target logic circuit, the request including a target path type; fetching, from a first cache area, at least one pre-cached first timing path of the target path type for the target logic circuit; sequentially fetching, from a second cache area, the pre-cached timing data of the at least one first timing path, where the timing data of each first timing path includes the signal arrival times at the inputs and outputs of the logic cells on that path; and generating a timing report of the at least one first timing path from the sequentially fetched timing data. The first cache area and the second cache area avoid blocking, so that the time consumed to generate the timing report is reduced and timing report generation efficiency is improved.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a software framework for generating a timing report according to an embodiment of the present application;
FIG. 2 is a first flowchart of a timing report generation method according to an embodiment of the present application;
FIG. 3 is a second flowchart of a timing report generation method according to an embodiment of the present application;
FIG. 4 is a third flowchart of a timing report generation method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a path generation and garbage collection process according to an embodiment of the present application;
FIG. 6 is a fourth flowchart of a timing report generation method according to an embodiment of the present application;
FIG. 7 is a fifth flowchart of a timing report generation method according to an embodiment of the present application;
FIG. 8 is a sixth flowchart of a timing report generation method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a timing report generation and timing report write-out process according to an embodiment of the present application;
FIG. 10 is a seventh flowchart of a timing report generation method according to an embodiment of the present application;
FIG. 11 is an eighth flowchart of a timing report generation method according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a timing report generation device according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
Before introducing the technical solution of the present application, the concept of a timing path is first described.
A timing path starts at a point where data is launched by a clock edge and ends at a point where, after propagating through combinational logic, the data is captured by another clock edge. The path from the clock source to the start point is called the launch clock path, the path from the clock source to the end point is called the capture clock path, and the path from the start point to the end point is called the data path.
The current steps for generating a timing report are typically: 1. search for a timing path; 2. fully expand the timing path into its data path and clock path; 3. generate the timing report of the path based on the timing path, a process that must account for effects such as clock reconvergence pessimism removal (CRPR) and timing borrowing; 4. write the timing report of the path to a file or output it to the screen; 5. clean up the temporary data in memory and then search for the next timing path; 6. repeat steps 1-5 until all required timing reports have been generated.
These timing report generation steps have strong dependencies. Within the same step, different timing paths also depend on each other; for example, the search for the second timing path must wait until the search for the first timing path has completed, because the search process produces data that affects subsequent timing paths. In addition, the timing paths must satisfy a certain order, and to guarantee that order the writes to the file must also be ordered.
However, as chip scale grows, more and more timing paths need to be checked. Especially for large-scale integrated circuit designs this takes ever longer, and there is a strong demand to generate timing reports faster.
On this basis, the present application provides a software framework for generating timing reports. By splitting each step of the timing report process and resolving the dependencies in a producer-consumer-like manner, the framework makes full use of the multi-core performance of modern CPUs. The non-blocking (or weakly blocking) multi-threaded framework can fully utilize the multi-core resources of a computer and greatly improves performance.
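Purely for illustration, the following C++ sketch shows one possible in-memory representation of a timing path as defined above; the structure and field names are assumptions introduced here and are not taken from the application.

```cpp
#include <string>
#include <vector>

// Hypothetical representation of a timing path; all names are illustrative only.
struct TimingPath {
    std::string startPoint;                    // where data is launched by a clock edge
    std::string endPoint;                      // where data is captured by another clock edge
    std::vector<std::string> launchClockPath;  // clock source -> start point
    std::vector<std::string> captureClockPath; // clock source -> end point
    std::vector<std::string> dataPath;         // start point -> end point, through combinational logic
};
```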
Fig. 1 is a schematic diagram of a software framework for generating a timing report according to an embodiment of the present application. As shown in Fig. 1, the software framework includes:
Timing path generator: responsible for searching for and generating timing paths.
Timing data generator: generates the timing data that makes up the timing report of a complete timing path; the timing data may be in text, string, or binary form. Garbage data may be produced at the same time.
Timing report writer: writes the timing data of all timing paths to a file to produce the overall timing report, or outputs it to the screen for display. Garbage data may be produced at the same time.
Timing path ring buffer container: its size can change dynamically; it is used to buffer or store timing paths.
Timing data ring buffer container: its size can change dynamically; it is used to buffer or store timing data.
Garbage data queue: stores the pointers or memory addresses of garbage data that needs to be reclaimed but not urgently.
Garbage data collector: processes the garbage data and reclaims the memory.
The dependencies are resolved in a manner similar to the producer-consumer model, making full use of the multi-core performance of modern CPUs.
The timing path generator acts as a producer: it produces timing paths and buffers them in the timing path ring buffer container.
The timing data generator acts as a consumer of the timing paths in the timing path ring buffer container, and also as a producer: it produces timing data and buffers it in the timing data ring buffer container.
The timing report writer acts as a consumer: it consumes the timing data in the timing data ring buffer container and writes it to a file in order. In Fig. 1, whichever thread preempts the writer plays this role.
The timing data in the timing data ring buffer container may be buffered in the form of an rptStr queue, and the timing data of all timing paths is written out to the timing report.
In Fig. 1, each step of the timing report process is split and the dependencies are handled in a producer-consumer-like manner; the effective cooperation within the framework guarantees performance and enables parallel computation.
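To make the cooperation of the components in Fig. 1 concrete, the following sketch outlines one possible arrangement of the two ring buffer containers, the garbage data queue, and the writer lock. It is an illustrative sketch of the producer-consumer wiring under assumed names and interfaces, not the application's actual implementation.

```cpp
#include <cstddef>
#include <deque>
#include <mutex>
#include <optional>
#include <queue>
#include <utility>

// Illustrative, thread-safe "ring buffer container" whose size can change dynamically.
template <typename T>
class RingBufferContainer {
public:
    bool tryPush(T item) {                        // producer side, non-blocking
        std::lock_guard<std::mutex> lock(mutex_);
        if (buffer_.size() >= capacity_) return false;
        buffer_.push_back(std::move(item));
        return true;
    }
    std::optional<T> tryPop() {                   // consumer side, non-blocking
        std::lock_guard<std::mutex> lock(mutex_);
        if (buffer_.empty()) return std::nullopt;
        T item = std::move(buffer_.front());
        buffer_.pop_front();
        return item;
    }
    void resize(std::size_t capacity) {           // the size may be changed dynamically
        std::lock_guard<std::mutex> lock(mutex_);
        capacity_ = capacity;
    }
private:
    std::deque<T> buffer_;
    std::size_t capacity_ = 1024;                 // assumed default capacity
    std::mutex mutex_;
};

struct TimingPath;   // a searched timing path (one possible layout is sketched above)
struct TimingData;   // the generated timing data of one path (sketched further below)

RingBufferContainer<TimingPath*> pathBuffer;      // first cache area: timing path ring buffer
RingBufferContainer<TimingData*> dataBuffer;      // second cache area: timing data ring buffer
std::queue<void*> garbageQueue;                   // memory addresses of garbage data awaiting reclamation
std::mutex garbageMutex;                          // protects garbageQueue
std::mutex writerMutex;                           // the thread that acquires it acts as the timing report writer
```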
Fig. 2 is a schematic flow chart of a timing report generating method according to an embodiment of the present application, where an execution body of the embodiment may be an electronic device, such as a terminal device, a server, or the like.
As shown in fig. 2, the method may include:
s101, acquiring a time sequence report generation request of a target logic circuit.
In one application scenario, a user may input a timing report generation request for the target logic circuit through an electronic design automation (EDA) platform. The EDA platform may provide a timing report generation option for the target logic circuit; when a timing report needs to be generated for the target logic circuit, selecting this option produces the timing report generation request for the target logic circuit.
The timing report generation request includes a target path type, which may be, for example: timing paths with a rising edge, timing paths with a falling edge, or timing paths from A to B, where A and B are both timing nodes on a timing path; a timing node can be understood as an input node or an output node of a logic cell on the timing path.
S102, at least one first timing path of the target path type of the target logic circuit, which is cached in advance, is taken out from the first cache area according to the target path type.
The first buffer area is used for buffering the timing path for the target logic circuit, and may be, for example, the timing path buffer container, or may be a memory buffer area, an array, a database, or the like, and the specific form of the first buffer area is not particularly limited in this embodiment.
According to the target path type in the timing report generation request, at least one pre-cached first timing path for the target path type of the target logic circuit can be fetched from the first cache area, wherein the first timing path is the timing path for the target path type of the target logic circuit.
In some embodiments, the timing path generator is invoked to search for the timing paths of the target logic circuit and cache the found timing paths in the first cache area. The timing path generator can therefore generate timing paths in a non-blocking manner: it does not need to wait for a timing path to be used or for the corresponding timing report to be generated, but places each generated timing path in the first cache area and moves on to other work, such as searching for the next timing path.
S103, sequentially taking out the time sequence data of at least one first time sequence path cached in advance from the second cache area.
The second buffer area is used for buffering the time-series data of the first time-series path, and may be, for example, the time-series data buffer container, or may be a memory buffer area, an array, a database, etc., and the specific form of the second buffer area is not particularly limited in this embodiment.
The timing data of the at least one first timing path is fetched from the second cache area in sequence: the at least one first timing path may have a certain order, and the timing data is fetched from the second cache area according to that order. The timing data of each first timing path includes the signal arrival times at the inputs and outputs of the logic cells on the path, and may further include the timing margin of the path, its start point, its end point, and so on. A logic cell on a timing path may be, for example, a processing chip, a transistor, or a combination thereof; its inputs and outputs are the input and output terminals of that chip or transistor, and the corresponding signal arrival times can be understood as the times at which the signal reaches the input terminal and the output terminal of the logic cell.
Note that the timing margin (slack) is the difference between the time the design requires and the time actually taken; it indicates whether the design meets a timing specification: a positive slack means the timing is met (there is timing margin), and a negative slack means the timing is not met (the timing falls short).
In some embodiments, the timing data generator is invoked to fetch the at least one first timing path from the first cache area in sequence according to a preset rule. The preset rule may be, for example, slack from large to small (i.e., timing violation risk from low to high) or slack from small to large (i.e., risk from high to low). An order is then assigned to the at least one first timing path; the order may be recorded by numbering, for example by incrementing a number, or with a linked list, an array, or the like, so that the path numbers stay consistent with the fetch order. The timing data generator is then invoked to fetch the at least one first timing path from the first cache area in that order, obtain the corresponding timing data, and place the timing data into the second cache area.
To limit the impact of the path search on device performance, the first timing path stored in the first cache area may not be a complete timing path. In that case the timing data generator may be invoked to restore the complete first timing path from the target logic circuit and the incomplete path, obtain the corresponding timing data from the complete path, and place the timing data into the second cache area.
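As an illustration of what the timing data of one first timing path might contain, and of the numbering that keeps the fetch order and the report order consistent, the following sketch is given; the structure and field names are assumptions, not the application's actual data layout.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Illustrative timing data of one first timing path; all names are assumptions.
struct ArrivalRecord {
    std::string pin;        // an input or output terminal of a logic cell on the path
    double arrivalTime;     // time at which the signal reaches this terminal
};

struct TimingData {
    std::size_t pathOrder;               // number assigned when the path was fetched from the first cache area
    std::string startPoint;
    std::string endPoint;
    std::vector<ArrivalRecord> arrivals; // signal arrival time at every cell input/output on the path
    double slack;                        // timing margin: positive = timing met, negative = timing violated
    std::string rptStr;                  // rendered report text for this path (the rptStr queue form)
};
```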
S104, generating a time sequence report of at least one first time sequence path according to the time sequence data of the at least one first time sequence path which is sequentially fetched.
The timing data of the at least one first timing path corresponds to the timing report. To keep the timing report ordered, the timing report writer is invoked to generate the timing report of the at least one first timing path from the sequentially fetched timing data, which guarantees the order of the timing data of the at least one first timing path within the report.
In addition, the timing report writer may be invoked to write the sequentially fetched timing data of the at least one first timing path to the screen, so that the data displayed on the screen is also ordered.
Compared with the prior art, in which timing paths are searched for and reports generated one after another, the timing report generation method of this embodiment uses the first cache area and the second cache area to prevent the path search task and the report generation task from blocking each other, makes full use of computer resources, reduces the time consumed to generate the timing report, and improves timing report generation efficiency.
Fig. 3 is a second flowchart of a timing report generating method according to the embodiment of the present application, as shown in fig. 3, in an optional implementation manner, before at least one first timing path of a target path type of a target logic circuit, which is cached in advance, is taken out from a first cache area according to the target path type, the method may further include:
s201, performing path search of a target path type on a target logic circuit by adopting a first thread group to obtain at least one first time sequence path aiming at the target logic circuit, and caching the at least one first time sequence path into a first cache area.
The first thread group comprises at least one thread, after a time sequence report generation request aiming at the target logic circuit is acquired, the first thread group is adopted to search a path of a target path type for the target logic circuit, a first time sequence path aiming at the target logic circuit is obtained, and the first time sequence path is cached in a first cache area.
Steps S103-S104 may then be performed, and in an alternative embodiment, the method may further comprise:
s202, adopting the first thread group to search a path of a target path type of the target logic circuit, obtaining a second time sequence path aiming at the target logic circuit, and caching the second time sequence path into the first cache area.
S203, adopting the first thread group, continuing to search the path of the target path type for the target logic circuit until all time sequence paths of the target path type are searched, and sequentially storing all time sequence paths into the first cache area.
And adopting the first thread group to search the path of the target path type for the target logic circuit to obtain a second time sequence path aiming at the target logic circuit, wherein the second time sequence path is the time sequence path aiming at the target path type of the target logic circuit, and caching the second time sequence path in the first cache area.
Steps S202-S203 may be performed at the same time as steps S103-S104 or after them; this embodiment does not limit this. In other words, the path search for the target path type on the target logic circuit, the acquisition of the timing data of the first timing path, and the generation of the timing report of the first timing path do not conflict with or block one another and can run in parallel. The timing paths are generated in a non-blocking manner: there is no need to wait for a timing path to be used or for the corresponding timing report to be generated; the generated timing path is placed in the first cache area, and other work, such as searching for the next timing path, can proceed.
It should be noted that, in the above steps S201 to S203, the first thread group may be used to invoke the path search of the timing path generator for the target path type of the target logic circuit.
In the timing report generation method of the present embodiment, the timing path is generated in a non-blocking manner, so that computer resources can be fully used, and a significant improvement in performance can be achieved.
Fig. 4 is a flowchart of a timing report generating method according to an embodiment of the present application, as shown in fig. 4, in an optional implementation manner, in step S202, before continuing to perform a path search of a target path type on a target logic circuit by using a first thread group, the method may further include:
S301, if a preset data queue corresponding to the time sequence report is not empty, determining a memory address of junk data in the preset data queue.
S302, adopting the first thread group to clean up garbage data according to the memory address.
Garbage data is generated during timing report generation, including during the generation of the timing data, and the memory addresses of the garbage data are buffered in a preset data queue. Before the path search continues, it is checked whether the preset data queue corresponding to the timing report is empty; if it is not empty, the memory addresses of the garbage data in the queue are determined, and the first thread group cleans up the garbage data according to those memory addresses. The preset data queue may be the garbage data queue described above.
That is, before the first thread group continues the path search, if garbage data exists, the first thread group cleans it up to reclaim memory; the first thread group may invoke the garbage data collector to do so. The first thread group therefore performs the path generation and garbage cleanup actions by cyclically invoking the timing path generator and the garbage data collector. The garbage data collector may be invoked by a single thread, or several threads may invoke different garbage data collectors. This process reduces blocking: when the first cache area has no free space, the thread does not block but reclaims garbage data instead; conversely, if there is no garbage data, it does not block but continues generating timing paths.
If the preset data queue is empty, the first thread group continues the path search for the target path type on the target logic circuit until all timing paths of the target path type have been searched, and stores all of the timing paths in the first cache area in order.
Fig. 5 is a schematic diagram of the path generation and garbage collection process provided by this embodiment. As shown in Fig. 5, after the first thread group is started, it invokes the timing path generator to generate a timing path and places the path into the first cache area. It then checks whether the garbage data queue is empty. If the queue is empty, it checks whether all timing paths meeting the condition (i.e., of the target path type) have been generated; if not, it continues to invoke the timing path generator, repeating the process until all qualifying timing paths have been generated, and then ends.
If the garbage data queue is not empty, the first thread group invokes the garbage data collector to clean up the garbage data and then checks whether all qualifying timing paths have been generated; if not, it continues to invoke the timing path generator, repeating the process until all qualifying timing paths have been generated, and then ends.
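A minimal sketch of the loop run by each thread of the first thread group, following the flow of Fig. 5, is given below; generateNextTimingPath, pushTimingPath, reclaim, and the shared objects are assumed helpers built on the framework sketch above, not the application's real interfaces.

```cpp
#include <mutex>
#include <queue>

// Assumed to exist (see the framework sketch above).
struct TimingPath;
extern std::queue<void*> garbageQueue;
extern std::mutex garbageMutex;
TimingPath* generateNextTimingPath();   // timing path generator: returns nullptr when all paths are found
bool pushTimingPath(TimingPath* path);  // non-blocking push into the first cache area
void reclaim(void* address);            // garbage data collector: frees one block of garbage data

// Reclaim at most one entry of the garbage data queue; returns immediately if the queue is empty.
void reclaimGarbageOnce() {
    void* address = nullptr;
    {
        std::lock_guard<std::mutex> lock(garbageMutex);
        if (garbageQueue.empty()) return;        // queue empty: go straight back to path search
        address = garbageQueue.front();
        garbageQueue.pop();
    }
    reclaim(address);
}

// Loop executed by each thread of the first thread group (flow of Fig. 5).
void firstThreadGroupLoop() {
    while (true) {
        TimingPath* path = generateNextTimingPath();
        if (path == nullptr) break;              // all qualifying timing paths have been generated
        while (!pushTimingPath(path)) {
            reclaimGarbageOnce();                // no free space: reclaim garbage instead of blocking
        }
        reclaimGarbageOnce();                    // garbage data queue not empty? clean it up, then continue
    }
}
```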
Fig. 6 is a flowchart of a timing report generating method according to an embodiment of the present application, as shown in fig. 6, in an optional implementation manner, before sequentially retrieving, in step S103, timing data of at least one first timing path cached in advance from the second cache area, the method may further include:
s401, sequentially generating time sequence data of at least one first time sequence path by adopting a second thread group, and storing the time sequence data of the at least one first time sequence path into a second cache area.
The second thread group includes at least one thread. The second thread group invokes the timing data generator to fetch the at least one first timing path of the target path type from the first cache area in sequence; the paths may be fetched, for example, from long to short or from short to long according to path length.
An order is then assigned to the at least one first timing path, the timing data of the at least one first timing path is generated in that order, each piece of timing data is given the same number as its corresponding timing path, and the timing data of the at least one first timing path is stored in the second cache area.
It should be noted that after the timing data of the at least one first timing path has been stored in the second cache area, the second thread group may continue to fetch timing paths from the first cache area, generate their timing data, and store the generated timing data in the second cache area, until all timing paths in the first cache area have been fetched and the timing data of all timing paths has been stored in the second cache area in order.
In the timing report generation process of this embodiment, the first thread group generates the timing paths and the second thread group generates the timing data of the timing paths, so that different threads perform different tasks and processing efficiency is improved.
It will be appreciated that the multithreading of the first thread group and the second thread group may instead be implemented with multiple processes or with several distributed computers, or the two groups may not be distinguished at all, with every thread executing one large loop that contains all of the steps.
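A minimal sketch of one round of the timing data generation step performed by a thread of the second thread group is given below, reusing the assumed structures of the earlier sketches; popTimingPath, buildTimingData, pushTimingData, and tryWriteReports are hypothetical helpers (the write-out side is sketched later, after the description of steps S501-S502).

```cpp
#include <atomic>
#include <cstddef>
#include <thread>

// Assumed helpers built on the earlier sketches.
struct TimingPath;
struct TimingData;
TimingPath* popTimingPath();                                   // non-blocking pop from the first cache area (nullptr if empty)
TimingData* buildTimingData(TimingPath* path, std::size_t n);  // expand the path and compute its timing data
bool pushTimingData(TimingData* data);                         // non-blocking push into the second cache area
void tryWriteReports();                                        // preemptive write-out, sketched below

std::atomic<std::size_t> pathCounter{0};   // gives every fetched path its sequence number

// One round of the timing data generation step performed by a thread of the second thread group.
void generateTimingDataOnce() {
    TimingPath* path = popTimingPath();
    if (path == nullptr) return;                     // nothing to consume yet; the caller may record the wait
    std::size_t order = pathCounter.fetch_add(1);    // the data keeps the same number as its path
    TimingData* data = buildTimingData(path, order);
    while (!pushTimingData(data)) {
        tryWriteReports();                           // second cache area full: try to drain it by writing reports
        std::this_thread::yield();
    }
}
```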
Fig. 7 is a flowchart of a timing report generating method according to an embodiment of the present application, as shown in fig. 7, in an optional implementation manner, step S104 of generating a timing report of at least one first timing path according to sequentially extracted timing data of at least one first timing path may include:
S501, adopting the second thread group, writing the timing data of one of the sequentially fetched first timing paths to a preset file, and releasing the processing resource for the data write-out action.
When there are a plurality of first timing paths, the second thread group acquires the processing resource for the data write-out action, writes the timing data of one first timing path to a preset file, and then releases the processing resource; the preset file may be an empty file in text form. The one first timing path is each timing path fetched in turn from the plurality of first timing paths, and after the write the processing resource for the data write-out action is released so that other thread groups can acquire it.
In some embodiments, the processing resource for the data write-out action may be implemented by the timing report writer described above: the second thread group attempts to invoke the timing report writer and, if the call succeeds, performs the data write-out action with the timing data of one first timing path. The timing report writer may have only one global state, and the report write-out action is performed by invoking it.
Note that if the second thread group successfully invokes the timing report writer, it stops executing step S401. That is, while executing step S401 the second thread group attempts to invoke the timing report writer; if the attempt succeeds, it stops executing step S401 and executes step S501 instead.
S502, if the processing resource for the data write-out action is not occupied by another thread group, adopting the second thread group to write the timing data of another first timing path of the plurality of first timing paths to the preset file and release the processing resource, until the timing data of all of the plurality of first timing paths has been written to the preset file; the written-out preset file is then taken as the timing report of the plurality of first timing paths.
If the processing resource for the data write-out action is not occupied by another thread, the second thread group preempts it, performs the data write-out action with the timing data of another first timing path, and releases the processing resource. This process is repeated until the timing data of all of the plurality of first timing paths has been written to the preset file, and the written-out preset file is taken as the timing report of the plurality of first timing paths. Here the other first timing path is the path following the one first timing path, i.e., its order is the next order after that of the one first timing path; for example, if the order of the one first timing path is 1, the order of the other first timing path is 2.
That is, while generating the timing reports of the plurality of first timing paths from the sequentially fetched timing data, whenever the second thread group acquires the processing resource for the data write-out action it writes the timing data of the sequentially fetched first timing path to the preset file and releases the resource; whenever the resource is not occupied by other threads, it continues with the timing data of the next first timing path and writes it to the preset file.
Note that if the data write-out action is occupied by another thread group, the loop is exited; the next time the processing resource for the data write-out action is preempted, generation of the timing report continues from the timing data of the other first timing path.
It can be seen that the second thread group cyclically invokes the timing data generator and the timing report writer, which reduces blocking: when the second thread group preempts the timing report writer it performs the data write-out action, and when it fails to preempt the writer it does not wait but continues to invoke the timing data generator and produce timing data of timing paths. Tasks are thus executed preemptively and blocking is reduced.
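The preemptive write-out of steps S501-S502 can be pictured with a try-lock, as in the sketch below; writerMutex, reportFile, and writeNextReportInOrder are assumptions carried over from the earlier sketches, and writeNextReportInOrder itself is sketched after the description of Fig. 9.

```cpp
#include <fstream>
#include <mutex>

// Assumed shared state (see the framework sketch) and helper (sketched later).
extern std::mutex writerMutex;              // only one global timing report writer exists
extern std::ofstream reportFile;            // the preset file that becomes the timing report
bool writeNextReportInOrder();              // writes the next numbered path; false when it cannot proceed

// Preemptive write-out: only the thread that wins the try-lock performs the data write-out
// action, and it releases the processing resource as soon as it can make no further progress,
// so the other threads of the second thread group never block on it.
void tryWriteReports() {
    std::unique_lock<std::mutex> writer(writerMutex, std::try_to_lock);
    if (!writer.owns_lock()) return;        // writer occupied: go back to generating timing data
    while (writeNextReportInOrder()) {
        // keep writing while the timing data of the next path is available in order
    }
}                                           // the unique_lock releases the write-out resource here
```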
Fig. 8 is a flowchart of a timing report generation method according to an embodiment of the present application. As shown in Fig. 8, in an optional implementation, continuing to fetch another first timing path from the second cache area with the second thread group in step S502 may include:
S601, using the second thread group, judging whether the timing data of the other first timing path in the second cache area meets a preset data integrity condition.
S602, if the timing data of the other first timing path meets the preset data integrity condition, using the second thread group to continue fetching the timing data of the other first timing path from the second cache area.
The second thread group invokes the timing data generator to fetch the at least one first timing path from the first cache area, generate the timing data of the at least one first timing path in sequence, and store the timing data in the second cache area in order.
When timing data is fetched from the second cache area, the timing data of a first timing path may not yet have been written, or may not have been written completely. Therefore, before the report write-out action is performed, the second thread group may additionally check whether the timing data of the other first timing path in the second cache area meets the preset data integrity condition.
If the timing data of the other first timing path meets the preset data integrity condition, the second thread group continues to fetch the timing data of the other first timing path from the second cache area.
In some embodiments, if the timing data of the other first timing path does not meet the preset data integrity condition, fetching of that timing data from the second cache area is stopped.
Note that in steps S601-S602 the second thread group may invoke the timing report writer to determine whether the timing data meets the preset data integrity condition and to fetch the timing data of the other first timing path from the second cache area. If the timing data of the other first timing path meets the condition, the timing report writer is invoked to generate the timing report of that path. If it does not meet the condition, the timing report writer is released; the next time the second thread group invokes the timing report writer, it resumes from that other first timing path and again checks whether its timing data meets the condition, until the timing report of that path is generated.
In summary, when the timing report writer is first invoked, it starts from the set starting number of the timing paths and looks up the timing data with the corresponding number (which may be in string form) in the second cache area, generates a timing report from that timing data, and at the same time returns a large amount of now-expired temporary data for memory reclamation. After the timing report of one timing path has been generated, the writer attempts to generate the report for the next timing path number. If the corresponding timing data is not found in the second cache area (i.e., the preset data integrity condition is not met), the number is recorded, the loop is exited, and the timing report writer is released; the next time the writer is invoked, it generates the report for that recorded number from the corresponding timing data, which guarantees that the timing report stays ordered.
In the timing report generation method of this embodiment, checking the integrity of the timing data ensures the integrity of the timing report: the corresponding timing report is generated only when the data is complete.
Fig. 9 is a schematic diagram of the timing report generation and timing report writing process provided in this embodiment. As shown in Fig. 9, after the second thread group is started, it calls the timing data generator to sequentially take a timing path out of the first cache area, generate the timing data of that path, and store the timing data into the second cache area. The second thread group then calls the timing report writer. If the call succeeds (i.e. the timing report writer is preempted), the sequentially fetched timing data is written to the file; the thread then judges whether all timing data has been written out, ends if so, and otherwise calls the timing data generator again to continue generating timing data.
If the call to the timing report writer fails (i.e. the timing report writer is not preempted), the thread judges whether all timing data has been written out, ends if so, and otherwise calls the timing data generator to continue generating timing data.
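The control flow of Fig. 9, in which each thread of the second thread group alternates between generating timing data and trying to preempt the single timing report writer, can be reduced to the sketch below. Python's standard threading primitives are used only for illustration; the queue sizes, the 0.1-second timeout and the generate_timing_data helper are assumptions, not the actual implementation.

    import queue
    import threading

    first_buffer = queue.Queue(maxsize=1024)   # first timing paths (first cache area)
    second_buffer = queue.Queue()              # timing data (second cache area)
    writer_lock = threading.Lock()             # the shared timing report writer

    def second_group_worker(out_file, generate_timing_data, done_event):
        while not done_event.is_set():
            # Call the timing data generator: take a path out of the first
            # cache area and store its timing data into the second cache area.
            try:
                path = first_buffer.get(timeout=0.1)
            except queue.Empty:
                continue
            second_buffer.put(generate_timing_data(path))

            # Try to call the timing report writer (non-blocking preemption).
            if writer_lock.acquire(blocking=False):
                try:
                    while True:                    # write out all fetched data
                        try:
                            data = second_buffer.get_nowait()
                        except queue.Empty:
                            break
                        out_file.write(data)
                finally:
                    writer_lock.release()
            # If the writer was not preempted, fall through and keep generating.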
Fig. 10 is a seventh flowchart of the timing report generation method provided in an embodiment of the present application. As shown in Fig. 10, in an alternative implementation, the method may further include:
S701, obtaining the waiting duration of the second thread group when taking a first timing path out of the first cache area.
S702, if the waiting duration reaches a preset duration threshold, adding at least one thread of the second thread group to the first thread group.
The second thread group is used to fetch first timing paths from the first cache area, and the waiting duration may be understood as the time a thread of the second thread group waits when fetching a first timing path from the first cache area. The second thread group includes at least one thread; in some embodiments, the waiting duration may be the average waiting duration of the threads of the second thread group when fetching first timing paths from the first cache area.
If the waiting duration reaches the preset duration threshold, the threads of the second thread group are spending a long time waiting for first timing paths; in other words, the second thread group has too many threads while the first thread group generates first timing paths too slowly, so threads of the second thread group must wait for paths that have not yet been generated. Based on this, at least one thread of the second thread group can be added to the first thread group to achieve load balancing, increasing the number of threads in the first thread group and decreasing the number in the second thread group.
In the timing report generation method of this embodiment, the number of threads in each thread group is dynamically adjusted based on the waiting duration, so that the load on the threads is balanced as much as possible and the utilization of CPU resources is improved.
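One possible realisation of the load-balancing rule of S701-S702 is sketched below. The thread groups are represented as plain lists of worker handles, and the averaging window and threshold are assumed values; the embodiment does not specify the scheduling mechanism at this level of detail.

    import time

    class WaitBalancer:
        """Moves one thread from the second (consumer) thread group to the
        first (producer) thread group when consumers wait too long for
        timing paths. Sketch only; the threshold is an assumption."""

        def __init__(self, wait_threshold_s=0.5):
            self.wait_threshold_s = wait_threshold_s
            self.wait_samples = []                 # per-fetch waiting durations

        def record_fetch_wait(self, started_at):
            # started_at is a time.monotonic() timestamp taken before the fetch.
            self.wait_samples.append(time.monotonic() - started_at)

        def maybe_rebalance(self, first_group, second_group):
            if not self.wait_samples:
                return
            avg_wait = sum(self.wait_samples) / len(self.wait_samples)
            self.wait_samples.clear()
            # A long average wait means path production is the bottleneck:
            # grow the first thread group and shrink the second by one thread.
            if avg_wait >= self.wait_threshold_s and len(second_group) > 1:
                first_group.append(second_group.pop())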
Fig. 11 is an eighth flowchart of the timing report generation method provided in an embodiment of the present application. As shown in Fig. 11, in an alternative implementation, the method may further include:
S801, acquiring the number of remaining timing paths in the first cache area every preset time interval.
S802, if the frequency with which the number of remaining timing paths meets the preset number reaches the preset frequency, adding at least one thread of the first thread group to the second thread group.
The number of remaining timing paths in the first cache area is sampled every preset time interval. If the frequency with which this number meets the preset number reaches the preset frequency, the first thread group is probably producing timing paths too quickly while the second thread group is taking them out of the first cache area too slowly. Based on this, at least one thread of the first thread group can be added to the second thread group to achieve load balancing.
The preset number may be the full capacity of the first cache area; that is, if the first cache area is observed to be full a certain number of times, the timing paths are probably being produced too quickly, and a thread is taken out of the first thread group and added to the second thread group.
In the timing report generation method of this embodiment, the number of threads in each thread group is dynamically adjusted based on the number of remaining timing paths in the first cache area, so that the load on the threads is balanced as much as possible and the utilization of CPU resources is improved.
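The complementary rule of S801-S802 can be sketched in the same style. The sampling period, the "full" criterion and the frequency threshold below are assumed values used only to make the idea concrete.

    class OccupancyBalancer:
        """Samples the first cache area every preset interval; once the cache
        has been observed full often enough, moves one thread from the first
        (producer) thread group to the second (consumer) thread group."""

        def __init__(self, capacity, full_count_threshold=3):
            self.capacity = capacity
            self.full_count_threshold = full_count_threshold
            self.full_count = 0

        def sample(self, remaining_paths, first_group, second_group):
            # remaining_paths: number of timing paths currently buffered in
            # the first cache area at this sampling instant.
            if remaining_paths >= self.capacity:        # preset number: "full"
                self.full_count += 1
            if self.full_count >= self.full_count_threshold and len(first_group) > 1:
                # The full state was observed often enough: producers outpace
                # consumers, so move one thread to the second thread group.
                second_group.append(first_group.pop())
                self.full_count = 0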
Based on the same inventive concept, an embodiment of the present application further provides a timing report generation apparatus corresponding to the timing report generation method. Since the principle by which the apparatus solves the problem is similar to that of the timing report generation method described above, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Fig. 12 is a schematic structural diagram of a timing report generating apparatus according to an embodiment of the present application, where the apparatus may be integrated in an electronic device. As shown in fig. 12, the apparatus may include:
an obtaining module 901, configured to obtain a timing report generation request of a target logic circuit, where the timing report generation request includes: a target path type;
a fetching module 902, configured to fetch, according to the target path type, at least one pre-cached first timing path of the target path type for the target logic circuit from the first cache area;
the fetching module 902 is further configured to sequentially fetch the pre-cached timing data of the at least one first timing path from the second cache area, where the timing data of each first timing path includes the signal arrival times corresponding to the input and output ends of the logic units on that path;
the generating module 903 is configured to generate a timing report of at least one first timing path according to the sequentially fetched timing data of at least one first timing path.
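The module split of Fig. 12 can also be mirrored by a small class skeleton, for readers who prefer code to block diagrams. The class, record and method names below are illustrative stand-ins chosen for this sketch, not the actual interfaces of the apparatus.

    from collections import namedtuple

    # Hypothetical path record; the real data structure is not specified here.
    TimingPath = namedtuple("TimingPath", ["path_id", "path_type"])

    class TimingReportApparatus:
        def __init__(self, first_cache, second_cache):
            self.first_cache = first_cache      # list of pre-cached TimingPath
            self.second_cache = second_cache    # dict: path_id -> timing data

        def obtain(self, request):
            # obtaining module 901: read the target path type from the request
            return request["target_path_type"]

        def fetch_paths(self, target_path_type):
            # fetching module 902: take out the pre-cached first timing paths
            # of the requested type for the target logic circuit
            return [p for p in self.first_cache if p.path_type == target_path_type]

        def fetch_timing_data(self, paths):
            # fetching module 902: take out each path's timing data in order
            return [self.second_cache[p.path_id] for p in paths]

        def generate(self, timing_data):
            # generating module 903: assemble the per-path data into one report
            return "".join(timing_data)

    # Example use (all values hypothetical):
    apparatus = TimingReportApparatus(
        first_cache=[TimingPath(0, "setup"), TimingPath(1, "hold")],
        second_cache={0: "path 0 data\n", 1: "path 1 data\n"},
    )
    paths = apparatus.fetch_paths(apparatus.obtain({"target_path_type": "setup"}))
    report = apparatus.generate(apparatus.fetch_timing_data(paths))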
In an alternative embodiment, the apparatus further comprises:
the processing module 904 is configured to perform path search of a target path type on the target logic circuit by using the first thread group to obtain a first timing path for the target logic circuit, and cache the first timing path in the first cache area;
the processing module 904 is further configured to perform a path search of a target path type on the target logic circuit by using the first thread group, obtain a second timing path for the target logic circuit, and cache the second timing path in the first cache area;
The processing module 904 is further configured to continue to perform path search of the target path type for the target logic circuit using the first thread group until all timing paths of the target path type are searched, and sequentially store all timing paths in the first buffer area.
In an alternative embodiment, the apparatus further comprises:
a determining module 905, configured to determine, if the preset data queue corresponding to the timing report is not empty, the memory addresses of the garbage data in the preset data queue, where the garbage data is generated in the process of generating the timing report;
the processing module 904 is further configured to clean up garbage data according to the memory address using the first thread group.
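The cooperation between the determining module 905 and the processing module 904 can be illustrated as follows. Here the memory addresses are modelled as dictionary keys and the preset data queue as a standard queue; both are simplifications made only so the sketch runs as written.

    import queue

    def clean_garbage(garbage_queue, memory_pool):
        """Drain the preset data queue and release the garbage data it points to.

        garbage_queue holds the addresses (here: dict keys) of temporary data
        produced while generating timing reports; memory_pool stands in for the
        process heap. The first thread group would call this between searches.
        """
        while True:
            try:
                address = garbage_queue.get_nowait()
            except queue.Empty:
                break                             # preset data queue is empty
            memory_pool.pop(address, None)        # free the garbage data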
In an alternative embodiment, the generating module 903 is further configured to:
sequentially generating time sequence data of at least one first time sequence path by adopting a second thread group, and storing the time sequence data of the at least one first time sequence path into a second buffer area.
In an alternative embodiment, if the at least one first timing path comprises a plurality of first timing paths, the generating module 903 is specifically configured to:
use the second thread group to write the timing data of one of the sequentially fetched first timing paths to a preset file, and release the processing resource of the data write-out action;
if the processing resource of the data write-out action is not occupied by another thread group, use the second thread group to write the timing data of another first timing path of the plurality of first timing paths to the preset file and release the processing resource of the data write-out action, until the timing data of all of the plurality of first timing paths has been written to the preset file, and take the written preset file as the timing report of the plurality of first timing paths, where the other first timing path is the path following the one first timing path.
In an alternative embodiment, the fetching module 902 is specifically configured to:
taking out time sequence data of a first time sequence path from a second cache area by adopting a second thread group;
and continuing to fetch another first timing path from the second cache area until the timing data of at least one first timing path is fetched from the second cache area by adopting the second thread group, wherein the another first timing path is the next path of the first timing path.
In an alternative embodiment, the fetching module 902 is specifically configured to:
judging whether the time sequence data of another first time sequence path in the second buffer area meets the preset data integrity condition by adopting a second thread group;
If the time sequence data of the other first time sequence path meets the preset data integrity condition, the second thread group is adopted to continuously fetch the time sequence data of the other first time sequence path from the second cache area.
In an alternative embodiment, the obtaining module 901 is further configured to:
obtain the waiting duration of the second thread group when taking a first timing path out of the first cache area;
the processing module 904 is further configured to add at least one thread in the second thread group to the first thread group if the waiting duration reaches a preset duration threshold.
In an alternative embodiment, the obtaining module 901 is further configured to:
acquire the number of remaining timing paths in the first cache area every preset time interval;
the processing module 904 is further configured to add at least one thread of the first thread group to the second thread group if the frequency with which the number of remaining timing paths meets the preset number reaches the preset frequency.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 13, the device may include: a processor 1001, a memory 1002 and a bus 1003. The memory 1002 stores machine-readable instructions executable by the processor 1001; when the electronic device is running, the processor 1001 communicates with the memory 1002 through the bus 1003, and the processor 1001 executes the machine-readable instructions to perform the timing report generation method described above.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the processor performs the timing report generation method described above.
In the embodiments of the present application, the computer program, when executed by a processor, may also execute other machine-readable instructions to perform the methods described in other embodiments; for the specific implementation of those method steps and principles, reference is made to the description of the corresponding embodiments, which is not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other manners of division in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be realized through some communication interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments provided in the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied, essentially or in the part contributing to the prior art or in part, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that like reference numerals and letters denote like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Furthermore, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments, and are intended to be encompassed within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A timing report generation method, comprising:
obtaining a timing report generation request of a target logic circuit, wherein the timing report generation request comprises the following steps: a target path type;
according to the target path type, at least one pre-cached first timing path aiming at the target path type of the target logic circuit is taken out from a first cache area;
sequentially taking out the pre-cached time sequence data of at least one first time sequence path from the second cache area, wherein the time sequence data of each first time sequence path comprises the signal arrival time corresponding to the input and output ends of the logic units on each first time sequence path;
and generating a time sequence report of the at least one first time sequence path according to the time sequence data of the at least one first time sequence path which are sequentially fetched.
2. The method of claim 1, wherein before the at least one pre-cached first timing path for the target path type of the target logic circuit is fetched from the first cache area, the method further comprises:
adopting a first thread group to perform path search of the target path type on the target logic circuit to obtain the first timing path aiming at the target logic circuit, and caching the first timing path into the first cache area;
The method further comprises the steps of:
adopting the first thread group to perform path search of the target path type on the target logic circuit to obtain a second time sequence path aiming at the target logic circuit, and caching the second time sequence path into the first cache area;
and continuing to search the path of the target path type for the target logic circuit by adopting the first thread group until all time sequence paths of the target path type are searched, and sequentially storing all time sequence paths into the first cache area.
3. The method of claim 2, wherein before continuing to perform the path search of the target path type for the target logic circuit by using the first thread group, the method further comprises:
if the preset data queue corresponding to the time sequence report is not empty, determining the memory address of the junk data in the preset data queue, wherein the junk data is generated in the process of generating the time sequence report;
and adopting the first thread group to clean the garbage data according to the memory address.
4. The method of claim 2, wherein before sequentially retrieving pre-buffered timing data of the at least one first timing path from the second buffer, the method further comprises:
And sequentially generating time sequence data of the at least one first time sequence path by adopting a second thread group, and storing the time sequence data of the at least one first time sequence path into the second cache area.
5. The method of claim 4, wherein the at least one first timing path comprises a plurality of first timing paths; and the generating a timing report of the at least one first timing path according to the sequentially fetched timing data of the at least one first timing path comprises:
the second thread group is adopted, time sequence data of one first time sequence path in the plurality of sequentially taken first time sequence paths is written into a preset file, and processing resources of the data writing action are released;
if the processing resources of the data writing-out action are not occupied by other thread groups, adopting the second thread group to write out the time sequence data of another first time sequence path in the plurality of first time sequence paths to the preset file, releasing the processing resources of the data writing-out action until the time sequence data of the plurality of first time sequence paths are all written out to the preset file, and taking the written-out preset file as a time sequence report of the plurality of first time sequence paths, wherein the another first time sequence path is the next path of the one first time sequence path.
6. The method of claim 1, wherein sequentially retrieving pre-buffered timing data of the at least one first timing path from the second buffer region comprises:
taking out time sequence data of a first time sequence path from the second cache area by adopting a second thread group;
and continuing to take out another first timing path from the second cache area until the timing data of the at least one first timing path is taken out from the second cache area by adopting the second thread group, wherein the another first timing path is the next path of the one first timing path.
7. The method of claim 6, wherein using the second thread group to continue fetching another first timing path from the second cache region comprises:
judging whether the time sequence data of the other first time sequence path in the second buffer area meets the preset data integrity condition or not by adopting the second thread group;
and if the time sequence data of the other first time sequence path meets the preset data integrity condition, adopting the second thread group to continuously fetch the time sequence data of the other first time sequence path from the second cache area.
8. The method according to claim 4, wherein the method further comprises:
obtaining the waiting time length of the second thread group when the first time sequence path is taken out of the first cache area;
and if the waiting time length reaches a preset time length threshold value, adding at least one thread in the second thread group to the first thread group.
9. The method according to claim 4, wherein the method further comprises:
acquiring the number of remaining time sequence paths in the first cache region every preset time length;
and if the frequency with which the number of the remaining time sequence paths meets the preset number reaches the preset frequency, adding at least one thread in the first thread group into the second thread group.
10. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor communicating with the memory over the bus when the electronic device is running, and the processor executing the machine-readable instructions to perform the timing report generating method of any one of claims 1 to 9.
CN202310350907.XA 2023-03-28 2023-03-28 Time sequence report generation method and device Active CN116090382B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310350907.XA CN116090382B (en) 2023-03-28 2023-03-28 Time sequence report generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310350907.XA CN116090382B (en) 2023-03-28 2023-03-28 Time sequence report generation method and device

Publications (2)

Publication Number Publication Date
CN116090382A true CN116090382A (en) 2023-05-09
CN116090382B CN116090382B (en) 2023-06-23

Family

ID=86201045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310350907.XA Active CN116090382B (en) 2023-03-28 2023-03-28 Time sequence report generation method and device

Country Status (1)

Country Link
CN (1) CN116090382B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120036509A1 (en) * 2010-08-06 2012-02-09 Sonics, Inc Apparatus and methods to concurrently perform per-thread as well as per-tag memory access scheduling within a thread and across two or more threads
CN103403719A (en) * 2010-12-16 2013-11-20 辛奥普希斯股份有限公司 Simultaneous multi-corner static timing analysis using samples-based static timing infrastructure
US20120311515A1 (en) * 2011-06-02 2012-12-06 International Business Machines Corporation Method For Performing A Parallel Static Timing Analysis Using Thread-Specific Sub-Graphs
US20150169813A1 (en) * 2013-12-18 2015-06-18 International Business Machines Corporation Creating an end point report based on a comprehensive timing report
US9589096B1 (en) * 2015-05-19 2017-03-07 Cadence Design Systems, Inc. Method and apparatus for integrating spice-based timing using sign-off path-based analysis
US20170193152A1 (en) * 2016-01-05 2017-07-06 International Business Machines Corporation System and method for combined path tracing in static timing analysis
CN109558345A (en) * 2017-09-27 2019-04-02 展讯通信(上海)有限公司 Memory selection method and device
CN113971383A (en) * 2020-07-24 2022-01-25 美商新思科技有限公司 Distributed static timing analysis
CN113268501A (en) * 2021-05-26 2021-08-17 杭州迪普科技股份有限公司 Report generation method and device
CN114841104A (en) * 2022-05-09 2022-08-02 Oppo广东移动通信有限公司 Time sequence optimization circuit and method, chip and electronic equipment
CN115017846A (en) * 2022-07-15 2022-09-06 飞腾信息技术有限公司 Interface-based time sequence repairing method, equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xia Hongmei; Li Chengshi; Han Fang; Lai Zongsheng: "Static Timing Analysis of Deep Submicron VLSI Circuits", Microcomputer Information, no. 08, pages 215 - 219 *
Su Yang; Zhao Yingxiao; Huang Rui; Zhang Yue; Chen Zengping: "Multi-Channel Parallel Data Acquisition and Timing Resource Optimization on FPGA", Microcontrollers & Embedded Systems, no. 07, pages 19 - 22 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118378581A (en) * 2024-06-26 2024-07-23 南京芯驰半导体有限公司 Processing method and device for chip time sequence

Also Published As

Publication number Publication date
CN116090382B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
US9787706B1 (en) Modular architecture for analysis database
KR101963917B1 (en) Automatic synchronization of most recently used document lists
CN108153587B (en) Slow task reason detection method for big data platform
US20150309846A1 (en) Parallel priority queue utilizing parallel heap on many-core processors for accelerating priority-queue-based applications
CN116090382B (en) Time sequence report generation method and device
US11269692B2 (en) Efficient sequencer for multiple concurrently-executing threads of execution
CN104035938A (en) Performance continuous integration data processing method and device
CN113791889B (en) Method for deploying learning model based on multistage polling queue controller
CN112035229A (en) Calculation graph processing method and device and storage medium
US20160299834A1 (en) State storage and restoration device, state storage and restoration method, and storage medium
CN112748855B (en) Method and device for processing high concurrency data request
CN110888739B (en) Distributed processing method and device for delayed tasks
CN110955461B (en) Processing method, device, system, server and storage medium for computing task
CN112486468A (en) Spark kernel-based task execution method and system and computer equipment
Rexha et al. A comparison of three page replacement algorithms: FIFO, LRU and optimal
CN110727666A (en) Cache assembly, method, equipment and storage medium for industrial internet platform
CN115757039A (en) Program monitoring method and device, electronic equipment and storage medium
US11734279B2 (en) Event sequences search
CN106354722B (en) Message processing method and device for streaming computing system
CN112181825A (en) Test case library construction method and device, electronic equipment and medium
CN111309475B (en) Detection task execution method and equipment
CN113868249A (en) Data storage method and device, computer equipment and storage medium
CN113419832A (en) Processing method and device of delay task and terminal
CN114741434B (en) Pre-statistical method and system for massive ES search data
CN110968595A (en) Single-thread sql statement execution method, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant