CN114429101B - Drawing method, device, equipment and medium for AI processor architecture - Google Patents


Info

Publication number
CN114429101B
CN114429101B (Application CN202210352918.7A)
Authority
CN
China
Prior art keywords
layer
layout
data
architecture
model
Prior art date
Legal status
Active
Application number
CN202210352918.7A
Other languages
Chinese (zh)
Other versions
CN114429101A (en)
Inventor
Wei Bin (魏斌)
Current Assignee
Beijing Suiyuan Intelligent Technology Co ltd
Original Assignee
Beijing Suiyuan Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Suiyuan Intelligent Technology Co ltd filed Critical Beijing Suiyuan Intelligent Technology Co ltd
Priority to CN202210352918.7A
Publication of CN114429101A
Application granted
Publication of CN114429101B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/39Circuit design at the physical level
    • G06F30/392Floor-planning or layout, e.g. partitioning or placement

Abstract

The embodiment of the invention discloses a drawing method, apparatus, device and medium for an AI processor architecture. The drawing method for the AI processor architecture comprises the following steps: acquiring the current-layer drawing elements corresponding to the current drawing architecture layer; if the current drawing architecture layer does not belong to the top-level architecture layer, organizing and laying out the drawing elements, determining relative offset data of each drawing element with respect to its laid-out parent domain model according to the layout result, and returning to the operation of acquiring the current-layer drawing elements corresponding to the current drawing architecture layer; if the current drawing architecture layer belongs to the top-level architecture layer, organizing and laying out the drawing elements and determining their absolute layout data; determining the absolute layout data corresponding to each bottom-layer drawing element; and drawing the AI processor architecture diagram according to the absolute layout data. The technical solution of the embodiment of the invention can automatically draw AI processor architecture diagrams, unify the drawing standard of AI processor architecture diagrams, and reduce interfacing errors.

Description

Drawing method, device, equipment and medium for AI processor architecture
Technical Field
The embodiment of the invention relates to the technical field of data processing, and in particular to a drawing method, apparatus, device and medium for an AI processor architecture.
Background
With the continuous advance of chip manufacturing processes, the compute density (homogeneous or heterogeneous) of an AI (Artificial Intelligence) processor deployed per unit wafer area keeps increasing, so that the logic design density of the AI processor per unit area continuously grows and the number of hardware components on a single AI processor chip increases exponentially, which places high demands on the design of the AI processor architecture.
However, an architecture often involves tens to hundreds of hardware components and hundreds to thousands of data channels, making manual drawing of AI processor architecture diagrams increasingly difficult and error prone. Moreover, the hand-drawn diagrams of different teams follow no uniform standard, are difficult to reconcile, and often lead to execution or interfacing errors that cause unnecessary collaboration iterations. Automatic drawing of AI processor architecture diagrams under a uniform drawing standard has therefore become an urgent problem in the field of chip design.
Disclosure of Invention
Embodiments of the present invention provide a method, an apparatus, a device, and a medium for drawing an AI processor architecture, which can automatically draw an AI processor architecture diagram, unify the drawing standard of AI processor architecture diagrams, and reduce interfacing errors.
In a first aspect, an embodiment of the present invention provides a method for drawing an AI processor architecture, including:
sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large, wherein the current-layer drawing elements comprise: a domain model or a hardware component model, the domain model comprising a sub-hardware-component model and/or a sub-domain model;
if it is determined that the current drawing architecture layer does not belong to the top-level architecture layer, organizing and laying out the current-layer drawing elements to obtain the parent domain model corresponding to each current-layer drawing element as a drawing element of the next drawing architecture layer;
determining, according to the layout result, relative offset data of each current-layer drawing element with respect to its laid-out parent domain model, and returning to execute the operation of sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large;
if it is determined that the current drawing architecture layer belongs to the top-level architecture layer, organizing and laying out the current-layer drawing elements and determining absolute layout data of each current-layer drawing element;
determining, step by step, the absolute layout data respectively corresponding to the bottom-layer drawing elements according to the absolute layout data of the current-layer drawing elements and the predetermined relative offset data;
and drawing an AI processor architecture diagram according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements.
In a second aspect, an embodiment of the present invention further provides a drawing apparatus for an AI processor architecture, including:
the data acquisition module is used for sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large; the current-layer drawing elements comprise: a domain model or a hardware component model, the domain model comprising a sub-hardware-component model and/or a sub-domain model;
the first nested layout module is used for organizing and laying out the current-layer drawing elements if it is determined that the current drawing architecture layer does not belong to the top-level architecture layer, obtaining the parent domain model corresponding to each current-layer drawing element as a drawing element of the next drawing architecture layer;
the second nested layout module is used for determining, according to the layout result, relative offset data of each current-layer drawing element with respect to its laid-out parent domain model, and returning to execute the operation of sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large;
the first data determination module is used for organizing and laying out the current-layer drawing elements and determining absolute layout data of each current-layer drawing element if it is determined that the current drawing architecture layer belongs to the top-level architecture layer;
the second data determination module is used for determining, step by step, the absolute layout data corresponding to the bottom-layer drawing elements according to the absolute layout data of the current-layer drawing elements and the predetermined relative offset data;
and the AI processor architecture drawing module is used for drawing an AI processor architecture diagram according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the drawing method of the AI processor architecture provided in any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the drawing method of the AI processor architecture provided in any embodiment of the present invention.
According to the technical solution of this embodiment, the current-layer drawing elements corresponding to the current drawing architecture layer are sequentially acquired in order of drawing scale from small to large. If it is determined that the current drawing architecture layer does not belong to the top-level architecture layer, the current-layer drawing elements are organized and laid out to obtain the parent domain model corresponding to each current-layer drawing element as a drawing element of the next drawing architecture layer; the relative offset data of each current-layer drawing element with respect to its laid-out parent domain model is then determined according to the layout result, and the operation of sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large is executed again. If it is determined that the current drawing architecture layer belongs to the top-level architecture layer, the current-layer drawing elements are organized and laid out and their absolute layout data is determined; the absolute layout data corresponding to each bottom-layer drawing element is then determined step by step according to the absolute layout data of the current-layer drawing elements and the predetermined relative offset data, and the AI processor architecture diagram is drawn according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements.
The processor architecture is abstracted using domains and is organized and laid out layer by layer based on architecture layers, realizing a reasonable abstraction and hierarchical layout of the processor architecture. After layout, the offset of each current-layer drawing element with respect to its laid-out parent domain model can be determined automatically, and the layout data of the bottom-layer drawing elements is determined automatically when an architecture layer of larger scope is drawn, achieving the effect of the top-level architecture layer driving the cooperative layout of the lower architecture layers. This solves the problems in the prior art that manual drawing of AI processor architecture diagrams is difficult, lacks a uniform standard, and frequently produces interfacing errors: AI processor architecture diagrams can be drawn automatically, the drawing standard of AI processor architecture diagrams is unified, and interfacing errors are reduced.
Drawings
Fig. 1 is a flowchart of a drawing method of an AI processor architecture according to a first embodiment of the present invention;
fig. 2 is a flowchart of a drawing method of an AI processor architecture according to a second embodiment of the present invention;
FIG. 3 is a diagram illustrating a parent domain model including a layer of domain nesting according to a second embodiment of the present invention;
FIG. 4 is a diagram illustrating a data structure of a drawing model according to a second embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating an AI processor architecture diagram algorithm flow according to a second embodiment of the invention;
fig. 6 is a schematic diagram illustrating an AI processor architecture drawing result according to a second embodiment of the present invention;
fig. 7 is a schematic diagram illustrating a result of a mirror inversion process according to a second embodiment of the present invention;
FIG. 8 is a diagram illustrating an AI processor architecture graph compression algorithm flow according to a second embodiment of the invention;
fig. 9 is a schematic diagram illustrating an AI processor architecture drawing result after compression processing according to a second embodiment of the present invention;
fig. 10 is a schematic diagram illustrating hardware components of an AI processor architecture according to a second embodiment of the present invention;
fig. 11 is a diagram illustrating an AI processor architecture drawing result according to a second embodiment of the present invention;
fig. 12 is a schematic diagram illustrating data flow in an AI processor architecture according to a second embodiment of the present invention;
fig. 13 is a diagram illustrating data flow in an alternative AI processor architecture according to a second embodiment of the present invention;
fig. 14 is a schematic diagram of a drawing apparatus of an AI processor architecture according to a third embodiment of the present invention;
fig. 15 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
A processor architecture contains multiple local domains. The hardware components within a domain have an independent interconnect that guarantees their cooperation; communication with hardware components outside the domain is realized through a bridging unit or a hardware component with a bridging function, so that the hardware components inside the domain can communicate with those outside it. A hardware component group with these characteristics (containing at least two hardware components) can be called a domain, and domains can be nested, placed in parallel, and extended without limit. Hardware components are the basic working units constituting the processor architecture and can be divided by functional role into execution units, bridging units and bus units. An execution unit can initiate or process a data exchange request as the start point or end point of a data exchange, which may span multiple domains. A bridging unit does not initiate or receive data exchange requests but merely acts as a forwarder, serving as a bridge between different domains. A bus unit carries the interconnect arrangement among all hardware components within the domain in which it is located; there is only one bus unit per domain.
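The taxonomy above (nested domains, execution/bridging/bus units, exactly one bus unit per domain) can be sketched as data structures. This is a minimal illustrative model; the class and field names are assumptions, not part of the patent:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Union

class UnitRole(Enum):
    EXECUTION = "execution"  # start or end point of a data exchange
    BRIDGE = "bridge"        # forwards requests between domains, never initiates
    BUS = "bus"              # carries the interconnect inside one domain

@dataclass
class HardwareComponent:
    name: str
    role: UnitRole

@dataclass
class Domain:
    name: str
    # a domain nests at least two members: sub-domains and/or hardware components
    children: List[Union["Domain", HardwareComponent]] = field(default_factory=list)

    def bus(self) -> HardwareComponent:
        # the text states there is exactly one bus unit per domain
        buses = [c for c in self.children
                 if isinstance(c, HardwareComponent) and c.role is UnitRole.BUS]
        if len(buses) != 1:
            raise ValueError(f"domain {self.name} must contain exactly one bus unit")
        return buses[0]
```

Because `Domain.children` may itself contain `Domain` instances, nesting and parallel placement of domains fall out of the structure directly.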
Fig. 1 is a flowchart of a drawing method of an AI processor architecture according to a first embodiment of the present invention. The embodiment is applicable to the case of automatically drawing an AI processor architecture diagram, and the method can be executed by a drawing apparatus of an AI processor architecture, which can be implemented in software and/or hardware and can generally be integrated in an electronic device. The electronic device may be a terminal device, a server device, or the like; the embodiment of the present invention does not limit the type of electronic device that executes the drawing method of the AI processor architecture. Accordingly, as shown in fig. 1, the method comprises the following operations:
and S110, sequentially obtaining the drawing elements of the current layer corresponding to the current drawing framework layer according to the sequence of the drawing scales from small to large.
The drawing scale may be used to measure the size of a drawing element. The current drawing architecture layer may be the architecture layer whose models currently need to be laid out; the layout models of the same architecture layer have the same scale. The current-layer drawing elements may be the models that need to be drawn in the current drawing architecture layer. The current-layer drawing elements may include a domain model or a hardware component model, and the domain model may include a sub-hardware-component model and/or a sub-domain model. Illustratively, when the current-layer drawing elements include a domain model and a hardware component model, and the domain model includes a sub-hardware-component model and a sub-domain model, the drawing scale of the domain model in the current-layer drawing elements is consistent with that of the hardware component model, and the drawing scale of the sub-hardware-component model under the domain model is consistent with that of the sub-domain model. The hardware component model may be a drawing model that characterizes a hardware component. The domain model may be a drawing model that characterizes a domain. The sub-hardware-component model may be the drawing model of a hardware component nested under the domain model, belonging to the same architecture layer as the sub-domain model. The sub-domain model may be the drawing model of a sub-domain nested under the domain model. It can be understood that sub-domain models can in turn nest further sub-domain models, and so on; one domain model can nest multiple lower-level domain models.
Illustratively, the hardware component model may be a rectangle, and the domain model may be a nested combination of at least two rectangles.
In the embodiment of the present invention, the AI processor architecture may first be abstractly divided into domains to obtain a domain division result, and the domain to which each hardware component in the AI processor architecture belongs is determined according to that result. A connection list for each hardware component is then determined according to the connection relationships of the hardware components in the AI processor and the domains to which they belong. The AI processor architecture is further processed into layers, and the current-layer drawing elements corresponding to the current drawing architecture layer are sequentially acquired, according to the connection list of each hardware component, in order of drawing scale from small to large, that is, in order from the bottom layer to the top layer of the layering result.
For example, after the abstract domain partitioning of the AI processor architecture, an instantiation path can be generated for each hardware component (from the top-level domain of the AI processor architecture down through the domains traversed to the hardware component's concrete location), for example: Fc00_Fc01_Fc02 indicates that the position of the hardware component is reached from the top-level domain Fc00 of the AI processor architecture, into the sub-domain Fc01, and further into Fc01's sub-domain Fc02. Thus, the location of a hardware component is described by the continuous concatenation of successive "parent_child" domains.
No hardware component exists in isolation; each must cooperate with other hardware components over the internal interconnect of the AI processor. Therefore, the port description of each hardware component is important. The ports of a hardware component are defined as follows:
Modu: the name of the hardware component, unique within the domain it belongs to. If: the port number of the hardware component. M: indicates that the port role is the request initiator. S: indicates that the port role is the request receiver. A port interconnect must be initiated from M to S.
Illustratively, one example of a connection in the connection list is as follows:
Fc01:Fc00_Fc01_Modu00-If00_M→Fc01:Fc00_Fc01_Modu01-If00_S
This connection represents a connection from port number 00 of the hardware component Modu00, located in the domain Fc01, to port number 00 of the hardware component Modu01, also located in the domain Fc01.
After architecture modeling, the connection list is obtained from the existing architecture definition and provided to the drawing engine, from which the processor architecture diagram can be obtained. The only input the whole drawing engine requires is the connection list; no other complicated constraints are needed.
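The endpoint and connection notation described above can be parsed mechanically. The following is a hedged sketch; the `Endpoint` type and the function names are invented for illustration and only assume the format shown in the example connection:

```python
import re
from typing import NamedTuple, Tuple

class Endpoint(NamedTuple):
    domain: str            # domain that owns the port (text before the ':')
    path: Tuple[str, ...]  # instantiation path of nested domains
    component: str         # hardware component name (last path segment)
    port: str              # e.g. "If00"
    role: str              # "M" = request initiator, "S" = request receiver

_ENDPOINT = re.compile(r"^(\w+):(\w+)-(\w+)_([MS])$")

def parse_endpoint(text: str) -> Endpoint:
    m = _ENDPOINT.match(text)
    if m is None:
        raise ValueError(f"malformed endpoint: {text!r}")
    domain, full_path, port, role = m.groups()
    # the instantiation path ends with the component's own name
    *path, component = full_path.split("_")
    return Endpoint(domain, tuple(path), component, port, role)

def parse_connection(line: str) -> Tuple[Endpoint, Endpoint]:
    src_text, dst_text = line.split("→")
    src, dst = parse_endpoint(src_text), parse_endpoint(dst_text)
    # the text states a port interconnect must be initiated from M to S
    if src.role != "M" or dst.role != "S":
        raise ValueError("a port interconnect must be initiated from M to S")
    return src, dst
```

Applied to the example above, `parse_connection` yields two `Endpoint` records for Modu00 and Modu01 in domain Fc01 and rejects any connection not directed from M to S.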
Optionally, when the AI processor architecture is processed into layers, the hardware component with the smallest drawing scale is located at the bottommost layer of the layering result, the smallest domain containing that hardware component belongs to the layer above the bottommost layer, and the layer above the bottommost layer may also contain hardware components.
S120, if it is determined that the current drawing architecture layer does not belong to the top-level architecture layer, organizing and laying out the current-layer drawing elements to obtain the parent domain model corresponding to each current-layer drawing element as a drawing element of the next drawing architecture layer.
The top-level architecture layer may be the architecture layer with the largest drawing scale in the AI processor architecture, i.e., the top layer of the layering result. The parent domain model may be the drawing model of the upper-level domain to which the current domain belongs.
In the embodiment of the present invention, the current drawing architecture layer may be matched against the layering result. If the current drawing architecture layer does not belong to the top-level architecture layer, the current-layer drawing elements are organized and laid out by domain on the AI processor architecture drawing interface to obtain the parent domain model corresponding to each current-layer drawing element, and each such parent domain model is used as a drawing element of the next drawing architecture layer.
S130, determining, according to the layout result, relative offset data of each current-layer drawing element with respect to its laid-out parent domain model, and returning to execute the operation of sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large.
The relative offset data may be used to characterize the spatial offset of a current-layer drawing element relative to its laid-out parent domain model, and may include a horizontal offset and a vertical offset.
In the embodiment of the invention, after the current-layer drawing elements are organized and laid out, a layout result is obtained. According to the layout position of each current-layer drawing element within its corresponding parent domain model in the layout result, the relative offset data of each current-layer drawing element with respect to its laid-out parent domain model is determined, and the operation of sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large is then executed again.
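The relative-offset computation for one laid-out non-top architecture layer can be sketched as follows. The sketch hypothetically assumes that the layout step yields axis-aligned bounding boxes in layout-interface coordinates; the function and parameter names are illustrative:

```python
from typing import Dict, Tuple

BBox = Tuple[float, float, float, float]  # (x, y, width, height) on the layout interface

def relative_offsets(parent_bbox: BBox,
                     element_bboxes: Dict[str, BBox]) -> Dict[str, Tuple[float, float]]:
    """Record each current-layer drawing element's offset relative to the
    origin of its laid-out parent domain model, keeping the horizontal and
    vertical offsets separately as the text describes."""
    px, py, _, _ = parent_bbox
    return {name: (x - px, y - py)
            for name, (x, y, _w, _h) in element_bboxes.items()}
```

Once these offsets are stored, the parent domain model can be moved freely at the next architecture layer without re-laying-out its children.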
S140, if it is determined that the current drawing architecture layer belongs to the top-level architecture layer, organizing and laying out the current-layer drawing elements and determining absolute layout data of the current-layer drawing elements.
The absolute layout data may be the layout coordinates, on the layout interface, of each current-layer drawing element when the current drawing architecture layer belongs to the top-level architecture layer.
In the embodiment of the present invention, the current drawing architecture layer may be matched against the layering result. If the current drawing architecture layer belongs to the top-level architecture layer, the current-layer drawing elements contained in the top-level architecture layer may be further determined and then organized and laid out on the AI processor architecture drawing interface, obtaining the absolute layout data of each current-layer drawing element on the AI processor architecture drawing interface.
S150, determining, step by step, the absolute layout data respectively corresponding to the bottom-layer drawing elements according to the absolute layout data of the current-layer drawing elements and the predetermined relative offset data.
The bottom-layer drawing elements may be the drawing elements of architecture layers whose organizational layout was completed before the current architecture layer was drawn. The bottom-layer drawing elements may include a domain model and/or a hardware component model.
In the embodiment of the invention, the current-layer drawing elements of each architecture layer, from the top-level architecture layer down to the bottom layer, are determined in order of drawing scale from large to small, that is, in order from the top layer to the bottom layer of the layering result, so that the absolute layout data of the bottom-layer drawing elements under the top-level architecture layer is calculated from the absolute layout data of the current-layer drawing elements and the predetermined relative offset data.
For example, each domain model among the current-layer drawing elements of the top-level architecture layer may be determined, and the relative offset data of the next-level sub-domain models contained in each domain model with respect to the corresponding domain model may then be determined. The absolute layout data corresponding to each bottom-layer drawing element in the architecture layer immediately below the top-level architecture layer is determined from the absolute layout data of each domain model among the current-layer drawing elements and the relative offset data of the next-level sub-domain models with respect to the corresponding domain models. By analogy, proceeding layer by layer from the top-level architecture layer downward, the absolute layout data corresponding to every bottom-layer drawing element can be obtained.
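The step-by-step derivation of absolute layout data from the top-level positions and the predetermined relative offsets can be sketched recursively. The element names and the dictionary-based containment structure below are illustrative assumptions:

```python
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]

def propagate_absolute(element: str,
                       origin: Point,
                       rel_offset: Dict[str, Point],
                       children: Dict[str, Tuple[str, ...]],
                       out: Optional[Dict[str, Point]] = None) -> Dict[str, Point]:
    """Starting from a top-level element whose absolute layout data is known,
    derive absolute coordinates for every nested drawing element step by
    step: each child's absolute position is its parent's absolute position
    plus the child's predetermined relative offset."""
    if out is None:
        out = {}
    out[element] = origin
    for child in children.get(element, ()):
        dx, dy = rel_offset[child]
        propagate_absolute(child, (origin[0] + dx, origin[1] + dy),
                           rel_offset, children, out)
    return out
```

This is what lets the top-level architecture layer "drive" the lower layers: only the top-level layout is computed in absolute terms, and everything beneath it follows from stored offsets.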
S160, drawing an AI processor architecture diagram according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements.
In the embodiment of the present invention, the coordinates of each current-layer drawing element and of each bottom-layer drawing element on the layout interface may be determined from the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements, and the AI processor architecture diagram is drawn according to those coordinates. During processor architecture evaluation and verification, scene descriptions and definitions tend to be abstract and limited. Visualizing and standardizing these complex abstract definitions can greatly improve work efficiency and lower the threshold for collaboration, which matters even more for the targeted use of the mass data produced by processor architecture performance evaluation and verification.
According to the technical solution of this embodiment, the current-layer drawing elements corresponding to the current drawing architecture layer are sequentially acquired in order of drawing scale from small to large. If it is determined that the current drawing architecture layer does not belong to the top-level architecture layer, the current-layer drawing elements are organized and laid out to obtain the parent domain model corresponding to each current-layer drawing element as a drawing element of the next drawing architecture layer; the relative offset data of each current-layer drawing element with respect to its laid-out parent domain model is then determined according to the layout result, and the operation of sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large is executed again. If it is determined that the current drawing architecture layer belongs to the top-level architecture layer, the current-layer drawing elements are organized and laid out and their absolute layout data is determined; the absolute layout data corresponding to each bottom-layer drawing element is then determined step by step according to the absolute layout data of the current-layer drawing elements and the predetermined relative offset data, and the AI processor architecture diagram is drawn according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements.
The processor architecture is abstracted using domains and is organized and laid out layer by layer based on architecture layers, realizing a reasonable abstraction and hierarchical layout of the processor architecture. After layout, the offset of each current-layer drawing element with respect to its laid-out parent domain model can be determined automatically, and the layout data of the bottom-layer drawing elements is determined automatically when an architecture layer of larger scope is drawn, achieving the effect of the top-level architecture layer driving the cooperative layout of the lower architecture layers. This solves the problems in the prior art that manual drawing of AI processor architecture diagrams is difficult, lacks a uniform standard, and frequently produces interfacing errors: AI processor architecture diagrams can be drawn automatically, the drawing standard of AI processor architecture diagrams is unified, and interfacing errors are reduced.
Embodiment Two
Fig. 2 is a flowchart of a drawing method for an AI processor architecture according to a second embodiment of the present invention. This embodiment is refined on the basis of the above embodiment and provides a specific optional implementation carried out before returning to execute the operation of obtaining, in order of drawing scale from small to large, the current-layer drawing elements corresponding to the current drawing architecture layer.
As shown in fig. 2, the method of the embodiment of the present invention specifically includes:
and S210, sequentially obtaining the drawing elements of the current layer corresponding to the current drawing framework layer according to the sequence of the drawing scales from small to large.
S220, if the current drawing architecture layer is determined not to belong to the top-level architecture layer, organize and lay out the current-layer drawing elements to obtain the parent domain models corresponding to the current-layer drawing elements as the drawing elements of the next drawing architecture layer.
In an optional embodiment of the present invention, organizing and laying out the current-layer drawing elements may include: acquiring the first layout region and the second layout region of the parent domain model matched with the current-layer drawing elements; laying out the current-layer drawing elements in up-down symmetry in the first and second layout regions of the matched parent domain model; and laying out the current-layer drawing elements of the first and second layout regions in left-right symmetry to obtain a symmetric layout diagram of the parent domain.
The first layout region and the second layout region are both regions of the parent domain model in which current-layer drawing elements are laid out; the two regions are distributed symmetrically about the straight line passing horizontally through the center of gravity of the parent domain model. The up-down symmetric layout is an operation of laying out the current-layer drawing elements of the first and second layout regions symmetrically about that horizontal line. The left-right symmetric layout is an operation of laying out the current-layer drawing elements of the first and second layout regions symmetrically about the straight line passing vertically through the center of gravity of the parent domain model. The parent domain symmetric layout diagram is the layout diagram obtained after the current-layer drawing elements of the first and second layout regions have been laid out in left-right symmetry.
In the embodiment of the present invention, the layout region of the parent domain model matched with the current-layer drawing elements may be trisected along its left boundary to obtain three regions: the region adjoining the upper boundary of the parent domain model is the first layout region, and the region adjoining the lower boundary is the second layout region. The current-layer drawing elements are then laid out in up-down symmetry in the first and second layout regions of the matched parent domain model. When the current-layer drawing elements cannot be laid out in equal numbers in the first and second layout regions, it must be ensured that the absolute value of the difference between the numbers of current-layer drawing elements laid out in the two regions is less than or equal to 1. After the up-down symmetric layout is completed, the current-layer drawing elements of the first and second layout regions may further be laid out in left-right symmetry, and a bus model (the drawing model of a bus unit) is laid out in the region between the first and second layout regions, yielding the symmetric layout diagram of each parent domain.
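The even split between the two layout regions can be sketched as follows. This is a minimal Python illustration under the "difference at most 1" rule above; the function name, the alternating-index split and the element names are all assumptions for the example, as the patent does not specify an implementation:

```python
def split_symmetric(elements):
    """Partition the current-layer drawing elements between the Upper (first)
    and Lower (second) layout regions so their counts differ by at most 1."""
    upper = elements[0::2]   # even indices go to the Upper region
    lower = elements[1::2]   # odd indices go to the Lower region
    assert abs(len(upper) - len(lower)) <= 1
    return upper, lower
```

With five elements this yields three in the Upper region and two in the Lower region, satisfying the constraint.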
Fig. 3 is a schematic diagram of a parent domain model containing one layer of domain nesting according to a second embodiment of the present invention. As shown in Fig. 3, each domain model or hardware component model is represented by a rectangle; the Upper region (first layout region) and the Lower region (second layout region) serve as layout regions for non-bus hardware components, and the Middle region serves as the drawing region of the bus model, representing the set of interconnection channels of all hardware components contained in the Upper and Lower regions. Domains such as Fc00 and Fc01 can be nested without limit, toward either the parent layers or the child layers.
Illustratively, for each rectangle characterizing a drawing model, a data structure is built as shown in Fig. 4 to give a graphical representation of the hardware component or domain, where X is the abscissa of the lower-left corner of the rectangle, Y the ordinate of the lower-left corner, W the width of the rectangle, and H its height; c-x and c-y are the abscissa and ordinate of the rectangle's center of gravity; b-x and b-y the coordinates of the midpoint of the base (lower boundary); l-x and l-y the coordinates of the midpoint of the left side (left boundary); r-x and r-y the coordinates of the midpoint of the right side (right boundary); and t-x and t-y the coordinates of the midpoint of the top side (upper boundary).
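The rectangle data structure of Fig. 4 can be sketched as a small class whose anchor points are all derived from the lower-left corner, width and height. The field and property names below are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Rectangle for a drawing model: (x, y) is the lower-left corner,
    w the width, h the height; the remaining anchors are derived."""
    x: float
    y: float
    w: float
    h: float

    @property
    def center(self):      # (c-x, c-y): center of gravity of the rectangle
        return (self.x + self.w / 2, self.y + self.h / 2)

    @property
    def base_mid(self):    # (b-x, b-y): midpoint of the lower boundary
        return (self.x + self.w / 2, self.y)

    @property
    def left_mid(self):    # (l-x, l-y): midpoint of the left boundary
        return (self.x, self.y + self.h / 2)

    @property
    def right_mid(self):   # (r-x, r-y): midpoint of the right boundary
        return (self.x + self.w, self.y + self.h / 2)

    @property
    def top_mid(self):     # (t-x, t-y): midpoint of the upper boundary
        return (self.x + self.w / 2, self.y + self.h)
```

Storing only the four base fields and deriving the anchors keeps every representation consistent when a rectangle is moved or resized.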
S230, determine the relative offset data of the current-layer drawing elements with respect to the laid-out parent domain models according to the layout result, and return to execute the operation of sequentially obtaining the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large.
In an optional embodiment of the present invention, after the relative offset data of each current-layer drawing element with respect to the laid-out parent domain model is determined according to the layout result, the method may include: determining the width boundary data of each parent domain symmetric layout diagram according to the relative offset data of the current-layer drawing elements with respect to the laid-out parent domain models; and refreshing the layout width of each parent domain model according to the width boundary data of its symmetric layout diagram.
The width boundary data is data characterizing the abscissa of the right boundary of the symmetric layout diagram of a parent domain.
In the embodiment of the present invention, the current-layer drawing element with the largest abscissa in each parent domain model may be determined from the relative offset data of the current-layer drawing elements with respect to the laid-out parent domain models; the width boundary data of each parent domain symmetric layout diagram is then determined from the abscissa of that element and the preset interval between a current-layer drawing element and the parent domain model boundary, and the layout width of each parent domain model is refreshed from the left boundary coordinate and the width boundary data of its symmetric layout diagram.
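The width boundary and width refresh described above reduce to two one-line computations. The sketch below is illustrative only; the function names and the interpretation of the margin are assumptions consistent with the text:

```python
def width_boundary(children_right_edges, margin):
    """Width boundary data of a parent domain symmetric layout diagram:
    the largest right-border abscissa among the laid-out current-layer
    elements plus the preset interval to the parent domain boundary."""
    return max(children_right_edges) + margin

def refresh_width(parent_left_x, boundary_x):
    """Refreshed layout width of the parent domain model, from its left
    boundary coordinate and the width boundary data."""
    return boundary_x - parent_left_x
```

For children whose right edges end at 3.0, 7.5 and 5.0 with a margin of 0.5, the boundary lands at 8.0, and a parent whose left edge is at 1.0 is refreshed to width 7.0.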
In an optional embodiment of the present invention, before the width boundary data of each parent domain symmetric layout diagram is determined, the method may further include: acquiring the bottom-layer drawing elements matched with the current-layer drawing elements corresponding to the current drawing architecture layer; and shifting those bottom-layer drawing elements according to the relative offset data of the current-layer drawing elements with respect to the laid-out parent domain models.
In the embodiment of the present invention, the current-layer drawing elements corresponding to every architecture layer from the current drawing architecture layer down to the bottom architecture layer may first be determined; removing the current-layer drawing elements of the current drawing architecture layer from this set yields the bottom-layer drawing elements matched with the current-layer drawing elements of the current drawing architecture layer. These bottom-layer drawing elements are then shifted according to the relative offset data of the current-layer drawing elements with respect to the laid-out parent domain models, realizing synchronous linkage between the bottom-layer drawing elements and the current-layer drawing elements of the current drawing architecture layer: whenever a current-layer drawing element of the current drawing architecture layer moves, the matched bottom-layer drawing elements are driven to move by the same offset.
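The synchronous linkage above, in which moving a current-layer element drags its matched bottom-layer elements by the same offset, can be sketched as a recursive shift over the element hierarchy. All names and data shapes here are assumptions for illustration:

```python
def shift_subtree(layout, element, dx, dy, children_of):
    """Move `element` by (dx, dy) and propagate the same offset to every
    matched bottom-layer element beneath it.

    `layout` maps element name -> [x, y] of its lower-left corner;
    `children_of` maps element name -> list of child element names."""
    layout[element][0] += dx
    layout[element][1] += dy
    for child in children_of.get(element, []):
        shift_subtree(layout, child, dx, dy, children_of)
```

Shifting a hypothetical domain "Fc0" by (5, 0) moves its contained components "core0" and "core1" by exactly the same amount, which is the linkage the text describes.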
In an optional embodiment of the present invention, refreshing the layout width of each parent domain model according to the width boundary data of its symmetric layout diagram may include: determining the maximum width value over all parent domain symmetric layout diagrams according to the width boundary data of the current-layer drawing elements in each diagram; and refreshing the layout width of each parent domain model according to that maximum width value.
The maximum width value represents the width of the widest parent domain symmetric layout diagram among all parent domain symmetric layout diagrams.
In the embodiment of the invention, the width of each parent domain symmetric layout diagram may be determined as the difference between its width boundary data and its left boundary abscissa; these widths are then compared to determine the maximum width value, and the width of every parent domain symmetric layout diagram is refreshed to that maximum width value, so that the widths of all parent domain models are consistent.
S240, if the current drawing architecture layer is determined to belong to the top-level architecture layer, organize and lay out the current-layer drawing elements and determine the absolute layout data of the current-layer drawing elements.
S250, determine the absolute layout data corresponding to each bottom-layer drawing element step by step according to the absolute layout data of the current-layer drawing elements and the predetermined relative offset data.
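The step-by-step resolution of absolute layout data from stored relative offsets amounts to summing offsets down the parent chain. The following is a minimal sketch under assumed names and data shapes; the patent does not prescribe this representation:

```python
def absolute_layout(rel_offsets, parent_of, top_abs):
    """Resolve absolute positions from the top-level layer downward: an
    element's absolute position is its parent's absolute position plus its
    stored relative offset.

    `rel_offsets` maps element -> (dx, dy) relative to its parent;
    `parent_of`  maps element -> parent element name;
    `top_abs`    maps each top-level element -> its absolute (x, y)."""
    abs_pos = dict(top_abs)

    def resolve(name):
        if name not in abs_pos:
            px, py = resolve(parent_of[name])
            dx, dy = rel_offsets[name]
            abs_pos[name] = (px + dx, py + dy)
        return abs_pos[name]

    for name in rel_offsets:
        resolve(name)
    return abs_pos
```

Because only the top-level layer carries absolute data, every deeper element is positioned by accumulating offsets, which is why laying out the top layer drives the whole hierarchy.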
S260, draw the AI processor architecture diagram according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements.
Fig. 5 is a schematic diagram of the algorithm flow for an AI processor architecture diagram provided in the second embodiment of the present invention. As shown in Fig. 5, a connection list of the hardware components in the AI processor architecture is first obtained, the AI processor architecture is layered hierarchically, and index numbers are set for the architecture layers, increasing from the top layer to the bottom layer. The hardware component models or domain models corresponding to the current drawing architecture layer are determined and obtained from the connection list. The AI processor architecture is then analyzed to construct the drawing engine data frame: the graphic parameter hash table, the graphic layout hash table, and the hash tables of the sub-hardware components and the hardware components contained in each domain of the current drawing architecture layer are determined; the width initial value and height initial value of the hardware component model are obtained; and the widths and heights of the drawing elements of the deeper architecture layers are initialized to the width initial value and the height initial value respectively.
According to the layering result of the AI processor architecture, layout proceeds layer by layer from the bottom layer to the top layer, specifically as follows. The architecture layer with idx = idmax - i - 2 is taken as the current drawing architecture layer, and the widths and heights of the hardware component models of the idx-th layer are unified (the width of each hardware component model of the idx-th layer is the width initial value, its height is the height initial value, and the coordinate of its lower-left corner is (0, 0)), completing the initialization of the hardware component models of the idx-th layer. Here i runs from 0 up to the index number of the bottommost architecture layer, with 0 ≤ i ≤ idmax. The domain list of the idx-th layer is obtained from the layering result, one domain is taken, and its three regions are laid out: the sub-domain models or sub-hardware component models are organized and laid out in the Upper region and the Lower region, with the difference between the numbers of sub-models arranged in the two regions not exceeding 1. Isomorphic domain models or hardware component models of the same domain that differ only in index are named, laid out in up-down symmetry across the Upper and Lower regions, and then laid out in left-right symmetry within the Upper and Lower regions (which can be understood simply as drawing element indexes increasing from top to bottom and from left to right); a hardware component with a bridging function in the domain is laid out only in the Lower region.
In the graphic layout hash table, the key values are the abscissa of the lower-left corner of the drawing element, the ordinate of the lower-left corner, the model width and the model height. In the hash tables of the sub-domains and hardware components contained in each domain, the key names are the drawing element positions and the key values are the names of the sub-domains and hardware components.
After the layout of the different regions of the domain is completed, the layout offset (relative offset data) of each drawing element is calculated, and the bus model of the current domain model is established: the abscissa of the lower-left corner of the bus model of the current domain model = width initial value × preset horizontal interval / 2; the ordinate of the lower-left corner of the bus model = height initial value / amplification ratio / 2 + height initial value × preset vertical interval × number of layout regions / 2 - height initial value × bus amplification scale / 2; the height of the bus model of the current domain model = height initial value × bus amplification scale, where the bus amplification scale is preset and may be set to 2/3, for example.
Illustratively, the horizontal layout offset of the 1st drawing element of the Upper region of the idx-th layer = width initial value × preset interval; the horizontal layout offset of the 2nd and subsequent drawing elements of the Upper region of the idx-th layer = width initial value + preset horizontal interval + right-border abscissa of the preceding adjacent drawing element. The vertical layout offset of the drawing elements of the Upper region of the idx-th layer = height initial value × ((number of layout regions - 1) × preset vertical interval + (amplification ratio - 1)). The amplification ratio may be set as needed, for example amplification ratio = 3; the number of layout regions is the number of regions into which the domain is divided during organization and layout, which is 3 in this embodiment. The preset horizontal interval is the preset horizontal spacing between same-layer drawing elements, and the preset vertical interval is the preset vertical spacing between same-layer drawing elements. The horizontal layout offset of the 1st drawing element of the Lower region of the idx-th layer = width initial value × preset interval; the horizontal layout offset of the 2nd and subsequent drawing elements of the Lower region of the idx-th layer = width initial value + preset horizontal interval + right-border abscissa of the preceding adjacent drawing element. The vertical layout offset of the drawing elements of the Lower region of the idx-th layer = height initial value × preset vertical interval.
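The essence of the horizontal offsets above is sequential left-to-right placement: each element starts a preset horizontal interval after the right border of its predecessor. The sketch below illustrates only that placement rule in simplified form; it deliberately does not reproduce the patent's exact constants, whose translated formulas are ambiguous:

```python
def layout_row(widths, x0, h_gap):
    """Place elements of the given widths left to right in one region,
    starting at x0, separated by the preset horizontal interval h_gap.
    Returns the lower-left abscissa of each element."""
    xs, x = [], x0
    for w in widths:
        xs.append(x)
        x += w + h_gap   # next element begins after this right border plus the gap
    return xs
```

Three elements of widths 2, 2 and 3 starting at x = 1 with a gap of 1 land at abscissas 1, 4 and 7.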
For each sub-domain model or sub-hardware component model, the offsets are superimposed: updated abscissa of the sub-domain model or sub-hardware component model = its original abscissa + the horizontal layout offset of the domain model it belongs to; updated ordinate of the sub-domain model or sub-hardware component model = its original ordinate + the vertical layout offset of the domain model it belongs to. The hardware component models and domain models newly appearing in the idx-th layer are added to the layout hash table: the abscissa of the lower-left corner of a newly added drawing element = its original lower-left abscissa + the horizontal layout offset of the corresponding drawing element; the ordinate of the lower-left corner of a newly added drawing element = its original lower-left ordinate + the vertical layout offset of the corresponding drawing element; the width of a newly added drawing element equals the width initial value, and its height equals the height initial value.
After layout is completed in the Upper and Lower regions, the width boundary data of the current domain is determined from the maximum (rightmost) abscissa and the current domain parameters are refreshed. Execution then returns to obtaining the domain list of the idx-th layer from the layering result and taking one domain, until all domains in the domain list of the idx-th layer have been traversed; the maximum width value of all domain models in the idx-th layer is then determined and the layout widths of the domain models of the idx-th layer are refreshed according to that maximum width value. Execution returns to taking the architecture layer with idx = idmax - i - 2 as the current drawing architecture layer, now with i = i + 1, until the layout, offset calculation and layout width refreshing of the drawing elements of all architecture layers are completed; the AI processor architecture diagram finally obtained is shown in Fig. 6.
In an optional embodiment of the present invention, after the AI processor architecture diagram is drawn from the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements, the method may further include: according to the expansion order from the initial architecture layer to be compressed to the top-level architecture layer, acquiring the first compression width data of the sub-domain models of each parent domain model to be compressed in the current architecture layer to be compressed in the AI processor architecture diagram, and the second compression width data of the bus models of those sub-domain models; compressing the sub-domain models of each parent domain model to be compressed according to the first compression width data and the second compression width data; compressing the sub-hardware component models matched with each parent domain to be compressed according to the minimum width data of the sub-domain models of each parent domain model to be compressed in the current architecture layer to be compressed; compressing the model gaps among the sub-domain models, among the sub-hardware component models, and between sub-domain models and sub-hardware component models of each parent domain to be compressed in the current architecture layer to be compressed, obtaining the initial compression diagram of each parent domain to be compressed; mirror-flipping, within each initial compression diagram, the parent domain models to be compressed that contain a bridging-function hardware component, obtaining the flipped compression diagram of each parent domain to be compressed; compressing each flipped parent domain compression diagram according to its target right boundary data; and, when the current architecture layer to be compressed is determined not to be the top-level architecture layer, returning to execute the operation of acquiring the first compression width data of the sub-domain models of each parent domain model to be compressed in the current architecture layer to be compressed and the second compression width data of the bus models of those sub-domain models, until the compression of the flipped parent domain compression diagram matched with the top-level architecture layer is completed.
The initial architecture layer to be compressed may be the architecture layer in the AI processor architecture diagram containing the domain models with the smallest drawing scale. The current architecture layer to be compressed is the architecture layer currently requiring size compression. The parent domain model to be compressed is the domain model with the largest drawing scale in the current architecture layer to be compressed. The first compression width data characterizes the distance by which a sub-domain model of the parent domain to be compressed must be compressed horizontally; the second compression width data characterizes the distance by which the bus model of the parent domain to be compressed must be compressed horizontally. The minimum width data characterizes the width of the narrowest sub-domain model of the parent domain model to be compressed. The initial compression diagram of a parent domain to be compressed is the result diagram of the compressed drawing elements within that parent domain model. The mirror-flip processing is used to bring the bridging-function hardware components close to the bus model. The target right boundary data is the abscissa of the rightmost sub-domain model or sub-hardware component model in the flipped parent domain compression diagram. The flipped parent domain compression diagram is the result diagram of the initial compression diagram after mirror-flip processing.
In the embodiment of the present invention, the initial architecture layer to be compressed may be determined according to the layering result of the AI processor architecture; the sub-domain models of each parent domain model to be compressed in the current architecture layer to be compressed are then obtained from the AI processor architecture diagram in the expansion order from the initial architecture layer to be compressed to the top-level architecture layer, and the first compression width data is determined from the right boundary coordinates of the drawing elements of those sub-domain models and the right boundary coordinates of the corresponding sub-domain models. After the first compression width data is obtained, the second compression width data is determined from the right boundary coordinates of the sub-domain models of each parent domain model to be compressed and the first compression width data; the right boundaries of the sub-domain models of each parent domain model to be compressed are then compressed according to the first compression width data, and the right boundaries of their bus models are compressed according to the second compression width data.
After the sub-domain models of each parent domain model to be compressed in the current architecture layer to be compressed have been compressed, the minimum width data of those sub-domain models is obtained, and the sub-hardware component models matched with each parent domain to be compressed are compressed according to the minimum width data. The model gaps among the sub-domain models, among the sub-hardware component models, and between sub-domain models and sub-hardware component models of each parent domain to be compressed in the current architecture layer to be compressed are then compressed, obtaining the initial compression diagram of each parent domain to be compressed. When an initial compression diagram contains a bridging-function hardware component laid out far from the bus model, that component is mirror-flipped in each such parent domain of the current architecture layer to be compressed, obtaining the flipped compression diagram of each parent domain to be compressed.
After the flipped compression diagrams of the parent domains to be compressed are obtained, the target right boundary data of each flipped compression diagram is determined, and each flipped compression diagram is compressed according to its target right boundary data. The current architecture layer to be compressed is then matched against the top-level architecture layer; if it is not the top-level architecture layer, execution returns to acquiring the first compression width data of the sub-domain models of each parent domain model to be compressed in the current architecture layer to be compressed in the AI processor architecture diagram and the second compression width data of their bus models, until the compression of the flipped parent domain compression diagram matched with the top-level architecture layer is completed, that is, until the compression of the current-layer drawing elements of the top-level architecture layer, the mirror-flip processing of the bridging-function hardware components laid out far from the bus model among the top-level drawing elements, and the compression of the outer frame enveloping the top-level drawing elements are completed.
Fig. 7 is a schematic diagram of a mirror-flip result according to a second embodiment of the present invention. Assuming the originally drawn AI processor architecture is as shown in Fig. 6, mirror-flipping it yields the result shown in Fig. 7.
Fig. 8 is a schematic diagram of the compression algorithm flow for the AI processor architecture diagram according to a second embodiment of the present invention. As shown in Fig. 8, the graphic layout hash table of the AI processor may first be obtained, and all sub-domains and sub-hardware components contained in all domains are obtained from it. The domain models of the current architecture layer are then obtained step by step, from the initial architecture layer to be compressed up to the top-level architecture layer. The sub-domains contained in the current domain model are compressed, then the bus models of the sub-domains; the sub-hardware component models of the current domain model are compressed according to the sub-domain model of minimum width under the current domain model; and the gaps between sub-domain models and sub-hardware component models, between sub-domain models, and between sub-hardware component models are compressed. Taking the Y coordinate of the bus model of the current domain model as the horizontal axis, the sub-domain models or sub-hardware component models containing bridging-function hardware components far from the bus model are mirrored, and the current domain model is finally compressed. Whether the domain models of the current architecture layer have all been compressed is then judged; if not, execution returns to obtaining the domain models of the current architecture layer step by step until they are all compressed, after which the next architecture layer is compressed, until all architecture layers are compressed, finally obtaining the compressed AI processor architecture shown in Fig. 9.
For example, the widths of the sub-domain models in the current domain model (abbreviated wlst), the abscissas of the lower-left corners of the sub-domain models (abbreviated xlst), and the sums of each lower-left abscissa and the corresponding sub-domain model width (abbreviated wxl) may be determined; the difference between the lower-left abscissa of the leftmost sub-domain model in the current domain model and the lower-left abscissa of the current domain model (abbreviated sonMargin) is then calculated. The width of the compressed current domain model = max(wxl) + sonMargin - abscissa of the lower-left corner of the current domain model, and the width of the compressed bus model of the current domain model = width of the current domain model - 2 × (abscissa of the lower-left corner of the bus model in the current domain model - abscissa of the lower-left corner of the current domain model). When the gaps between drawing elements in the current domain model are compressed, the drawing elements in the Upper region and the Lower region may be sorted in ascending order of the abscissa of their lower-left corners and processed one by one from left to right, the compression distance of the preceding sub-model in each region being taken as the leftward shift of the next adjacent sub-domain model or sub-hardware component model, and so on; the sub-domain models of the two layout regions are compressed following the same procedure.
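The width computation above, using the abbreviations wlst, xlst, wxl and sonMargin, can be sketched as follows. The bus-width formula mirrors one plausible reading of the garbled translated text (equal left and right margins around the bus), so treat the whole function as an illustrative assumption rather than the patent's exact procedure:

```python
def compressed_widths(domain_x, bus_x, children):
    """Compression quantities for one domain model.

    `domain_x`  lower-left abscissa of the current domain model;
    `bus_x`     lower-left abscissa of its bus model;
    `children`  list of (x, w) pairs: lower-left abscissa and width of
                each sub-domain model."""
    xlst = [x for x, _ in children]          # lower-left abscissas
    wxl = [x + w for x, w in children]       # right edges: xlst + wlst
    son_margin = min(xlst) - domain_x        # left margin of leftmost child
    domain_width = max(wxl) + son_margin - domain_x
    # assumed reading: the bus keeps equal margins on both sides
    bus_width = domain_width - 2 * (bus_x - domain_x)
    return domain_width, bus_width
```

For a domain at x = 0 with children at (1, 2) and (4, 2) and a bus at x = 1, the compressed domain width is 7 and the bus width 5.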
Hardware component interfaces are then connected according to the connection list of the AI processor architecture diagram, from each request originator to its request recipient, completing the AI processor architecture rendering and resulting in fig. 10; the direction from request originator to request recipient can also be indicated by arrows bearing independent port identifications.
S270, acquiring processor operating data within the target time window.
Wherein the target time window may be a preset period for analyzing the processor operating condition. The processor operation data may be performance index data of the processor during operation, and is used for characterizing the operation condition of each hardware component of the processor. For example, the processor operating data may include, but is not limited to, power consumption data, collision probability data, computational power density data, load balancing data, and the like.
In the embodiment of the invention, the target time window can be determined according to the analysis requirement of the running condition of the processor, and then the running data of the processor in the target time window is collected.
S280, rendering a target playing quantity of AI processor architecture diagrams according to the processor operating data within the target time window, to obtain a plurality of load slices.
The target playing quantity may be a preset integer used to represent the number of AI processor architecture diagrams that need to be rendered and played. A load slice may be an AI processor architecture diagram rendered from the processor operating data.
In the embodiment of the present invention, the operating condition of each hardware component may be determined according to the acquired processor operating data within the target time window. The target time window may then be divided at a preset time interval to determine the target playing quantity, so that the target playing quantity of AI processor architecture diagrams is acquired at equal time intervals within the target time window; these diagrams are rendered according to the operating conditions of the hardware components, obtaining the target playing quantity of load slices.
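The slicing of the target time window described above can be sketched as follows. This is a hedged illustration: `slice_instants` and `render_load_slices` are hypothetical names, and `render` stands in for the actual diagram renderer.

```python
def slice_instants(t_start, t_end, interval):
    # Divide the target time window at a preset interval; the number of
    # sampling instants is the target playing quantity.
    count = int((t_end - t_start) // interval) + 1
    return [t_start + i * interval for i in range(count)]

def render_load_slices(instants, operating_data, render):
    # One load slice per instant: an architecture diagram rendered from the
    # processor operating data collected at that instant.
    return [render(operating_data[t]) for t in instants]
```

For a window of 0 to 10 sampled every 2 time units, the target playing quantity is 6 and the slices are generated (and later played) in instant order.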
It should be noted that, in the processor design and verification stage, characterizing complex scenes at the architecture level with simple graphics and text is very inefficient or even impossible, so a standardized, automated characterization of processor architecture characteristics is required. Decoupling the drawing elements from the quantities they characterize brings great efficiency to the analysis and presentation of architecture characteristics: the hardware component model can be decoupled from its rendering color, so that the power consumption, performance, temperature, and the like of different hardware component models are represented by different colors, improving the analysis and presentation effect of the architecture diagram.
S290, playing the load slices according to the generation sequence of the load slices.
In the embodiment of the present invention, the AI processor architecture diagram acquired first is rendered first, so the acquisition order of the AI processor architecture diagrams is the generation order of the load slices. Playing the load slices according to their generation sequence, that is, the acquisition sequence of the target playing quantity of AI processor architecture diagrams, dynamically displays the operating condition of the AI processor, which facilitates timely equipment maintenance by the staff according to that operating condition.
Fig. 11 is a schematic diagram of rendering results of an AI processor architecture diagram according to a second embodiment of the present invention. As shown in fig. 11, each category model or hardware component model of the AI processor architecture is colored independently to visually display different aspects of the processor architecture's characteristics.
For example, homogeneous or heterogeneous hardware components are arranged in a distributed fashion throughout the processor; the interconnection layering of the AI processor architecture, the allocation proportions of functional components, and the like can be displayed visually by coloring. Performance index data such as power consumption, computing power, and data throughput of each hardware component are mapped onto each category and hardware component of the processor architecture; at the same time, normalized color weights are established to visually display the strengths and weaknesses of the various performance indexes under a complex scene, qualitatively and quantitatively indicating a direction and method for further performance optimization or balancing. Data flows are classified into two categories, load and store: as shown in fig. 12, the AI processor core loads data from one hardware component and, after the data is processed, stores it in other hardware components. The two data streams are distinguished by the attribute marks M and S together with color marks, or by different line styles, so that the relationships and attributes of the data streams can be clearly marked. In a complex scene, the data streams have complex interrelations. When multiple data streams run in parallel, there are three data channel relationships: full overlap, partial overlap, and no overlap, among which partial overlap has the largest probability; a schematic diagram of three partially overlapping data-stream data channels can be seen in fig. 13. The performance parameters of the data streams, obtained by theoretical weight allocation or actual simulation, are mapped onto the color and depth of each data stream, and the parallel color superpositions of multiple data channels are then synthesized, so that the interconnection performance of the processor architecture under a given scene can be obtained visually.
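The color superposition of overlapping parallel channels might be sketched as below. The patent does not give the blending formula, so the normalization and the opacity-style accumulation used here are only one plausible choice, with hypothetical function names:

```python
def normalize_weight(value, v_min, v_max):
    # Map a performance parameter (e.g. throughput) to a normalized
    # color weight in [0, 1].
    return (value - v_min) / (v_max - v_min)

def superpose(weights):
    # Superimpose the color weights of parallel data channels on an
    # overlapping segment: each additional channel darkens whatever
    # headroom remains, so the result stays within [0, 1].
    out = 0.0
    for w in weights:
        out += w * (1.0 - out)
    return out
```

Two half-weight channels overlapping on the same segment thus render darker (0.75) than either alone, while a fully saturated channel keeps the segment at the maximum depth regardless of what else overlaps it.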
This coloring method provides a visual performance presentation for judging interconnection schemes in the processor architecture exploration stage and for assessing the degree to which the processor architecture's performance targets are realized.
At the present stage, the number of point-to-point interconnection channels in an AI processor architecture is already on the order of hundreds of thousands, and if parallel scenes are examined, the number of scenes grows exponentially on top of the data channels. Committing unlimited manpower is unscientific, and without the support of a scientific methodology it is also difficult to improve work efficiency and focus. Therefore, the processor architecture drawing engine and its coloring method fill the gap between processor architecture scene analysis and the automatically associated processor architecture, and resolve the sharp contradiction between increasingly complex processor architectures and their usage scenes on the one hand and the inefficient analysis means of manual drawing and manual labeling on the other. The method avoids interference from any particular processor characteristic on drawing, and provides a processor architecture container based only on the vertical layering of the processor's physical architecture. The container can load any data index that describes a corresponding characteristic of the processor architecture, finely or coarsely, and realizes the loading and static and dynamic presentation of that data index. When the load data in the processor at a certain moment t0 is loaded into the processor architecture container, the load state of each hardware component in the processor at moment t0 can be seen; if load slices at the moments t0, t1, ... tN are collected and the N slices are played continuously, a dynamic load playing graph of the tN-t0 time window is obtained, helping to capture the time window in which key points occur for detail amplification and rapid analysis.
According to the technical scheme of this embodiment, the current-layer drawing elements corresponding to the current drawing architecture layer are obtained sequentially in ascending order of drawing scale. If the current drawing architecture layer is determined not to belong to the top-level architecture layer, the current-layer drawing elements are organized and laid out, and the parent category models corresponding to the current-layer drawing elements are obtained as the drawing elements of the next drawing architecture layer; the relative offset data of each current-layer drawing element with respect to the parent category model in which it is laid out is then determined according to the layout result, and the operation of sequentially obtaining the current-layer drawing elements in ascending order of drawing scale is executed again. If the current drawing architecture layer is determined to belong to the top-level architecture layer, the current-layer drawing elements are organized and laid out and their absolute layout data is determined, so that the absolute layout data corresponding to each bottom-layer drawing element is determined step by step from the absolute layout data of the current-layer drawing elements and the predetermined relative offset data; the AI processor architecture diagram is then drawn according to the absolute layout data of the current-layer drawing elements and of the bottom-layer drawing elements. The processor operating data within the target time window is further acquired, a target playing quantity of AI processor architecture diagrams is rendered according to that data to obtain a plurality of load slices, and the load slices are played according to their generation sequence. The processor architecture is abstracted using domains, and the layer-by-layer organized layout based on architecture layers realizes a reasonable abstraction and hierarchical layout of the processor architecture: the offset of each layer's drawing elements relative to the parent domain model in which they are laid out can be determined automatically after layout, and the layout data of the bottom-layer drawing elements is determined automatically when a larger-scope architecture layer is drawn, achieving the effect of the top-level architecture layer driving the bottom-level architecture layers into a cooperative layout. Rendering load slices from the processor operating data within a target time window and playing them in generation order vividly displays the processor's operating condition to the staff. This solves the problems of high drawing difficulty, non-uniform standards, and frequent docking errors in the prior-art manual drawing of AI processor architectures; the AI processor architecture diagram can be drawn automatically, the drawing standard of AI processor architecture diagrams is unified, docking errors are reduced, the applications of the AI processor architecture diagram are enriched, and the analysis efficiency of processor architecture characteristics is improved.
It should be noted that any permutation and combination between the technical features in the above embodiments also belong to the scope of the present invention.
Example Three
Fig. 14 is a schematic diagram of a rendering apparatus of an AI processor architecture according to a third embodiment of the present invention, as shown in fig. 14, the apparatus includes: a data acquisition module 310, a first nested layout module 320, a second nested layout module 330, a first data determination module 340, a second data determination module 350, and an AI processor architecture diagram drawing module 360, wherein:
the data obtaining module 310 is configured to sequentially obtain the current-layer drawing elements corresponding to the current drawing architecture layer in ascending order of drawing scale; the current-layer drawing elements include: a domain model or a hardware component model, wherein the domain model includes a sub-hardware component model and/or a sub-domain model;
the first nested layout module 320 is configured to, if it is determined that the current drawing architecture layer does not belong to the top-level architecture layer, perform an organization layout on the drawing elements of each current layer, and obtain a parent category model corresponding to the drawing elements of each current layer as a drawing element of a next drawing architecture layer;
the second nested layout module 330 is configured to determine, according to the layout result, relative offset data of each of the current-layer drawing elements with respect to the parent domain model to be laid out, and return to execute the operation of sequentially obtaining the current-layer drawing elements corresponding to the current drawing architecture layer in ascending order of drawing scale;
the first data determining module 340 is configured to, if it is determined that the current drawing architecture layer belongs to the top-level architecture layer, perform organization and layout on the drawing elements of each current layer, and determine absolute layout data of the drawing elements of each current layer;
a second data determining module 350, configured to determine, step by step, absolute layout data corresponding to each bottom-layer drawing element according to the absolute layout data of each current-layer drawing element and each predetermined relative offset data;
the AI processor architecture drawing module 360 is configured to draw to obtain an AI processor architecture diagram according to the absolute layout data of each of the current-layer drawing elements and the absolute layout data of each of the bottom-layer drawing elements.
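The step-by-step conversion performed by the second data determining module 350, from relative offsets against each parent domain model to absolute layout data, can be sketched as a top-down tree walk. This is a minimal illustration; the dictionary node shape and the function name are assumptions:

```python
def resolve_absolute(node, parent_abs=(0.0, 0.0)):
    # Accumulate each drawing element's relative offset (against its parent
    # domain model) down the architecture tree, yielding absolute layout
    # data for every bottom-layer drawing element.
    ax = parent_abs[0] + node['offset'][0]
    ay = parent_abs[1] + node['offset'][1]
    node['abs'] = (ax, ay)
    for child in node.get('children', []):
        resolve_absolute(child, (ax, ay))
    return node
```

Once the top-level architecture layer's absolute layout data is known, a single walk like this fixes every lower layer's position, which is the "top layer drives the bottom layers" cooperative layout described in the text.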
According to the technical scheme of this embodiment, the current-layer drawing elements corresponding to the current drawing architecture layer are obtained sequentially in ascending order of drawing scale. If the current drawing architecture layer does not belong to the top-level architecture layer, the current-layer drawing elements are organized and laid out, the parent category models corresponding to the current-layer drawing elements are obtained as the drawing elements of the next drawing architecture layer, the relative offset data of each current-layer drawing element with respect to the parent category model in which it is laid out is determined according to the layout result, and the operation of sequentially obtaining the current-layer drawing elements in ascending order of drawing scale is executed again. If the current drawing architecture layer belongs to the top-level architecture layer, the current-layer drawing elements are organized and laid out and their absolute layout data is determined; the absolute layout data corresponding to each bottom-layer drawing element is then determined step by step from the absolute layout data of the current-layer drawing elements and the predetermined relative offset data, and the AI processor architecture diagram is drawn according to the absolute layout data of the current-layer and bottom-layer drawing elements.
The processor architecture is abstracted using domains, and the layer-by-layer organized layout based on architecture layers realizes a reasonable abstraction and hierarchical layout of the processor architecture: the offset of each layer's drawing elements relative to the parent domain model in which they are laid out can be determined automatically after layout, and the layout data of the bottom-layer drawing elements is determined automatically when a larger-scope architecture layer is drawn, achieving the effect of the top-level architecture layer driving the bottom-level architecture layers into a cooperative layout. This solves the problems of high drawing difficulty, non-uniform standards, and frequent docking errors in the prior-art manual drawing of AI processor architectures; the AI processor architecture diagram can be drawn automatically, the drawing standard of AI processor architecture diagrams is unified, and docking errors are reduced.
Optionally, the first nested layout module 320 is specifically configured to: obtain the first layout region and the second layout region of the parent category model matched with each current-layer drawing element; lay out the current-layer drawing elements with up-down symmetry in the first and second layout regions of the matched parent category model, respectively; and lay out the drawing elements of the first and second layout regions of the matched parent category model with left-right symmetry, obtaining a symmetric layout diagram of the parent category.
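The left-right (bilateral) symmetry step above can be sketched for a single layout region as follows. The function name and the equal-outer-margin centering are assumptions introduced for illustration, not the patent's exact algorithm:

```python
def center_row(widths, region_width, gap=1.0):
    # Place one region's drawing elements left to right with equal outer
    # margins, so the row is bilaterally symmetric within the parent region.
    # Returns the lower-left abscissa of each element.
    total = sum(widths) + gap * (len(widths) - 1)
    x = (region_width - total) / 2.0
    positions = []
    for w in widths:
        positions.append(x)
        x += w + gap
    return positions
```

Applying the same centering to both the first and second layout regions yields the up-down plus left-right symmetric layout diagram of the parent category described above.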
Optionally, the drawing device of the AI processor architecture further includes a width refreshing module, configured to determine width boundary data of each parent category symmetric layout diagram according to relative offset data of each drawing element of the current layer with respect to the parent category model to be laid out; and refreshing the layout width of each parent category model according to the width boundary data of each parent category symmetrical layout diagram.
Optionally, the width refreshing module is specifically configured to determine a maximum width value of each parent category symmetric layout according to width boundary data of each layer of drawing elements in each parent category symmetric layout; and refreshing the layout width of each father category model according to the maximum width value of each father category symmetrical layout drawing.
Optionally, the width refreshing module is specifically configured to obtain bottom-layer drawing elements matched with the current-layer drawing elements corresponding to the current drawing architecture layer; and according to the relative offset data of each layer of drawing elements relative to the distributed parent category model, offsetting the bottom layer drawing elements matched with each layer of drawing elements corresponding to the current drawing architecture layer.
Optionally, the drawing device of the AI processor architecture further includes a drawing compression module, configured to: obtain, in the expansion order from the initial architecture layer to be compressed up to the top-level architecture layer, first compression width data of the sub-category models of each parent category model to be compressed in the current architecture layer to be compressed of the AI processor architecture diagram, and second compression width data of the bus models of those sub-category models; compress the corresponding sub-category models of each parent category model to be compressed according to the first compression width data and the second compression width data; compress the sub-hardware component models matched with each parent category to be compressed according to the minimum width data of the sub-category models of each parent category model to be compressed in the current architecture layer to be compressed; compress the model gaps among the sub-category models and sub-hardware component models of each parent category to be compressed in the current architecture layer to be compressed, obtaining an initial compression diagram of each parent category to be compressed; carry out mirror-flip processing on each parent category model to be compressed that includes a bridging-function hardware component in each initial compression diagram, obtaining each flipped parent category compression diagram to be compressed; compress each flipped parent category compression diagram to be compressed according to its target right-boundary data; and, when the current architecture layer to be compressed is determined not to be the top-level architecture layer, execute again the operation of obtaining the first compression width data of the sub-category models of each parent category model to be compressed in the current architecture layer to be compressed and the second compression width data of the bus models of those sub-category models, until compression of the flipped parent category compression diagram to be compressed matched with the top-level architecture layer is completed.
Optionally, the drawing device of the AI processor architecture further includes a load slicing module, configured to: acquire processor operating data within the target time window; render a target playing quantity of AI processor architecture diagrams according to the processor operating data within the target time window to obtain a plurality of load slices; and play the load slices according to their generation sequence.
The drawing device of the AI processor architecture can execute the drawing method of the AI processor architecture provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. For details of the technique not described in detail in this embodiment, reference may be made to the drawing method of the AI processor architecture provided in any embodiment of the present invention.
Since the drawing device of the AI processor architecture described above is a device capable of executing the drawing method of the AI processor architecture in the embodiment of the present invention, based on the drawing method described in the embodiment of the present invention, a person skilled in the art can understand the specific implementation of the drawing device of the AI processor architecture in this embodiment and its various variations; therefore, how the drawing device implements the drawing method of the AI processor architecture in the embodiment of the present invention is not described in detail herein. Any device used by a person skilled in the art to implement the drawing method of the AI processor architecture in the embodiment of the present invention falls within the intended protection scope of the present application.
Example Four
Fig. 15 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. FIG. 15 illustrates a block diagram of an electronic device 412 suitable for use in implementing embodiments of the present invention. The electronic device 412 shown in fig. 15 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 15, the electronic device 412 is in the form of a general purpose computing device. The components of the electronic device 412 may include, but are not limited to: one or more processors 416, a storage device 428, and a bus 418 that couples the various system components including the storage device 428 and the processors 416.
Bus 418 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Electronic device 412 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 412 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 428 may include computer system readable media in the form of volatile Memory, such as RAM (Random Access Memory) 430 and/or cache Memory 432. The electronic device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 15, commonly referred to as a "hard drive"). Although not shown in FIG. 15, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk-Read Only Memory (CD-ROM), a Digital Video disk (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Storage 428 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
Program 436 having a set (at least one) of program modules 426 may be stored, for example, in storage 428, such program modules 426 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which or some combination of which may comprise an implementation of a network environment. Program modules 426 generally perform the functions and/or methodologies of embodiments of the invention as described herein.
The electronic device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, camera, display 424, etc.), with one or more devices that enable a user to interact with the electronic device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 412 to communicate with one or more other computing devices. Such communication may occur via I/O interface 422. Also, the electronic device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), Wide Area Network (WAN), and/or a public Network, such as the internet) via the Network adapter 420. As shown, network adapter 420 communicates with the other modules of electronic device 412 over bus 418. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 412, including but not limited to: microcode, device drivers, Redundant processing units, external disk drive Arrays, Redundant Array of Independent Disks (RAID) systems, tape drives, and data backup storage systems, to name a few.
The processor 416 executes various functional applications and data processing by running programs stored in the storage device 428, for example implementing the drawing method of the AI processor architecture provided by the above-described embodiments of the present invention, including: sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in ascending order of drawing scale, wherein the current-layer drawing elements include a domain model or a hardware component model, and the domain model includes a sub-hardware component model and/or a sub-domain model; if the current drawing architecture layer is determined not to belong to the top-level architecture layer, organizing and laying out the current-layer drawing elements to obtain the parent category model corresponding to each current-layer drawing element as a drawing element of the next drawing architecture layer; determining the relative offset data of each current-layer drawing element with respect to the parent category model in which it is laid out according to the layout result, and executing again the operation of sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in ascending order of drawing scale; if the current drawing architecture layer is determined to belong to the top-level architecture layer, organizing and laying out the current-layer drawing elements and determining their absolute layout data; determining, step by step, the absolute layout data corresponding to each bottom-layer drawing element according to the absolute layout data of the current-layer drawing elements and the predetermined relative offset data; and drawing according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements to obtain the AI processor architecture diagram.
According to the technical scheme of this embodiment, the current-layer drawing elements corresponding to the current drawing architecture layer are obtained sequentially in ascending order of drawing scale. If the current drawing architecture layer does not belong to the top-level architecture layer, the current-layer drawing elements are organized and laid out, the parent category models corresponding to the current-layer drawing elements are obtained as the drawing elements of the next drawing architecture layer, the relative offset data of each current-layer drawing element with respect to the parent category model in which it is laid out is determined according to the layout result, and the operation of sequentially obtaining the current-layer drawing elements in ascending order of drawing scale is executed again. If the current drawing architecture layer belongs to the top-level architecture layer, the current-layer drawing elements are organized and laid out and their absolute layout data is determined; the absolute layout data corresponding to each bottom-layer drawing element is then determined step by step from the absolute layout data of the current-layer drawing elements and the predetermined relative offset data, and the AI processor architecture diagram is drawn according to the absolute layout data of the current-layer and bottom-layer drawing elements.
The processor architecture is abstracted using domains, and the layer-by-layer organized layout based on architecture layers realizes a reasonable abstraction and hierarchical layout of the processor architecture: the offset of each layer's drawing elements relative to the parent domain model in which they are laid out can be determined automatically after layout, and the layout data of the bottom-layer drawing elements is determined automatically when a larger-scope architecture layer is drawn, achieving the effect of the top-level architecture layer driving the bottom-level architecture layers into a cooperative layout. This solves the problems of high drawing difficulty, non-uniform standards, and frequent docking errors in the prior-art manual drawing of AI processor architectures; the AI processor architecture diagram can be drawn automatically, the drawing standard of AI processor architecture diagrams is unified, and docking errors are reduced.
Embodiment Five
An embodiment of the present invention further provides a computer storage medium storing a computer program which, when executed by a computer processor, performs the drawing method of an AI processor architecture according to any of the above embodiments of the present invention, the method including: acquiring, in order of drawing scale from smallest to largest, the current-layer drawing elements corresponding to the current drawing architecture layer, the current-layer drawing elements including a domain model or a hardware component model, the domain model including a sub-hardware-component model and/or a sub-domain model; if it is determined that the current drawing architecture layer does not belong to the top-level architecture layer, organizing and laying out the current-layer drawing elements to obtain the parent domain models corresponding to the current-layer drawing elements as the drawing elements of the next drawing architecture layer; determining, from the layout result, the relative offset data of each current-layer drawing element with respect to its laid-out parent domain model, and returning to the operation of acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from smallest to largest; if it is determined that the current drawing architecture layer belongs to the top-level architecture layer, organizing and laying out the current-layer drawing elements and determining their absolute layout data; determining, level by level, the absolute layout data respectively corresponding to the bottom-layer drawing elements according to the absolute layout data of the current-layer drawing elements and the previously determined relative offset data; and drawing the AI processor architecture diagram according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A drawing method for an artificial intelligence (AI) processor architecture, characterized by comprising:
sequentially acquiring current-layer drawing elements corresponding to a current drawing architecture layer in order of drawing scale from small to large, wherein the current-layer drawing elements comprise a domain model or a hardware component model, and the domain model comprises a sub-hardware-component model and/or a sub-domain model;
if it is determined that the current drawing architecture layer does not belong to a top-level architecture layer, organizing and laying out the current-layer drawing elements to obtain parent domain models corresponding to the current-layer drawing elements as drawing elements of a next drawing architecture layer;
determining, according to the layout result, relative offset data of each current-layer drawing element with respect to its laid-out parent domain model, and returning to the operation of sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large;
if it is determined that the current drawing architecture layer belongs to the top-level architecture layer, organizing and laying out the current-layer drawing elements and determining absolute layout data of the current-layer drawing elements;
determining, level by level, absolute layout data respectively corresponding to bottom-layer drawing elements according to the absolute layout data of the current-layer drawing elements and the predetermined relative offset data; and
drawing an AI processor architecture diagram according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements.
2. The method according to claim 1, wherein organizing and laying out the current-layer drawing elements comprises:
acquiring a first layout area and a second layout area of the parent domain model matched with the current-layer drawing elements;
laying out the current-layer drawing elements with top-bottom symmetry in the first layout area and the second layout area of the matched parent domain model, respectively; and
laying out the current-layer drawing elements of the first layout area and the second layout area of the matched parent domain model with left-right symmetry to obtain a symmetric layout diagram of the parent domain.
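The symmetric layout of claim 2 can be illustrated with a simplified sketch: the current-layer elements are split between two layout areas of the matched parent domain model, each area alternates elements above and below the horizontal axis, and the two areas mirror each other about the vertical axis. The coordinate scheme and the 50/30 dimensions below are placeholder assumptions, not values from the patent.

```python
# Illustrative sketch of a doubly symmetric layout inside a parent domain.
# Dimensions and the split rule are assumed for demonstration.

def symmetric_layout(elements, area_x=50, row_h=30):
    """Return {name: (x, y)} positions, symmetric about both axes."""
    positions = {}
    half = (len(elements) + 1) // 2
    first, second = elements[:half], elements[half:]
    for area, sign in ((first, -1), (second, +1)):       # left / right layout areas
        for i, name in enumerate(area):
            # alternate elements below and above the horizontal axis
            y = row_h * ((i // 2) + 1) * (1 if i % 2 else -1)
            positions[name] = (sign * area_x, y)
    return positions

pos = symmetric_layout(["dma0", "dma1", "core0", "core1"])
```

Here each element in the first area has a left-right mirror partner in the second area, and within each area elements pair off above and below the axis, matching the up-down and left-right symmetry the claim describes.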
3. The method according to claim 2, wherein determining the relative offset data of each current-layer drawing element with respect to its laid-out parent domain model according to the layout result comprises:
determining width boundary data of each parent-domain symmetric layout diagram according to the relative offset data of the current-layer drawing elements with respect to the laid-out parent domain models; and
refreshing the layout width of each parent domain model according to the width boundary data of each parent-domain symmetric layout diagram.
4. The method according to claim 3, wherein refreshing the layout width of each parent domain model according to the width boundary data of each parent-domain symmetric layout diagram comprises:
determining a maximum width value of each parent-domain symmetric layout diagram according to the width boundary data of each current-layer drawing element in that symmetric layout diagram; and
refreshing the layout width of each parent domain model according to the maximum width value of its symmetric layout diagram.
5. The method according to claim 3, further comprising, before determining the width boundary data of each parent-domain symmetric layout diagram:
acquiring the bottom-layer drawing elements matched with the current-layer drawing elements corresponding to the current drawing architecture layer; and
offsetting the bottom-layer drawing elements matched with the current-layer drawing elements corresponding to the current drawing architecture layer according to the relative offset data of the current-layer drawing elements with respect to the laid-out parent domain models.
6. The method according to claim 1, further comprising, after drawing the AI processor architecture diagram according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements:
acquiring, in expansion order from an initial architecture layer to be compressed to the top-level architecture layer, first compression width data of the sub-domain models of each parent domain model to be compressed in the current architecture layer to be compressed in the AI processor architecture diagram, and second compression width data of the bus models of the sub-domain models of each parent domain model to be compressed;
compressing the sub-domain models of each parent domain model to be compressed according to the first compression width data and the second compression width data;
compressing the sub-hardware-component models matched with each parent domain to be compressed according to minimum width data of the sub-domain models of each parent domain model to be compressed in the current architecture layer to be compressed;
compressing model gaps among the sub-domain models and sub-hardware-component models of each parent domain to be compressed in the current architecture layer to be compressed to obtain an initial compression diagram of each parent domain to be compressed;
performing mirror flipping on each parent domain model to be compressed that includes a bridging-function hardware component in its initial compression diagram, to obtain a compression diagram of each parent domain to be compressed;
compressing each flipped parent-domain compression diagram according to target right-boundary data of that flipped compression diagram; and
when it is determined that the current architecture layer to be compressed is not the top-level architecture layer, returning to the operation of acquiring the first compression width data of the sub-domain models of each parent domain model to be compressed in the current architecture layer to be compressed in the AI processor architecture diagram and the second compression width data of the bus models of those sub-domain models, until compression of the flipped parent-domain compression diagrams matched with the top-level architecture layer is completed.
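The width-compression step of claim 6 can be sketched in a minimal form: each sub-model is shrunk toward its bus width while respecting a minimum width, and the parent's layout width is refreshed to the compressed total. The mirror flipping of bridging parents and the layer-by-layer iteration toward the top architecture layer are omitted here, and all widths, the clamping rule, and the gap value are illustrative assumptions rather than the patent's actual data.

```python
# Assumed sketch of the per-parent width-compression step only.
# Widths, the clamping rule, and the gap are placeholder values.

def compress_parent(sub_widths, bus_widths, min_width=8, gap=2):
    """Compress sub-model widths toward their bus widths; return (widths, parent width)."""
    compressed = [max(min(w, b), min_width)   # shrink to bus width, keep a minimum floor
                  for w, b in zip(sub_widths, bus_widths)]
    # parent width refreshed to the compressed sub-models plus remaining gaps
    parent_width = sum(compressed) + gap * (len(compressed) - 1)
    return compressed, parent_width

widths, parent = compress_parent([20, 14, 6], [16, 10, 16])
```

In this sketch the first compression width data plays the role of the sub-model widths, the second compression width data the role of the bus widths, and the minimum-width clamp stands in for the minimum width data of claim 6's third step.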
7. The method according to claim 1, further comprising, after drawing the AI processor architecture diagram according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements:
acquiring processor operating data within a target time window;
rendering a target playback quantity of AI processor architecture diagrams according to the processor operating data within the target time window to obtain a plurality of load slices; and
playing the load slices according to a generation order of the load slices.
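The load-slice playback of claim 7 can be illustrated with a rough sketch: operating data collected inside the target time window is split into a target number of slices, each of which could be rendered onto the architecture diagram and played back in generation order. The slicing granularity, sample format, and stand-in renderer below are all assumptions for illustration.

```python
# Hedged sketch of claim 7: window the operating data into ordered load
# slices and play them back. Data format and slicing rule are assumed.

def make_load_slices(samples, target_count):
    """Split (timestamp, load) samples into up to target_count ordered slices."""
    size = max(1, len(samples) // target_count)
    slices = [samples[i:i + size] for i in range(0, len(samples), size)]
    return slices[:target_count]

def play(slices, render):
    """Render slices in their generation order."""
    for idx, s in enumerate(slices):
        render(idx, s)

samples = [(t, t * 0.1) for t in range(8)]   # synthetic operating data in the window
slices = make_load_slices(samples, 4)        # target playback quantity of 4
play(slices, lambda i, s: None)              # stand-in for diagram rendering
```

Playing slices in generation order, as in `play`, corresponds to the final step of the claim; a real renderer would draw each slice's load values onto the architecture diagram.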
8. A drawing apparatus for an AI processor architecture, characterized by comprising:
a data acquisition module, configured to sequentially acquire current-layer drawing elements corresponding to a current drawing architecture layer in order of drawing scale from small to large, wherein the current-layer drawing elements comprise a domain model or a hardware component model, and the domain model comprises a sub-hardware-component model and/or a sub-domain model;
a first nested layout module, configured to, if it is determined that the current drawing architecture layer does not belong to a top-level architecture layer, organize and lay out the current-layer drawing elements to obtain parent domain models corresponding to the current-layer drawing elements as drawing elements of a next drawing architecture layer;
a second nested layout module, configured to determine, according to the layout result, relative offset data of each current-layer drawing element with respect to its laid-out parent domain model, and return to the operation of sequentially acquiring the current-layer drawing elements corresponding to the current drawing architecture layer in order of drawing scale from small to large;
a first data determination module, configured to, if it is determined that the current drawing architecture layer belongs to the top-level architecture layer, organize and lay out the current-layer drawing elements and determine absolute layout data of the current-layer drawing elements;
a second data determination module, configured to determine, level by level, absolute layout data respectively corresponding to bottom-layer drawing elements according to the absolute layout data of the current-layer drawing elements and the predetermined relative offset data; and
an AI processor architecture drawing module, configured to draw an AI processor architecture diagram according to the absolute layout data of the current-layer drawing elements and the absolute layout data of the bottom-layer drawing elements.
9. An electronic device, characterized by comprising:
one or more processors; and
a storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the drawing method of an AI processor architecture according to any one of claims 1-7.
10. A computer storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the drawing method of an AI processor architecture according to any one of claims 1-7.
CN202210352918.7A 2022-04-06 2022-04-06 Drawing method, device, equipment and medium for AI processor architecture Active CN114429101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210352918.7A CN114429101B (en) 2022-04-06 2022-04-06 Drawing method, device, equipment and medium for AI processor architecture

Publications (2)

Publication Number Publication Date
CN114429101A (en) 2022-05-03
CN114429101B (en) 2022-06-17

Family

ID=81314248


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110069257A (en) * 2019-04-25 2019-07-30 腾讯科技(深圳)有限公司 A kind of interface processing method, device and terminal
CN113282219A (en) * 2021-07-22 2021-08-20 深圳英集芯科技股份有限公司 Method for drawing assembly line CPU architecture diagram and terminal equipment
CN114238725A (en) * 2021-12-27 2022-03-25 中国建设银行股份有限公司 Visualization method and system for automatic layout mapping

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10523522B2 (en) * 2015-08-31 2019-12-31 The Boeing Company Environmental visualization system including computing architecture visualization to display a multidimensional layout




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant