CN117829083A - Routing method and device based on neural network, electronic equipment and storage medium


Info

Publication number
CN117829083A
Authority
CN
China
Prior art keywords
neural network
module
position information
line
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410233627.5A
Other languages
Chinese (zh)
Other versions
CN117829083B (en)
Inventor
陆钊
吴珺媛
王宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Lichi Semiconductor Co ltd
Original Assignee
Shanghai Lichi Semiconductor Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Lichi Semiconductor Co ltd filed Critical Shanghai Lichi Semiconductor Co ltd
Priority to CN202410233627.5A
Publication of CN117829083A
Application granted
Publication of CN117829083B
Legal status: Active

Landscapes

  • Design And Manufacture Of Integrated Circuits (AREA)

Abstract

The application discloses a wiring method and device based on a neural network, electronic equipment and a storage medium, wherein the method comprises the following steps: determining starting point position information of a starting point module and ending point position information of an ending point module corresponding to at least one pre-arranged line; determining input data of a neural network model based on the starting point position information and the ending point position information; calculating turning point position information of each target line, based on a preset signal transmission phase difference of the pre-arranged line, by using the neural network model for which the input data has been determined, wherein the neural network model takes minimizing the number of turning points, keeping the line length close to a preset length and/or minimizing the number of crossings between different lines as constraint conditions and constructs a loss function based on the constraint conditions; and performing a wiring operation on the corresponding pre-arranged line based on the turning point position information to form a corresponding target line. The method accurately determines the related data of the target line.

Description

Routing method and device based on neural network, electronic equipment and storage medium
Technical Field
The present invention relates to the field of circuit design and manufacturing, and in particular, to a wiring method, device, electronic apparatus, and storage medium based on a neural network.
Background
In electronic devices that require PCB routing, there are theoretically an infinite number of possible paths for the electrical connection between any two ICs. Different routing arrangements among multiple lines cause different phase differences in electrical signal transmission; here, the phase difference refers to the phase difference between two signals in a circuit. The phase difference may be used to describe the amount of time offset between two signals, i.e., the amount of delay or advance of one signal relative to the other.
At present, when lines are laid out on equipment such as a PCB (printed circuit board), the layout is essentially arbitrary because no unified planning is performed. This causes large phase differences between different lines and makes the signals harder to control. Meanwhile, because the phase differences differ, the signal receiving module cannot determine the identity information of the signal transmitting module, so the transmitting module must send a header containing its identity information to the receiving module, which undoubtedly increases signal processing complexity and reduces data transmission efficiency.
Disclosure of Invention
An object of an embodiment of the application is to provide a wiring method, a wiring device, an electronic device and a storage medium based on a neural network. The method can quickly and accurately determine the related data of the target line. Meanwhile, the determined target line can reduce the complexity of signal processing and improve the data transmission efficiency.
In order to achieve the above object, an embodiment of the present application provides a wiring method based on a neural network, including:
determining starting point position information of a starting point module corresponding to at least one pre-arranged line and ending point position information of an ending point module;
determining input data of a neural network model based on the start position information and the end position information;
calculating turning point position information of each target line, based on the preset signal transmission phase difference of the pre-arranged line, by using the neural network model for which the input data has been determined, wherein the neural network model takes minimizing the number of turning points, keeping the line length close to a preset length and/or minimizing the number of crossings between different lines as constraint conditions, and constructs a loss function based on the constraint conditions;
and carrying out wiring operation on the corresponding pre-arranged circuit based on the turning point position information to form a corresponding target circuit, wherein the signal transmission phase difference of the target circuit is respectively associated with the identity information of the starting point module and the identity information of the end point module.
Optionally, the determining the start position information of the start module and the end position information of the end module corresponding to the at least one pre-arranged line includes:
Determining a corresponding position coordinate system based on the reference point on the target circuit board;
and determining the starting point coordinates of the starting point module and the ending point coordinates of the ending point module based on the position coordinate system.
Optionally, the determining input data of the neural network model based on the starting point position information and the ending point position information includes:
and constructing an input matrix based on the starting point coordinates, the ending point coordinates and the preset length of the prearranged circuit, wherein row vectors of the input matrix respectively correspond to the related information of each prearranged circuit.
Optionally, the signal transmission phase differences of all the pre-arranged lines are the same target phase difference, and the calculating, based on the preset signal transmission phase differences of the pre-arranged lines, the turning point position information of each target line by using the neural network model for which the input data has been determined includes:
and carrying out regression operation on the neural network model based on the loss function on the condition that the signal transmission phase difference of the target line is the target phase difference.
Optionally, the method further comprises training the neural network model, including:
And determining the loss function based on the first weight of the constraint condition with the minimum turning point number, the second weight of the constraint condition with the line length close to the preset length and the third weight of the constraint condition with the minimum different line crossing number.
Optionally, the turning point position information of the target line is an output matrix, and the data form of the output matrix is an indefinite length vector, where the indefinite length vector includes a start point coordinate of the start point module, a turning point coordinate of the target line, and an end point coordinate of the end point module that are sequentially arranged.
Optionally, the method further comprises:
and determining the identity information of the starting point module and/or the identity information of the end point module based on the signal transmission phase difference of the target line.
The embodiment of the application also provides a wiring device based on the neural network, which comprises:
a position module configured to determine start position information of a start module and end position information of an end module corresponding to at least one pre-arranged line;
an input module configured to determine input data of a neural network model based on the start position information and the end position information;
The processing module is configured to calculate turning point position information of each target line, based on a preset signal transmission phase difference of the pre-arranged line, by using the neural network model for which the input data has been determined, wherein the neural network model takes minimizing the number of turning points, keeping the line length close to a preset length and/or minimizing the number of crossings between different lines as constraint conditions, and a loss function is constructed based on the constraint conditions;
and the wiring module is configured to perform wiring operation on the corresponding pre-arranged circuit based on the turning point position information to form a corresponding target circuit, wherein the signal transmission phase difference of the target circuit is respectively associated with the identity information of the starting point module and the identity information of the end point module.
The embodiment of the application also provides electronic equipment, which comprises a processor and a memory, wherein the memory stores an executable program, and the processor executes the executable program to perform the steps of the method described above.
Embodiments of the present application also provide a storage medium carrying one or more computer programs which, when executed by a processor, implement the steps of the method as described above.
The wiring method based on the neural network of the present application uses a neural network model that takes minimizing the number of turning points, keeping the line length close to the preset length and/or minimizing the number of crossings between different lines as constraint conditions and determines its loss function from these constraints, so that the neural network model is intelligently optimized and the related data of the target line is calculated quickly and accurately. In addition, the identity information of the starting point module and/or the end point module can be determined from the signal transmission phase difference of the target line without sending a message, which reduces signal processing complexity and improves data transmission efficiency.
Drawings
FIG. 1 is a flow chart of a routing method based on a neural network model in an embodiment of the present application;
FIG. 2 is a flow chart of one embodiment of step S100 of FIG. 1 according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a Transformer model in an embodiment of the present application;
fig. 4 is a block diagram of a wiring device based on a neural network model according to an embodiment of the present application.
Detailed Description
Various aspects and features of the present application are described herein with reference to the accompanying drawings.
It should be understood that various modifications may be made to the embodiments of the application herein. Therefore, the above description should not be taken as limiting, but merely as exemplification of the embodiments. Other modifications within the scope and spirit of this application will occur to those skilled in the art.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawings.
It is also to be understood that, although the present application has been described with reference to some specific examples, those skilled in the art can certainly realize many other equivalent forms of the present application.
The foregoing and other aspects, features, and advantages of the present application will become more apparent in light of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application will be described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely exemplary of the application, which can be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application with unnecessary or excessive detail. Therefore, specific structural and functional details disclosed herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
The specification may use the word "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments as per the application.
According to the wiring method based on the neural network of the present application, reasonable wiring can be performed on circuit boards such as PCBs, improving wiring quality and efficiency. The method comprises determining at least one pre-arranged line on a circuit board, and further determining starting point position information of a starting point module and end point position information of an end point module corresponding to the pre-arranged line. The start point module and the end point module are relatively independent electronic modules, each of which may be a chip or a functional module inside a chip. The pre-arranged line is a line that is planned to connect the start point module and the end point module. The starting point position information is the position information of the starting point module, the end point position information is the position information of the end point module, and the starting point position information and the end point position information serve as input data of the neural network model. The neural network model needs to be trained before use. The trained neural network model may be used to calculate a specific wiring arrangement between the start point module and the end point module. For example, the neural network model may be a Transformer model, or a model constructed based on a Transformer. The Transformer is a deep learning architecture that captures long-range dependencies in an input sequence through self-attention mechanisms (self-attention) and positional encoding (position encoding). The Transformer is not limited by sequence length and can process input sequences of arbitrary length, which makes it more efficient and accurate when processing long sequence data.
In addition, the wiring method further includes performing calculation, based on a signal transmission phase difference of the pre-arranged line set in advance, using the neural network model for which the input data has been determined. In the calculation process, minimizing the number of turning points of the target line corresponding to the pre-arranged line, keeping the line length close to the preset length and/or minimizing the number of crossings between different lines serve as constraint conditions, and a loss function is constructed based on the constraint conditions. Turning point position information of each target line is then obtained based on the loss function.
The turning point position information may be the coordinates of the turning points of the target line. Based on the coordinates of the turning points, the wiring operation for the corresponding pre-arranged line can accurately and efficiently form the corresponding target line. Since the signal transmission phase difference of the target line is predetermined, when the end point module receives a signal having the known signal transmission phase difference, the identity information of the start point module can be determined based on the signal transmission phase difference. Therefore, the start point module does not need to send a message to the end point module to announce its identity, which reduces signal processing complexity, saves bandwidth and improves data transmission efficiency.
The following describes in detail a wiring method based on a neural network model according to an embodiment of the present application with reference to the accompanying drawings. Fig. 1 is a flowchart of the wiring method based on the neural network model according to the embodiment of the present application. As shown in fig. 1, the method includes the following steps:
s100, determining starting point position information of a starting point module and end point position information of an end point module corresponding to at least one pre-arranged line.
For example, wiring of the circuit board is required during the design, manufacture, and the like of the circuit board. At least one pre-arranged line first needs to be determined. The pre-arranged line is not actually laid out; it is determined according to the design requirements of the target circuit board, and the target line that actually needs to be constructed is then determined from the determined pre-arranged line.
The pre-arranged line has a start module and an end module of the line, i.e. a line is connected from the start module to the end module. The starting point module and the end point module are respectively relatively independent electronic modules, and can be respectively a chip or a functional module inside the chip.
The start module has start position information that demarcates a relative positional relationship of the start module, for example, the start position information may be coordinates of the start module in the entire target circuit board. Similarly, the end point module has end point position information that demarcates the relative positional relationship of the end point module, for example, the end point position information may be coordinates of the end point module throughout the target circuit board. When the start position information and the end position information are determined, the relative position relationship between the start module and the end module can be determined.
In one embodiment, the start position information and the end position information may each be represented by means of a dataset, an array, or the like.
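As a purely illustrative sketch (not part of the disclosed method; the field names are assumptions), such position information could be held in a simple record alongside the preset length of the line:

```python
from dataclasses import dataclass

@dataclass
class PreArrangedLine:
    """Hypothetical record for one pre-arranged line; field names are illustrative only."""
    start_xy: tuple[float, float]   # start point module coordinates on the target circuit board
    end_xy: tuple[float, float]     # end point module coordinates on the target circuit board
    preset_length: float            # preset length of the pre-arranged line

# e.g. the first pre-arranged line used in the examples later in this description
line = PreArrangedLine(start_xy=(115, 46), end_xy=(152, 162), preset_length=138.7)
print(line)
```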
And S200, determining input data of the neural network based on the starting point position information and the ending point position information.
Illustratively, the neural network model is used to calculate specific routing information between the start point module and the end point module. For example, the neural network model may be a Transformer model, or a model constructed based on a Transformer. The Transformer may be a deep learning architecture that captures long-range dependencies in the input sequence through self-attention mechanisms (self-attention) and positional encoding (position encoding). The Transformer is not limited by sequence length and can process input sequences of arbitrary length, which makes it more efficient and accurate when processing long sequence data. The neural network model needs to be trained before use, and a corresponding loss function is needed during operation.
The neural network in this application is exemplified below. For example, when the neural network model is a Transformer model, the self-attention mechanism is the core of the Transformer; it allows the model to compare each element in the input sequence with the other elements as the sequence is processed, so that each element is handled correctly in its context.
To further improve the performance of the model, the Transformer introduces a multi-head attention mechanism (multi-head attention). The multi-head attention mechanism learns different context representations by applying the self-attention mechanism to multiple sets of different query matrices Q, key matrices K and value matrices V. Specifically, the input sequence is subjected to different linear transformations to obtain multiple sets of different query matrices Q, key matrices K and value matrices V, which are then fed into multiple parallel self-attention mechanisms for processing.
In one embodiment, as shown in FIG. 3, the Transformer model consists of two parts, an Encoder and a Decoder. The encoder, which is composed of multiple identical layers, converts the input sequence into a series of context representation vectors (Contextualized Embedding). Each layer consists of two sub-layers, a Self-Attention Layer and a feed-forward fully connected layer (Feedforward Layer). Specifically, the self-attention layer lets each position in the input sequence interact with all other positions to calculate a context representation vector for each position. The feed-forward fully connected layer then maps the context representation vector of each position to another vector space to capture higher-level features. The decoder takes the output of the encoder and the target sequence as inputs, and generates a probability distribution for each position in the target sequence. The decoder consists of multiple identical layers, each consisting of three sub-layers: a self-attention layer, an encoder-decoder attention layer (Encoder-Decoder Attention Layer), and a feed-forward fully connected layer. The self-attention layer and the feed-forward fully connected layer function the same as in the encoder, while the encoder-decoder attention layer lets the input at the current position of the decoder interact with all positions of the encoder to obtain information about the target sequence.
Input data determined based on the start position information and the end position information may be input to the encoder; the encoder encodes it, the Transformer model performs calculation on the encoded input data, and the decoder outputs the obtained calculation result.
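For illustration only, a minimal sketch of how such an encoder-decoder Transformer could be instantiated with PyTorch; the layer counts, widths and embedding layers below are assumptions, not part of the disclosure:

```python
import torch
import torch.nn as nn

d_model = 64                          # width of the context representation vectors (assumed)
model = nn.Transformer(
    d_model=d_model,
    nhead=4,                          # multi-head attention heads
    num_encoder_layers=3,             # stacked identical encoder layers
    num_decoder_layers=3,             # stacked identical decoder layers
    dim_feedforward=128,              # feed-forward fully connected sub-layer width
    batch_first=True,
)

embed_src = nn.Linear(5, d_model)     # one row per line: (x_s, y_s, x_e, y_e, preset length)
embed_tgt = nn.Linear(2, d_model)     # one (x, y) pair per already generated turning point
to_coord  = nn.Linear(d_model, 2)     # project decoder output back to a coordinate pair

src = embed_src(torch.rand(1, 2, 5))  # batch of 1, two pre-arranged lines
tgt = embed_tgt(torch.rand(1, 4, 2))  # partially generated turning-point sequence
out = to_coord(model(src, tgt))       # predicted coordinates, shape (1, 4, 2)
print(out.shape)
```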
And S300, calculating turning point position information of each target line, based on the preset signal transmission phase difference of the pre-arranged line, by using the neural network model for which the input data has been determined, wherein the neural network model takes minimizing the number of turning points, keeping the line length close to the preset length and/or minimizing the number of crossings between different lines as constraint conditions, and constructing a loss function based on the constraint conditions.
Illustratively, the signal transmission phase difference refers to the phase difference between two signals in a circuit. The signal transmission phase difference may be used to describe the amount of time offset between two signals, i.e., the amount of delay or advance of one signal relative to the other. The phase difference is typically expressed in degrees or radians. There are many possible ways of connecting the start point module and the end point module. From a circuit point of view, the various connection modes are equivalent. However, since the actual line lengths of the various connections differ, the phase difference of the signal transmitted from the start point module (the phase difference of the electrical signal), as observed at the end point module, differs significantly between connection modes. Therefore, both the length and the topology of a line correspond to a signal transmission phase difference.
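As a simple worked illustration (the numbers are assumptions, not taken from the disclosure): for a sinusoidal signal of frequency $f$ propagating at speed $v$, a difference $\Delta L$ in routed length produces a delay $\Delta t = \Delta L / v$ and hence a phase difference

$$\Delta\varphi = 2\pi f\,\Delta t = 2\pi f\,\frac{\Delta L}{v}.$$

For instance, with $f = 1\,\mathrm{GHz}$, $v \approx 1.5\times10^{8}\,\mathrm{m/s}$ (roughly half the speed of light, typical for a PCB dielectric) and $\Delta L = 15\,\mathrm{mm}$, this gives $\Delta\varphi = 2\pi \cdot 10^{9} \cdot (0.015 / 1.5\times10^{8}) = 0.2\pi\ \mathrm{rad}$, i.e. $36^{\circ}$.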
In this embodiment, under the condition of signal transmission phase difference based on a preset pre-arranged line, the neural network model is used to calculate the turning point position information of each target line, so that the obtained target line and the signal transmission phase difference have an association relationship, and the end point module can determine the identity information of the start point module based on the signal transmission phase difference and the association relationship.
The target line corresponds to the pre-arranged line, and after calculation by the neural network model, related information of the corresponding target line is obtained for each pre-arranged line, including position information of the turning points on the target line, such as the coordinates of the turning points. The wiring can then be performed specifically according to the positions of the turning points.
In this embodiment, the constraint condition of the neural network model is constructed based on the turning points of the lines concerned, the lengths of the lines, and the relationships between the different lines. The method specifically comprises the step of constructing constraint conditions of the neural network model by minimizing the number of turning points of lines involved in training data, enabling the length of the lines to approach to a preset length and/or minimizing the number of different line crossings. The formed constraint conditions can enable the neural network model to be more intelligent, and the calculation result is optimized.
After determining the constraint conditions, a loss function of the neural network model may be determined based on them. The loss function is a function that maps a random event or the values of its related random variables to non-negative real numbers, representing the "risk" or "loss" of the random event. In application, the loss function is typically associated with an optimization problem as a learning criterion, i.e., the neural network model is solved and evaluated by minimizing the loss function.
And S400, carrying out wiring operation on the corresponding pre-arranged circuit based on the turning point position information to form a corresponding target circuit, wherein the signal transmission phase difference of the target circuit is respectively associated with the identity information of the starting point module and the identity information of the end point module.
Illustratively, the inflection point location information indicates the relative location of the inflection point on the target line, such as the coordinates of the inflection point on the target circuit board. The pre-arranged lines may have a plurality of turning points, and based on the position information of the turning points, the corresponding pre-arranged lines can be rapidly and accurately routed to form target lines corresponding to the pre-arranged lines. The target line is an actually wired line. The signal transmission phase difference of the target line is the preset signal transmission phase difference of the prearranged line.
Since the signal transmission phase difference of the target line is predetermined, and the neural network model performs calculation under the condition of this signal transmission phase difference, the end point module can obtain the information associated with the signal transmission phase difference, such as the identity information of the start point module, in advance. When the end point module receives a signal having the known signal transmission phase difference, the identity information of the start point module may be determined based on the signal transmission phase difference. Therefore, the start point module does not need to send a message to the end point module to announce its identity, which reduces signal processing complexity, saves bandwidth and improves data transmission efficiency.
The neural-network-based wiring method of the embodiment of the application uses a neural network model that takes minimizing the number of turning points, keeping the line length close to a preset length and/or minimizing the number of crossings between different lines as constraint conditions and determines its loss function from these constraints. The neural network model is thereby intelligently optimized so that the related data of the target line is calculated quickly and accurately. In addition, the identity information of the start point module and/or the end point module can be determined from the signal transmission phase difference of the target line without sending a message, which reduces signal processing complexity and improves data transmission efficiency.
In one embodiment of the present application, the determining the start position information of the start module and the end position information of the end module corresponding to the at least one pre-arranged line, as shown in fig. 2, includes the following steps:
s110, determining a corresponding position coordinate system based on the reference point on the target circuit board.
Illustratively, the target circuit board is provided with at least one reference point for position reference of other modules on the target circuit board. In this embodiment, the corresponding position coordinate system is determined based on the position of the reference point.
For example, the center position of the target circuit board is determined as the reference point, and the position coordinate system is constructed based on this center position.
And S120, determining the starting point coordinates of the starting point module and the ending point coordinates of the ending point module based on the position coordinate system.
Illustratively, all locations on the target circuit board have corresponding coordinates based on the location coordinate system. This includes determining the position coordinates of the start module and the position coordinates of the end module based on the position coordinate system.
For example, the position coordinates of the first start point module are (115, 46), and the position coordinates of the corresponding first end point module are (152, 162); the second start point module has position coordinates (146, 46), and the corresponding second end point module has position coordinates (161, 170).
In one embodiment of the present application, the determining the input data of the neural network model based on the start point position information and the end point position information includes:
and constructing an input matrix based on the starting point coordinates, the ending point coordinates and the preset length of the prearranged circuit, wherein row vectors of the input matrix respectively correspond to the related information of each prearranged circuit.
Illustratively, the input matrix is one form of input data for the neural network model. The input matrix is constructed from the start point coordinates, the end point coordinates and the preset length of the pre-arranged line, each expressed as specific numerical values.
For example, the position coordinates of the first start point module are (115, 46), the position coordinates of the corresponding first end point module are (152, 162), and the corresponding first preset length is 138.7; the position coordinates of the second start point module are (146, 46), the position coordinates of the corresponding second end point module are (161, 170), and the corresponding second preset length is 130.2. As shown in fig. 3, the input matrix constructed based on the above data is:

[115, 46, 152, 162, 138.7]
[146, 46, 161, 170, 130.2]
the input matrix with the fixed size matrix is used as the input data of the neural network model, wherein each row of the input matrix represents a target line, and the (x, y) coordinates of the starting point module and the ending point module and the preset length are sequentially arranged.
In an embodiment of the present application, the signal transmission phase differences of all the pre-arranged lines are the same target phase difference, and the calculating, based on the preset signal transmission phase differences of the pre-arranged lines, the turning point position information of each target line by using the neural network model for which the input data has been determined includes:
and carrying out regression operation on the neural network model based on the loss function on the condition that the signal transmission phase difference of the target line is the target phase difference.
The signal transmission phase differences of all the prearranged lines are the same target phase difference, i.e. the signal transmission phase differences of all the target lines are the same target phase difference. On the premise of the same target phase difference, the neural network model carries out regression operation based on the loss function. The regression operation is a statistical method for studying the relationship between variables. The neural network model performs regression operation to establish a nonlinear relationship between input data and output data, so as to obtain accurate output data. The output data can be the position coordinates of turning points, and the signal phase difference corresponding to the output data is also the target phase difference.
In one embodiment of the present application, the method further comprises training the neural network model, including:
and determining the loss function based on the first weight of the constraint condition with the minimum turning point number, the second weight of the constraint condition with the line length close to the preset length and the third weight of the constraint condition with the minimum different line crossing number.
Illustratively, the neural network model needs to be trained and then used. In the training process, training data can be used for input and other operations to realize the training step of the neural network model, and constraint conditions and loss functions of the neural network are determined step by step. In this embodiment, the loss function of the neural network model is determined step by step based on the constraint condition that the number of turning points involved in the training data is minimized, the line length approaches the preset length, and/or the number of different line crossings is minimized.
The constraint condition that the number of turning points is minimized, the constraint condition that the line length is close to the preset length, and the constraint condition that the number of different lines to cross is minimized have respective weights, and a loss function can be constructed based on the different weights and the respective corresponding constraint conditions.
For example, the Transformer model needs to minimize the following loss function during training:

$$\mathcal{L} = w_1 \sum_i \lvert Y_i \rvert + w_2 \sum_i \mathrm{dist}\big(\mathrm{len}(Y_i),\, L_i\big) + w_3 \sum_{i \neq j} \mathrm{cross}(Y_i, Y_j)$$

wherein Y is the output data of the neural network model; i and j denote line numbers; |Y_i| denotes the vector length, i.e. the number of turning points (number of inflection points on a line); L_i denotes the last number of the i-th row of the input matrix, i.e. the preset length of the i-th line; len(Y_i) denotes the total length of the i-th line, obtained by adding the lengths of the segments that connect the inflection points on the line in order; cross(Y_i, Y_j) calculates the number of crossings between the i-th line and the j-th line; dist(·, ·) calculates the distance between the network output and the real value; and w_1, w_2, w_3 denote the weights, i.e. their proportions in the loss, where w_1 is the first weight, w_2 the second weight and w_3 the third weight.
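Purely for illustration, a sketch of how the three weighted constraint terms described above could be evaluated for a candidate set of routes; the intersection test, the use of an absolute difference for the length term, the example weights and the omission of the supervised distance term are assumptions:

```python
import numpy as np

def seg_intersect(p1, p2, p3, p4):
    """Rough segment-intersection test via orientations (collinear edge cases ignored)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def route_length(pts):
    """Total length of a polyline given as a list of (x, y) points."""
    return sum(np.hypot(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def crossings(a, b):
    """Number of intersecting segment pairs between two polylines."""
    return sum(seg_intersect(p1, p2, p3, p4)
               for p1, p2 in zip(a, a[1:]) for p3, p4 in zip(b, b[1:]))

def loss(routes, preset_lengths, w1=1.0, w2=1.0, w3=1.0):
    """Weighted sum of the three constraint terms (weights w1..w3 are illustrative)."""
    turn_term = sum(len(r) - 2 for r in routes)                       # turning points per line
    len_term = sum(abs(route_length(r) - L) for r, L in zip(routes, preset_lengths))
    cross_term = sum(crossings(routes[i], routes[j])
                     for i in range(len(routes)) for j in range(i + 1, len(routes)))
    return w1 * turn_term + w2 * len_term + w3 * cross_term

# e.g. two candidate routes matching the example coordinates in this description
r1 = [(115, 46), (115, 162), (152, 162)]                 # one hypothetical turning point
r2 = [(146, 46), (146, 67), (161, 82), (161, 170)]       # two turning points
print(loss([r1, r2], [138.7, 130.2]))
```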
In an embodiment of the present application, the turning point position information of the target line is an output matrix, and a data form of the output matrix is an indefinite length vector, where the indefinite length vector includes a start point coordinate of the start point module, a turning point coordinate of the target line, and an end point coordinate of the end point module that are sequentially arranged.
Illustratively, as described in connection with the above embodiments and as shown in fig. 3, the input matrix of the neural network model is:

[115, 46, 152, 162, 138.7]
[146, 46, 161, 170, 130.2]

This fixed-size input matrix is used as the input data of the neural network model; each row of the input matrix represents one target line and contains, arranged in order, the (x, y) coordinates of the start point module, the (x, y) coordinates of the end point module and the preset length. The calculated output data is a series of indefinite-length vectors, each representing, arranged in order along the connection, the coordinates of the start point module, the coordinates of each turning point on the target line, and the coordinates of the end point module. For example, the output data for the second line is the indefinite-length vector

[146, 46, 146, 67, 161, 82, 161, 170]

in which the start point module has coordinates (146, 46), the first turning point has coordinates (146, 67), the second turning point has coordinates (161, 82), and the end point module has coordinates (161, 170).
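For illustration, a small sketch of unpacking one such indefinite-length output vector into ordered (x, y) points; pairing consecutive values in this way is an assumption about the vector layout:

```python
# Example output vector from the text: start point, two turning points, end point.
out_vec = [146, 46, 146, 67, 161, 82, 161, 170]
points = list(zip(out_vec[0::2], out_vec[1::2]))   # [(146, 46), (146, 67), (161, 82), (161, 170)]
start, *turns, end = points
print(start, turns, end)
```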
In one embodiment of the present application, the method further comprises the steps of: and determining the identity information of the starting point module and/or the identity information of the end point module based on the signal transmission phase difference of the target line.
The signal transmission phase difference of the target line is illustratively a predetermined signal transmission phase difference of the prearranged line. In the process of calculating the turning point coordinates on the premise of the signal transmission phase difference and laying a line based on the turning point coordinates, the end point module can obtain related information of the signal transmission phase difference, such as identity information of the start point module, in advance. When each end point module receives the signal and determines that the phase difference is the signal transmission phase difference, the identity information of the corresponding start point module can be determined. Therefore, the starting point module does not need to send a message to the ending point module to inform the identity of the starting point module, the complexity of signal processing is reduced, the bandwidth is saved, and the data transmission efficiency is improved.
In one embodiment, an electronically controlled phase-locked loop is used at the end point module to apply successive phase settings to the local signal until its phase matches that of the incoming signal. Through this enumeration, the setting corresponding to the matching phase is obtained, and the signal transmission phase difference is derived from it.
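As a rough software analogue of this enumeration (a sketch only; the normalized frequency, step size and correlation-based matching are assumptions, not the disclosed hardware):

```python
import numpy as np

f = 1.0                                                  # normalized signal frequency (assumed)
t = np.linspace(0.0, 1.0, 1000, endpoint=False)          # one full period of samples
incoming = np.sin(2 * np.pi * f * t + np.deg2rad(36))    # received signal with unknown offset

# Step the local phase setting in 1-degree increments and keep the best correlation.
best_deg = max(
    range(360),
    key=lambda deg: float(np.dot(incoming, np.sin(2 * np.pi * f * t + np.deg2rad(deg)))),
)
print(best_deg)   # ~36: the recovered signal transmission phase difference in degrees
```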
For example, when the predetermined signal transmission phase difference of a target line is A, after the first end point module receives the first signal and determines that the corresponding phase difference is A, the start point module associated with the first end point module can be determined to be the first start point module based on the association between the phase difference A and that start point module, and the first start point module does not need to send a message indicating its identity to the first end point module; similarly, when the predetermined signal transmission phase difference of a target line is B, after the second end point module receives the second signal and determines that the corresponding phase difference is B, the start point module associated with the second end point module can be determined to be the second start point module based on the association between the phase difference B and that start point module, and the second start point module does not need to send a message indicating its identity to the second end point module.
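A trivial sketch of the association an end point module could hold between known signal transmission phase differences and start point module identities (the phase values and tolerance below are assumptions):

```python
# Known phase differences (degrees) mapped to the identity of the associated start point module.
PHASE_TO_SENDER = {
    36.0: "first start point module",    # phase difference A (illustrative value)
    72.0: "second start point module",   # phase difference B (illustrative value)
}

def sender_identity(measured_deg: float, tol: float = 2.0):
    """Return the start point module whose known phase difference matches the measurement."""
    for phase, ident in PHASE_TO_SENDER.items():
        if abs(measured_deg - phase) <= tol:
            return ident
    return None

print(sender_identity(36.5))   # -> "first start point module"
```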
Based on the same inventive concept, the embodiment of the present application further provides a wiring device based on a neural network, as shown in fig. 4, including:
and a position module configured to determine start position information of the start module and end position information of the end module corresponding to the at least one pre-arranged line.
For example, wiring of the circuit board is required in designing, manufacturing, and the like of the circuit board. At least one pre-arranged line is firstly required to be determined, the pre-arranged line is not actually arranged, the pre-arranged line is determined according to the design requirement of a target circuit board, and then the target line which is required to be actually constructed is determined according to the determined pre-arranged line.
The pre-arranged line has a start module and an end module of the line, i.e. a line is connected from the start module to the end module. The starting point module and the end point module are respectively relatively independent electronic modules, and can be respectively a chip or a functional module inside the chip.
The start module has start position information that demarcates a relative positional relationship of the start module, for example, the start position information may be coordinates of the start module in the entire target circuit board. Similarly, the end point module has end point position information that demarcates the relative positional relationship of the end point module, for example, the end point position information may be coordinates of the end point module throughout the target circuit board. The position module determines the start position information and the end position information, and then the relative position relationship between the start module and the end module can be determined.
In one embodiment, the start position information and the end position information may each be represented by means of a dataset, an array, or the like.
An input module configured to determine input data of a neural network model based on the start position information and the end position information.
Illustratively, the neural network model is used to calculate specific routing information between the start point module and the end point module. For example, the neural network model may be a Transformer model, or a model constructed based on a Transformer. The Transformer may be a deep learning architecture that captures long-range dependencies in the input sequence through self-attention mechanisms (self-attention) and positional encoding (position encoding). The Transformer is not limited by sequence length and can process input sequences of arbitrary length, which makes it more efficient and accurate when processing long sequence data. The neural network model needs to be trained before use, and a corresponding loss function is needed during operation.
The neural network in this application is exemplified below. For example, when the neural network model is a Transformer model, the self-attention mechanism is the core of the Transformer; it allows the model to compare each element in the input sequence with the other elements as the sequence is processed, so that each element is handled correctly in its context.
To further improve the performance of the model, the Transformer introduces a multi-head attention mechanism (multi-head attention). The multi-head attention mechanism learns different context representations by applying the self-attention mechanism to multiple sets of different query matrices Q, key matrices K and value matrices V. Specifically, the input sequence is subjected to different linear transformations to obtain multiple sets of different query matrices Q, key matrices K and value matrices V, which are then fed into multiple parallel self-attention mechanisms for processing.
In one embodiment, the Transformer model consists of two parts, an Encoder and a Decoder. The encoder, which is composed of multiple identical layers, converts the input sequence into a series of context representation vectors (Contextualized Embedding). Each layer consists of two sub-layers, a Self-Attention Layer and a feed-forward fully connected layer (Feedforward Layer). Specifically, the self-attention layer lets each position in the input sequence interact with all other positions to calculate a context representation vector for each position. The feed-forward fully connected layer then maps the context representation vector of each position to another vector space to capture higher-level features. The decoder takes the output of the encoder and the target sequence as inputs, and generates a probability distribution for each position in the target sequence. The decoder consists of multiple identical layers, each consisting of three sub-layers: a self-attention layer, an encoder-decoder attention layer (Encoder-Decoder Attention Layer), and a feed-forward fully connected layer. The self-attention layer and the feed-forward fully connected layer function the same as in the encoder, while the encoder-decoder attention layer lets the input at the current position of the decoder interact with all positions of the encoder to obtain information about the target sequence.
The input module may input the input data determined based on the start position information and the end position information into the encoder; the encoder encodes the input data, the Transformer model performs calculation on the encoded input data, and the decoder outputs the obtained calculation result.
The processing module is configured to calculate turning point position information of each target line, based on the preset signal transmission phase difference of the pre-arranged line, by using the neural network model for which the input data has been determined, wherein the neural network model takes minimizing the number of turning points, keeping the line length close to the preset length and/or minimizing the number of crossings between different lines as constraint conditions, and constructs a loss function based on the constraint conditions.
Illustratively, the signal transmission phase difference refers to the phase difference between two signals in a circuit. The signal transmission phase difference may be used to describe the amount of time offset between two signals, i.e., the amount of delay or advance of one signal relative to the other. The phase difference is typically expressed in degrees or radians. There are many possible ways of connecting the start point module and the end point module. From a circuit point of view, the various connection modes are equivalent. However, since the actual line lengths of the various connections differ, the phase difference of the signal transmitted from the start point module (the phase difference of the electrical signal), as observed at the end point module, differs significantly between connection modes. Therefore, both the length and the topology of a line correspond to a signal transmission phase difference.
In this embodiment, under the condition of signal transmission phase difference based on a preset pre-arranged line, the processing module calculates turning point position information of each target line by using a neural network model, so that the obtained target line and the signal transmission phase difference have an association relationship, and the end point module can determine identity information of the start point module based on the signal transmission phase difference and the association relationship.
The target line corresponds to the pre-arranged line, and after calculation by the neural network model, related information of the corresponding target line is obtained for each pre-arranged line, including position information of the turning points on the target line, such as the coordinates of the turning points. The wiring can then be performed specifically according to the positions of the turning points.
In this embodiment, the processing module constructs the constraint condition of the neural network model based on the turning points of the related lines, the line lengths, and the relationships between different lines. The method specifically comprises the step of constructing constraint conditions of the neural network model by minimizing the number of turning points of lines involved in training data, enabling the length of the lines to approach to a preset length and/or minimizing the number of different line crossings. The formed constraint conditions can enable the neural network model to be more intelligent, and the calculation result is optimized.
After the constraint conditions are determined, the processing module may determine a loss function of the neural network model based on them. The loss function is a function that maps a random event or the values of its related random variables to non-negative real numbers, representing the "risk" or "loss" of the random event. In application, the loss function is typically associated with an optimization problem as a learning criterion, i.e., the neural network model is solved and evaluated by minimizing the loss function.
And the wiring module is configured to perform wiring operation on the corresponding pre-arranged circuit based on the turning point position information to form a corresponding target circuit, wherein the signal transmission phase difference of the target circuit is respectively associated with the identity information of the starting point module and the identity information of the end point module.
Illustratively, the inflection point location information indicates the relative location of the inflection point on the target line, such as the coordinates of the inflection point on the target circuit board. The wiring module can rapidly and accurately perform wiring operation on the corresponding pre-arranged circuit based on the position information of the turning points to form a target circuit corresponding to the pre-arranged circuit. The target line is an actually wired line. The signal transmission phase difference of the target line is the preset signal transmission phase difference of the prearranged line.
Since the signal transmission phase difference of the target line is predetermined, and the neural network model performs calculation under the condition of this signal transmission phase difference, the end point module can obtain the information associated with the signal transmission phase difference, such as the identity information of the start point module, in advance. When the end point module receives a signal having the known signal transmission phase difference, the identity information of the start point module may be determined based on the signal transmission phase difference. Therefore, the start point module does not need to send a message to the end point module to announce its identity, which reduces signal processing complexity, saves bandwidth and improves data transmission efficiency.
In one embodiment of the present application, the location module is further configured to:
determining a corresponding position coordinate system based on the reference point on the target circuit board;
and determining the starting point coordinates of the starting point module and the ending point coordinates of the ending point module based on the position coordinate system.
In one embodiment of the present application, the input module is further configured to:
and constructing an input matrix based on the starting point coordinates, the ending point coordinates and the preset length of the prearranged circuit, wherein row vectors of the input matrix respectively correspond to the related information of each prearranged circuit.
In an embodiment of the present application, the signal transmission phase differences of all the prearranged wires are the same target phase difference, and the processing module is further configured to:
and carrying out regression operation on the neural network model based on the loss function on the condition that the signal transmission phase difference of the target line is the target phase difference.
In one embodiment of the present application, the wiring device further comprises a training module configured to:
and determining the loss function based on the first weight of the constraint condition with the minimum turning point number, the second weight of the constraint condition with the line length close to the preset length and the third weight of the constraint condition with the minimum different line crossing number.
In an embodiment of the present application, the turning point position information of the target line is an output matrix, and a data form of the output matrix is an indefinite length vector, where the indefinite length vector includes a start point coordinate of the start point module, a turning point coordinate of the target line, and an end point coordinate of the end point module that are sequentially arranged.
In one embodiment of the present application, the processing module is further configured to:
and determining the identity information of the starting point module and/or the identity information of the end point module based on the signal transmission phase difference of the target line.
The embodiment of the application also provides an electronic device, which comprises a processor and a memory, wherein the memory stores an executable program, and the processor executes the executable program to perform the steps of the method described above.
Embodiments of the present application also provide a storage medium carrying one or more computer programs which, when executed by a processor, implement the steps of the method as described above.
It should be appreciated that in embodiments of the present application, the processor may be a central processing unit (Central Processing Unit, CPU for short), another general purpose processor, a digital signal processor (Digital Signal Processing, DSP for short), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field programmable gate array (Field-Programmable Gate Array, FPGA for short) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like.
It should also be understood that the memory referred to in the embodiments of the present application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read Only Memory, ROM for short), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM for short), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (Direct Rambus RAM, DR RAM).
Note that when the processor is a general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, the memory (storage module) is integrated into the processor.
It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It should also be understood that the first, second, third, fourth, and various numerical numbers referred to herein are merely descriptive convenience and are not intended to limit the scope of the present application.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, a detailed description is not provided herein.
In various embodiments of the present application, the sequence number of each process does not mean the sequence of execution, and the execution sequence of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks and steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed method, apparatus and electronic device may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk), etc.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A neural network-based wiring method, comprising:
determining start point position information of a start point module and end point position information of an end point module corresponding to at least one pre-arranged line;
determining input data of a neural network model based on the start point position information and the end point position information;
calculating, by using the neural network model with the determined input data, turning point position information of each target line based on a preset signal transmission phase difference of the pre-arranged line, wherein the neural network model takes minimizing the number of turning points, keeping the line length close to a preset length, and/or minimizing the number of crossings between different lines as constraint conditions, and constructs a loss function based on the constraint conditions;
and performing a wiring operation on the corresponding pre-arranged line based on the turning point position information to form a corresponding target line, wherein the signal transmission phase difference of the target line is associated with the identity information of the start point module and the identity information of the end point module, respectively.
2. The neural network-based wiring method according to claim 1, wherein determining the start point position information of the start point module and the end point position information of the end point module corresponding to the at least one pre-arranged line includes:
determining a corresponding position coordinate system based on a reference point on the target circuit board;
and determining start point coordinates of the start point module and end point coordinates of the end point module based on the position coordinate system.
3. The neural network-based wiring method according to claim 2, wherein the determining input data of a neural network model based on the start point position information and the end point position information includes:
and constructing an input matrix based on the start point coordinates, the end point coordinates, and the preset length of the pre-arranged line, wherein row vectors of the input matrix respectively correspond to the related information of each pre-arranged line.
4. The neural network-based wiring method according to claim 1, wherein the signal transmission phase differences of all the pre-arranged lines are the same target phase difference, and the calculating, by using the neural network model with the determined input data, turning point position information of each target line based on the preset signal transmission phase difference of the pre-arranged line comprises:
and performing a regression operation with the neural network model based on the loss function, on the condition that the signal transmission phase difference of the target line is the target phase difference.
5. The neural network-based wiring method of claim 1, further comprising training the neural network model, comprising:
and determining the loss function based on a first weight of the constraint condition that the number of turning points is minimized, a second weight of the constraint condition that the line length is close to the preset length, and a third weight of the constraint condition that the number of crossings between different lines is minimized.
6. The neural network-based wiring method according to claim 1, wherein the turning point position information of the target line is an output matrix whose data form is an indefinite-length vector, and the indefinite-length vector includes, arranged in sequence, start point coordinates of the start point module, turning point coordinates of the target line, and end point coordinates of the end point module.
7. The neural network-based wiring method of claim 1, further comprising:
and determining the identity information of the start point module and/or the identity information of the end point module based on the signal transmission phase difference of the target line.
8. A neural network-based wiring device, comprising:
a position module configured to determine start point position information of a start point module and end point position information of an end point module corresponding to at least one pre-arranged line;
an input module configured to determine input data of a neural network model based on the start point position information and the end point position information;
a processing module configured to calculate, by using the neural network model with the determined input data, turning point position information of each target line based on a preset signal transmission phase difference of the pre-arranged line, wherein the neural network model takes minimizing the number of turning points, keeping the line length close to a preset length, and/or minimizing the number of crossings between different lines as constraint conditions, and a loss function is constructed based on the constraint conditions;
and a wiring module configured to perform a wiring operation on the corresponding pre-arranged line based on the turning point position information to form a corresponding target line, wherein the signal transmission phase difference of the target line is associated with the identity information of the start point module and the identity information of the end point module, respectively.
9. An electronic device comprising a processor and a memory, the memory having stored therein an executable program, wherein the processor executes the executable program to perform the steps of the method of any one of claims 1 to 7.
10. A storage medium carrying one or more computer programs which, when executed by a processor, implement the steps of the method of any of claims 1 to 7.
CN202410233627.5A 2024-03-01 2024-03-01 Routing method and device based on neural network, electronic equipment and storage medium Active CN117829083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410233627.5A CN117829083B (en) 2024-03-01 2024-03-01 Routing method and device based on neural network, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410233627.5A CN117829083B (en) 2024-03-01 2024-03-01 Routing method and device based on neural network, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117829083A true CN117829083A (en) 2024-04-05
CN117829083B CN117829083B (en) 2024-05-28

Family

ID=90510057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410233627.5A Active CN117829083B (en) 2024-03-01 2024-03-01 Routing method and device based on neural network, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117829083B (en)

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070088530A1 (en) * 2001-09-29 2007-04-19 The Boeing Company Adaptive distance field constraint for designing a route for a transport element
US7735048B1 (en) * 2003-11-24 2010-06-08 Cadence Design Systems, Inc. Achieving fast parasitic closure in a radio frequency integrated circuit synthesis flow
US20080015824A1 (en) * 2006-04-28 2008-01-17 Caterpillar Inc. Automatic hose and harness routing method and system
US20200025877A1 (en) * 2018-07-18 2020-01-23 Qualcomm Incorporated Object verification using radar images
US20200257958A1 (en) * 2019-02-07 2020-08-13 Samsung Electronics Co., Ltd. Optical device and optical neural network apparatus including the same
US20200380342A1 (en) * 2019-05-31 2020-12-03 XNOR.ai, Inc. Neural network wiring discovery
EP3772028A1 (en) * 2019-07-30 2021-02-03 Bayerische Motoren Werke Aktiengesellschaft Method and system for routing a plurality of vehicles using a neural network
US11803760B1 (en) * 2019-10-29 2023-10-31 Cadence Design Systems, Inc. Method and systems for combining neural networks with genetic optimization in the context of electronic component placement
DE102019217733A1 (en) * 2019-11-18 2021-05-20 Volkswagen Aktiengesellschaft Method for operating an operating system in a vehicle and operating system for a vehicle
US20210201145A1 (en) * 2019-12-31 2021-07-01 Nvidia Corporation Three-dimensional intersection structure prediction for autonomous driving applications
KR102131414B1 (en) * 2019-12-31 2020-07-08 한국산업기술시험원 System for the energy saving pre-cooling/heating training of an air conditioner using deep reinforcement learning algorithm based on the user location, living climate condition and method thereof
US20210209939A1 (en) * 2020-12-08 2021-07-08 Harbin Engineering University Large-scale real-time traffic flow prediction method based on fuzzy logic and deep LSTM
CN112528591A (en) * 2020-12-11 2021-03-19 电子科技大学 Automatic PCB wiring method based on joint Monte Carlo tree search
CN113673196A (en) * 2021-08-15 2021-11-19 上海立芯软件科技有限公司 Global wiring optimization method based on routability prediction
CN114462594A (en) * 2022-01-11 2022-05-10 广东轩辕网络科技股份有限公司 Neural network training method and device, electronic equipment and storage medium
KR102513647B1 (en) * 2022-06-29 2023-03-24 디지털파워넷 주식회사 IoT-based electric safety remote inspection and control device using a phase difference-free sensor unit
WO2024011876A1 (en) * 2022-07-14 2024-01-18 东南大学 Method for predicting path delay of digital integrated circuit after wiring
CN115221833A (en) * 2022-07-27 2022-10-21 苏州浪潮智能科技有限公司 PCB wiring sorting method, system and device and readable storage medium
CN115329705A (en) * 2022-07-29 2022-11-11 全智芯(上海)技术有限公司 Layout method and device of semiconductor device, readable storage medium and terminal
WO2024040941A1 (en) * 2022-08-25 2024-02-29 华为云计算技术有限公司 Neural architecture search method and device, and storage medium
CN116070575A (en) * 2023-01-12 2023-05-05 广东工业大学 Chip wiring optimization method and software system
CN116151324A (en) * 2023-02-28 2023-05-23 东南大学 RC interconnection delay prediction method based on graph neural network
CN117473384A (en) * 2023-10-30 2024-01-30 南方电网数字电网研究院有限公司 Power grid line safety constraint identification method, device, equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DONG YUAN GE et al.: "Micro drill defect detection with hybrid BP networks, clusters selection and crossover", SPRINGER LINK, 2 February 2024 (2024-02-02) *
MURPHY et al.: "Neural network fitness function for optimization-based approaches to PCB design automation", MIT LIBRARIES, 31 December 2020 (2020-12-31) *
LI Qiang, SHI Xinsheng, GONG Qingwu: "Review of the application of new fault location technologies for transmission lines", High Voltage Apparatus, no. 01, 25 February 2005 (2005-02-25) *
GUO Xiaodan; MENG Qiao; LIANG Yong: "Implementation of a bit-stream Sigmoid function based on Σ-Δ modulation and its application in a 3-D spatial discrimination network", Acta Electronica Sinica, no. 05, 15 May 2015 (2015-05-15) *

Also Published As

Publication number Publication date
CN117829083B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN113033811B (en) Processing method and device for two-quantum bit logic gate
US11176446B2 (en) Compositional prototypes for scalable neurosynaptic networks
CN111950225A (en) Chip layout method and device, storage medium and electronic equipment
CN109597816A (en) Parameter verification method, apparatus, computer storage medium and embedded device
CN111461885B (en) Consensus network management method, device, computer and readable storage medium
CN112995172B (en) Communication method and communication system for butt joint between Internet of things equipment and Internet of things platform
CN113673688A (en) Weight generation method, data processing method and device, electronic device and medium
CN115017178A (en) Training method and device for data-to-text generation model
CN113467487A (en) Path planning model training method, path planning device and electronic equipment
Wang et al. Hybrid filter design of fault detection for networked linear systems with variable packet dropout rate
CN117829083B (en) Routing method and device based on neural network, electronic equipment and storage medium
CN113839830B (en) Method, device and storage medium for predicting multiple data packet parameters
KR20210127084A (en) Method and apparatus for learning stochastic inference models between multiple random variables with unpaired data
Wei et al. Quasi-consensus control for stochastic multiagent systems: When energy harvesting constraints meet multimodal FDI attacks
CN115964984B (en) Method and device for balanced winding of digital chip layout
CN115690449A (en) Image annotation method based on local feature enhancement and parallel decoder
CN115129642A (en) Chip bus delay adjusting method, electronic device and medium
CN114781630A (en) Weight data storage method and device, chip, electronic equipment and readable medium
CN112487931A (en) Method, device, readable medium and electronic equipment for resisting attack
CN117349033B (en) Brain simulation processing method and device, electronic equipment and computer readable storage medium
CN116931619B (en) Temperature control method and system for laser
GB2618079A (en) System and apparatus suitable for Utilization of neural network based approach in association with integer programming,and a processing method in association
CN116011563B (en) High-performance pulse transmission simulation method and device for pulse relay
Tanaka et al. Compounding procedures for a weighted item collecting problem with a cost penalty term in directed bipartite structures
He et al. Optimal periodic scheduling for remote state estimation under sensor energy constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant