CN117972812A - Engineering drawing layout optimization method, device, equipment and medium - Google Patents

Engineering drawing layout optimization method, device, equipment and medium

Info

Publication number
CN117972812A
Authority
CN
China
Prior art keywords
layout
layout optimization
parameter
optimization model
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410346496.1A
Other languages
Chinese (zh)
Other versions
CN117972812B (en)
Inventor
孙运雷
代鹏
韩冲
徐可
陈勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Zhongshida Science And Technology Education Group Co ltd
China University of Petroleum East China
Original Assignee
Qingdao Zhongshida Science And Technology Education Group Co ltd
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Zhongshida Science And Technology Education Group Co ltd, China University of Petroleum East China filed Critical Qingdao Zhongshida Science And Technology Education Group Co ltd
Priority to CN202410346496.1A
Publication of CN117972812A
Application granted
Publication of CN117972812B
Legal status: Active (granted)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/10: Geometric CAD
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00: Details relating to CAD techniques
    • G06F2111/20: Configuration CAD, e.g. designing by assembling or positioning modules selected from libraries of predesigned modules

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a method, a device, equipment and a medium for optimizing engineering drawing layout, which relate to the technical field of engineering drawing layout and comprise the following steps: updating a first parameter of a value network of an initial layout optimization model by using a drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index to obtain an updated value network parameter; updating a second parameter of the strategy network of the initial layout optimization model by using the updated value network parameter to obtain an updated strategy network parameter; acquiring a layout optimization model in a current state as a preset layout optimization model; and adjusting the position information and the size information of each graphic element in the engineering drawing to be optimized in the drawing layout by utilizing a target strategy function in a preset layout optimization model, and outputting the target engineering drawing after layout optimization. By defining engineering drawing layout optimization evaluation indexes and introducing a deep reinforcement learning algorithm, the drawing layout generated by the original algorithm is optimized and improved.

Description

Engineering drawing layout optimization method, device, equipment and medium
Technical Field
The invention relates to the technical field of engineering drawing layout, and in particular to an engineering drawing layout optimization method, device, equipment and medium.
Background
An engineering drawing is a schematic document for product design and manufacture, containing information such as the two-dimensional representation, dimension annotations and material details of the parts. In recent years, although three-dimensional CAD has developed rapidly in domestic and foreign markets, the inability to automatically generate standard-compliant two-dimensional engineering drawings from three-dimensional models remains a key pain point for the popularization of three-dimensional design software. At present, drawing layouts generated by automatic layout algorithms based on exact calculation suffer from many problems, such as overlapping primitives, unbalanced primitive spacing and poor alignment; such layouts do not meet drawing standards and aesthetic requirements, and can only be delivered and used after manual adjustment.
In summary, how to realize automatic two-dimensional drawing layout optimization, improve layout effect, improve drawing aesthetic property, and reduce acquisition cost of high-quality drawings is a technical problem to be solved in the field.
Disclosure of Invention
In view of the above, the invention aims to provide an engineering drawing layout optimization method, device, equipment and medium that can realize automatic two-dimensional drawing layout optimization, improve the layout effect, improve drawing attractiveness and reduce the acquisition cost of high-quality drawings. The specific scheme is as follows:
in a first aspect, the application discloses an engineering drawing layout optimization method based on deep reinforcement learning, which comprises the following steps:
Updating a first parameter of a value network of an initial layout optimization model by using a drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index to obtain an updated value network parameter;
Calculating a policy gradient of the initial layout optimization model by using the updated value network parameters so as to update a second parameter of a policy network of the initial layout optimization model to obtain updated policy network parameters;
Skipping to execute the step of updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index until the updated strategy network parameter meets a preset stopping condition, and acquiring the layout optimization model in the current state as a preset layout optimization model;
Inputting the engineering drawing to be optimized into the preset layout optimization model so as to adjust the position information and the size information of each graphic element in the engineering drawing to be optimized in the drawing layout by utilizing the target strategy function in the preset layout optimization model, and outputting the target engineering drawing after layout optimization.
Optionally, before updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index to obtain the updated value network parameter, the method further includes:
Defining a state space and an action space; the state space comprises first coordinate position information and first size information of the graphic element in a global coordinate system, and the action space comprises second coordinate position information and second size information of the graphic element in a drawing layout;
Constructing an initial layout optimization model based on a strategy network and a value network; the policy network is used for outputting the second coordinate position information and the second size information of the primitive in the current state; the value network is used for evaluating the quality of the current action output by the strategy network;
Defining a reward function constructed based on a layout principle and a layout optimization evaluation index; the reward function is used for carrying out layout optimization on the drawing layout.
Optionally, before defining the reward function constructed based on the layout principle and the layout optimization evaluation index, the method further includes:
setting layout principles including a chart type rule, a view positioning rule, a node diagram positioning rule, a detail table positioning rule, a pipe orifice table positioning rule, a standard title bar positioning rule and a technical requirement positioning rule;
Setting layout optimization evaluation indexes including alignment reward indexes, primitive overlapping degree reward indexes, primitive quantity balance reward indexes and view position reward indexes between adjacent primitives.
Optionally, updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index to obtain an updated value network parameter includes:
Inputting a drawing layout data training set into an initial layout optimization model, so that the initial layout optimization model selects and executes current actions aiming at optimizing drawing layout data based on the reward function and the current state in each time step, and stores the current state, the current actions, the reward function and the next state into an experience buffer area;
repeatedly executing the initial layout optimization model to select and execute the current action for optimizing the drawing layout data based on the reward function and the current state in each time step;
and calculating and updating the first parameter of the value network by using the sample information in the experience buffer zone to obtain an updated value network parameter.
Optionally, before storing the current state, the current action, the reward function, and the next state in an experience buffer, the method further includes:
And randomly initializing a first parameter of the value network and a second parameter of the strategy network, and creating an experience buffer for storing sample information of the current state, the current action, the reward function and the next state of different primitives.
Optionally, before updating the first parameter of the value network of the initial layout optimization model by using the reward function constructed based on the layout principle and the layout optimization evaluation index and using the drawing layout data training set, the method further includes:
Generating an original drawing layout based on the three-dimensional construction model information through an original drawing layout algorithm, and acquiring drawing layout data conforming to the expected layout optimization effect by utilizing manual tuning;
Merging drawing element layout principles and layout optimization evaluation indexes into the drawing layout data to obtain optimized drawing layout data, and skipping to execute the steps of generating original drawing layout based on the three-dimensional construction model information and through an original drawing layout algorithm;
And carrying out layout marking on the optimized drawing layout data in a manual marking mode to construct a drawing layout data training set meeting the layout optimization training requirement.
Optionally, the calculating the policy gradient of the initial layout optimization model by using the updated value network parameter to update the second parameter of the policy network of the initial layout optimization model to obtain the updated policy network parameter includes:
And calculating the strategy gradient of the initial layout optimization model by using the updated value network parameters, and updating the second parameters of the strategy network by using a gradient ascent method to obtain updated strategy network parameters.
In a second aspect, the application discloses an engineering drawing layout optimization device based on deep reinforcement learning, which comprises:
the first parameter updating module is used for updating the first parameter of the value network of the initial layout optimization model by utilizing the drawing layout data training set and a reward function constructed based on the layout principle and the layout optimization evaluation index so as to obtain updated value network parameters;
A second parameter updating module, configured to calculate a policy gradient of the initial layout optimization model by using the updated value network parameter, so as to update a second parameter of a policy network of the initial layout optimization model, so as to obtain an updated policy network parameter;
The model training module is used for skipping and executing the step of updating the first parameter of the value network of the initial layout optimization model by utilizing the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index until the updated strategy network parameter meets the preset stopping condition, and acquiring the layout optimization model in the current state as a preset layout optimization model;
The drawing layout optimization module is used for inputting the engineering drawing to be optimized into the preset layout optimization model so as to adjust the position information and the size information of each graphic element in the engineering drawing to be optimized in the drawing layout by utilizing the target strategy function in the preset layout optimization model, and outputting the target engineering drawing after layout optimization.
In a third aspect, the present application discloses an electronic device, comprising:
A memory for storing a computer program;
and the processor is used for executing the computer program to realize the steps of the engineering drawing layout optimization method based on the deep reinforcement learning.
In a fourth aspect, the present application discloses a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the aforementioned engineering drawing layout optimization method based on deep reinforcement learning.
The application discloses an engineering drawing layout optimization method based on deep reinforcement learning, which comprises the following steps: updating a first parameter of a value network of an initial layout optimization model by using a drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index to obtain an updated value network parameter; calculating a policy gradient of the initial layout optimization model by using the updated value network parameters so as to update a second parameter of a policy network of the initial layout optimization model to obtain updated policy network parameters; skipping to execute the step of updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index until the updated strategy network parameter meets a preset stopping condition, and acquiring the layout optimization model in the current state as a preset layout optimization model; inputting the engineering drawing to be optimized into the preset layout optimization model so as to adjust the position information and the size information of each graphic element in the engineering drawing to be optimized in the drawing layout by utilizing the target strategy function in the preset layout optimization model, and outputting the target engineering drawing after layout optimization. Therefore, by defining engineering drawing layout optimization evaluation indexes, a deep reinforcement learning algorithm is introduced to optimize the layout generated by an original algorithm, and then a final drawing is obtained by two-dimensional rendering of a layout optimization result, so that the layout effect is improved, the drawing attractiveness is improved, and the acquisition cost of a high-quality drawing is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an engineering drawing layout optimization method based on deep reinforcement learning;
FIG. 2 is a schematic diagram of an engineering drawing layout generated by an original algorithm of the present application;
FIG. 3 is a flow chart of a layout optimization model training and test deployment method disclosed by the application;
FIG. 4 is a schematic structural diagram of an engineering drawing layout optimizing device based on deep reinforcement learning;
fig. 5 is a block diagram of an electronic device according to the present disclosure.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
An engineering drawing is a schematic document for product design and manufacture, containing information such as the two-dimensional representation, dimension annotations and material details of the parts. In recent years, although three-dimensional CAD has developed rapidly in domestic and foreign markets, the inability to automatically generate standard-compliant two-dimensional engineering drawings from three-dimensional models remains a key pain point for the popularization of three-dimensional design software. At present, drawing layouts generated by automatic layout algorithms based on exact calculation suffer from many problems, such as overlapping primitives, unbalanced primitive spacing and poor alignment; such layouts do not meet drawing standards and aesthetic requirements, and can only be delivered and used after manual adjustment.
Therefore, the application provides an engineering drawing layout optimization scheme based on deep reinforcement learning, which can realize automatic two-dimensional drawing layout optimization, improve the layout effect, improve drawing attractiveness and reduce the acquisition cost of high-quality drawings.
Referring to fig. 1, the embodiment of the invention discloses an engineering drawing layout optimization method based on deep reinforcement learning, which comprises the following steps:
step S11: and updating the first parameter of the value network of the initial layout optimization model by utilizing the drawing layout data training set and a reward function constructed based on the layout principle and the layout optimization evaluation index so as to obtain updated value network parameters.
In this embodiment, the drawing layout data training set is input to the initial layout optimization model, so that at each time step the initial layout optimization model selects and executes a current action for optimizing the drawing layout data based on the reward function and the current state, and stores the current state, the current action, the reward and the next state in an experience buffer; this selection and execution of actions is repeated at each time step; the first parameter of the value network is then calculated and updated by using the sample information in the experience buffer, to obtain an updated value network parameter. It will be appreciated that the policy function is updated afterwards: the policy gradient is calculated using the parameters of the value function, and the parameters of the policy function are updated using a gradient ascent method. Specifically, a batch of training examples from the drawing layout data training set is randomly selected to start training. In each time step, an action is selected according to the current state and executed. At the same time, the state, action, reward and next state are stored in the experience buffer. A batch of samples is then randomly drawn from the experience buffer, a target Q value is calculated, and the first parameter of the value network, which belongs to the initial layout optimization model, is updated by gradient descent to obtain an updated value network parameter.
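As an illustration of the value network (critic) update just described, the sketch below uses a one-step temporal-difference target computed from the replayed samples; the PyTorch framework, the network size, the discount factor and the absence of a separate target network are assumptions made for this example, not details taken from the disclosure.

```python
import random

import torch
import torch.nn as nn

class Critic(nn.Module):
    """Value network: scores a (state, action) pair for the drawing layout."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def update_critic(critic, policy, buffer, optimizer, batch_size=64, gamma=0.99):
    """One gradient-descent step on the value network from replayed samples."""
    batch = random.sample(list(buffer), batch_size)
    s, a, r, s2 = zip(*batch)
    s, a, s2 = torch.stack(s), torch.stack(a), torch.stack(s2)
    r = torch.as_tensor(r, dtype=torch.float32).unsqueeze(-1)
    with torch.no_grad():
        # target Q value: immediate layout reward plus discounted value of the next state
        target_q = r + gamma * critic(s2, policy(s2))
    loss = nn.functional.mse_loss(critic(s, a), target_q)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```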
In this embodiment, an Actor-Critic network, that is, the initial layout optimization model, is first constructed. Specifically, the policy network and the value network are represented by deep neural networks, their network parameters are initialized, and together they form the training framework of the Actor-Critic method. The first parameter of the value network of the initial layout optimization model is then updated by using the drawing layout data training set and a reward function constructed from the layout principles and the layout optimization evaluation indexes. It will be appreciated that, in order to train the layout optimization model well, the initial layout optimization model is trained with the drawing layout data training set and a reward function constructed in advance from the layout principles and layout optimization evaluation indexes that satisfy the drawing layout requirements, so that the reward function provides a feedback mechanism to guide the learning and optimization of the layout optimization model. Specifically, the reward function plays the following roles. (1) Guiding the layout optimization model toward the desired goal: the reward function defines the layout optimization objective, or reward, that the model should pursue during training; by setting an appropriate reward function, the layout optimization model can learn which behaviors are beneficial and adjust its parameters accordingly to improve performance on the particular task. (2) Measuring the performance of the layout optimization model: the reward function can serve as a quantitative index for evaluating the performance of the layout optimization model during training; it reflects how well the model succeeds at the specific task, helping researchers and developers understand the strengths and weaknesses of the model and improve it. (3) Optimizing the learning process of the model: the reward function can guide the learning process so that the model converges to the optimal solution more quickly; by adjusting the shape and parameters of the reward function, the exploration and exploitation behavior of the model can be influenced, guiding the model to make better-informed decisions during training. (4) Solving the reinforcement learning problem: in reinforcement learning, the reward function is a key component; it defines the reward the model receives at each time step, and, depending on the size and nature of the rewards, the model learns how to make optimal decisions that maximize the long-term cumulative reward. (5) Multi-task learning: in multi-task learning, each task may have its own reward function; by considering multiple reward functions simultaneously, the model can learn how to trade off and optimize between different tasks, improving overall performance. It can be seen that the reward function plays a guiding, evaluating and optimizing role in model training: it helps the model define its targets, measure its performance and make better decisions during learning, thereby improving the accuracy and generalization capability of the model.
In this embodiment, before updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index to obtain the updated value network parameter, the method further includes: defining a state space and an action space; the state space comprises first coordinate position information and first size information of the graphic element in a global coordinate system, and the action space comprises second coordinate position information and second size information of the graphic element in a drawing layout; constructing an initial layout optimization model based on a strategy network and a value network; the policy network is used for outputting the second coordinate position information and the second size information of the primitive in the current state; the value network is used for evaluating the quality of the current action output by the strategy network; defining a reward function constructed based on a layout principle and a layout optimization evaluation index; the reward function is used for carrying out layout optimization on the drawing layout. It will be appreciated that prior to model training, a state space and an action space are defined, wherein the position information and size information in the state space are relative to a global coordinate system describing the position and size of the primitives throughout the scene. While the location information and size information in the action space is relative to the layout, describing the location and size of the primitives in a particular layout or container. The position information and the size information in the state space are generally used to represent static properties of the graphic element, such as the position, size, shape, etc. of the object, for graphic rendering, collision detection, etc. And the position information and the size information in the action space are mainly used for describing dynamic behaviors of the graphic elements, such as movement, scaling, rotation and the like, and are related to user interaction and animation. The location information and size information in the state space are typically fixed and updated only when the properties of the primitives change. The position information and the size information in the action space may change frequently with time or user interaction, so that the position and the size of each primitive in the current state are output by the strategy network in the initial layout optimization model, and the value network is used for evaluating the quality of the action output by the strategy network.
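To make the two spaces concrete, a minimal sketch is given below; the field names, the dataclass representation and the flat-vector encoding are illustrative assumptions rather than the disclosed data structures.

```python
from dataclasses import dataclass

@dataclass
class PrimitiveState:
    """Static description of a primitive in the global coordinate system."""
    x: float  # abscissa of the top-left vertex
    y: float  # ordinate of the top-left vertex
    w: float  # horizontal side length
    h: float  # vertical side length

@dataclass
class LayoutAction:
    """New position and size assigned to a primitive within the drawing layout."""
    x: float
    y: float
    w: float
    h: float

def encode_state(primitives: list[PrimitiveState]) -> list[float]:
    """Flatten the per-primitive states into one feature vector for the policy network."""
    vec: list[float] = []
    for p in primitives:
        vec.extend([p.x, p.y, p.w, p.h])
    return vec
```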
In this embodiment, before defining the reward function constructed based on the layout principle and the layout optimization evaluation index, the method further includes: setting layout principles including a chart type rule, a view positioning rule, a node diagram positioning rule, a detail table positioning rule, a pipe orifice table positioning rule, a standard title bar positioning rule and a technical requirement positioning rule; and setting layout optimization evaluation indexes including an alignment reward index between adjacent primitives, a primitive overlap degree reward index, a primitive quantity balance reward index and a view position reward index. It will be appreciated that the layout principles are set before the reward function is constructed: (1) Chart type rule: common drawing sheet formats are A1, A1×1.25, A1×1.5, A2, A3, A4 and A2×1.5; the actual layout area is obtained by subtracting the drawing frame. (2) View positioning rule: when a drawing is drawn manually, the views are usually located at the center of the drawing; when one drawing contains several views, the relative relationship between views is usually either an up-down relationship or a left-right relationship. (3) Node diagram positioning rule: typically, each device contains one or more node diagrams; in manual drawing, the nodes are placed on a blank area of the drawing without overlapping, usually below the views. (4) Detail table positioning rule: following manual drawing practice, the detail table is usually located at the bottom of the drawing; the invention therefore classifies the detail table placement as laid out either from the left side or from the right side. (5) Pipe orifice table positioning rule: the pipe orifice table is likewise placed in one of two ways, either combined with the detail table or placed separately. (6) Standard title bar positioning rule: the standard title bar contains information such as the equipment name, material, scale, drawing number and construction unit, and in manual drawing it is usually placed in the lower right corner of the drawing. (7) Technical requirement positioning rule: the technical requirements mainly include the design, manufacturing, inspection and acceptance requirements of the equipment together with detailed design data, and in manual drawing they are usually placed in the upper right corner of the drawing. The reward indexes for evaluating engineering drawing layout optimization are then set as follows. First, a rectangular box in the engineering drawing layout is denoted R(x, y, w, h), where (x, y) are the coordinates of the top-left vertex, w is the length of the horizontal side of the rectangle and h is the length of the vertical side. The alignment reward index between adjacent primitives is then defined:
where n denotes the number of primitives, S denotes the set of primitives adjacent to primitive i, k denotes the number of primitives adjacent to primitive i, and the pairwise term denotes the alignment reward function between two adjacent primitives. The value of the final reward function lies between 0 and 1, with a larger value indicating better alignment.
For primitives in the horizontal direction, the pairwise alignment reward depends on the ordinate of the horizontal symmetry axis of primitive i, the ordinate of the horizontal symmetry axis of primitive j, and a threshold in the vertical direction.
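A sketch of this pairwise term is given below; treating it as a 0/1 indicator that fires when the ordinates of the two horizontal symmetry axes differ by no more than the vertical threshold is an assumption, since the exact formula is not reproduced here.

```python
def horizontal_alignment_reward(ri, rj, eps_v: float) -> float:
    """Pairwise alignment reward for two horizontally arranged primitives.

    ri and rj are rectangles R(x, y, w, h) with a top-left vertex and side lengths;
    eps_v is the threshold in the vertical direction.
    """
    ci = ri.y + ri.h / 2.0  # ordinate of the horizontal symmetry axis of primitive i
    cj = rj.y + rj.h / 2.0  # ordinate of the horizontal symmetry axis of primitive j
    return 1.0 if abs(ci - cj) <= eps_v else 0.0
```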
Primitive overlap degree reward index: here n denotes the number of primitives, S denotes the set of primitives adjacent to primitive i, k denotes the number of primitives adjacent to primitive i, and the remaining terms denote the area of primitive i, the area of primitive j and the overlapping area of the two rectangles. The negative sign in this reward function indicates that the smaller the overlapping area, the greater the reward. The value of the final reward function lies between 0 and 1, with a larger value indicating a smaller overlapping area.
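The overlapping area of two rectangles can be computed with the standard axis-aligned intersection shown below; the way the overlap is normalised into a [0, 1] reward is an assumption for illustration, since the published formula is not reproduced here.

```python
def overlap_area(ri, rj) -> float:
    """Overlapping area of two axis-aligned rectangles R(x, y, w, h)."""
    dx = min(ri.x + ri.w, rj.x + rj.w) - max(ri.x, rj.x)
    dy = min(ri.y + ri.h, rj.y + rj.h) - max(ri.y, rj.y)
    return max(dx, 0.0) * max(dy, 0.0)

def overlap_reward(ri, rj) -> float:
    """Larger reward for a smaller overlap; normalisation by the smaller area is an assumption."""
    return 1.0 - overlap_area(ri, rj) / min(ri.w * ri.h, rj.w * rj.h)
```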
Primitive quantity balance reward index: here n denotes the number of primitives, k denotes the number of primitives adjacent to primitive i, S is the set of primitives adjacent to primitive i, d(i, j) denotes the Euclidean distance between primitive i and primitive j, and a distance threshold is given; the closer the spacing between primitives is to the threshold, the larger the value of the reward function. The value of the final reward function lies between 0 and 1, with a larger value indicating that the spacing between primitives is closer to the threshold. The Euclidean distance d(i, j) is calculated from the abscissa, ordinate, horizontal side length and vertical side length of the two primitives, i.e. from (xi, yi, wi, hi) and (xj, yj, wj, hj).
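A sketch of the spacing term is given below, assuming the Euclidean distance is taken between the primitive centers (consistent with the coordinates and side lengths listed above) and that the reward decays linearly as the spacing deviates from the threshold; both choices are assumptions, since the published formulas are not reproduced here.

```python
import math

def center_distance(ri, rj) -> float:
    """Euclidean distance between the centers of two rectangles R(x, y, w, h)."""
    cxi, cyi = ri.x + ri.w / 2.0, ri.y + ri.h / 2.0
    cxj, cyj = rj.x + rj.w / 2.0, rj.y + rj.h / 2.0
    return math.hypot(cxi - cxj, cyi - cyj)

def spacing_reward(ri, rj, d_target: float) -> float:
    """Reward in [0, 1] that peaks when the spacing equals the distance threshold."""
    d = center_distance(ri, rj)
    return max(0.0, 1.0 - abs(d - d_target) / d_target)
```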
View position reward index: the rectangular box R1(x1, y1, w1, h1) represents the position of the front view, and the rectangular box R2(x2, y2, w2, h2) represents the position of the left view or top view.
For the case where the front view and the left view coexist: first, the left view must lie to the right of the front view; second, the horizontal center lines of the two rectangles must coincide.
For the case where the front view and the top view coexist: first, the top view must lie below the front view; second, the vertical center lines of the two rectangles must coincide.
Evaluation function: for the case where the front view and the left view coexist, the value range of the evaluation function is between 0 and 1; if the front view and the left view meet the above requirements, the value of the function is 1, otherwise it is 0. For the case where the front view and the top view coexist, the value range of the function is likewise between 0 and 1; if the positional relationship between the front view and the top view meets the requirement, the value of the function is 1, otherwise it is 0.
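The two evaluation functions above can be written as a single 0/1 check, sketched below; the coordinate convention and the small tolerance on the center-line comparison are assumptions added for the example.

```python
def view_position_reward(front, other, kind: str, tol: float = 1e-3) -> float:
    """0/1 evaluation of the positional rules for a view pair.

    front and other are rectangles R(x, y, w, h) with a top-left origin and y increasing
    downwards; kind is "left" or "top", naming the view paired with the front view; tol is
    a small tolerance for comparing center lines (an assumption for floating-point input).
    """
    if kind == "left":
        right_of = other.x >= front.x + front.w  # left view lies to the right of the front view
        centered = abs((front.y + front.h / 2) - (other.y + other.h / 2)) <= tol  # horizontal center lines coincide
        return 1.0 if right_of and centered else 0.0
    if kind == "top":
        below = other.y >= front.y + front.h     # top view lies below the front view
        centered = abs((front.x + front.w / 2) - (other.x + other.w / 2)) <= tol  # vertical center lines coincide
        return 1.0 if below and centered else 0.0
    raise ValueError("kind must be 'left' or 'top'")
```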
In this embodiment, before storing the current state, the current action, the reward function, and the next state in the experience buffer, the method further includes: and randomly initializing a first parameter of the value network and a second parameter of the strategy network, and creating an experience buffer for storing sample information of the current state, the current action, the reward function and the next state of different primitives. It can be understood that before training the model, the first parameters of the value network and the second parameters of the strategy network in the initial layout optimization model are initialized at random, and meanwhile, an experience buffer zone is created for storing the state, the action, the reward function and the next state in the layout optimization process.
In this embodiment, before updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index, the method further includes: generating an original drawing layout based on the three-dimensional construction model information through an original drawing layout algorithm, and acquiring drawing layout data conforming to the expected layout optimization effect by manual tuning; merging the drawing element layout principles and layout optimization evaluation indexes into the drawing layout data to obtain optimized drawing layout data, and returning to the step of generating the original drawing layout based on the three-dimensional construction model information through the original drawing layout algorithm; and carrying out layout marking on the optimized drawing layout data by manual labeling to construct a drawing layout data training set meeting the layout optimization training requirement. It may be appreciated that, before the first parameter is updated, the method further includes obtaining a drawing layout data training set. Specifically, in the first step, a drawing layout is generated from the three-dimensional construction model information by an original algorithm, for example an existing drawing layout algorithm for CAD models; however, the drawing layout produced by the original algorithm suffers from unbalanced composition, irregular composition and non-standard view placement, as shown in fig. 2.
Unbalanced composition: when the primitives are sparse, the layout generated by the original algorithm concentrates the primitives and leaves large blank areas, so the visual distribution is unbalanced compared with a manually drawn layout. For example, the node diagram is placed too close to the view, leaving a large blank below it. By contrast, a manually drawn drawing distributes the primitives more evenly, looks more attractive as a whole and leaves no large blank areas. In addition, the original layout places primitives too close together, so that after rendering the primitives may overlap and their annotations may be confused, which can lead to misreading of the drawing; an overly compact layout also causes visual fatigue for the reader.
Irregular composition: in the layout generated by the original algorithm, because the primitives are laid out close to the front view, the node diagrams of the rendered engineering drawing lack horizontal alignment and are not as regular as in a manually drawn drawing.
Non-standard view placement: for a layout containing a front view and a left view, the layout computed by the original algorithm may place the left view below the front view, whereas in manual drawing the left view is usually placed to the right of the front view. For a layout containing a front view and a top view, the layout computed by the original algorithm may place the top view to the right of the front view, whereas in manual drawing the top view is usually placed below the front view. Meanwhile, the layout generated by the original algorithm shows obvious misalignment of the center lines between views.
Therefore, the original drawing layout obtained by the original algorithm is manually tuned to obtain drawing layout data that meets the expected layout optimization effect. The drawing element layout principles and the core layout optimization evaluation indexes are then integrated into the engineering drawing layout data. Finally, the steps of generating drawing layout data and fusing the layout principles and core evaluation indexes into the layout data are repeated, and a drawing layout data training set that meets the layout optimization training requirements is constructed by manual labeling.
Step S12: and calculating the strategy gradient of the initial layout optimization model by using the updated value network parameters so as to update the second parameters of the strategy network of the initial layout optimization model, thereby obtaining updated strategy network parameters.
In this embodiment, the updated value network parameter is used to calculate the policy gradient of the initial layout optimization model, and the gradient ascent method is used to update the second parameter of the policy network, so as to obtain the updated policy network parameter. It can be understood that the policy gradient is calculated using the value function parameters and the second parameter of the policy network is updated by gradient ascent, so that the quality of the actions output by the policy network can be evaluated by the value network; the policy network is thus continuously corrected toward the training target of outputting the optimal drawing layout, and the updated policy network parameter is obtained.
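A sketch of this update is shown below, with gradient ascent implemented as gradient descent on the negated critic value; the deterministic policy formulation and the PyTorch-style optimizer interface are assumptions for illustration.

```python
def update_actor(policy, critic, states, optimizer):
    """One gradient-ascent step on the policy network, guided by the value network."""
    actions = policy(states)                  # proposed positions and sizes for the primitives
    objective = critic(states, actions).mean()
    loss = -objective                         # gradient ascent == gradient descent on the negated objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return objective.item()
```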
Step S13: and skipping to execute the step of updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index until the updated strategy network parameter meets the preset stopping condition, and acquiring the layout optimization model in the current state as the preset layout optimization model.
In this embodiment, after the second parameter is updated to obtain the new policy network parameter, the process jumps back to the step of updating the first parameter of the value network by using the drawing layout data training set and the reward function, so that the updates of the first parameter of the value network and the second parameter of the policy network are iterated repeatedly until the updated policy network parameter reaches the preset stopping condition; the updating of the two parameters then stops, and the layout optimization model at this point, which contains the target policy network, is obtained as the preset layout optimization model. The preset stopping condition may be, for example, a preset number of iterations.
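Putting the two updates together, a minimal training loop might look like the sketch below; it reuses the update_critic and update_actor sketches above, and the environment interface, buffer size and iteration-count stopping condition are illustrative assumptions.

```python
import random
from collections import deque

import torch

def train(env, policy, critic, actor_opt, critic_opt,
          max_iterations: int = 10_000, batch_size: int = 64):
    """Alternate the critic and actor updates until the preset stopping condition is met."""
    buffer = deque(maxlen=100_000)             # experience buffer of (state, action, reward, next_state)
    state = env.reset()
    for _ in range(max_iterations):            # preset stopping condition: fixed number of iterations
        action = policy(state)
        next_state, reward = env.step(action)  # reward comes from the layout reward function
        buffer.append((state, action, reward, next_state))
        state = next_state
        if len(buffer) >= batch_size:
            update_critic(critic, policy, buffer, critic_opt, batch_size)  # first parameter (value network)
            batch_states = torch.stack([s for s, _, _, _ in random.sample(list(buffer), batch_size)])
            update_actor(policy, critic, batch_states, actor_opt)          # second parameter (policy network)
    return policy, critic
```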
Step S14: inputting the engineering drawing to be optimized into the preset layout optimization model so as to adjust the position information and the size information of each graphic element in the engineering drawing to be optimized in the drawing layout by utilizing the target strategy function in the preset layout optimization model, and outputting the target engineering drawing after layout optimization.
In this embodiment, the preset layout optimization model and its layout optimization results are tested. Specifically, the performance of layout optimization is evaluated by testing with the trained strategy function. The tested preset layout optimization model is then deployed alongside the original algorithm to optimize the original layout, improving efficiency and quality. When a drawing generation request arrives, the three-dimensional component information is first input into software or a device equipped with the preset layout optimization model, and the engineering drawing to be optimized is generated by the original algorithm in that software or device; alternatively, the three-dimensional component information referenced by the request is retrieved directly from a preset library, and the software or device equipped with the preset layout optimization model performs the three-dimensional to two-dimensional drawing layout conversion to generate the engineering drawing to be optimized. The software or device then sends the engineering drawing to be optimized to the preset layout optimization model, which further optimizes the drawing layout and generates an optimized target engineering drawing that meets the drawing layout requirements. This removes the need for complex manual adjustment of the original layout, satisfies the various industry standards for engineering drawings, makes the layout attractive, achieves the intended goal of guiding design and manufacture, greatly reduces interaction cost and improves design efficiency.
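At deployment time only the trained policy is needed; the sketch below applies it once per drawing, and the helper functions named in it are hypothetical stand-ins for the surrounding software, not parts of the disclosure.

```python
def optimize_drawing(policy, primitives, encode_state, decode_action, render):
    """Adjust each primitive's position and size with the trained policy, then render.

    encode_state, decode_action and render are hypothetical helpers: they map the primitives
    to the policy input, map the policy output back to per-primitive (x, y, w, h) values,
    and produce the final two-dimensional drawing, respectively.
    """
    state = encode_state(primitives)
    action = policy(state)                    # target positions and sizes for all primitives
    optimized_primitives = decode_action(primitives, action)
    return render(optimized_primitives)
```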
Referring to fig. 3, the construction and training process of the preset layout optimization model is as follows: first, a state space and an action space are defined. The state space comprises coordinate position information, size and the like of the graphic element in a global coordinate system, and the action space comprises coordinate position, size and the like of the graphic element in a drawing layout.
And secondly, constructing an Actor-Critic network. The deep neural network is used for representing and constructing a strategy network and a value network, and parameter initialization is carried out, and the strategy network and the value network together form a training framework of actor-critic algorithm. The policy network outputs the position and the size of each graphic element in the current state, and the value network evaluates the action output by the policy network.
Third, network parameters and experience pools are initialized. Parameters of the policy network and the value network are randomly initialized while an experience buffer pool is created.
Fourth, define the reward function. The present invention designs the reward function for layout optimization based on the layout principles mentioned above and the core layout optimization evaluation indexes.
And fifthly, randomly selecting a batch of training examples to start training. In each time step, an action is selected according to the current state and is performed. At the same time, the status, actions, rewards, and next status are stored in an experience buffer.
Sixth, updating the first parameter of the value network. A batch of samples is randomly drawn from the empirical buffer, a target Q value is calculated, and a gradient descent method is used to update a first parameter of the network of values.
And seventh, updating a second parameter of the strategy network. And calculating a strategy gradient by using the updated value network parameters, and updating a second parameter of the strategy network by using a gradient ascent method.
And eighth step, repeating the fifth, sixth and seventh steps until the stopping condition is reached.
And ninth, testing the layout optimization result. And testing by using the trained target strategy network, and evaluating the performance of layout optimization.
And tenth, deploying a network. And deploying the trained strategy network into an original algorithm, optimizing the original layout, and improving the efficiency and quality.
The application discloses an engineering drawing layout optimization method based on deep reinforcement learning, which comprises the following steps: updating a first parameter of a value network of an initial layout optimization model by using a drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index to obtain an updated value network parameter; calculating a policy gradient of the initial layout optimization model by using the updated value network parameters so as to update a second parameter of a policy network of the initial layout optimization model to obtain updated policy network parameters; skipping to execute the step of updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index until the updated strategy network parameter meets a preset stopping condition, and acquiring the layout optimization model in the current state as a preset layout optimization model; inputting the engineering drawing to be optimized into the preset layout optimization model so as to adjust the position information and the size information of each graphic element in the engineering drawing to be optimized in the drawing layout by utilizing the target strategy function in the preset layout optimization model, and outputting the target engineering drawing after layout optimization. Therefore, by defining engineering drawing layout optimization evaluation indexes, a deep reinforcement learning algorithm is introduced to optimize the layout generated by an original algorithm, and then a final drawing is obtained by two-dimensional rendering of a layout optimization result, so that the layout effect is improved, the drawing attractiveness is improved, and the acquisition cost of a high-quality drawing is reduced.
Referring to fig. 4, the invention also correspondingly discloses an engineering drawing layout optimizing device based on deep reinforcement learning, which comprises:
the first parameter updating module 11 is configured to update a first parameter of a value network of the initial layout optimization model by using a drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index, so as to obtain an updated value network parameter;
A second parameter updating module 12, configured to calculate a policy gradient of the initial layout optimization model by using the updated value network parameter, so as to update a second parameter of the policy network of the initial layout optimization model, so as to obtain an updated policy network parameter;
The model training module 13 is configured to skip the step of updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index until the updated policy network parameter meets the preset stopping condition, and acquire the layout optimization model in the current state as the preset layout optimization model;
the drawing layout optimization module 14 is configured to input the engineering drawing to be optimized into the preset layout optimization model, so as to adjust the position information and the size information of each primitive in the engineering drawing to be optimized in the drawing layout by using the target strategy function in the preset layout optimization model, so as to output the target engineering drawing after layout optimization.
The application discloses a method for updating a first parameter of a value network of an initial layout optimization model by utilizing a drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index so as to obtain an updated value network parameter; calculating a policy gradient of the initial layout optimization model by using the updated value network parameters so as to update a second parameter of a policy network of the initial layout optimization model to obtain updated policy network parameters; skipping to execute the step of updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index until the updated strategy network parameter meets a preset stopping condition, and acquiring the layout optimization model in the current state as a preset layout optimization model; inputting the engineering drawing to be optimized into the preset layout optimization model so as to adjust the position information and the size information of each graphic element in the engineering drawing to be optimized in the drawing layout by utilizing the target strategy function in the preset layout optimization model, and outputting the target engineering drawing after layout optimization. Therefore, by defining engineering drawing layout optimization evaluation indexes, a deep reinforcement learning algorithm is introduced to optimize the layout generated by an original algorithm, and then a final drawing is obtained by two-dimensional rendering of a layout optimization result, so that the layout effect is improved, the drawing attractiveness is improved, and the acquisition cost of a high-quality drawing is reduced.
Further, the embodiment of the present application further discloses an electronic device, and fig. 5 is a block diagram of an electronic device 20 according to an exemplary embodiment, where the content of the figure is not to be considered as any limitation on the scope of use of the present application.
Fig. 5 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present application. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. The memory 22 is used for storing a computer program, and the computer program is loaded and executed by the processor 21 to implement relevant steps in the engineering drawing layout optimization method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in the present embodiment may be specifically an electronic computer.
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and the communication protocol to be followed is any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein; the input/output interface 25 is used for acquiring external input data or outputting external output data, and the specific interface type thereof may be selected according to the specific application requirement, which is not limited herein.
Processor 21 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc. The processor 21 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 21 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 22 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk, and the resources stored thereon may include an operating system 221, a computer program 222, and the like, and the storage may be temporary storage or permanent storage.
The operating system 221, which may be Windows Server, Netware, Unix, Linux, etc., is used for managing and controlling the hardware devices on the electronic device 20 and the computer program 222, so as to implement the processor 21's operation and processing of the mass data 223 in the memory 22. In addition to the computer program that performs the engineering drawing layout optimization method executed by the electronic device 20 disclosed in any of the foregoing embodiments, the computer program 222 may further include computer programs for performing other specific tasks. The data 223 may include, besides the data received by the electronic device from external devices, data collected by its own input/output interface 25, and so on.
Further, the present application also discloses a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the engineering drawing layout optimization method disclosed in the foregoing embodiments. For the specific steps of the method, reference may be made to the corresponding content disclosed in the foregoing embodiments, which will not be repeated here.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to each other. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant points can be found in the description of the method section.
Those of skill would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in RAM (Random Access Memory), memory, ROM (Read-Only Memory), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM (Compact Disc Read-Only Memory), or any other form of storage medium known in the art.
Finally, it should also be noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The engineering drawing layout optimization method, device, equipment, and medium provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementation of the invention, and the description of these examples is intended only to help in understanding the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the scope of application in accordance with the idea of the present invention; in view of the above, the content of this description should not be construed as limiting the present invention.

Claims (10)

1. An engineering drawing layout optimization method based on deep reinforcement learning, characterized by comprising the following steps:
Updating a first parameter of a value network of an initial layout optimization model by using a drawing layout data training set and a reward function constructed based on a layout principle and a layout optimization evaluation index to obtain an updated value network parameter;
Calculating a policy gradient of the initial layout optimization model by using the updated value network parameters so as to update a second parameter of a policy network of the initial layout optimization model to obtain updated policy network parameters;
Returning to the step of updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on a layout principle and a layout optimization evaluation index, until the updated policy network parameter meets a preset stopping condition, and taking the layout optimization model in the current state as a preset layout optimization model;
Inputting the engineering drawing to be optimized into the preset layout optimization model, so as to adjust the position information and the size information of each graphic element in the engineering drawing to be optimized in the drawing layout by using the target policy function in the preset layout optimization model, and outputting the target engineering drawing after layout optimization.
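For illustration only, the following is a minimal sketch of the training-and-inference flow recited in claim 1, written as a PyTorch-style actor-critic loop. The class names (PolicyNet, ValueNet), the layer sizes, optimizers, learning rates, and the compute_reward callback are assumptions introduced for this sketch and are not the patented implementation.

```python
# Minimal sketch of the claim-1 flow as a PyTorch-style actor-critic loop.
# PolicyNet, ValueNet, compute_reward, layer sizes and learning rates are
# hypothetical stand-ins, not the patented implementation.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Maps a primitive's state (position + size) to an adjusted layout action."""
    def __init__(self, state_dim: int = 4, action_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, action_dim))

    def forward(self, state):
        return self.net(state)

class ValueNet(nn.Module):
    """Scores how good a (state, action) pair is for the drawing layout."""
    def __init__(self, state_dim: int = 4, action_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def train(dataset, compute_reward, epochs: int = 100):
    """dataset: iterable of state tensors; compute_reward: reward-function callback."""
    policy, value = PolicyNet(), ValueNet()
    opt_v = torch.optim.Adam(value.parameters(), lr=1e-3)
    opt_p = torch.optim.Adam(policy.parameters(), lr=1e-4)
    for _ in range(epochs):                    # loop until the stopping condition
        for state in dataset:                  # step 1: update the value network (first parameter)
            action = policy(state).detach()
            reward = compute_reward(state, action)
            v_loss = (value(state, action) - reward).pow(2).mean()
            opt_v.zero_grad(); v_loss.backward(); opt_v.step()
        for state in dataset:                  # step 2: update the policy network (second parameter)
            p_loss = -value(state, policy(state)).mean()  # policy gradient via the value estimate
            opt_p.zero_grad(); p_loss.backward(); opt_p.step()
    return policy

def optimize_drawing(policy, primitive_states):
    """Inference: adjust each primitive's position and size in the drawing layout."""
    with torch.no_grad():
        return [policy(s) for s in primitive_states]
```

The two inner loops mirror the ordering of the claim: the value network is fitted against the reward first, and the policy network is then updated through the value estimate; the fixed epoch count merely stands in for the claim's preset stopping condition.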
2. The method for optimizing an engineering drawing layout based on deep reinforcement learning according to claim 1, wherein before updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index to obtain the updated value network parameter, the method further comprises:
Defining a state space and an action space; the state space comprises first coordinate position information and first size information of the graphic element in a global coordinate system, and the action space comprises second coordinate position information and second size information of the graphic element in a drawing layout;
Constructing an initial layout optimization model based on a policy network and a value network; the policy network is used for outputting the second coordinate position information and the second size information of the primitive in the current state; the value network is used for evaluating the quality of the current action output by the policy network;
Defining a reward function constructed based on a layout principle and a layout optimization evaluation index; the reward function is used for carrying out layout optimization on the drawing layout.
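As a purely illustrative aid to claim 2, the snippet below encodes one possible representation of the state space (first coordinate position and first size information in the global coordinate system) and the action space (second coordinate position and second size information in the drawing layout). The field names are hypothetical.

```python
# Hypothetical encoding of the claim-2 state and action spaces; field names
# are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class PrimitiveState:
    x: float              # first coordinate position in the global coordinate system
    y: float
    width: float          # first size information
    height: float

@dataclass
class PrimitiveAction:
    layout_x: float       # second coordinate position in the drawing layout
    layout_y: float
    layout_width: float   # second size information
    layout_height: float
```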
3. The method for optimizing the layout of engineering drawings based on deep reinforcement learning according to claim 2, wherein before defining the reward function constructed based on the layout principle and the layout optimization evaluation index, the method further comprises:
Setting layout principles including a chart type rule, a view positioning rule, a node diagram positioning rule, a detail table positioning rule, a pipe orifice table positioning rule, a standard title bar positioning rule and a technical requirement positioning rule;
Setting layout optimization evaluation indexes including an alignment reward index between adjacent primitives, a primitive overlap degree reward index, a primitive quantity balance reward index, and a view position reward index.
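Claim 3 names the evaluation indexes but not their formulas, so the sketch below is only one plausible composition of an alignment reward, a primitive-overlap penalty, a quantity-balance term, and a view-position reward over axis-aligned bounding boxes; the weights, thresholds, and geometric conventions are assumptions.

```python
# A hedged sketch combining the four evaluation indexes named in claim 3.
# Weights, thresholds, and exact formulas are assumptions for illustration.
# A box or region is an (x, y, w, h) tuple in layout coordinates.
def overlap_area(a, b):
    """Overlap area of two axis-aligned boxes (0 if they do not intersect)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    dx = min(ax + aw, bx + bw) - max(ax, bx)
    dy = min(ay + ah, by + bh) - max(ay, by)
    return max(dx, 0.0) * max(dy, 0.0)

def region_contains(region, box):
    """True if the box lies entirely inside the region."""
    rx, ry, rw, rh = region
    bx, by, bw, bh = box
    return rx <= bx and ry <= by and bx + bw <= rx + rw and by + bh <= ry + rh

def layout_reward(boxes, regions, eps=1.0):
    # Alignment between adjacent primitives: reward near-identical left edges.
    xs = sorted(b[0] for b in boxes)
    alignment = sum(1.0 for a, b in zip(xs, xs[1:]) if abs(a - b) < eps)
    # Primitive overlap degree: penalize overlapping bounding boxes.
    overlap = sum(overlap_area(a, b)
                  for i, a in enumerate(boxes) for b in boxes[i + 1:])
    # Primitive quantity balance: penalize uneven counts across layout regions.
    counts = [sum(1 for b in boxes if region_contains(r, b)) for r in regions]
    mean = sum(counts) / len(counts)
    balance = -sum((c - mean) ** 2 for c in counts)
    # View position: reward primitives that fall inside a prescribed region.
    view_position = sum(1.0 for b in boxes
                        if any(region_contains(r, b) for r in regions))
    return alignment - 0.1 * overlap + 0.5 * balance + view_position
```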
4. The method for optimizing the layout of an engineering drawing based on deep reinforcement learning according to claim 2, wherein the updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index to obtain the updated value network parameter comprises:
Inputting the drawing layout data training set into the initial layout optimization model, so that at each time step the initial layout optimization model selects and executes, based on the reward function and the current state, a current action for optimizing the drawing layout data, and stores the current state, the current action, the reward function and the next state into an experience buffer;
Repeatedly executing the step in which the initial layout optimization model selects and executes, at each time step, the current action for optimizing the drawing layout data based on the reward function and the current state;
And calculating and updating the first parameter of the value network by using the sample information in the experience buffer to obtain the updated value network parameter.
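A hedged sketch of the experience-buffer interaction and value-network update of claim 4 follows. It reuses the hypothetical ValueNet from the earlier sketch (passed in as `value`); the batch size and the regression target are assumptions, since the claim only states that the first parameter is updated from the sampled transitions.

```python
# Sketch of the claim-4 experience buffer and value-network update. The buffer
# stores (state, action, reward, next_state) as in the claim; batch size and
# target construction are assumptions.
import random
from collections import deque

import torch

class ReplayBuffer:
    def __init__(self, capacity: int = 10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size: int):
        batch = random.sample(list(self.buffer), batch_size)
        states, actions, rewards, next_states = zip(*batch)
        return (torch.stack(states), torch.stack(actions),
                torch.tensor(rewards, dtype=torch.float32), torch.stack(next_states))

def update_value(value, buffer, optimizer, batch_size: int = 32):
    """One update of the value network's first parameter from sampled transitions."""
    if len(buffer.buffer) < batch_size:
        return
    s, a, r, ns = buffer.sample(batch_size)
    # The claim stores next_state but does not fix the target construction at
    # this point, so this sketch simply regresses the value estimate toward the
    # stored reward.
    target = r.unsqueeze(-1)
    loss = (value(s, a) - target).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```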
5. The method of engineering drawing layout optimization based on deep reinforcement learning of claim 4, wherein before storing the current state, the current action, the reward function, and the next state in an experience buffer, further comprising:
And randomly initializing a first parameter of the value network and a second parameter of the strategy network, and creating an experience buffer for storing sample information of the current state, the current action, the reward function and the next state of different primitives.
6. The method for optimizing the layout of engineering drawings based on deep reinforcement learning according to claim 1, wherein before updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index, the method further comprises:
Generating an original drawing layout from the three-dimensional construction model information through an original drawing layout algorithm, and obtaining drawing layout data that conforms to the expected layout optimization effect through manual tuning;
Merging primitive layout principles and layout optimization evaluation indexes into the drawing layout data to obtain optimized drawing layout data, and returning to the step of generating an original drawing layout from the three-dimensional construction model information through the original drawing layout algorithm;
And carrying out layout annotation on the optimized drawing layout data by manual labeling, so as to construct a drawing layout data training set that meets the layout optimization training requirement.
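Since two of the steps in claim 6 are explicitly manual, the sketch below only fixes the shape of the data pipeline; the generation, tuning, merging, and labeling callables are hypothetical placeholders supplied by the caller.

```python
# Shape of the claim-6 training-set pipeline only; all callables are
# hypothetical placeholders (the tuning and labeling steps are manual).
def build_training_set(model_infos, generate_layout, manual_tune,
                       merge_rules, manual_label, rounds: int = 1):
    samples = []
    for _ in range(rounds):                       # repeat generation as the claim loops back
        for info in model_infos:
            layout = generate_layout(info)        # original drawing layout algorithm
            layout = manual_tune(layout)          # manual tuning toward the expected effect
            layout = merge_rules(layout)          # merge layout principles and evaluation indexes
            samples.append(manual_label(layout))  # manual layout annotation
    return samples
```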
7. The method of claim 1, wherein calculating the policy gradient of the initial layout optimization model using the updated value network parameters to update the second parameters of the policy network of the initial layout optimization model to obtain updated policy network parameters comprises:
And calculating the policy gradient of the initial layout optimization model by using the updated value network parameters, and updating the second parameter of the policy network by a gradient ascent method to obtain the updated policy network parameter.
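A minimal sketch of the gradient-ascent update in claim 7, assuming the hypothetical PolicyNet and ValueNet from the earlier sketch: maximizing the value estimate is implemented by descending on its negative.

```python
# Gradient-ascent policy update from claim 7, using the earlier hypothetical
# PolicyNet/ValueNet passed in as `policy` and `value`.
def update_policy(policy, value, states, optimizer):
    actions = policy(states)                   # actions proposed by the policy network
    objective = value(states, actions).mean()  # value estimate to be maximized
    loss = -objective                          # gradient ascent == descent on the negation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return objective.item()
```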
8. An engineering drawing layout optimization device based on deep reinforcement learning, characterized by comprising:
the first parameter updating module is used for updating the first parameter of the value network of the initial layout optimization model by utilizing the drawing layout data training set and a reward function constructed based on the layout principle and the layout optimization evaluation index so as to obtain updated value network parameters;
A second parameter updating module, configured to calculate a policy gradient of the initial layout optimization model by using the updated value network parameter, so as to update a second parameter of a policy network of the initial layout optimization model, so as to obtain an updated policy network parameter;
The model training module is used for returning to the step of updating the first parameter of the value network of the initial layout optimization model by using the drawing layout data training set and the reward function constructed based on the layout principle and the layout optimization evaluation index, until the updated policy network parameter meets the preset stopping condition, and taking the layout optimization model in the current state as a preset layout optimization model;
The drawing layout optimization module is used for inputting the engineering drawing to be optimized into the preset layout optimization model, so as to adjust the position information and the size information of each graphic element in the engineering drawing to be optimized in the drawing layout by using the target policy function in the preset layout optimization model, and outputting the target engineering drawing after layout optimization.
9. An electronic device, comprising:
A memory for storing a computer program;
A processor for executing the computer program to implement the steps of the deep reinforcement learning based engineering drawing layout optimization method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program; wherein the computer program, when executed by a processor, implements the steps of the deep reinforcement learning based engineering drawing layout optimization method of any one of claims 1 to 7.
CN202410346496.1A 2024-03-26 2024-03-26 Engineering drawing layout optimization method, device, equipment and medium Active CN117972812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410346496.1A CN117972812B (en) 2024-03-26 2024-03-26 Engineering drawing layout optimization method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410346496.1A CN117972812B (en) 2024-03-26 2024-03-26 Engineering drawing layout optimization method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN117972812A true CN117972812A (en) 2024-05-03
CN117972812B CN117972812B (en) 2024-06-07

Family

ID=90853566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410346496.1A Active CN117972812B (en) 2024-03-26 2024-03-26 Engineering drawing layout optimization method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117972812B (en)

Citations (13)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114154412A (en) * 2021-11-25 2022-03-08 上海交通大学 Optimized chip layout system and method based on deep reinforcement learning
US20230267250A1 (en) * 2022-02-24 2023-08-24 Mitsubishi Electric Research Laboratories, Inc. Method of RF Analog Circuits Electronic Design Automation Based on GCN and Deep Reinforcement Learning
US20230342594A1 (en) * 2022-04-25 2023-10-26 Ai Randomtrees Llc Artificial intelligence based system and method for recognition of dimensional information within engineering drawings
CN114896937A (en) * 2022-05-24 2022-08-12 广东工业大学 Integrated circuit layout optimization method based on reinforcement learning
CN115270698A (en) * 2022-06-23 2022-11-01 广东工业大学 Chip global automatic layout method based on deep reinforcement learning
CN115329411A (en) * 2022-08-03 2022-11-11 中国舰船研究设计中心 Ship electrical drawing layout method based on prior rule and deep neural network
CN116560384A (en) * 2023-03-21 2023-08-08 清华大学深圳国际研究生院 Variant aircraft robust control method based on deep reinforcement learning
CN116362123A (en) * 2023-03-27 2023-06-30 上海交通大学 Chip layout pre-training and optimizing method based on improved rewarding function
CN116738923A (en) * 2023-04-04 2023-09-12 暨南大学 Chip layout optimization method based on reinforcement learning with constraint
CN116451291A (en) * 2023-04-20 2023-07-18 中国石油大学(华东) Quantitative evaluation method and system for layout quality of engineering drawing
CN117422041A (en) * 2023-09-04 2024-01-19 中国科学院自动化研究所 Training method for automatic wiring model of analog chip and automatic wiring method
CN117408215A (en) * 2023-11-01 2024-01-16 苏州芯联成软件有限公司 Layout element automatic layout method and device based on hybrid strategy reinforcement learning
CN117556502A (en) * 2023-11-14 2024-02-13 东南大学 Intelligent generation method and system for neighborhood layout based on evolution model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Jiangtao; Liu Jinghua; He Tao: "Application of evolution strategy in layout optimization of non-scaled engineering drawings", Journal of Beijing University of Aeronautics and Astronautics, no. 03, 30 March 2007 (2007-03-30) *

Also Published As

Publication number Publication date
CN117972812B (en) 2024-06-07

Similar Documents

Publication Publication Date Title
CN111177831A (en) BIM technology-based steel bar three-dimensional modeling and automatic calculation method
CN103390088A (en) Full-automatic three-dimensional conversion method aiming at grating architectural plan
CN102945313A (en) Method for constructing and demonstrating teaching content of open type virtual experiment
CN112717414B (en) Game scene editing method and device, electronic equipment and storage medium
CN117828701B (en) Engineering drawing layout optimization method, system, equipment and medium
CN113449355A (en) Building house type graph automatic generation method based on artificial intelligence
CN115731560B (en) Deep learning-based slot line identification method and device, storage medium and terminal
CN113420353A (en) Steel bar arrangement method and device and electronic equipment
US7116341B2 (en) Information presentation apparatus and method in three-dimensional virtual space and computer program therefor
Weinzapfel et al. Architecture-by-yourself: an experiment with computer graphics for house design
CN117972812B (en) Engineering drawing layout optimization method, device, equipment and medium
Lin et al. Urban space simulation based on wave function collapse and convolutional neural network
CN117689833A (en) Urban three-dimensional model construction method, system and medium based on rule modeling
CN114167827B (en) Method and device for producing and processing indoor design material
CN113742804B (en) Furniture layout diagram generation method, device, equipment and storage medium
CN114091133A (en) City information model modeling method and device, terminal equipment and storage medium
CN114609646A (en) Laser mapping method, device, medium and electronic equipment
Groß et al. Glyph-Based Visual Analysis of Q-Learning Based Action Policy Ensembles on Racetrack
CN118097087B (en) Method, device, equipment and medium for marking layout of component sizes of engineering drawing
Busquets Duran Procedural textures generation: adaptation into a Unity tool
Dintler Parametric 3D Building Modelling in CityEngine: An Evaluation of Potential Benefits and Limitation
Lawlor et al. Bounding recursive procedural models using convex optimization
Zeshan et al. Meta-morphing Architectural Domains: The Role of Humans and AI in Post-human Architecture
CN117919714A (en) Random map generation method, system and storage medium in game
Fukuda et al. LEARNING, PROTOTYPING AND ADAPTING

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant