CN116704542A - Layer classification method, device, equipment and storage medium - Google Patents

Layer classification method, device, equipment and storage medium

Info

Publication number
CN116704542A
CN116704542A (application CN202210172791.0A)
Authority
CN
China
Prior art keywords
target
drawing data
detection area
primitive
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210172791.0A
Other languages
Chinese (zh)
Inventor
梁雄
谭文宇
赵红改
赵野
马超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Glodon Co Ltd
Original Assignee
Glodon Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Glodon Co Ltd filed Critical Glodon Co Ltd
Priority to CN202210172791.0A
Publication of CN116704542A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/12Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application relates to a layer classification method, apparatus, device, and storage medium, and in particular to the technical field of computer vision. The method comprises: acquiring target drawing data; processing the target drawing data with a target detection model to determine the target detection area where primitives of a target type are located; judging whether each primitive in the target drawing data is of the target type according to how it intersects the target detection area; and, when the proportion of target-type primitives in a target layer of the target drawing data is detected to be greater than a specified threshold, determining that target layer to be a layer of the target type. In this scheme, the type of a layer is judged from the types of the primitives it contains, and the primitive classification process takes the drawing data as a whole into account, which improves the accuracy of layer classification.

Description

Layer classification method, device, equipment and storage medium
Technical Field
The present application relates to the field of graphics processing, and in particular, to a method, apparatus, device, and storage medium for classifying layers.
Background
In the field of engineering drawing, a CAD drawing usually contains multiple layers. In CAD, layers are like transparent films carrying elements such as text or graphics; stacked one on top of another in order, they combine to form the final appearance of the page.
When AI is used to identify CAD drawings, the layers must first be classified; unclassified layers introduce a large amount of interference into the algorithm and degrade recognition accuracy. In the prior art, layers are usually pre-named according to relatively standard naming rules when a CAD drawing is constructed manually, and the computer device can then classify the layers directly from these naming rules during AI recognition of the drawing.
However, this scheme can only handle drawings whose layer names are relatively standard. In practice, CAD drawing naming rules follow no unified standard and drawing categories are numerous, so classifying layers by layer naming rules yields low classification accuracy.
Disclosure of Invention
The application provides a layer classification method, device, equipment and storage medium, which improve the accuracy of layer classification.
In one aspect, a layer classification method is provided, the method including:
Acquiring target drawing data;
performing data processing on the target drawing data through a target detection model, and determining a target detection area where the primitives of the target type are located;
judging whether each graphic element in the target drawing data is of the target type according to the intersection condition of each graphic element in the target drawing data and a target detection area;
and when detecting that the number proportion of the primitives of the target type in the target layers in the target drawing data is larger than a specified threshold, determining the target layers as the layers of the target type.
In yet another aspect, a layer classification apparatus is provided, the apparatus including:
the drawing acquisition module is used for acquiring target drawing data;
the detection area acquisition module is used for carrying out data processing on the target drawing data through a target detection model and determining a target detection area where the graphic element of the target type is located;
the target type judging module is used for judging whether each graphic element in the target drawing data is of the target type according to the intersection condition of each graphic element in the target drawing data and the target detection area;
and the layer type determining module is used for determining the target layer as a layer of the target type when detecting that the proportion of primitives of the target type in the target layer in the target drawing data is greater than a specified threshold.
In one possible implementation manner, the detection area acquisition module includes:
the image matrix acquisition unit is used for acquiring a target image matrix according to the primitive distribution of the target drawing data;
the matrix region determining unit is used for performing data processing on the target image matrix through a target detection model and determining a matrix detection region corresponding to the target image matrix;
and the detection area acquisition unit is used for converting the matrix detection area into a coordinate system corresponding to the target drawing data so as to acquire the target detection area.
In a possible implementation, the image matrix acquisition unit is further configured to,
and constructing the target image matrix by taking each pixel point in the target drawing data as a matrix element.
In a possible implementation, the image matrix acquisition unit is further configured to,
dividing the target drawing data into target grids according to a specified size;
determining values in each target grid according to the primitive geometric relationship contained in each target grid so as to form the target image matrix; the primitive geometry relationships comprise at least one of intersecting, parallel, perpendicular.
In one possible implementation manner, the detection area acquisition module further includes:
the primitive attribute detection unit is used for detecting the type attribute of each primitive in the target drawing data and deleting the primitive with the target type attribute in the target drawing data;
wherein the target type attribute comprises at least one of text annotation and dimension annotation.
In a possible implementation, the matrix area determination unit is further adapted to,
performing data processing on the target image matrix through a target detection model, and determining a candidate detection area corresponding to the target image matrix;
and determining that the confidence coefficient is larger than a confidence coefficient threshold value in the candidate detection area as the matrix detection area.
In one possible implementation manner, the target type determining module is configured to perform:
acquiring intersecting primitives which intersect the target detection area in the target drawing data;
when the length of an intersecting primitive is within a specified interval, acquiring a first intersection point and a second intersection point of the intersecting primitive with the target detection area;
calculating a first distance corresponding to the first intersection point and a second distance corresponding to the second intersection point; the first distance is the sum of the distances between the first intersection point and each end point of the target detection area; the second distance is the sum of the distances between the second intersection point and each end point of the target detection area;
and determining the intersecting primitive to be of the target type when the difference between the first distance and the second distance is smaller than a distance threshold.
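The first-distance/second-distance comparison above can be sketched in Python. This is an illustrative reconstruction rather than the patent's implementation: it assumes the detection area is an axis-aligned box whose "end points" are its four corners, and the names `corner_distance_sum` and `is_target_type` are invented for this sketch.

```python
import math

def corner_distance_sum(point, box):
    """Sum of distances from a point to the four corners of an
    axis-aligned box given as (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = box
    corners = [(xmin, ymin), (xmin, ymax), (xmax, ymin), (xmax, ymax)]
    return sum(math.dist(point, c) for c in corners)

def is_target_type(p1, p2, box, dist_threshold):
    """Decide whether an intersecting primitive is of the target type:
    compare the corner-distance sums of its two intersection points.
    Symmetric crossings (e.g. a column axis through the box center)
    give nearly equal sums, so a small difference suggests the target type."""
    d1 = corner_distance_sum(p1, box)
    d2 = corner_distance_sum(p2, box)
    return abs(d1 - d2) < dist_threshold
```

A segment crossing the box symmetrically (intersections at (0, 5) and (10, 5) of a 10×10 box) yields a zero difference and is accepted; an off-center crossing gives a larger difference and is rejected under a tight threshold.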
In yet another aspect, a computer device is provided, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, where the at least one instruction, at least one program, a set of codes, or a set of instructions are loaded and executed by the processor to implement the layer classification method described above.
In yet another aspect, a computer readable storage medium having stored therein at least one instruction loaded and executed by a processor to implement the layer classification method described above is provided.
In yet another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the layer classification method described above.
The technical scheme provided by the application can comprise the following beneficial effects:
When layers need to be classified, the computer device can acquire the target drawing data as a whole, perform target detection on it through a target detection model, and obtain the target detection area where primitives of the target type are located. The computer device then acquires the primitives that intersect the target detection area and judges, from the target detection area, whether each primitive in the target drawing data is of the target type. When the judgment of each primitive is finished and the proportion of target-type primitives in a target layer of the target drawing data is detected to be greater than a specified threshold, the target layer is determined to be a layer of the target type. In this scheme, the type of a layer is judged from the types of the primitives it contains, and the primitive classification process takes the drawing data as a whole into account, so the classification accuracy of layers is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram illustrating a layer classification system according to an exemplary embodiment.
FIG. 2 is a method flow diagram illustrating a layer classification method according to an example embodiment.
FIG. 3 is a method flow diagram illustrating a layer classification method according to an exemplary embodiment.
Fig. 4 shows a flowchart of a primitive type detection method according to an embodiment of the present application.
Fig. 5 shows a flow chart of a layer classification method according to an embodiment of the present application.
Fig. 6 is a method block diagram illustrating a layer classification method according to an exemplary embodiment.
Fig. 7 shows a schematic diagram of a primitive cleaning model according to an embodiment of the present application.
Fig. 8 shows a schematic structural diagram of a layer classification model according to an embodiment of the present application.
Fig. 9 is a block diagram illustrating a structure of a layer classification apparatus according to an exemplary embodiment.
Fig. 10 is a schematic diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made apparent and fully in view of the accompanying drawings, in which some, but not all embodiments of the application are shown. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be understood that the "indication" mentioned in the embodiments of the present application may be a direct indication, an indirect indication, or an indication having an association relationship. For example, a indicates B, which may mean that a indicates B directly, e.g., B may be obtained by a; it may also indicate that a indicates B indirectly, e.g. a indicates C, B may be obtained by C; it may also be indicated that there is an association between a and B.
In the description of the embodiments of the present application, the term "corresponding" may indicate that there is a direct correspondence or an indirect correspondence between the two, or may indicate that there is an association between the two, or may indicate a relationship between the two and the indicated, configured, etc.
In the embodiment of the present application, the "predefining" may be implemented by pre-storing corresponding codes, tables or other manners that may be used to indicate relevant information in devices (including, for example, terminal devices and network devices), and the present application is not limited to the specific implementation manner thereof.
Before explaining the various embodiments of the present application, a description is given first of several concepts to which the present application relates.
1) AI (Artificial Intelligence)
Artificial intelligence, abbreviated AI, is a new technical science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. It is a branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since its birth, its theory and technology have steadily matured and its fields of application have kept expanding; it is conceivable that the technological products brought by artificial intelligence in the future will be "containers" of human intelligence. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is not human intelligence, but it can think like a human and may even exceed human intelligence.
The computer is the principal material basis of artificial-intelligence research and the platform on which artificial-intelligence technology is realized. Besides computer science, artificial intelligence involves information theory, control theory, automation, bionics, biology, psychology, mathematical logic, linguistics, medicine, and philosophy. The main research content of the discipline includes knowledge representation, automated reasoning and search methods, machine learning and knowledge acquisition, knowledge processing systems, natural language understanding, computer vision, intelligent robotics, automatic programming, and so on.
2) CV (Computer Vision)
Computer vision is a science that studies how to make machines "see": it uses cameras and computers in place of human eyes to identify, track, and measure targets, and further performs graphics processing so that the image becomes more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer-vision research develops theory and technology for building artificial-intelligence systems that can obtain "information" from images or multidimensional data, where information is Shannon's notion of information that can be used to help make a "decision". Because perception can be seen as extracting information from sensory signals, computer vision can also be seen as the science of how to make an artificial system "perceive" from images or multidimensional data.
3) Machine Learning (Machine Learning, ML)
Machine learning is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
The layer classification method provided by the embodiments of the application can be applied to a computer device with strong data-processing capability. The method employs a target detection model, which processes input data to obtain the target detection area corresponding to that input. In one possible implementation, the layer classification method can run on a personal computer, a workstation, or a server; that is, the layer classification model is trained on at least one of a personal computer, a workstation, and a server.
Fig. 1 is a schematic diagram illustrating a layer classification system according to an exemplary embodiment. The layer classification system includes a server 110 and a terminal 120. The terminal 120 and the server 110 communicate data through a communication network, which may be a wired network or a wireless network.
Alternatively, an application having a graphic processing function, which may be a professional graphic processing application such as CAD, is installed in the terminal 120.
Alternatively, the terminal 120 may also be a terminal device having a data transmission interface for receiving image data acquired by the image acquisition device.
Optionally, the terminal 120 further has a data acquisition component (such as a mouse, a keyboard, a touch screen, etc.), where the data acquisition component of the terminal may acquire an input operation triggered by a user and generate graphic data corresponding to the input operation in a professional graphic processing application of the terminal 120.
Alternatively, the terminal 120 may be a mobile terminal such as a smartphone or tablet computer, a laptop computer, a desktop computer, a projection computer, or another intelligent terminal with a data-processing component, which is not limited in the embodiments of the present application.
The server 110 may be implemented as a server or a server cluster formed by a group of servers, which may be a physical server or a cloud server. In one possible implementation, server 110 is a background server for applications in terminal 120.
In one possible implementation manner of the embodiment of the present application, the server 110 trains the target detection model through a preset training sample set (including each training sample image), where the training sample set may include training sample images of different categories, and each training sample image has respective category labeling information (i.e., a label value). After the training process of the target detection model by the server 110 is completed, the trained target detection model is sent to the terminal 120 through a wired network or a wireless network.
The terminal 120 receives the trained object detection model, and sends data information (e.g., weight information) corresponding to the object detection model to an application program with a layer classification function, so that when a user uses the application program, the user can perform object detection on an input image and perform layer classification according to an object detection result.
Optionally, the server may be an independent physical server, a server cluster formed by a plurality of physical servers or a distributed system, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and technical computing services such as big data and an artificial intelligence platform.
Optionally, the system may further include a management device, where the management device is configured to manage the system (e.g., manage a connection state between each module and the server, etc.), where the management device is connected to the server through a communication network. Optionally, the communication network is a wired network or a wireless network.
Optionally, the wireless or wired networks described above use standard communication techniques and/or protocols. The network is typically the Internet, but may be any other network, including but not limited to a local area network, a metropolitan area network, a wide area network, a mobile, wired, or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats including hypertext markup language, extensible markup language, and the like. All or some links may also be encrypted using conventional encryption techniques such as secure sockets layer, transport layer security, virtual private networks, and internet protocol security. In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the techniques described above.
FIG. 2 is a method flow diagram illustrating a layer classification method according to an example embodiment. The method is performed by a computer device, which may be a server or a terminal in a layer classification system as shown in fig. 1. As shown in fig. 2, the layer classification method may include the steps of:
step 201, obtaining target drawing data.
In one possible implementation manner of the embodiment of the present application, the target drawing data may be CAD drawing data, where the CAD drawing data includes layers, and each layer includes primitive data.
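As an illustration only (the patent does not specify a data model), drawing data organized as layers that each hold primitives could be represented minimally as follows; the `Drawing` and `Primitive` names and fields are assumptions of this sketch, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Primitive:
    """One drawing element: the layer it lives on, its kind
    (e.g. "LINE", "TEXT", "DIMENSION"), and its defining points."""
    layer: str
    kind: str
    points: list

@dataclass
class Drawing:
    """Target drawing data: a mapping from layer name to primitives."""
    layers: dict = field(default_factory=dict)

    def add(self, prim: Primitive):
        # Group each primitive under its layer name.
        self.layers.setdefault(prim.layer, []).append(prim)
```

With this model, "classifying a layer" amounts to inspecting the kinds of the primitives stored under one key of `layers`.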
Step 202, performing data processing on the target drawing data through a target detection model, and determining a target detection area where the primitive of the target type is located.
In one possible implementation manner of the embodiment of the present application, the target detection model may be obtained by training using sample drawing data as a training sample and a sample area corresponding to a target type in the sample drawing data as a label, so that the target detection model may process input sample drawing data and identify a corresponding target detection area.
For example, when the target drawing data is a building drawing, the target detection model is trained from sample building drawings with the column regions marked on them; after the building drawing is recognized by the target detection model, the model can output the target detection area where the columns on the building drawing are located.
Therefore, in the target drawing data, the target detection area has a high possibility of having the graphic element of the target type.
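The candidate-region filtering mentioned in the disclosure (keeping only candidate detection areas whose confidence exceeds a confidence threshold) can be sketched as follows; the function name and the `(box, score)` tuple format are assumptions, since the patent does not fix a model output format.

```python
def filter_detections(candidates, conf_threshold=0.5):
    """Keep only candidate boxes whose confidence exceeds the threshold.
    Each candidate is (box, score) with box = (xmin, ymin, xmax, ymax);
    the surviving boxes become the matrix detection areas."""
    return [box for box, score in candidates if score > conf_threshold]
```

A typical detection head emits many low-confidence candidates; this step discards them before the boxes are mapped back into the drawing's coordinate system.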
And 203, judging whether each graphic element in the target drawing data is of the target type according to the intersection condition of each graphic element in the target drawing data and the target detection area.
After the target detection area on the target drawing data is obtained, the computer equipment can compare each graphic element on the target drawing data with the target detection area, and judge whether each graphic element in the target drawing data is of a target type according to the intersection relationship between each graphic element and the target detection area.
For example, when a primitive on the target drawing data does not intersect the target detection area, the primitive may be directly considered as not belonging to the target type corresponding to the target detection area.
When a primitive on the target drawing data intersects the target detection area, whether it is a primitive of the target type can be determined from how it intersects the area: the larger the intersecting region between the primitive and the target detection area, the more likely the primitive is of the target type; the smaller the intersecting region, the less likely it is.
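For line primitives, the "size" of the intersection can be made concrete as the fraction of a segment's length lying inside the detection box. The sketch below uses Liang-Barsky parametric clipping; this is an illustrative measure, not the patent's stated method.

```python
def overlap_fraction(p1, p2, box):
    """Fraction of segment p1-p2 lying inside an axis-aligned box
    (xmin, ymin, xmax, ymax), via Liang-Barsky clipping."""
    (x1, y1), (x2, y2) = p1, p2
    xmin, ymin, xmax, ymax = box
    dx, dy = x2 - x1, y2 - y1
    t0, t1 = 0.0, 1.0  # parametric interval of the clipped segment
    for p, q in ((-dx, x1 - xmin), (dx, xmax - x1),
                 (-dy, y1 - ymin), (dy, ymax - y1)):
        if p == 0:
            if q < 0:
                return 0.0  # parallel to this edge and fully outside
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)  # entering the box
            else:
                t1 = min(t1, t)  # leaving the box
    return max(0.0, t1 - t0)
```

A segment half inside the box yields 0.5, one fully inside yields 1.0, and a disjoint one yields 0.0; a threshold on this fraction then implements the "larger intersection, more likely target type" rule.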
And 204, determining the target layer as the layer of the target type when the number of the primitives of the target type in the target layer in the target drawing data is detected to be larger than a specified threshold value.
When target-type detection of each primitive on the target drawing data is completed, the primitive counts for each layer of the target drawing data can be obtained. For a target layer, when the ratio of the number of target-type primitives in the layer to the total number of primitives in the layer is greater than a specified threshold, a large portion of the primitives in the layer are of the target type, and the layer can be directly determined to be a layer of the target type. For example, when more than 80% of the primitives in the target layer are column primitives, the target layer may be determined to be a column layer.
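The per-layer ratio test can be sketched directly; the input format (layer name mapped to a list of per-primitive type labels) is an assumption of this example, not the patent's.

```python
def classify_layers(layer_prims, target_type, threshold=0.8):
    """Label a layer as the target type when the proportion of
    target-type primitives among its primitives exceeds the threshold.
    Assumes every layer holds at least one primitive."""
    result = {}
    for layer, kinds in layer_prims.items():
        ratio = sum(k == target_type for k in kinds) / len(kinds)
        result[layer] = ratio > threshold
    return result
```

With the 80% threshold from the example above, a layer with 9 column primitives out of 10 is classified as a column layer, while a layer of plain lines is not.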
In summary, when layers need to be classified, the computer device may acquire the overall target drawing data, perform target detection on it through the target detection model, and obtain the target detection area where primitives of the target type are located. It then acquires the primitives intersecting the target detection area and determines, from the target detection area, whether each primitive on the target drawing data is of the target type. After each primitive has been judged, when the proportion of target-type primitives in a target layer of the target drawing data is greater than the specified threshold, the target layer is determined to be a layer of the target type. The type of a layer is thus judged from the primitive types it contains, and the primitive classification process considers the overall drawing data, thereby improving the classification accuracy of layers.
FIG. 3 is a method flow diagram illustrating a layer classification method according to an exemplary embodiment. The method is performed by a computer device, which may be a server or a terminal in a layer classification system as shown in fig. 1. As shown in fig. 3, the layer classification method may include the steps of:
step 301, obtaining target drawing data.
In the embodiment of the application, the target drawing data is CAD drawing data (i.e., a CAD file); the CAD file contains layers, and each layer contains primitive objects.
Step 302, detecting the type attribute of each primitive in the target drawing data, and deleting the primitive with the target type attribute in the target drawing data.
Wherein the target type attribute includes at least one of text annotation and dimension annotation.
A CAD file also contains certain structured information intended for a human reader of the file, such as text annotations and dimension annotations. When the computer device acquires the target drawing data, it detects the type attribute of each primitive in the data and deletes primitives whose type attribute is text annotation or dimension annotation, so that this structured information does not interfere with the subsequent recognition process.
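A minimal sketch of this cleaning step, assuming primitives are dicts with a `kind` field and that the annotation labels are `TEXT`, `MTEXT`, and `DIMENSION` (the patent does not name a concrete label set):

```python
# Assumed annotation labels; real CAD files may use other type names.
ANNOTATION_KINDS = {"TEXT", "MTEXT", "DIMENSION"}

def strip_annotations(prims):
    """Drop annotation primitives (text labels, dimension annotations)
    so that structured notes do not interfere with later recognition."""
    return [p for p in prims if p["kind"] not in ANNOTATION_KINDS]
```

Only geometric primitives (lines, arcs, and so on) survive this filter and move on to the detection stage.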
In one possible implementation, each primitive on the target drawing data is traversed, the primitive geometry is analyzed, and when the primitive geometry is detected to be the specified shape, the specified shape primitive is deleted from the target drawing data.
For example, geometric line segments that repeat at a regular spacing on the target drawing data are identified by the computer device as axis-grid primitives. Such axis-grid primitives can easily interfere with layer classification, so they need to be deleted to avoid disturbing the subsequent recognition process.
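One hedged way to flag such axis-grid candidates: treat a run of three or more parallel lines with (nearly) constant spacing as a grid. The heuristic below, operating on the 1-D positions of parallel lines, is an illustration rather than the patent's algorithm.

```python
def looks_like_axis_grid(positions, tol=1e-6):
    """Heuristic: three or more parallel lines whose pairwise spacing
    is (nearly) constant are treated as an axis grid and flagged
    for removal before recognition."""
    xs = sorted(positions)
    if len(xs) < 3:
        return False
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return all(abs(g - gaps[0]) <= tol for g in gaps)
```

In practice the same check would run separately for each direction of parallel lines (e.g. vertical and horizontal axis lines).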
Step 303, obtaining a target image matrix according to the primitive distribution of the target drawing data.
After the above steps, most of the primitives in the target drawing data (i.e., the CAD drawing) that could interfere with subsequent recognition have been cleaned away, and the remaining primitives can be processed further.

Before the target drawing data is recognized by the target detection model, it needs to be converted into the input format expected by the target detection model.
In one possible implementation, each pixel point in the target drawing data is used as a matrix element to construct the target image matrix.
Here the target drawing data is not separated into layers: the displayed image of the target drawing data (i.e., the CAD drawing) is converted directly into matrix form pixel by pixel, that is, the pixel at each coordinate is used as the matrix element at the corresponding position, thereby constructing the target image matrix.

A target image matrix constructed directly from the pixels of the image reflects the relationships between the line segments in the target drawing data most directly.
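A minimal sketch of this pixel-based conversion (the rasterizer below is a naive point-sampling stand-in, not the application's renderer; a real implementation would rasterize the drawing, e.g. with Bresenham's algorithm):

```python
def build_pixel_matrix(segments, width, height, scale=1.0):
    """Rasterize drawing line segments into a 0/1 pixel matrix,
    a minimal stand-in for rendering the CAD drawing to a bitmap."""
    matrix = [[0] * width for _ in range(height)]
    for (x1, y1), (x2, y2) in segments:
        # Sample points densely along the segment (fine for a sketch).
        steps = max(abs(x2 - x1), abs(y2 - y1), 1) * 2
        for i in range(int(steps) + 1):
            t = i / steps
            px = int((x1 + t * (x2 - x1)) * scale)
            py = int((y1 + t * (y2 - y1)) * scale)
            if 0 <= px < width and 0 <= py < height:
                matrix[py][px] = 1
    return matrix
```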
In one possible implementation, the target drawing data is divided into target grids of a specified size; the value within each target grid is determined according to the geometric relationships of the primitives contained in that grid, so as to form the target image matrix. The primitive geometric relationships comprise at least one of intersecting, parallel, and perpendicular.
When the values in the target grids are determined from the primitive geometric relationships to form the target image matrix, the values in the matrix characterize the geometric relationships among the primitives in the target drawing data. For example, different geometric relationships may be assigned different values, e.g., intersecting assigned 1, parallel assigned 3, perpendicular assigned 10, so that the value of each grid represents the complexity of the geometry within it.
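A sketch of this grid-based assignment, using the illustrative weights from the text (intersecting = 1, parallel = 3, perpendicular = 10). Assigning each segment to the cell of its midpoint and scoring every non-parallel, non-perpendicular pair as "intersecting" are simplifications for the example:

```python
import math

def grid_feature_matrix(segments, bounds, cell, weights=None):
    """Divide the drawing into cells of size `cell` and give each cell a
    value reflecting the geometric relations of the segments inside it."""
    weights = weights or {"intersect": 1, "parallel": 3, "perpendicular": 10}
    xmin, ymin, xmax, ymax = bounds
    cols = math.ceil((xmax - xmin) / cell)
    rows = math.ceil((ymax - ymin) / cell)
    matrix = [[0] * cols for _ in range(rows)]

    def cell_of(seg):
        # Assign a segment to the cell of its midpoint (sketch-level choice).
        (x1, y1), (x2, y2) = seg
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        return (min(int((cy - ymin) // cell), rows - 1),
                min(int((cx - xmin) // cell), cols - 1))

    def angle(seg):
        # Undirected orientation in [0, pi).
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1) % math.pi

    per_cell = {}
    for seg in segments:
        per_cell.setdefault(cell_of(seg), []).append(seg)

    for (r, c), segs in per_cell.items():
        score = 0
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                d = abs(angle(segs[i]) - angle(segs[j]))
                d = min(d, math.pi - d)
                if d < 1e-6:
                    score += weights["parallel"]
                elif abs(d - math.pi / 2) < 1e-6:
                    score += weights["perpendicular"]
                else:
                    score += weights["intersect"]
        matrix[r][c] = score
    return matrix
```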
And 304, performing data processing on the target image matrix through a target detection model, and determining a matrix detection area corresponding to the target image matrix.
After the target image matrix is constructed from the target drawing data, the target image matrix can be processed by the target detection model, which outputs the matrix detection area corresponding to the target image matrix. The matrix detection area is the area of the target image matrix in which primitives of the target type are located.
In one possible implementation, the target image matrix is processed by the target detection model to determine candidate detection areas corresponding to the target image matrix; the candidate detection areas whose confidence is greater than a confidence threshold are determined to be matrix detection areas.

When the target detection model processes the target image matrix, it outputs the detection areas corresponding to the target image matrix together with a confidence for each detection area. The candidate detection areas output by the target detection model whose confidence is greater than the confidence threshold can then be taken as matrix detection areas, so that misclassification of primitives is avoided as far as possible.
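The confidence-filtering step can be sketched as follows (the tuple layout of a candidate and the default threshold are assumptions for the example):

```python
def filter_detections(candidates, conf_threshold=0.5):
    """Keep only candidate detection areas whose confidence exceeds the
    threshold.  Each candidate is (box, confidence), where box is
    (x_min, y_min, x_max, y_max) in matrix coordinates."""
    return [box for box, conf in candidates if conf > conf_threshold]
```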
Step 305, converting the matrix detection area to a coordinate system corresponding to the target drawing data, so as to obtain the target detection area.
After the matrix detection area is obtained from the target detection model, the correspondence between the matrix coordinates of the matrix detection area and the coordinates of the target drawing data can be determined from the correspondence between the target image matrix and the target drawing data. The matrix detection area is thereby converted into the coordinate system of the target drawing data, yielding the target detection area expressed in drawing coordinates.
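Assuming the target image matrix was produced by uniformly scaling the drawing's bounding box (an assumption for the example), the conversion of a detection box back to drawing coordinates can be sketched as:

```python
def matrix_box_to_drawing(box, matrix_shape, drawing_bounds):
    """Map an axis-aligned detection box from matrix (pixel) coordinates
    back to the drawing's vector coordinate system."""
    rows, cols = matrix_shape
    xmin, ymin, xmax, ymax = drawing_bounds
    sx = (xmax - xmin) / cols           # drawing units per matrix column
    sy = (ymax - ymin) / rows           # drawing units per matrix row
    bx1, by1, bx2, by2 = box            # matrix coords: column/row indices
    return (xmin + bx1 * sx, ymin + by1 * sy,
            xmin + bx2 * sx, ymin + by2 * sy)
```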
Step 306, judging whether each graphic element in the target drawing data is of the target type according to the intersection condition of each graphic element in the target drawing data and the target detection area.
In one possible implementation manner, acquiring intersecting primitives intersecting with a target detection area in target drawing data;
when the length of the intersected graphic element is within a specified interval, a first intersection point and a second intersection point of the intersected graphic element and the target detection area are obtained;
calculating a first distance corresponding to the first intersection point and a second distance corresponding to the second intersection point; the first distance is the sum of the distances between the first intersection point and each end point of the target detection area; the second distance is the sum of the distances between the second intersection point and each end point of the target detection area;
And determining the intersected graphic element as a target type when the difference value between the first distance and the second distance is smaller than a distance threshold value.
For example, in the building field, the primitive types to be distinguished are usually wall, column, beam, door and window structures, which typically consist of several straight lines; the primitives themselves are therefore usually straight-line segments. For example, to detect whether primitives are of the column type, each primitive may be compared with a column-type target detection area to check whether it intersects that area.

When the length of an intersecting primitive is outside the specified interval, i.e., greater than the interval's upper bound or smaller than its lower bound, the primitive is clearly unlikely to be of the column type; when the length of the intersecting primitive is within the specified interval, the primitive is judged further.

At this point, for an intersecting primitive whose length is within the specified interval, the first intersection point and the second intersection point of the primitive with the target detection area are obtained, and for each intersection point the sum of its distances to the endpoints of the target detection area is calculated. After the first distance and the second distance are calculated, they are compared: when the difference between the first distance and the second distance is smaller than the distance threshold, the intersecting primitive is considered to be well covered by the target detection area and is more likely to be a primitive of the target type; when the difference is not smaller than the distance threshold, the intersecting primitive is not well covered by the target detection area and is less likely to be a primitive of the target type.
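A sketch of this judgment, assuming the two intersection points with the detection area have already been computed and approximating the primitive's length by the distance between them (both are simplifications; in the ideal case described later, a component primitive lies entirely inside the area, so the intersection points coincide with its endpoints). The interval bounds and distance threshold are illustrative:

```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_target_type(p1, p2, box_corners,
                   length_range=(1.0, 50.0), dist_threshold=1.0):
    """p1, p2: first and second intersection points of the primitive with
    the target detection area (assumed precomputed)."""
    length = _dist(p1, p2)
    if not (length_range[0] <= length <= length_range[1]):
        return False                      # outside the specified interval
    # First distance: sum of distances from p1 to each corner of the area;
    # second distance: the same for p2.
    d1 = sum(_dist(p1, c) for c in box_corners)
    d2 = sum(_dist(p2, c) for c in box_corners)
    # A small difference means the primitive is covered symmetrically by
    # the area, so it is more likely to be of the target type.
    return abs(d1 - d2) < dist_threshold
```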
Referring to fig. 4, a flowchart of a primitive type detection method according to an embodiment of the present application is shown, and as shown in fig. 4, the primitive type detection method includes the following steps:
a: obtaining primitive information;
b: judge whether the confidence of the component area output by target detection is greater than threshold 1; if so, go to c; otherwise, go to k;

c: obtain the primitives intersecting the component area;

d: judge whether any intersecting primitive exists; if so, go to e; otherwise, go to k;

e: judge whether any intersecting primitive remains unprocessed; if so, go to f; otherwise, go to k;

f: judge whether the length of the intersecting primitive is greater than threshold 1; if so, go to e; otherwise, go to g;

g: judge whether the length of the intersecting primitive is smaller than threshold 2; if so, go to e; otherwise, go to h;

h: solve for the unique intersecting line segment of the primitive with each component area; ideally, a primitive belonging to a component should be completely contained in the target detection area, so if a primitive has several intersecting line segments with one component area, those intersecting line segments are not retained;

j: the component area obtained by the target detection algorithm is a quadrilateral area; obtain its four endpoints, then calculate the differences between the distances from the two endpoints of the intersecting line segment to the four endpoints of the component area; if the distance difference is within a certain threshold, add the primitive to the corresponding component category list and count the primitive list of the layer in which the candidate component category in the identified component area is located, then go to e; otherwise, go to e;

k: end the flow.
Step 307, obtaining the number of primitives in each layer of the target drawing data and the number of primitives of the target type in each layer.
Before the primitives of each layer in the target drawing data are counted, the attributes of each layer can be checked.

In one possible implementation, the geometric features of each layer are acquired; when the geometric features of a first layer are detected not to meet a specified condition, the first layer is determined to be an invalid layer and its primitives are not counted.

In one possible implementation, when a second layer is detected to contain target identification information, the second layer is determined to be an invalid layer and its primitives are not counted.
And 308, determining the target layer as the layer of the target type when the number of the primitives of the target type in the target layer in the target drawing data is detected to be larger than a specified threshold value.
When the number of primitives of the target type in the target layer is greater than the specified threshold, it indicates that a large proportion of the primitives in the target layer are of the target type, so the target layer can be determined to be a layer of the target type.
Fig. 5 is a schematic flow chart of a layer classification method according to an embodiment of the application. As shown in fig. 5, the primitive list of the candidate layers (i.e., the primitives that require type detection) and the primitive list obtained by the classifier are acquired; the number of primitives of each layer in the candidate layer list is then counted by layer name, as is the number of primitives of each layer computed by the classifier.

The layer geometry classifiers are traversed; for each layer, the ratio of the number of primitives computed by the classifier to the number of primitives of that layer in the candidate layer list is calculated, and when the ratio is greater than a threshold, the layer name is added to the final primitive category.
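The ratio-based layer decision can be sketched as follows (the dictionary layout and the threshold are assumptions for the example):

```python
def classify_layers(candidate_counts, detected_counts, ratio_threshold=0.5):
    """candidate_counts: {layer_name: total primitive count};
    detected_counts: {layer_name: count of primitives the classifier
    marked as the target component type}.  A layer is assigned the
    target type when detected/total exceeds the threshold."""
    result = []
    for layer, total in candidate_counts.items():
        if total == 0:
            continue                      # nothing to judge in this layer
        detected = detected_counts.get(layer, 0)
        if detected / total > ratio_threshold:
            result.append(layer)
    return result
```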
In summary, when layers need to be classified, the computer device may acquire the target drawing data as a whole and perform target detection on it through the target detection model to obtain the target detection area in which primitives of the target type are located. The computer device then acquires the primitives in the target drawing data that intersect the target detection area and judges, according to the target detection area, whether each primitive in the target drawing data is of the target type. After every primitive has been judged, when the number of primitives of the target type in a target layer of the target drawing data is greater than the specified threshold, the target layer is determined to be a layer of the target type. Because the primitive classification process considers the drawing data as a whole, the classification accuracy of the layers is improved.
Fig. 6 is a method block diagram illustrating a layer classification method according to an exemplary embodiment. The method is performed by a computer device, which may be a server or a terminal in a layer classification system as shown in fig. 1. As shown in fig. 6, the method modules of the layer classification method may include the following:
1. and the primitive cleaning module is used for:
Fig. 7 is a schematic diagram of a primitive cleaning module according to an embodiment of the present application. This module is optional; its main function is to remove interfering primitives that affect the target detection algorithm, mainly dimension annotations, text numbering, the axis grid, and the like in the drawing. The module has a filter structure.
The first layer performs accurate filtering based on the structured information in the CAD file, such as text and dimension annotations. Since primitive objects in a CAD file carry type attributes, primitives that affect target detection and recognition can be filtered efficiently and accurately based on this attribute information;

the second layer performs filtering based on the general rules of construction engineering design: the primitives are traversed, and clustering and geometric-configuration analysis are performed on the primitive features. This filter is suitable for primitives that follow customary drawing rules and interfere with target detection and recognition, such as the axis grid.
After processing by the primitive cleaning module, the remaining primitives of the CAD drawing (i.e., the target drawing data) are input to module 2 for data conversion.
2. Data conversion module
This module converts the cleaned two-dimensional CAD drawings of different types into a matrix representation, which serves as the input to the algorithm of module 3.
The feature matrix is mainly of two types: a. an image feature matrix; b. vector feature matrix.
Image feature matrix: converting the cleaned CAD drawing without distinguishing the layers into a bitmap pixel image matrix;
vector feature matrix: dividing the cleaned CAD drawing into grids with fixed sizes without distinguishing layers, calculating the geometric relationship (intersecting, parallel, vertical and the like) of the primitives in each grid, and assigning values to the grids to generate a vector feature matrix.
3. Target detection module
The basic step of this module is to feed the matrix representation output by the previous module into a neural network model for component target detection and recognition. The neural network model may be an open-source model such as YOLO, RetinaNet, or Faster R-CNN, or a self-developed neural network model. The module detects the areas of the components for which layer classification is required (such as walls, columns, beams, doors, windows, furniture, etc.).
4. Coordinate conversion module
The geometric coordinate system of the component-area detection results is the matrix coordinate system produced by data conversion, which differs from the vector coordinate system of the primitives. The function of this module is to map the identified component-area coordinates from the matrix coordinate system back to the original vector coordinate system of the CAD drawing, as input to module 5 for matching CAD primitives with the identified component areas.
5. Primitive marking module
The identified component area (i.e., the target detection area) usually intersects several different primitives. The primitive marking module marks a primitive belonging to a specified component with its component category according to its intersection relation with the identified component area, adds it to the corresponding candidate component category list, and at the same time counts the primitive list of the layer in which the candidate component category in the identified component area is located. The operation logic of the primitive marking module may be similar to the primitive classification logic shown in fig. 4 and is not repeated here.
6. Layer classification module
This module classifies the layers using the geometric information of the primitives, the name information, and the extracted component detection information. It mainly uses several general, easily computed features for classification. Fig. 8 is a schematic structural diagram of a layer classification model according to an embodiment of the present application.
First, layers that need no attention are filtered out through a layer-name filter, such as invalid layers describing the axis grid, dimension annotations, and text, e.g., AXIS and PUB_DIM;

then, in each layer, obvious geometric features of the components are calculated, such as parallel-line features (whether the layer contains parallel lines and the proportion of parallel line segments) and arc features (the proportion of arc primitives), and layers without component geometric relationships are filtered out;

finally, for the different types of target detection classifiers (such as walls, beams, columns, doors, windows, etc.), the ratio of the number of primitives in the candidate component category marked by the primitive marking module to the total number of primitives in each layer's candidate component area set is calculated, to judge which component type the layer belongs to.
7. Man-machine interaction module
This is an optional module: if manual correction is added during recognition, the final recognition rate can be improved. The module can be added when the application scenario includes a human-computer interaction process. Its functions are: 1. manually delineating the primitive region that requires layer classification, reducing noise data left unfiltered by the primitive cleaning module; 2. deleting and supplementing target boxes that target detection has wrongly identified or missed; 3. correcting the final result of the layer classification module.
Fig. 9 is a block diagram illustrating a structure of a layer classification apparatus according to an exemplary embodiment. The device comprises:
the drawing acquisition module 901 is used for acquiring target drawing data;
the detection area acquisition module 902 is configured to perform data processing on the target drawing data through a target detection model, and determine a target detection area where the primitive of the target type is located;
the target type judging module 903 is configured to judge whether each primitive in the target drawing data is of the target type according to the intersection situation of each primitive in the target drawing data and the target detection area;
the layer type determining module 904 is configured to determine, when detecting that the number of primitives of the target type in the target layer of the target drawing data is greater than a specified threshold, the target layer as a layer of the target type.
In one possible implementation manner, the detection area acquisition module includes:
the image matrix acquisition unit is used for acquiring a target image matrix according to the primitive distribution of the target drawing data;
the matrix region determining unit is used for performing data processing on the target image matrix through a target detection model and determining a matrix detection region corresponding to the target image matrix;
And the detection area acquisition unit is used for converting the matrix detection area into a coordinate system corresponding to the target drawing data so as to acquire the target detection area.
In a possible implementation, the image matrix acquisition unit is further configured to,
and constructing the target image matrix by taking each pixel point in the target drawing data as a matrix element.
In a possible implementation, the image matrix acquisition unit is further configured to,
dividing the target drawing data into target grids according to a specified size;
determining values in each target grid according to the primitive geometric relationship contained in each target grid so as to form the target image matrix; the primitive geometry relationships comprise at least one of intersecting, parallel, perpendicular.
In one possible implementation manner, the detection area acquisition module further includes:
the primitive attribute detection unit is used for detecting the type attribute of each primitive in the target drawing data and deleting the primitive with the target type attribute in the target drawing data;
wherein the target type attribute comprises at least one of text annotation and dimension annotation.
In a possible implementation, the matrix area determination unit is further adapted to,
performing data processing on the target image matrix through a target detection model, and determining a candidate detection area corresponding to the target image matrix;
and determining the candidate detection areas whose confidence is greater than a confidence threshold to be the matrix detection area.
In one possible implementation manner, the target type determining module includes:
acquiring intersecting primitives which intersect with a target detection area in target drawing data;
when the length of the intersected graphic element is in a specified interval, a first intersection point and a second intersection point of the intersected graphic element and the target detection area are obtained;
calculating a first distance corresponding to the first intersection point and a second distance corresponding to the second intersection point; the first distance is the sum of the distances between the first intersection point and each end point of the target detection area; the second distance is the sum of the distances between the second intersection point and each end point of the target detection area;
and determining the intersected graphic element as a target type when the difference value between the first distance and the second distance is smaller than a distance threshold value.
In summary, when layers need to be classified, the computer device may acquire the target drawing data as a whole and perform target detection on it through the target detection model to obtain the target detection area in which primitives of the target type are located. The computer device then acquires the primitives in the target drawing data that intersect the target detection area and judges, according to the target detection area, whether each primitive in the target drawing data is of the target type. After every primitive has been judged, when the number of primitives of the target type in a target layer of the target drawing data is greater than the specified threshold, the target layer is determined to be a layer of the target type. Because the primitive classification process considers the drawing data as a whole, the classification accuracy of the layers is improved.
Referring to fig. 10, a schematic diagram of a computer device according to an exemplary embodiment of the present application is provided, where the computer device includes a memory and a processor, and the memory is configured to store a computer program, where the computer program is executed by the processor to implement the method described above.
The processor may be a central processing unit (Central Processing Unit, CPU). The processor may also be any other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules, corresponding to the methods in embodiments of the present application. The processor executes various functional applications of the processor and data processing, i.e., implements the methods of the method embodiments described above, by running non-transitory software programs, instructions, and modules stored in memory.
The memory may include a memory program area and a memory data area, wherein the memory program area may store an operating system, at least one application program required for a function; the storage data area may store data created by the processor, etc. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some implementations, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In an exemplary embodiment, a computer readable storage medium is also provided for storing at least one computer program that is loaded and executed by a processor to implement all or part of the steps of the above method. For example, the computer readable storage medium may be Read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), compact disc Read-Only Memory (CD-ROM), magnetic tape, floppy disk, optical data storage device, and the like.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of layer classification, the method comprising:
acquiring target drawing data;
performing data processing on the target drawing data through a target detection model, and determining a target detection area where the primitives of the target type are located;
judging whether each graphic element in the target drawing data is of the target type according to the intersection condition of each graphic element in the target drawing data and a target detection area;
And when detecting that the proportion of the number of primitives of the target type in a target layer of the target drawing data is greater than a specified threshold, determining the target layer as a layer of the target type.
2. The method of claim 1, wherein the determining, by the target detection model, the target detection area in which the primitives of the target type are located includes:
acquiring a target image matrix according to the primitive distribution of the target drawing data;
performing data processing on the target image matrix through a target detection model, and determining a matrix detection area corresponding to the target image matrix;
and converting the matrix detection area into a coordinate system corresponding to the target drawing data to obtain the target detection area.
3. The method of claim 2, wherein obtaining a target image matrix from the primitive distribution of the target drawing data comprises:
and constructing the target image matrix by taking each pixel point in the target drawing data as a matrix element.
4. The method of claim 2, wherein obtaining a target image matrix from the primitive distribution of the target drawing data comprises:
Dividing the target drawing data into target grids according to a specified size;
determining values in each target grid according to the primitive geometric relationship contained in each target grid so as to form the target image matrix; the primitive geometry relationships comprise at least one of intersecting, parallel, perpendicular.
5. The method according to claim 2, wherein before the obtaining the target image matrix according to the primitive distribution of the target drawing data, the method further comprises:
detecting the type attribute of each primitive in the target drawing data, and deleting the primitive with the target type attribute in the target drawing data;
wherein the target type attribute comprises at least one of text annotation and dimension annotation.
6. The method according to claim 2, wherein the data processing the target image matrix by the target detection model, and determining a matrix detection area corresponding to the target image matrix, includes:
performing data processing on the target image matrix through a target detection model, and determining a candidate detection area corresponding to the target image matrix;
and determining the candidate detection areas whose confidence is greater than a confidence threshold to be the matrix detection area.
7. The method according to any one of claims 1 to 6, wherein determining whether each primitive in the target drawing data is of the target type according to an intersection condition of each primitive in the target drawing data and a target detection area includes:
acquiring intersecting primitives which intersect with a target detection area in target drawing data;
when the length of the intersected graphic element is in a specified interval, a first intersection point and a second intersection point of the intersected graphic element and the target detection area are obtained;
calculating a first distance corresponding to the first intersection point and a second distance corresponding to the second intersection point; the first distance is the sum of the distances between the first intersection point and each end point of the target detection area; the second distance is the sum of the distances between the second intersection point and each end point of the target detection area;
and determining the intersected graphic element as a target type when the difference value between the first distance and the second distance is smaller than a distance threshold value.
8. A layer classification apparatus, the apparatus comprising:
a drawing acquisition module, configured to acquire target drawing data;
a detection area acquisition module, configured to perform data processing on the target drawing data through a target detection model and determine a target detection area where primitives of a target type are located;
a target type judging module, configured to judge whether each primitive in the target drawing data is of the target type according to the intersection condition of each primitive in the target drawing data with the target detection area; and
a layer type determining module, configured to determine a target layer to be a layer of the target type when the number of primitives of the target type detected in the target layer in the target drawing data is greater than a specified threshold.
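The layer-type determining module of claim 8 reduces to counting, per layer, the primitives judged to be of the target type. The sketch below is hypothetical glue code: the data layout (layer-name/primitive pairs) and the per-primitive judgment function are assumptions, standing in for the target type judging module:

```python
from collections import Counter

def classify_layers(primitives, is_target, count_threshold):
    """Label a layer as the target type when it holds more than
    `count_threshold` primitives judged to be of the target type.

    `primitives` is an iterable of (layer_name, primitive) pairs;
    `is_target` is the per-primitive judgment (stand-in for the
    target type judging module)."""
    counts = Counter(layer for layer, prim in primitives if is_target(prim))
    return {layer for layer, n in counts.items() if n > count_threshold}

# Toy drawing data: three wall primitives on "walls", one label on "text".
drawing = [("walls", "w1"), ("walls", "w2"), ("walls", "w3"), ("text", "t1")]
target_layers = classify_layers(drawing, lambda p: p.startswith("w"), 2)
```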
9. A computer device, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the layer classification method of any one of claims 1 to 7.
10. A computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to implement the layer classification method of any one of claims 1 to 7.
CN202210172791.0A 2022-02-24 2022-02-24 Layer classification method, device, equipment and storage medium Pending CN116704542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210172791.0A CN116704542A (en) 2022-02-24 2022-02-24 Layer classification method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210172791.0A CN116704542A (en) 2022-02-24 2022-02-24 Layer classification method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116704542A true CN116704542A (en) 2023-09-05

Family

ID=87841917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210172791.0A Pending CN116704542A (en) 2022-02-24 2022-02-24 Layer classification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116704542A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237977A (en) * 2023-11-16 2023-12-15 江西少科智能建造科技有限公司 Area division method and system for CAD drawing
CN117237978A (en) * 2023-11-16 2023-12-15 江西少科智能建造科技有限公司 CAD drawing electrical bridge information extraction method and system
CN117237977B (en) * 2023-11-16 2024-03-08 江西少科智能建造科技有限公司 Area division method and system for CAD drawing
CN117237978B (en) * 2023-11-16 2024-03-08 江西少科智能建造科技有限公司 CAD drawing electrical bridge information extraction method and system

Similar Documents

Publication Publication Date Title
US10360703B2 (en) Automatic data extraction from a digital image
US11416672B2 (en) Object recognition and tagging based on fusion deep learning models
Zhao et al. Reconstructing BIM from 2D structural drawings for existing buildings
CN108804815A (en) A kind of method and apparatus assisting in identifying wall in CAD based on deep learning
CN111709409A (en) Face living body detection method, device, equipment and medium
CN108875599A (en) A kind of identification check of drawings method of building trade ENGINEERING CAD drawing
CN116704542A (en) Layer classification method, device, equipment and storage medium
CN110874618B (en) OCR template learning method and device based on small sample, electronic equipment and medium
CN113780270B (en) Target detection method and device
CN113033321A (en) Training method of target pedestrian attribute identification model and pedestrian attribute identification method
CN114647713A (en) Knowledge graph question-answering method, device and storage medium based on virtual confrontation
KR102083786B1 (en) Method and apparatus for identifying string and system for identifying displaing image using thereof
CN115131803A (en) Document word size identification method and device, computer equipment and storage medium
CN117058723B (en) Palmprint recognition method, palmprint recognition device and storage medium
CN111898528B (en) Data processing method, device, computer readable medium and electronic equipment
CN117932763A (en) Expressway traffic model construction method based on digital twin
CN117690098A (en) Multi-label identification method based on dynamic graph convolution under open driving scene
CN111950646A (en) Hierarchical knowledge model construction method and target identification method for electromagnetic image
CN114937277B (en) Image-based text acquisition method and device, electronic equipment and storage medium
Tian Analysis of Chinese Painting Color Teaching Based on Intelligent Image Color Processing Technology in the Network as a Green Environment
KR20230036327A (en) Automatic extraction method of indoor spatial information from floor plan images through patch-based deep learning algorithms and device thereof
CN110309285B (en) Automatic question answering method, device, electronic equipment and storage medium
CN113658195A (en) Image segmentation method and device and electronic equipment
CN111612890A (en) Method and device for automatically generating three-dimensional model from two-dimensional house type diagram and electronic equipment
CN117173731B (en) Model training method, image processing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination