CN113177995A - Text recombination method for CAD drawing and computer readable storage medium - Google Patents

Info

Publication number
CN113177995A
CN113177995A (application CN202110484752.XA)
Authority
CN
China
Prior art keywords
text
target
coordinates
cad drawing
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110484752.XA
Other languages
Chinese (zh)
Inventor
丁冠华
谭文宇
陈家宁
陈兵
郭鑫
王忠家
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Glodon Co Ltd
Original Assignee
Glodon Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Glodon Co Ltd
Priority to CN202110484752.XA
Publication of CN113177995A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/166 Editing, e.g. inserting or deleting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities

Abstract

The invention discloses a text recombination method of a CAD drawing, which comprises the following steps: determining bitmap information and element vector information according to a CAD drawing containing a target text, wherein the element vector information comprises vector information of each element in the CAD drawing; inputting the bitmap information into a preset element identification model to determine a text mapping position of a target text of the CAD drawing in the bitmap information; determining target vector information corresponding to the text mapping position from the element vector information; and recombining the target text of the CAD drawing according to the target vector information. The invention also discloses a text recombination device of the CAD drawing, a computer device and a computer readable storage medium.

Description

Text recombination method for CAD drawing and computer readable storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a text recombination method and device of a CAD drawing, computer equipment and a computer readable storage medium.
Background
In the building field and other industrial fields, users often rely on professional knowledge to identify the content of CAD drawings in order to build solid three-dimensional models or guide actual production.
In practical applications, a CAD drawing contains a great amount of text content to supplement what the graphics alone cannot express. When a user interprets a CAD drawing, this text content often needs to be recombined after professional judgment.
However, the inventors have found through research that the data structure of text content in CAD drawings is not standardized; even the same type of text content may be expressed differently depending on a designer's habits. A user must therefore first understand the text content and then manually reconstruct it, so that when batches of CAD drawings are processed, the reconstruction work is highly repetitive and extremely inefficient.
No effective solution has yet been proposed for the prior-art technical problem that manually recombining the text content of batches of CAD drawings involves heavy repetition and low efficiency.
Disclosure of Invention
The invention aims to provide a text recombination method and apparatus for CAD drawings, a computer device, and a computer-readable storage medium, which can solve the prior-art technical problem that manually recombining the text in batches of CAD drawings involves heavy repetition and low efficiency.
One aspect of the present invention provides a method for reconstructing a text of a CAD drawing, the method including: determining bitmap information and element vector information according to a CAD drawing containing a target text, wherein the element vector information comprises vector information of each element in the CAD drawing; inputting the bitmap information into a preset element identification model to determine a text mapping position of a target text of the CAD drawing in the bitmap information; determining target vector information corresponding to the text mapping position from the element vector information; and recombining the target text of the CAD drawing according to the target vector information.
Optionally, the inputting the bitmap information into a preset element recognition model to determine a text mapping position of a target text of the CAD drawing in the bitmap information includes: inputting the bitmap information into the preset element identification model to obtain the element types contained in the CAD drawing and the coordinates of elements of each type in the bitmap information; and screening out, from the obtained coordinates, the coordinates whose element type is the text type, and taking these coordinates as the text mapping position.
Optionally, the determining, from the element vector information, target vector information corresponding to the text mapping position includes: determining the coordinates of the outer frame of the CAD drawing in the bitmap information, and recording the coordinates as first coordinates; determining coordinates of the outer frame of the CAD drawing in the CAD drawing from the element vector information, and recording the coordinates as second coordinates; determining a mapping relation between the first coordinate and the second coordinate, and calculating a target coordinate having the same mapping relation with the text mapping position; determining the target vector information including the target coordinates from the element vector information.
Optionally, the determining, in the bitmap information, coordinates of an outer frame of the CAD drawing, which is recorded as first coordinates, includes: determining a pixel point position set of an outer frame of the CAD drawing in the bitmap information, wherein the pixel point position set comprises pixel point positions in the horizontal direction and pixel point positions in the vertical direction; and determining the coordinates of the outer frame of the CAD drawing in the bitmap information according to the pixel point position set, and recording the coordinates as the first coordinates.
Optionally, the reconstructing the target text of the CAD drawing according to the target vector information includes: determining coordinates and text content of the target text according to the target vector information; clustering the text contents of the target text according to the coordinates of the target text to obtain the text contents belonging to the same line; and recombining the target text of the CAD drawing according to the text content which belongs to the same line and is obtained by clustering.
Optionally, the determining the coordinates and the text content of the target text according to the target vector information includes: acquiring coordinates and a text structure of the target text from the target vector information; judging whether the text structure of the target text contains a preset type of text structure; if so, converting the contained text structure of the preset category into corresponding text content, and taking the text structure of a non-preset category in the converted text content and the text structure of the target text as the text content of the target text; if not, directly taking the text structure of the target text as the text content of the target text.
Optionally, the reconstructing the target text of the CAD drawing according to the clustered text contents belonging to the same line includes: judging whether text contents of preset categories exist in the text contents which belong to the target line and are obtained through clustering; if so, intercepting the text content of the preset category from the text content which is obtained by clustering and belongs to the target line, determining the vacant position in the remaining text content of the non-preset category, and filling the intercepted text content of the preset category to the vacant position to recombine the target line of the target text; if not, directly taking the text content which is obtained by clustering and belongs to the target line as the target line of the recombined target text.
Optionally, after the target text of the CAD drawing is recombined according to the clustered text contents belonging to the same line, the method further includes: determining start-stop marks for marking the text content of each sentence from the recombined target texts; recognizing text content of each sentence from the target text according to the start-stop mark; inputting the identified text content of the target sentence into a preset feature labeling model so that the preset feature labeling model identifies each target feature in the text content of the target sentence and outputs the text content of the target sentence with each target feature labeled; and inputting the text content of the target sentence marked with the target characteristics into a preset characteristic relation marking model, so that the preset characteristic relation marking model performs semantic analysis on the input text content of the target sentence and outputs the text content of the target sentence marked with the relation among the target characteristics.
Optionally, the preset element identification model is obtained by training a RetinaNet model or a YOLOv5 model.
Yet another aspect of the present invention provides a computer apparatus, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the text reorganization method of the CAD drawing of any of the embodiments described above.
Yet another aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the text reorganization method of the CAD drawings according to any of the embodiments described above.
The text recombination method of the CAD drawing automatically identifies the text mapping position of the target text in the bitmap information through the preset element identification model, and then determines, from the element vector information, the target vector information corresponding to the text mapping position. The target vector information includes the vector information of the target text, and this vector information includes the specific values, font sizes, position information, and the like of the corresponding elements. Meanwhile, the invention takes into account that the bitmap information is an image with a fixed resolution: if the target text were identified directly from the bitmap information, the recognition result might be inaccurate because of distortion, whereas recombining the target text from the target vector information avoids this distortion.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of a text reorganization method for a CAD drawing according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a labeled feature relationship provided in an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a text reorganization method of a CAD drawing according to a second embodiment of the present invention;
FIG. 4 is a block diagram of a text reorganization apparatus of a CAD drawing provided in the third embodiment of the present invention;
fig. 5 is a block diagram of a computer device suitable for implementing a text reorganization method of a CAD drawing according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
The terms used in the present invention are explained as follows:
Bitmap information: a bitmap, also called a bitmap image, consists of pixel points; pictures taken by a digital camera, pictures scanned by a scanner, computer screenshots, and the like are all bitmaps. For example, pictures in png, bmp, and jpeg formats all belong to bitmaps.
Primitives (graphic elements): points, lines, circles (arcs), area fills, characters, and the like.
Elements: a CAD drawing contains a number of elements, each composed of primitives; the elements in a CAD drawing may include tables, text, superimposed plate models, and the like.
Persistence: converting transient data (e.g., data in memory, which cannot be stored permanently) into persistent data (e.g., data persisted into a database, which can be stored for a long time). In the present application, the element vector information may be stored in a persistent file, obtained by parsing out the vector elements in the CAD drawing and persistently outputting them; the persistent file may be a json file.
Example one
In the prior art, recombining text in a CAD drawing imposes strong restrictions on the text format and requires many user interventions. For example, the user must manually specify the text range, which makes the work repetitive and time-consuming when the amount of text is large; and when recombining text content, the user must determine the type of text being operated on, which requires accumulated business and professional knowledge and places high demands on the user. In the text recombination method provided by the invention, the bitmap information is an image with a fixed resolution; if the target text were recognized directly from the bitmap information, distortion could make the recognition result inaccurate. The vector information stored in the element vector information, however, includes the specific values, font sizes, colors, position information, and the like of each element. Therefore, the target vector information is determined by combining the bitmap information and the element vector information, and the target text is recombined from the target vector information, which both automates text recombination and ensures the accuracy of the result. Specifically, FIG. 1 shows a flowchart of a text reorganization method of a CAD drawing according to the first embodiment of the present invention; as shown in FIG. 1, the method may include steps S1 to S4, wherein:
step S1, determining bitmap information and element vector information according to a CAD drawing containing a target text, wherein the element vector information comprises vector information of each element in the CAD drawing.
The purpose of this step is to convert the CAD drawing into formats that can be processed subsequently, where the CAD drawing format includes the dwg format. First, the CAD drawing is parsed with an ODA (Open Design Alliance) toolkit: the vector elements in the CAD drawing are rendered into bitmap information with a fixed resolution, and the CAD drawing is parsed to obtain the element vector information, where the vector elements carry the element vector information, which may include the specific values, font sizes, and colors of the elements, the position information of the elements in the CAD drawing, and the like. Optionally, to prevent data loss, the element vector information may also be persisted into a persistent file.
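As a minimal sketch of the persistence step described above, the element vector information can be written to a json persistent file. The field names below (`type`, `value`, `font_size`, `color`, `position`) are illustrative assumptions, not a structure defined by the patent:

```python
import json

# Hypothetical element vector information parsed from a CAD drawing;
# each entry carries the element's value, font size, color, and position.
element_vector_info = [
    {"type": "text", "value": "C30", "font_size": 3.5,
     "color": "white", "position": [1200.0, 860.5]},
    {"type": "line", "color": "yellow",
     "position": [[0.0, 0.0], [42000.0, 0.0]]},
]

def persist_elements(elements, path):
    """Persist transient element vector information into a json file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(elements, f, ensure_ascii=False, indent=2)

persist_elements(element_vector_info, "drawing_elements.json")
```

Persisting to json keeps the vector data readable and recoverable even if the in-memory parse result is lost.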
And step S2, inputting the bitmap information into a preset element identification model to determine the text mapping position of the target text of the CAD drawing in the bitmap information.
The target text may include one text, two texts, or multiple texts in the CAD drawing. The text mapping position may be a specific coordinate or a region range.
Alternatively, step S2 may include step S21 and step S22, wherein:
step S21, inputting the bitmap information into the preset element identification model, and obtaining element types contained in the CAD drawing and coordinates of elements of each type in the bitmap information;
and step S22, screening out coordinates with element types corresponding to the text types from the obtained coordinates, and taking the coordinates as the text mapping positions.
In this embodiment, the determined text mapping position is a coordinate, where the element type may include a plan view, a text, a table, a title, a sub-description, a detailed view, a tab, a countersign bar, and the like.
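Steps S21 and S22 amount to filtering the model's detections by element type. The sketch below assumes a hypothetical detection format of (element type, bounding-box coordinates) pairs; the type names are illustrative:

```python
# Hypothetical detections returned by the preset element recognition
# model for one bitmap: (element_type, bounding-box) pairs.
detections = [
    ("plan_view", (50, 50, 800, 600)),
    ("text", (120, 640, 300, 660)),
    ("table", (820, 50, 1000, 400)),
    ("text", (120, 670, 280, 690)),
]

def text_mapping_positions(detections):
    """Step S22: keep only the coordinates whose element type is text."""
    return [coords for etype, coords in detections if etype == "text"]

print(text_mapping_positions(detections))
```

The same filter with `"frame"` in place of `"text"` would yield the outer-frame coordinates used later as the first coordinates.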
Optionally, the preset element recognition model is obtained by training, and specifically includes:
acquiring a bitmap sample data set, wherein the bitmap sample data set comprises a training set and a testing set, the training set and the testing set both comprise a plurality of pieces of sample data, and the sample data comprises a bitmap sample, element types contained in a CAD sample and coordinates of elements of each type in the bitmap sample; the bitmap samples are obtained by CAD sample conversion;
taking bitmap samples of a plurality of sample data in a training set as input, and taking corresponding element types and coordinates as output to train a preset learning model;
inputting bitmap samples of a plurality of sample data in the test set into a trained learning model to obtain output element types and corresponding coordinates;
comparing the element types output by the trained learning model with the element types corresponding to the test set, and comparing the coordinates output by the trained learning model with the coordinates corresponding to the test set, and judging whether the accuracy of the trained learning model is greater than or equal to a preset threshold value;
and when the accuracy is more than or equal to a preset threshold value, taking the correspondingly trained learning model as a preset element recognition model.
The preset learning model may include a RetinaNet model or a YOLOv5 model. Specifically, the bitmap samples of the pieces of sample data in the training set may be used as the input of the RetinaNet model and the corresponding element types and coordinates as its output, so that the resulting preset identification model identifies element types and element coordinates in bitmap information quickly and accurately. Alternatively, the bitmap samples may be used as the input of the YOLOv5 model and the corresponding element types and coordinates as its output, in which case the resulting preset identification model identifies element types and element coordinates in bitmap information even faster than the RetinaNet-based model.
In this embodiment, labeling frames may be set in advance for the elements in the bitmap samples, each labeling frame containing exactly one element, and the coordinates of an element identified by the preset element identification model are the coordinates of its labeling frame. The coordinates labeled in the bitmap sample data set may be the coordinates of each element determined in a coordinate system specified in advance for the bitmap sample; or the coordinates of an element may be determined from its horizontal and vertical pixel positions in the bitmap sample, for example by scaling those pixel positions proportionally to obtain the element's horizontal and vertical coordinates, where the proportion may be, for example, 1.
Step S3, determining target vector information corresponding to the text mapping position from the element vector information. The target vector information comprises vector information of a target text of the CAD drawing.
In this embodiment, the element vector information includes the vector information of each element in the CAD drawing, and the CAD drawing contains the target text, so the element vector information necessarily includes the vector information of the target text. The target vector information determined from the element vector information according to the text mapping position may be exactly the vector information of the target text, or may include more than that; for example, the target vector information may further include predefined numbers, such as the first text, the second text, …, and the like.
Alternatively, when the text mapping position is a coordinate, step S3 may include steps S31 to S34, because coordinates allow fast and accurate positioning, wherein:
and step S31, determining the coordinates of the outer frame of the CAD drawing in the bitmap information, and recording the coordinates as first coordinates.
The first coordinate may be determined by two schemes, specifically:
scheme one (first coordinate is determined by the preset element identification model)
Inputting the bitmap information into the preset element identification model to obtain element types contained in the CAD drawing and coordinates of elements of various types in the bitmap information;
and screening out coordinates corresponding to the element type as the outer frame type from the obtained coordinates, and taking the coordinates as the first coordinates.
Scheme two (the first coordinate is determined from the horizontal and vertical pixel positions of the outer frame in the bitmap information)
Because the bitmap information has a fixed resolution (the resolution being the number of pixels in the horizontal direction and the number of pixels in the vertical direction of the picture), the horizontal and vertical coordinates of the outer frame in the bitmap information can be determined from the pixel positions in the horizontal and vertical directions. Specifically, step S31 may include step S311 and step S312, wherein:
step S311, determining a pixel point position set of the outer frame of the CAD drawing in the bitmap information, wherein the pixel point position set comprises pixel point positions in the horizontal direction and pixel point positions in the vertical direction;
step S312, determining the coordinates of the outer frame of the CAD drawing in the bitmap information according to the pixel point position set, and recording the coordinates as the first coordinates.
The outer frame means that all elements lie inside it; that is, compared with the other elements, the outer frame is located at the outermost side of the CAD drawing. Therefore, when determining the pixel position set of the outer frame, traversal can start from the four sides of the bitmap information and proceed inward until the horizontal and vertical positions of the first non-zero pixel are found; these positions form the pixel position set. To reduce the workload, only the horizontal and vertical pixel positions of the four vertices of the outer frame in the bitmap information may be determined, and the pixel positions along the frame formed by connecting the four vertices with straight lines are then collected into the pixel position set of the outer frame. For example, if the resolution of the bitmap information is 1024 × 768, there are 768 rows in the horizontal direction with 1024 pixels in each row, and 1024 columns in the vertical direction with 768 pixels in each column; the identified pixel position set may then include the 104th pixel of the 10th row horizontally together with the 100th pixel of the 50th column vertically, the 204th pixel of the 81st row horizontally together with the 137th pixel of the 108th column vertically, …, and so on. Further, the pixel position set is scaled proportionally to obtain the first coordinates, where the proportion is a positive number. For example, if the proportion is 1, the horizontal pixel position in the pixel position set is used directly as the abscissa and the vertical pixel position as the ordinate.
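The outside-in traversal of scheme two can be sketched on a toy bitmap. Here the bitmap is assumed to be a list of pixel rows with 0 as background, so the first non-zero row and column seen from each side bound the outer frame:

```python
def outer_frame_bbox(bitmap):
    """Scheme two, step S311: traverse from the four sides of the bitmap
    inward until the first non-zero pixel is met in each direction.
    Returns (min_col, min_row, max_col, max_row) of the outer frame."""
    rows = [i for i, row in enumerate(bitmap) if any(row)]
    cols = [j for j in range(len(bitmap[0]))
            if any(row[j] for row in bitmap)]
    # The outermost non-empty row/column on each side bounds the frame.
    return (min(cols), min(rows), max(cols), max(rows))

# Toy 5x5 bitmap whose outer frame is the ring of 1s.
bitmap = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(outer_frame_bbox(bitmap))  # (1, 1, 3, 3)
```

Scaling these pixel positions by the chosen positive proportion (step S312) then yields the first coordinates.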
And step S32, determining the coordinates of the outer frame of the CAD drawing in the CAD drawing from the element vector information, and recording the coordinates as second coordinates.
The vector information of each element in the CAD drawing includes the coordinates of that element in the CAD drawing. Traversing the element vector information for the coordinates that enclose the largest area yields the coordinates of the outer frame.
Step S33, determining a mapping relationship between the first coordinate and the second coordinate, and calculating a target coordinate having the same mapping relationship as the text mapping position.
The mapping relationship between the first coordinates and the second coordinates may be a proportional relationship between them; if the proportion is 2, each coordinate in the text mapping position (the coordinates of the target text in the bitmap information) is enlarged by a factor of 2 to obtain the target coordinates.
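Step S33 can be sketched as deriving the scale between the two outer-frame bounding boxes and applying it to the text mapping position. Axis-aligned uniform scaling with a possible offset is assumed here; the patent only specifies a proportional relationship:

```python
def map_to_drawing(text_pos, first_coords, second_coords):
    """Step S33: derive the mapping between the bitmap frame (first
    coordinates) and the drawing frame (second coordinates), then apply
    the same mapping to the text mapping position."""
    bx0, by0, bx1, by1 = first_coords    # outer frame in bitmap
    dx0, dy0, dx1, dy1 = second_coords   # outer frame in CAD drawing
    sx = (dx1 - dx0) / (bx1 - bx0)       # horizontal proportion
    sy = (dy1 - dy0) / (by1 - by0)       # vertical proportion
    x, y = text_pos
    return (dx0 + (x - bx0) * sx, dy0 + (y - by0) * sy)

# If the drawing frame is twice the bitmap frame, each coordinate doubles.
print(map_to_drawing((10, 10), (0, 0, 100, 100), (0, 0, 200, 200)))
```

With the proportion 2 of the example in the text, the text mapping position (10, 10) becomes the target coordinate (20.0, 20.0).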
Step S34, determining the target vector information including the target coordinates from the element vector information.
Because the vector information of each element includes the element's coordinates, the vector information containing the target coordinates can be looked up in reverse to obtain the target vector information. The lookup may be performed with a KD (K-Dimensional) tree search algorithm, or by traversing the coordinates of all elements.
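The reverse lookup of step S34 can be sketched as a linear scan over the element vector information; the element structure is the same hypothetical one used above, and a KD-tree (e.g. scipy.spatial.KDTree) would replace the scan at scale, as the text notes:

```python
def find_target_vector_info(elements, target_coord, tol=1e-6):
    """Step S34: traverse all elements' coordinates and return the
    vector information of those whose position matches the target
    coordinate (within a small tolerance for float comparison)."""
    tx, ty = target_coord
    hits = []
    for el in elements:
        x, y = el["position"]
        if abs(x - tx) <= tol and abs(y - ty) <= tol:
            hits.append(el)
    return hits

# Hypothetical element vector information (illustrative fields).
elements = [
    {"type": "text", "value": "C30", "position": (20.0, 20.0)},
    {"type": "line", "value": None, "position": (0.0, 0.0)},
]
print(find_target_vector_info(elements, (20.0, 20.0)))
```

In practice the target position would be a region rather than a point, and the match would test containment of the element's bounding box.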
And step S4, recombining the target text of the CAD drawing according to the target vector information.
The target text includes at least one of two categories of text: one is text of a preset category, such as legend symbols and/or model building blocks; the other is text of a non-preset category, such as characters, punctuation marks, numbers, letters, operation symbols, and the like. The target vector information includes the coordinates of the target text and the text structure of the target text, where the coordinates of the target text may include the coordinates of each primitive in the target text. Specifically: when the target text includes text of the preset category, the coordinates of the target text include the coordinates of each primitive in that preset-category text; when the target text includes text of a non-preset category, the coordinates of the target text include the coordinates of each primitive in that non-preset-category text. The text structure of the target text may include each primitive in the target text together with its color and size, or the color and line thickness of each primitive, etc. Specifically: when the target text includes text of the preset category, the text structure of the target text includes the color and line thickness of each primitive in the preset-category text; for example, if a primitive is a legend symbol (e.g., "/"), the text structure of the target text may include the color and line thickness of that legend symbol. When the target text includes text of a non-preset category, the text structure of the target text includes each primitive in the non-preset-category text together with its color and size; for example, if a primitive is a character, the text structure of the target text may include the character primitive itself and its color and size.
In this embodiment, the text content of the target text may be determined according to the text structure of the target text, and then the target text may be reconstructed according to the coordinates of the target text and the text content of the target text.
It should be noted that, when the target text includes a text of a preset category, correspondingly, the text content of the target text includes the text content of the preset category; when the target text comprises the text of the non-preset category, correspondingly, the text content of the target text comprises the text content of the non-preset category.
Specifically, step S4 may include steps S41 to S43, in which:
and step S41, determining the coordinates and text content of the target text according to the target vector information.
The coordinates of the target text can be directly obtained from the target vector information, the text structure of the target text is obtained, and then the text content of the target text is determined according to the text structure of the target text.
Specifically, step S41 may include steps S411 to S414, in which:
step S411, obtaining coordinates and a text structure of the target text from the target vector information;
step S412, judging whether the text structure of the target text contains a preset type of text structure;
step S413, if yes, converting the included text structure of the preset category into corresponding text content, and using a text structure of a non-preset category in the converted text content and the text structure of the target text as the text content of the target text;
and step S414, if not, directly taking the text structure of the target text as the text content of the target text.
In this embodiment, the text structure of the target text included in the target vector information includes a text structure of the preset category and/or a text structure of the non-preset category. In the process of determining the text content of the target text according to the target vector information, for example, the lengths and relations of the primitives forming the text of the preset category can be determined according to the coordinates corresponding to the text structure of the preset category, and the colors and line thicknesses of the primitives can then be determined according to the text structure of the preset category, so as to form the text content of the preset category; the text structure of the non-preset category requires no conversion and can be used directly as the text content.
And step S42, clustering the text contents belonging to the same line from the text contents of the target text according to the coordinates of the target text.
Specifically, the coordinate mean value of each primitive in the longitudinal direction is determined, and then the primitives with the closest coordinate mean values are combined together to serve as the text content of the same line. The coordinate mean is, for example, the mean of the highest point coordinate and the lowest point coordinate in the longitudinal direction of the primitive. The clustered text contents belonging to the same line may include: a preset category of text content and/or a non-preset category of text content.
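The clustering described in step S42 can be sketched as follows. The tuple layout, the tolerance, and the top-to-bottom ordering are illustrative assumptions, not the patent's exact scheme; each primitive carries its text content and the highest/lowest vertical coordinates of its bounding box, and primitives whose vertical coordinate means lie within a tolerance are grouped into one line.

```python
def cluster_into_lines(primitives, tolerance=2.0):
    """primitives: list of (content, y_top, y_bottom). Returns lists of
    contents grouped by the mean of the top and bottom coordinates."""
    lines = []  # each entry: [line_mean_y, [contents]]
    for content, y_top, y_bottom in primitives:
        mean_y = (y_top + y_bottom) / 2.0
        for line in lines:
            if abs(line[0] - mean_y) <= tolerance:
                line[1].append(content)
                break
        else:
            # no existing line is close enough: start a new one
            lines.append([mean_y, [content]])
    lines.sort(key=lambda line: -line[0])  # top of the drawing first
    return [contents for _, contents in lines]

prims = [("3.", 101, 99), ("represents", 100.5, 99.5),
         ("/", 100.8, 99.2), ("plate", 80.4, 79.6)]
print(cluster_into_lines(prims))  # [['3.', 'represents', '/'], ['plate']]
```

The first three primitives share a vertical coordinate mean of 100 and so form one line; "plate", with a mean of 80, forms another.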
And step S43, recombining the target text of the CAD drawing according to the text content which belongs to the same line and is obtained by clustering.
In this embodiment, the text content of each line in the target text may be determined according to the text content of the same line obtained by clustering, and then the text of each line is determined according to the text content of that line. For example, if the text content of a certain line contains only characters (non-preset category), the text of the line is determined according to the characters themselves, the size of the characters, the color of the characters, and the like defined in the text content. Further, after each line of text is determined, the target text is recombined. In the following, how to recombine a line of the target text according to the line of text content obtained by clustering is explained in detail using one line as an example. Specifically, step S43 may include steps S431 to S433, where:
step S431, judging whether the text content which is obtained by clustering and belongs to the target line has the text content of a preset category;
step S432, if yes, intercepting the text content of the preset category from the text content which is obtained by clustering and belongs to the target line, determining the vacant position in the remaining text content of the non-preset category, and filling the intercepted text content of the preset category to the vacant position to recombine the target line of the target text;
and step S433, if not, directly taking the text content which is obtained by clustering and belongs to the target line as the target line of the recombined target text.
In general, if a certain line of the target text includes both the preset category of text and the non-preset category of text, the preset category of text is interspersed among the non-preset category of text in the target text, but in the text content of each line obtained by clustering, the text content of the different categories is kept separate and no interspersing exists. Therefore, when text content of the preset category exists in the text content belonging to the target line obtained by clustering, the text content of the preset category is intercepted, the vacant position in the remaining text content of the non-preset category is determined, and the intercepted text content of the preset category is then filled into the vacant position to recombine the target line of the target text. For example, suppose the target line of the target text in the CAD drawing is "3. /represents building model a", where "3." and "represents building model a" belong to the non-preset category and "/" belongs to the preset category; the clustered target line is "3. represents building model a/", and the recombined target line is obtained by intercepting "/" and placing it at the vacant position in "3. represents building model a". If no text content of the preset category exists in the clustered text content belonging to the target line, the gap-filling operation need not be executed, and the clustered text content belonging to the target line is directly used as the target line of the recombined target text.
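The gap filling of steps S431 to S432 can be illustrated with a small sketch. The fragment layout, and the idea of locating the vacant position via each fragment's horizontal coordinate from the target vector information, are assumptions for illustration.

```python
def recombine_line(fragments):
    """fragments: list of (content, x_left, category) with category
    'preset' or 'non_preset'. Re-sorting by x places the intercepted
    preset-category content back at the vacant position."""
    ordered = sorted(fragments, key=lambda f: f[1])
    return "".join(content for content, _, _ in ordered)

# Clustering pushed the preset-category "/" to the end of the line; its
# x coordinate restores it between "3. " and "represents...".
line = [("3. ", 0.0, "non_preset"),
        ("represents building model a", 12.0, "non_preset"),
        ("/", 6.0, "preset")]
print(recombine_line(line))  # 3. /represents building model a
```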
Optionally, in the process of quantity calculation in the building field, the number of each member used in a building needs to be counted. One part of the members is embodied in the building model drawn in the CAD drawing, another part is embodied in the text contained in the CAD drawing, and dependency relationships exist among the members, such as which type of reinforcing bar is arranged on which plate, on the top of the plate, or under the plate. In the prior art, the relationships among the members are marked manually, but manual marking suffers from a large repeated workload and low accuracy; and although, under conditions of insufficient data volume and a single service scene, the effect of manual marking is far lower than that of a complete hard-coding and rule-extraction method, such a method has difficulty processing all semantically complex information relationships, and its large number of logic rules require continuous manual addition and maintenance. Therefore, in order to solve the above problems, reduce the workload in the quantity-calculation process, and improve its accuracy, this embodiment marks the dependency relationships between the members by combining the preset feature labeling model and the preset feature relation labeling model, and further marks the relationships between the model, the location, and the like of each member, so as to remedy the defects of manual labeling and to handle the semantically complex information relationships in the building field that are difficult to process with regex- and rule-based algorithms. Specifically, after step S43, the method further includes steps A1 to A4, wherein:
step A1, determining a start-stop mark for marking the text content of each sentence from the recombined target text;
step A2, recognizing text content of each sentence from the target text according to the start-stop mark;
step A3, inputting the identified text content of the target sentence into a preset feature labeling model, so that the preset feature labeling model identifies each target feature in the text content of the target sentence and outputs the text content of the target sentence with each target feature labeled;
step A4, inputting the text content of the target sentence marked with each target feature into a preset feature relation labeling model, so that the preset feature relation labeling model performs semantic analysis on the input text content of the target sentence and outputs the text content of the target sentence marked with the relation between each target feature.
The target feature is a component, a component model, a position (condition in fig. 2) of the component, or the like.
In this embodiment, if the recombined target text is directly input into the preset feature labeling model, the preset feature labeling model extracts each line of text content for processing, and the preset feature relation labeling model labels the relationship between the target features of each line according to the semantics of that line's text content. However, a line of text is usually not a complete sentence, and if the recombined target text were directly input into the preset feature labeling model and the preset feature relation labeling model then labeled the relationships of the target features line by line, the output result would be inaccurate due to incomplete sentences. Therefore, after obtaining the recombined target text, this embodiment identifies each sentence of text content according to the start-stop identifiers, where the start-stop identifiers include a start identifier and an end identifier: the start identifier marks the beginning of each sentence of text content and is, for example, a sequence number such as "1", "2", or "3", while the end identifier marks the end of each sentence of text content and is, for example, a period ".". Further, for each sentence of text content, the relationships between the target features in that sentence can be marked through steps A3 and A4. As shown in fig. 2, for example, one target feature is "thickness value" and another target feature is "rebar type"; when the two target features are marked as having an association relationship, a line segment can be drawn between them and the words "associated with each other" marked on the line segment to show the association between the two target features.
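A minimal sketch of recognizing each sentence of text content by start and end identifiers (steps A1 and A2); the concrete identifiers (a leading sequence number and a terminating period) and the regular expression are illustrative assumptions, not the patent's exact rules.

```python
import re

def split_sentences(text):
    """Return each sentence running from a numeric start flag ("1.",
    "2.", ...) to the next Western or Chinese period."""
    pattern = re.compile(r"\d+\.\s*[^.。]*[.。]")
    return [m.group(0).strip() for m in pattern.finditer(text)]

text = "1. plate thickness 100mm. 2. top rebar C8@200."
print(split_sentences(text))
# ['1. plate thickness 100mm.', '2. top rebar C8@200.']
```

Each recognized sentence can then be fed individually into the feature labeling model, avoiding the incomplete-sentence problem described above.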
Optionally, the preset feature recognition model and the preset relationship recognition model are obtained by training, and specifically include:
acquiring a text data set, wherein the text data set comprises a training set and a test set, the training set and the test set both comprise a plurality of pieces of sample data, and the sample data comprises input data and output data;
input data of a plurality of pieces of sample data in the training set is used as input, and corresponding output data is used as output to train a preset learning model;
inputting input data of a plurality of sample data in the test set to a trained learning model to obtain output data;
comparing the output data of the trained learning model with the corresponding output data in the test set, and judging whether the accuracy of the trained learning model is greater than or equal to a preset threshold value or not;
when the accuracy is greater than or equal to a preset threshold value, determining a target preset identification model according to a corresponding trained learning model, wherein the target preset identification model is the preset feature identification model or the preset relation identification model;
when the target preset identification model is the preset feature identification model, the input data are target sentence text content samples in the recombined text samples, and the output data are target sentence text content samples marked with all the target feature samples; and when the target preset identification model is the preset relation identification model, the input data are target sentence text content samples marked with each target characteristic sample, and the output data are target sentence text content samples marked with the relation among the target characteristic samples.
When the accuracy is greater than or equal to a preset threshold value, determining a target preset recognition model according to the corresponding trained learning model, including:
when the accuracy is greater than or equal to a preset threshold value, taking the correspondingly trained learning model as an alternative model;
when the alternative model comprises one model, taking the alternative model as the target preset recognition model; or, when the candidate models include a plurality of models, taking the model with the highest accuracy rate in the candidate models as the preset target identification model.
The text data set comprises a training set and a test set. A K-fold cross-validation algorithm can be used to divide the sample data set into K folds, where in each round K-1 folds serve as the training set and the remaining fold serves as the test set; the substantive content of the data in the sample data set is consistent with that of the data in the text data set.
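Under the standard reading of K-fold cross-validation, the split can be sketched in pure Python as follows; the round-robin fold assignment is an illustrative choice.

```python
def k_fold_splits(samples, k):
    """Cut samples into k folds; each round holds out one fold as the
    test set and trains on the remaining k-1 folds."""
    folds = [samples[i::k] for i in range(k)]  # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, test

for train, test in k_fold_splits(list(range(6)), 3):
    print("train:", train, "test:", test)
```

Every sample appears in exactly one test fold across the K rounds, so each round's accuracy comparison in the steps above uses held-out data.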
In this embodiment, the learning model may be a neural network model. Specifically, the target sentence text content samples of a plurality of pieces of sample data in the training set may be used as the input of a neural network model, and the corresponding target sentence text content samples marked with each target feature sample may be used as its output, so as to obtain the preset feature labeling model; the target sentence text content samples marked with each target feature sample of a plurality of pieces of sample data in the training set are then used as the input of another neural network model, and the corresponding target sentence text content samples marked with the relationships among the target feature samples are used as its output, so as to obtain the preset feature relation labeling model. The preset feature labeling model and the preset feature relation labeling model obtained by this scheme can overcome the difficulty of processing semantically complex information with regex- and rule-based algorithms.
In the text recombination method of the CAD drawing, the text mapping position of the target text in the bitmap information is automatically identified through the preset element identification model, and the target vector information corresponding to the text mapping position is then determined from the element vector information, where the target vector information comprises the vector information of the target text, and the vector information comprises the specific numerical values, font sizes, position information, and the like of the corresponding elements. Meanwhile, the invention considers that the bitmap information is an image with a fixed resolution; if the target text were identified directly from the bitmap information, the identification result might be inaccurate because of distortion, whereas recombining the target text from the vector information avoids this distortion.
Example two
Fig. 3 shows a flowchart of a text reorganization method of a CAD drawing according to a second embodiment of the present invention.
As shown in fig. 3, the CAD drawing (which may also be referred to as a dwg drawing) is parsed by analyzing its format to obtain bitmap information (e.g., a png picture) and element vector information, and the element vector information is persisted to a json file. An element area, namely the coordinates of the element in the bitmap information, is identified through a visual identification model (the preset element identification model); target vector information is then searched out in the json file according to the coordinates of the target text in the bitmap information, and the target text is restored (also called recombined) according to the target vector information. An entity relationship extraction step is then executed (the entities are extracted first and the relationships between the entities are then labeled; the entity described in this embodiment may also be called a target feature): target features are marked according to the preset feature labeling model, and the relationships among the target features are then marked according to the preset feature relation labeling model.
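The flow of fig. 3 can be outlined as a sketch in which every stage is an injected callable, since the patent names no concrete libraries; the stage names and the dummy implementations in the usage example are assumptions.

```python
def recombine_text(dwg_path, parse, detect, lookup, restore):
    bitmap, vectors = parse(dwg_path)             # dwg -> png bitmap + json-style vector info
    text_boxes = detect(bitmap)                   # visual model -> coordinates of text areas
    target_vectors = lookup(vectors, text_boxes)  # search vector info by those coordinates
    return restore(target_vectors)                # recombine (restore) the target text

# Usage with dummy stages standing in for the real parser, model, and lookup:
demo = recombine_text(
    "plan.dwg",
    parse=lambda path: ("<png bytes>", {"(0,0)": "3. /represents model a"}),
    detect=lambda bitmap: ["(0,0)"],
    lookup=lambda vectors, boxes: [vectors[b] for b in boxes],
    restore=lambda targets: "\n".join(targets),
)
print(demo)  # 3. /represents model a
```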
EXAMPLE III
The third embodiment of the present invention further provides a text reorganization device for a CAD drawing, where the text reorganization device corresponds to the text reorganization method provided in the first embodiment, and corresponding technical features and technical effects are not described in detail in this embodiment, and reference may be made to the first embodiment for relevant points. Specifically, fig. 4 shows a block diagram of a text reorganization apparatus of a CAD drawing provided by the third embodiment of the present invention. As shown in fig. 4, the text reorganization apparatus 400 of the CAD drawing includes a first determining module 401, an input module 402, a second determining module 403, and a reorganization module 404, where:
a first determining module 401, configured to determine bitmap information and element vector information according to a CAD drawing containing a target text, where the element vector information includes vector information of each element in the CAD drawing;
an input module 402, configured to input the bitmap information into a preset element identification model, so as to determine a text mapping position of a target text of the CAD drawing in the bitmap information;
a second determining module 403, configured to determine, from the element vector information, target vector information corresponding to the text mapping position;
and the restructuring module 404 is configured to restructure the target text of the CAD drawing according to the target vector information.
Optionally, the input module is further configured to: inputting the bitmap information into the preset element identification model to obtain element types contained in the CAD drawing and coordinates of elements of various types in the bitmap information; and screening out coordinates corresponding to the element type as the text type from the obtained coordinates, and taking the coordinates as the text mapping position.
Optionally, the input module is further configured to: determining the coordinates of the outer frame of the CAD drawing in the bitmap information, and recording the coordinates as first coordinates; determining coordinates of the outer frame of the CAD drawing in the CAD drawing from the element vector information, and recording the coordinates as second coordinates; determining a mapping relation between the first coordinate and the second coordinate, and calculating a target coordinate having the same mapping relation with the text mapping position; determining the target vector information including the target coordinates from the element vector information.
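The mapping between the first and second coordinates described here can be sketched as a linear map fixed by the outer frame: the frame is known both in bitmap pixels (first coordinates) and in CAD units (second coordinates), which determines a map carrying any text-box pixel position to its target CAD coordinates. The sketch assumes both coordinate systems share the same axis orientation, which a real bitmap (y-axis pointing down) may not satisfy.

```python
def make_pixel_to_cad(first, second):
    """first/second: ((x_min, y_min), (x_max, y_max)) of the outer frame
    in bitmap pixels and in CAD units, respectively."""
    (px0, py0), (px1, py1) = first
    (cx0, cy0), (cx1, cy1) = second
    sx = (cx1 - cx0) / (px1 - px0)  # pixels-to-CAD scale, x direction
    sy = (cy1 - cy0) / (py1 - py0)  # pixels-to-CAD scale, y direction
    return lambda px, py: (cx0 + (px - px0) * sx, cy0 + (py - py0) * sy)

# A 1000x800-pixel frame mapped onto a 500x400-unit CAD frame:
to_cad = make_pixel_to_cad(((0, 0), (1000, 800)), ((0.0, 0.0), (500.0, 400.0)))
print(to_cad(200, 160))  # (100.0, 80.0)
```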
Optionally, the input module is further configured to: determining a pixel point position set of an outer frame of the CAD drawing in the bitmap information, wherein the pixel point position set comprises pixel point positions in the horizontal direction and pixel point positions in the vertical direction; and determining the coordinates of the outer frame of the CAD drawing in the bitmap information according to the pixel point position set, and recording the coordinates as the first coordinates.
Optionally, the restructuring module is further configured to: determining coordinates and text content of the target text according to the target vector information; clustering the text contents of the target text according to the coordinates of the target text to obtain the text contents belonging to the same line; and recombining the target text of the CAD drawing according to the text content which belongs to the same line and is obtained by clustering.
Optionally, the second determining module is further configured to: acquiring coordinates and a text structure of the target text from the target vector information; judging whether the text structure of the target text contains a preset type of text structure; if so, converting the contained text structure of the preset category into corresponding text content, and taking the text structure of a non-preset category in the converted text content and the text structure of the target text as the text content of the target text; if not, directly taking the text structure of the target text as the text content of the target text.
Optionally, the restructuring module is further configured to: judging whether text contents of preset categories exist in the text contents which belong to the target line and are obtained through clustering; if so, intercepting the text content of the preset category from the text content which is obtained by clustering and belongs to the target line, determining the vacant position in the remaining text content of the non-preset category, and filling the intercepted text content of the preset category to the vacant position to recombine the target line of the target text; if not, directly taking the text content which is obtained by clustering and belongs to the target line as the target line of the recombined target text.
Optionally, the apparatus further comprises: a third determining module, configured to determine, after recombining the target texts of the CAD drawing according to the clustered text contents belonging to the same line, a start-stop identifier for identifying each sentence of text content from the recombined target texts; the recognition module is used for recognizing the text content of each sentence from the target text according to the start-stop mark; the first processing module is used for inputting the identified text content of the target sentence into a preset characteristic marking model so as to enable the preset characteristic marking model to identify each target characteristic in the text content of the target sentence and output the text content of the target sentence marked with each target characteristic; and the second processing module is used for inputting the target sentence text content marked with each target characteristic into a preset characteristic relation marking model so that the preset characteristic relation marking model performs semantic analysis on the input target sentence text content and outputs the target sentence text content marked with the relation among the target characteristics.
Optionally, the preset element identification model is obtained through RetinaNet model or YOLOv5 model learning.
Example four
Fig. 5 is a block diagram of a computer device suitable for implementing the text reorganization method of a CAD drawing according to the fourth embodiment of the present invention. In this embodiment, the computer device 500 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a server cluster composed of a plurality of servers) for executing programs, and the like. As shown in fig. 5, the computer device 500 of the present embodiment includes at least, but is not limited to: a memory 501, a processor 502, and a network interface 503 communicatively coupled to each other via a system bus. It is noted that fig. 5 only illustrates the computer device 500 having components 501 to 503, but it is to be understood that not all illustrated components are required to be implemented, and that more or fewer components may alternatively be implemented.
In this embodiment, the memory 501 includes at least one type of computer-readable storage medium, and the readable storage medium includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 501 may be an internal storage unit of the computer device 500, such as a hard disk or a memory of the computer device 500. In other embodiments, the memory 501 may also be an external storage device of the computer device 500, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), or the like, provided on the computer device 500. Of course, the memory 501 may also include both internal and external storage units of the computer device 500. In this embodiment, the memory 501 is generally used for storing an operating system installed in the computer device 500 and various types of application software, such as the program codes of the text reorganization method of a CAD drawing.
Processor 502 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 502 generally operates to control the overall operation of the computer device 500. Such as performing control and processing related to data interaction or communication with computer device 500. In this embodiment, the processor 502 is configured to execute the program codes of the steps of the text reorganization method of the CAD drawing stored in the memory 501.
In this embodiment, the text reorganization method of the CAD drawing stored in the memory 501 may be further divided into one or more program modules and executed by one or more processors (in this embodiment, the processor 502) to complete the present invention.
The network interface 503 may include a wireless network interface or a wired network interface, and the network interface 503 is typically used to establish communication links between the computer device 500 and other computer devices. For example, the network interface 503 is used to connect the computer device 500 to an external terminal via a network, establish a data transmission channel and a communication link between the computer device 500 and the external terminal, and the like. The network may be a wireless or wired network such as an Intranet, the Internet, a Global System for Mobile Communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, or Wi-Fi.
EXAMPLE five
The present embodiment also provides a computer-readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application mall, etc., on which a computer program is stored which, when executed by a processor, implements the steps of the text reorganization method of the CAD drawing.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described herein. They may also be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
It should be noted that the numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A text recombination method of a CAD drawing is characterized by comprising the following steps:
determining bitmap information and element vector information according to a CAD drawing containing a target text, wherein the element vector information comprises vector information of each element in the CAD drawing;
inputting the bitmap information into a preset element identification model to determine a text mapping position of a target text of the CAD drawing in the bitmap information;
determining target vector information corresponding to the text mapping position from the element vector information;
and recombining the target text of the CAD drawing according to the target vector information.
2. The method of claim 1, wherein the inputting the bitmap information into a preset element recognition model to determine a text mapping position of the target text of the CAD drawing in the bitmap information comprises:
inputting the bitmap information into the preset element identification model to obtain element types contained in the CAD drawing and coordinates of elements of various types in the bitmap information;
and screening out coordinates corresponding to the element type as the text type from the obtained coordinates, and taking the coordinates as the text mapping position.
3. The method of claim 2, wherein the determining target vector information corresponding to the text mapping position from the element vector information comprises:
determining the coordinates of the outer frame of the CAD drawing in the bitmap information, and recording the coordinates as first coordinates;
determining coordinates of the outer frame of the CAD drawing in the CAD drawing from the element vector information, and recording the coordinates as second coordinates;
determining a mapping relation between the first coordinate and the second coordinate, and calculating a target coordinate having the same mapping relation with the text mapping position;
determining the target vector information including the target coordinates from the element vector information.
4. The method according to claim 3, wherein the determining coordinates of the outline of the CAD drawing, denoted as first coordinates, in the bitmap information comprises:
determining a pixel point position set of an outer frame of the CAD drawing in the bitmap information, wherein the pixel point position set comprises pixel point positions in the horizontal direction and pixel point positions in the vertical direction;
and determining the coordinates of the outer frame of the CAD drawing in the bitmap information according to the pixel point position set, and recording the coordinates as the first coordinates.
5. The method of claim 1, wherein the reconstructing the target text of the CAD drawing from the target vector information comprises:
determining coordinates and text content of the target text according to the target vector information;
clustering the text contents of the target text according to the coordinates of the target text to obtain the text contents belonging to the same line;
and recombining the target text of the CAD drawing according to the text content which belongs to the same line and is obtained by clustering.
6. The method of claim 5, wherein determining coordinates and text content of the target text according to the target vector information comprises:
acquiring coordinates and a text structure of the target text from the target vector information;
judging whether the text structure of the target text contains a preset type of text structure;
if so, converting the contained text structures of the preset category into corresponding text content, and taking the converted text content together with the text structures of non-preset categories in the target text as the text content of the target text;
if not, directly taking the text structure of the target text as the text content of the target text.
7. The method according to claim 5, wherein the reconstructing the target text of the CAD drawing according to the clustered text contents belonging to the same line comprises:
judging whether text contents of preset categories exist in the text contents which belong to the target line and are obtained through clustering;
if so, intercepting the text content of the preset category from the text content which is obtained by clustering and belongs to the target line, determining the vacant position in the remaining text content of the non-preset category, and filling the intercepted text content of the preset category to the vacant position to recombine the target line of the target text;
if not, directly taking the text content which is obtained by clustering and belongs to the target line as the target line of the recombined target text.
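One possible reading of claim 7, sketched with heavy assumptions: a preset-category fragment (here, a rebar symbol that was rendered separately) is cut out of the clustered line and filled into the vacant position left in the ordinary text, marked here by a placeholder character. The placeholder, the pattern, and the single-vacancy assumption are all illustrative, not from the patent.

```python
import re

PLACEHOLDER = "\u25a1"  # "□": assumed marker left at the vacant position

def reassemble_line(line, preset_pattern):
    """Intercept the preset-category fragment and fill it into the vacancy;
    if no such fragment exists, the clustered line is kept as-is."""
    m = re.search(preset_pattern, line)
    if not m:
        return line
    special = m.group(0)                          # intercepted preset content
    remainder = line[:m.start()] + line[m.end():]  # non-preset remainder
    return remainder.replace(PLACEHOLDER, special, 1)

# The symbol "Φ" was clustered at the end of the line but belongs at the
# vacant slot "□" inside the stirrup annotation:
print(reassemble_line("KL1 \u25a18@200\u03a6", "\u03a6"))
# → KL1 Φ8@200
```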
8. The method according to claim 5, wherein after the re-assembling the target text of the CAD drawing according to the clustered text contents belonging to the same line, the method further comprises:
determining start-stop marks for marking the text content of each sentence from the recombined target texts;
recognizing text content of each sentence from the target text according to the start-stop mark;
inputting the identified text content of the target sentence into a preset feature labeling model so that the preset feature labeling model identifies each target feature in the text content of the target sentence and outputs the text content of the target sentence with each target feature labeled;
and inputting the text content of the target sentence marked with the target features into a preset feature relation labeling model, so that the preset feature relation labeling model performs semantic analysis on the input text content of the target sentence and outputs the text content of the target sentence with the relations among the target features labeled.
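The sentence segmentation step of claim 8 can be sketched as splitting the recombined text on start-stop marks before feeding each sentence to the labeling models. The particular marks used here (full-width period and semicolons) are assumptions; the patent does not specify them.

```python
import re

def split_sentences(text, marks="\u3002\uff1b;"):
    """Split recombined target text into per-sentence contents on the
    assumed start-stop marks, dropping empty pieces."""
    parts = re.split(f"[{re.escape(marks)}]", text)
    return [p.strip() for p in parts if p.strip()]

print(split_sentences("KL1 is a frame beam\u3002stirrup 8@200\uff1bC30 concrete"))
# → ['KL1 is a frame beam', 'stirrup 8@200', 'C30 concrete']
```

Each returned sentence would then pass through the feature labeling model and the feature relation labeling model in turn.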
9. The method according to any one of claims 1 to 8, wherein the preset element identification model is obtained by training a RetinaNet model or a YOLOv5 model.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 9.
CN202110484752.XA 2021-04-30 2021-04-30 Text recombination method for CAD drawing and computer readable storage medium Pending CN113177995A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110484752.XA CN113177995A (en) 2021-04-30 2021-04-30 Text recombination method for CAD drawing and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN113177995A true CN113177995A (en) 2021-07-27

Family

ID=76925843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110484752.XA Pending CN113177995A (en) 2021-04-30 2021-04-30 Text recombination method for CAD drawing and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113177995A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6332032B1 (en) * 1998-12-03 2001-12-18 The United States Of America As Represented By The Secretary Of The Army Method for generating test files from scanned test vector pattern drawings
CN111611935A (en) * 2020-05-22 2020-09-01 青矩技术股份有限公司 Automatic identification method for similar vector diagrams in CAD drawing
WO2020232872A1 (en) * 2019-05-22 2020-11-26 平安科技(深圳)有限公司 Table recognition method and apparatus, computer device, and storage medium
CN112651373A (en) * 2021-01-04 2021-04-13 广联达科技股份有限公司 Identification method and device for text information of construction drawing


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU JUN; SU YINSHENG; MA QIAN; WU FANGCI; GENG DAQING: "A method for automatic conversion and storage management of plant and substation main wiring diagram models", Automation & Instrumentation, no. 04 *
ZHANG QI; YE YING: "A vectorization method for 2D engineering CAD drawings based on recognition of object legends and their topological relations", Computer and Modernization, no. 11 *

Similar Documents

Publication Publication Date Title
CN108229470B (en) Character image processing method, device, equipment and storage medium
CN106980856B (en) Formula identification method and system and symbolic reasoning calculation method and system
CN107688789B (en) Document chart extraction method, electronic device and computer readable storage medium
CN107689070B (en) Chart data structured extraction method, electronic device and computer-readable storage medium
CN112269872B (en) Resume analysis method and device, electronic equipment and computer storage medium
CN111639717A (en) Image character recognition method, device, equipment and storage medium
CN114005126A (en) Table reconstruction method and device, computer equipment and readable storage medium
CN114663904A (en) PDF document layout detection method, device, equipment and medium
CN114528413A (en) Knowledge graph updating method, system and readable storage medium supported by crowdsourced marking
CN106776527B (en) Electronic book data display method and device and terminal equipment
CN116052195A (en) Document parsing method, device, terminal equipment and computer readable storage medium
CN116311300A (en) Table generation method, apparatus, electronic device and storage medium
CN114579796B (en) Machine reading understanding method and device
CN112560849B (en) Neural network algorithm-based grammar segmentation method and system
CN113177995A (en) Text recombination method for CAD drawing and computer readable storage medium
CN113657279B (en) Bill image layout analysis method and device
CN115034177A (en) Presentation file conversion method, device, equipment and storage medium
CN111783737B (en) Mathematical formula identification method and device
CN114581923A (en) Table image and corresponding annotation information generation method, device and storage medium
CN114417788A (en) Drawing analysis method and device, storage medium and electronic equipment
CN110276051B (en) Method and device for splitting font part
CN113158632A (en) Form reconstruction method for CAD drawing and computer readable storage medium
CN114399782B (en) Text image processing method, apparatus, device, storage medium, and program product
CN114138214B (en) Method and device for automatically generating print file and electronic equipment
CN116306575B (en) Document analysis method, document analysis model training method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination