CN113869199A - Image detection method and device, computer equipment and storage medium


Info

Publication number
CN113869199A
CN113869199A (application number CN202111136588.XA)
Authority
CN
China
Prior art keywords
detected
information
target
determining
image
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111136588.XA
Other languages
Chinese (zh)
Inventor
罗棕太
伊帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202111136588.XA priority Critical patent/CN113869199A/en
Publication of CN113869199A publication Critical patent/CN113869199A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/12 Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computational Mathematics (AREA)
  • Architecture (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Character Input (AREA)

Abstract

The present disclosure provides an image detection method, apparatus, computer device and storage medium. The method includes: acquiring a target image containing an object to be detected; recognizing the character information annotated on the target image; matching identification information of the object to be detected against the recognized character information, and determining target character information that matches the identification information of the object to be detected together with position information of the target character information; and determining size information of the object to be detected from the recognized character information based on the position information of the target character information.

Description

Image detection method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image detection method, an image detection apparatus, a computer device, and a storage medium.
Background
In the field of housing design, after an engineer completes a design drawing, it is necessary to check whether the positions and sizes of the objects to be detected in the drawing meet the applicable position specifications and size specifications.
In the related art, design drawings are usually checked manually by professionals. However, a design drawing contains many types of objects, a large amount of information and a complicated layout, so manual checking is inefficient, and objects may even be overlooked or their identification information misread.
Disclosure of Invention
The embodiment of the disclosure at least provides an image detection method, an image detection device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an image detection method, including:
acquiring a target image containing an object to be detected;
identifying the character information marked on the target image;
matching the identification information of the object to be detected with the recognized character information, and determining target character information matched with the identification information of the object to be detected and position information of the target character information;
and determining the size information of the object to be detected from the recognized character information based on the position information of the target character information.
According to the above image detection method, the character information annotated on the target image containing the object to be detected is recognized, and the identification information of the object to be detected is matched against the recognized character information, so that the target character information matching the identification information and the position information of that target character information are determined; the size information of the object to be detected is then determined from the recognized character information based on the position information of the target character information. This improves the efficiency and accuracy of size detection of the object to be detected. Furthermore, the normalization of the target image, that is, its compliance with the applicable specifications, can be detected based on the size specification and size information corresponding to the object to be detected, which improves the efficiency and accuracy of normalization detection of the target image.
In a possible embodiment, the method further comprises acquiring the target image according to the following steps:
acquiring an initial image comprising a plurality of image layers; different image layers of the initial image are used for displaying different objects in the initial image and the labeling information of the different objects;
determining a target layer containing the object to be detected from the plurality of layers according to the layer identifier of each layer;
and determining the target image based on the target image layer.
By adopting the method, the layer which does not contain the object to be detected can be eliminated, so that only the target layer is subjected to subsequent operation, the calculated amount is reduced, and the working efficiency is improved.
In one possible embodiment, the initial image is a CAD drawing;
the determining the target image based on the target image layer includes:
and removing other layers except the target layer in the initial CAD graph, and performing format conversion to obtain the target image.
In a possible implementation, the determining the target image based on the target image layer includes:
determining the position information of the central point of the target layer;
determining a target area in the target layer based on the position information of the central point; the target area is the minimum area which contains all the objects to be detected and the labeling information of the objects to be detected in the target image layer;
and determining the target image based on the target area in the target image layer.
By adopting the method, part of the region which does not contain the object to be detected can be excluded, so that only the target image determined by the target region is subjected to subsequent operation, the calculation amount is reduced, and the working efficiency is improved.
In a possible implementation manner, the determining, from the recognized text information, the size information of the object to be detected based on the position information of the target text information includes:
determining the size text information of the object to be detected from the recognized text information based on the position information of the target text information and preset relative position information;
and identifying the size character information and determining the size information of the object to be detected.
In a possible embodiment, after determining the size information of the object to be detected, the method further includes:
and detecting the normalization of the target image based on the size specification corresponding to the object to be detected and the size information.
In a possible implementation manner, after detecting the normalization of the target image based on the size specification and the size information corresponding to the object to be detected, the method further includes:
determining a target object which does not meet the size specification in the object to be detected;
and marking the target object on the target image.
By adopting the method, the target object which does not accord with the size specification is marked on the target image, so that the target object which does not accord with the size specification can be clearly displayed to a user, and the user can conveniently check the target object.
In a possible embodiment, the method further comprises:
and taking the position information of the target character information as the position information of the object to be detected, and generating a log file for the object to be detected based on the size information of the object to be detected and the position information of the object to be detected.
In a possible embodiment, the object to be detected comprises a plurality of classes of objects;
the generating a log file for the object to be detected includes:
determining the category of the object to be detected based on the target character information;
generating a log file for the object to be detected of each category based on the category of the object to be detected, the size information of the object to be detected, and the position information of the object to be detected.
By adopting the method, the size information, the size specification and the like of the object to be detected of each category can be clearly displayed to the user, and the display content is rich.
In a possible embodiment, after generating the log file for the object to be detected, the method further comprises:
and after responding to the log viewing request, displaying the log file corresponding to the log viewing request.
In a second aspect, an embodiment of the present disclosure further provides an image detection apparatus, including:
the acquisition module is used for acquiring a target image containing an object to be detected;
the identification module is used for identifying the character information marked on the target image;
the first determining module is used for matching the identification information of the object to be detected with the recognized character information and determining target character information matched with the identification information of the object to be detected and position information of the target character information;
the second determining module is used for determining the size information of the object to be detected from the recognized character information based on the position information of the target character information;
in a possible implementation, the acquiring module is further configured to acquire the target image according to the following steps:
acquiring an initial image comprising a plurality of image layers; different image layers of the initial image are used for displaying different objects in the initial image and the labeling information of the different objects;
determining a target layer containing the object to be detected from the plurality of layers according to the layer identifier of each layer;
and determining the target image based on the target image layer.
In one possible embodiment, the initial image is a CAD drawing;
the obtaining module, when determining the target image based on the target image layer, is configured to:
and removing other layers except the target layer in the initial CAD graph, and performing format conversion to obtain the target image.
In a possible implementation manner, when determining the target image based on the target image layer, the obtaining module is configured to:
determining the position information of the central point of the target layer;
determining a target area in the target layer based on the position information of the central point; the target area is the minimum area which contains all the objects to be detected and the labeling information of the objects to be detected in the target image layer;
and determining the target image based on the target area in the target image layer.
In a possible implementation manner, the second determining module, when determining the size information of the object to be detected from the recognized text information based on the position information of the target text information, is configured to:
determining the size text information of the object to be detected from the recognized text information based on the position information of the target text information and preset relative position information;
and identifying the size character information and determining the size information of the object to be detected.
In a possible embodiment, the second determining module, after determining the dimension information of the object to be detected, is further configured to:
and detecting the normalization of the target image based on the size specification corresponding to the object to be detected and the size information.
In a possible implementation manner, the second determining module, after detecting the normalization of the target image based on the size specification and the size information corresponding to the object to be detected, is further configured to:
determining a target object which does not meet the size specification in the object to be detected;
and marking the target object on the target image.
In a possible implementation, the second determining module is further configured to:
and taking the position information of the target character information as the position information of the object to be detected, and generating a log file for the object to be detected based on the size information of the object to be detected and the position information of the object to be detected.
In a possible embodiment, the object to be detected comprises a plurality of classes of objects;
the second determining module, when generating the log file for the object to be detected, is configured to:
determining the category of the object to be detected based on the target character information;
generating a log file for the object to be detected of each category based on the category of the object to be detected, the size information of the object to be detected, and the position information of the object to be detected.
In a possible implementation, after generating the log file for the object to be detected, the second determining module is further configured to:
and after responding to the log viewing request, displaying the log file corresponding to the log viewing request.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, this disclosed embodiment also provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps in the first aspect or any one of the possible implementation manners of the first aspect.
For the description of the effects of the image detection apparatus, the computer device, and the computer-readable storage medium, reference is made to the description of the image detection method, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. The drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from them without creative effort.
Fig. 1 shows a flowchart of an image detection method provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating an architecture of an image detection apparatus provided in an embodiment of the present disclosure;
fig. 3 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
In the field of housing design, after an engineer designs a design drawing, whether the position and the size of an object to be detected in the design drawing meet position specifications and size specifications needs to be detected.
In the related art, when detecting a design drawing, a professional can usually manually detect the design drawing, but various types of objects in the design drawing are large in quantity, large in information amount and complicated in layout, so that the manual detection efficiency is low, and even the situations that the detected object is overlooked and identification information is mistakenly seen occur.
Based on this research, the present disclosure provides an image detection method that recognizes the text information marked on a target image containing an object to be detected and matches the identification information of the object to be detected against the recognized text information, thereby determining the target text information that matches the identification information and the position information of that target text information; the size information of the object to be detected is then determined from the recognized text information based on the position information of the target text information, which improves the efficiency and accuracy of size detection of the object to be detected. Furthermore, the normalization of the target image can be detected based on the size specification and size information corresponding to the object to be detected, improving the efficiency and accuracy of normalization detection of the target image.
The above drawbacks were identified by the inventors through practice and careful study; therefore, both the discovery of the above problems and the solutions that the present disclosure proposes for them should be regarded as contributions made by the inventors in the course of the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, an image detection method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the image detection method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a User terminal, or other processing devices. In some possible implementations, the image detection method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of an image detection method provided in an embodiment of the present disclosure is shown, where the method includes steps 101 to 104, where:
step 101, acquiring a target image containing an object to be detected;
step 102, identifying character information marked on the target image;
Step 103, matching the identification information of the object to be detected with the recognized character information, and determining target character information matched with the identification information of the object to be detected and position information of the target character information;
Step 104, determining the size information of the object to be detected from the recognized character information based on the position information of the target character information.
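For illustration only, the overall flow of steps 101 to 104 can be sketched in Python as follows; the OCR callable, the TextItem structure and the "PM"/"FM" identification pattern are assumptions made for this sketch rather than part of the claimed method, and each step is discussed in detail below.

```python
import re
from dataclasses import dataclass

@dataclass
class TextItem:
    text: str   # recognized string
    x: float    # position of the text on the target image
    y: float

# Assumed identification codes for the object types to be detected.
ID_PATTERN = re.compile(r"^(PM|FM)")

def detect_sizes(target_image, ocr):
    """Steps 101-104: recognize the annotated text, match identification
    codes, and read the size text located closest to each matched code."""
    items = ocr(target_image)                                  # step 102
    matched = [t for t in items if ID_PATTERN.match(t.text)]   # step 103
    sizes = {}
    for m in matched:                                          # step 104
        size_item = min(
            (t for t in items if t is not m and t.text.isdigit()),
            key=lambda t: (t.x - m.x) ** 2 + (t.y - m.y) ** 2,
            default=None,
        )
        if size_item is not None:
            sizes[m.text] = size_item.text
    return sizes
```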
For step 101,
Steps 101 to 104 may be executed by a server or by a user terminal. The object to be detected may be an enclosure structure such as a door or a window, and the target image may be a design drawing containing the object to be detected, for example an image in pdf or jpg format.
In one possible embodiment, the target image may be acquired according to the following steps: firstly, obtaining an initial image comprising a plurality of image layers; different image layers of the initial image are used for displaying different objects in the initial image and the labeling information of the different objects; then determining a target layer containing the object to be detected from the plurality of layers according to the layer identifier of each layer; and finally, determining the target image based on the target image layer.
Specifically, the different layers of the initial image may display different types of objects; for example, a first layer displays ventilation components such as doors and windows, a second layer displays enclosure components such as walls and columns, and a third layer displays pipeline facilities such as water pipes and electric wires. Alternatively, the different layers of the initial image may each display a specific object, for example a first layer for displaying a door, a second layer for displaying a window, and a third layer for displaying a water pipe. Corresponding label information, such as the size and model of the object, may then be presented beside each object.
In a possible implementation manner, the layer identifier may be a layer name, and when a target layer is determined, a keyword related to the object to be detected may be determined first, and then whether the keyword is included in each layer name is detected, and if the keyword is included, the layer is determined to be the target layer.
It should be noted that users are required to name layers according to a preset naming standard, so that the name of a layer containing the object to be detected includes the corresponding keyword.
For example, if the object to be detected is a door and the keyword corresponding to the door is "DR", it is sequentially detected whether each layer name includes the keyword "DR", and the layer including the keyword "DR" is the target layer.
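As a minimal sketch of this keyword matching, assuming the layer names and the keyword table shown below (both are illustrative and not prescribed by the disclosure):

```python
# Illustrative keyword table and layer names; the naming standard is assumed.
KEYWORDS = {"door": "DR"}

def find_target_layers(layer_names, object_type="door"):
    """Return the layers whose identifier (name) contains the keyword
    associated with the object to be detected."""
    keyword = KEYWORDS[object_type]
    return [name for name in layer_names if keyword in name.upper()]

layers = ["A-DR-INTERIOR", "A-WALL", "A-WIN", "P-PIPE"]
print(find_target_layers(layers))   # ['A-DR-INTERIOR']
```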
In one possible embodiment, the initial image is a Computer Aided Design (CAD) drawing. When the target image is determined based on the target layer, the layers other than the target layer in the initial CAD drawing may be removed, and format conversion may be performed to obtain the target image.
Specifically, a CAD drawing is generally stored in the dwg file format. After the target layer is determined, it may be converted into an image format such as jpg or pdf and used as the target image; once the CAD drawing has been converted into a common image format, the target image can be viewed more quickly and then recognized. Deleting the layers other than the target layer saves storage space, and removing irrelevant content also speeds up subsequent text recognition.
Illustratively, a certain CAD drawing includes three layers whose objects are a door, a window and a water pipe, respectively. If the object to be detected is the door, the layers containing the window and the water pipe are deleted, and the target layer containing the door is converted into the pdf picture format for storage.
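A possible way to strip the non-target layers programmatically is sketched below using the ezdxf library; note that ezdxf reads DXF files, so the dwg drawing is assumed to have been exported to DXF first, and the file and layer names are placeholders.

```python
import ezdxf

# The drawing is assumed to have been exported from dwg to DXF, since
# ezdxf only reads DXF files. File and layer names are placeholders.
doc = ezdxf.readfile("floor_plan.dxf")
msp = doc.modelspace()

TARGET_LAYER = "A-DR-INTERIOR"   # the target layer containing the doors

# Delete every entity that does not belong to the target layer.
for entity in list(msp):
    if entity.dxf.layer != TARGET_LAYER:
        msp.delete_entity(entity)

doc.saveas("doors_only.dxf")
# The stripped drawing can then be rasterized to jpg or exported to pdf
# with any CAD conversion tool and used as the target image.
```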
In a possible implementation manner, when the target image is determined based on the target layer, position information of a central point of the target layer may be determined; determining a target area in the target layer based on the position information of the central point; the target area is the minimum area which contains all the objects to be detected and the labeling information of the objects to be detected in the target image layer; and determining the target image based on the target area in the target image layer.
In one possible embodiment, the center points of the target layers may be the same. When the target area in the target layer is determined based on the position information of the center point, the position of the center point may be kept unchanged while the target layer is zoomed in proportionally until a vertex of one of the objects to be detected reaches the boundary of a preset area.
It should be noted that the step of enlarging the target layer may be performed manually; alternatively, when the above steps are performed automatically by the device, the target layer may be enlarged so as to hide the surrounding image annotations, so that the image displayed after enlargement is the target image.
In a possible implementation, when the target image is determined based on the target area in the target layer, the target area may be cropped out and format conversion performed to obtain the target image; the format conversion process is the same as described above and is not repeated here.
Determining the target image by cropping the target layer in this way reduces the amount of subsequent computation and improves efficiency.
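For example, once bounding boxes of the objects to be detected and their annotations are known, the target area can be computed as their union and cropped with Pillow; the coordinates below are placeholders, not values prescribed by the disclosure.

```python
from PIL import Image

def union_box(boxes):
    """boxes: iterable of (left, top, right, bottom) rectangles covering the
    objects to be detected and their annotations."""
    lefts, tops, rights, bottoms = zip(*boxes)
    return min(lefts), min(tops), max(rights), max(bottoms)

layer_img = Image.open("target_layer.png")
boxes = [(120, 340, 180, 420), (115, 300, 200, 335)]   # a door and its label
target_img = layer_img.crop(union_box(boxes))           # minimal target area
target_img.save("target_image.png")
```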
For steps 102 and 103,
The text information may be size information of objects on the target image, comments entered by a user, a description of the size specifications, and the like. The identification information may be the name of an object type, such as door or window, or a code indicating the object type, such as PM or FM, and is typically expressed in a standardized "type + number" format.
Specifically, when recognizing the text information marked on the target image, an Optical Character Recognition (OCR) technique may be used to recognize all the text information on the target image.
In a possible implementation, since the annotated text generally runs in two directions, left to right and bottom to top, it is necessary to first determine the arrangement direction of the text information and then recognize the text along at least one preset recognition direction. The arrangement direction includes horizontal arrangement and vertical arrangement, and the preset recognition directions include left to right, right to left, top to bottom and bottom to top.
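One possible implementation of this recognition step is sketched below using pytesseract as an example OCR engine (the disclosure does not prescribe a particular engine); vertical text is handled here by recognizing a rotated copy of the image, and the positions returned by that pass are in the rotated frame.

```python
from PIL import Image
import pytesseract

def recognize_text(path):
    """Recognize the annotated text and its positions; the second pass on a
    rotated copy covers vertically arranged text."""
    img = Image.open(path)
    results = []
    for angle in (0, 90):    # horizontal pass, then vertical pass
        rotated = img.rotate(angle, expand=True)
        data = pytesseract.image_to_data(rotated, output_type=pytesseract.Output.DICT)
        for text, x, y, conf in zip(data["text"], data["left"], data["top"], data["conf"]):
            if text.strip() and float(conf) > 0:
                results.append({"text": text, "x": x, "y": y, "angle": angle})
    return results
```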
With respect to step 104,
In a possible implementation manner, when determining the size information of the object to be detected from the recognized text information based on the position information of the target text information, the size text information of the object to be detected may be determined from the recognized text information based on the position information of the target text information and preset relative position information, and then the size text information is recognized to determine the size information of the object to be detected; the preset relative position information may include an upper direction, a lower direction, a left direction, and a right direction.
Specifically, when determining the size information of the object to be detected, it is necessary to first determine the position information and the arrangement direction of the target text information, then determine the preset relative position information corresponding to the arrangement direction, and finally determine the position information of the size text information of the object to be detected based on the preset relative position information between the size text information and the target text information.
It should be noted that the relative position information corresponding to the vertical arrangement may only be an upper or lower position, and the relative position information corresponding to the horizontal arrangement may only be a left or right position, so that the corresponding relative position information may be preset for different arrangement directions.
For example, if the matched target text information is "PM" and the arrangement direction of the target text information is determined to be vertical, the corresponding preset relative position is above, so the size text information, namely "1123", is located above the target text information. Similarly, for the left-to-right text information "FM third 1018", if the matched target text information is "FM", the arrangement direction is determined to be horizontal, the preset relative position is to the right, and the size text information, namely "1018", is located to the right of the target text information.
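A sketch of this relative-position lookup is given below; the identification codes, the alignment thresholds and the dictionary of preset relative positions are assumptions made for the example.

```python
# Preset relative positions: above for vertically arranged identification
# codes, to the right for horizontally arranged ones.
RELATIVE_POSITION = {"vertical": "above", "horizontal": "right"}

def find_size_text(items, id_codes=("PM", "FM")):
    """items: dicts with 'text', 'x', 'y' (image coordinates, y grows
    downwards) and 'arrangement' ('vertical' or 'horizontal')."""
    pairs = []
    for item in items:
        if not item["text"].startswith(id_codes):
            continue
        direction = RELATIVE_POSITION[item["arrangement"]]
        if direction == "above":
            candidates = [t for t in items if t["text"].isdigit()
                          and abs(t["x"] - item["x"]) < 10      # same column
                          and t["y"] < item["y"]]               # above the code
            distance = lambda t: item["y"] - t["y"]
        else:
            candidates = [t for t in items if t["text"].isdigit()
                          and abs(t["y"] - item["y"]) < 10      # same row
                          and t["x"] > item["x"]]               # right of the code
            distance = lambda t: t["x"] - item["x"]
        if candidates:
            pairs.append((item["text"], min(candidates, key=distance)["text"]))
    return pairs
```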
In a possible implementation manner, after the position information of the target text information is determined, the size text information may be identified, and the size information of the object to be detected may be determined.
Specifically, the size text information is written according to a preset format and a preset length unit; when the size text information is recognized, it can be parsed according to that format, and the preset length unit is applied to the detected numeric result.
Illustratively, if the preset format specifies that the first two characters of the size text information represent the width and the last two characters represent the height, and the preset length unit is the decimeter, then in the size text information "1123", "11" indicates that the door is 11 decimeters wide and "23" indicates that the door is 23 decimeters high.
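Under that assumed format, parsing the size text is straightforward, as in the following sketch:

```python
def parse_size(size_text):
    """Assumed format: first two characters = width, last two = height,
    preset unit = decimeters."""
    if len(size_text) != 4 or not size_text.isdigit():
        raise ValueError(f"unexpected size text: {size_text!r}")
    return {"width_dm": int(size_text[:2]), "height_dm": int(size_text[2:])}

print(parse_size("1123"))   # {'width_dm': 11, 'height_dm': 23}
```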
In a possible implementation manner, after the size information of the object to be detected is determined, the normalization of the target image may be detected based on the size specification corresponding to the object to be detected and the size information.
Wherein the size specification is a standard established for size information of the object, such as a unit door width of not less than 20 decimeters and a height of not less than 22 decimeters.
Specifically, preset conditions may be set based on the size specifications corresponding to the objects to be detected; the size information of each object to be detected is then compared with the corresponding preset condition to determine whether that object meets the specification.
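A minimal sketch of such a check, reusing the example specification above (door width of at least 20 decimeters and height of at least 22 decimeters) as an assumed preset condition:

```python
# Example preset conditions derived from the size specification mentioned
# above (door width >= 20 dm, height >= 22 dm); the values are illustrative.
SIZE_SPECS = {"PM": {"min_width_dm": 20, "min_height_dm": 22}}

def meets_spec(code, width_dm, height_dm):
    spec = SIZE_SPECS[code[:2]]
    return width_dm >= spec["min_width_dm"] and height_dm >= spec["min_height_dm"]

print(meets_spec("PM", 11, 23))   # False: the 11 dm width violates the spec
```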
In a possible implementation manner, after the normalization of the target image is detected based on the size specification and the size information corresponding to the object to be detected, the target object which does not meet the size specification in the object to be detected is determined; and marking the target object on the target image.
For example, after a target object that does not meet the size specification is determined among the objects to be detected, the target object may be marked, for example by highlighting it, and the violated size specification together with the corresponding size information may also be highlighted.
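For instance, the marking could be done with Pillow as sketched below; the file name, box coordinates and caption are assumptions for the example.

```python
from PIL import Image, ImageDraw

img = Image.open("target_image.png").convert("RGB")
draw = ImageDraw.Draw(img)
box = (115, 300, 200, 420)            # position of the non-compliant door
draw.rectangle(box, outline=(255, 0, 0), width=4)       # highlight the object
draw.text((box[0], box[1] - 14), "width < 20 dm", fill=(255, 0, 0))
img.save("target_image_marked.png")
```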
In a possible implementation manner, after the size information of the object to be detected is determined, the position information of the target text information may be used as the position information of the object to be detected, and a log file for the object to be detected is generated based on the size information of the object to be detected and the position information of the object to be detected.
Specifically, the position information, the size specification, the thumbnail, and the like of each object to be detected may be displayed in the generated log file.
In a possible embodiment, the object to be detected comprises a plurality of classes of objects; when generating the log file for the object to be detected, determining the category of the object to be detected based on the target character information; and then generating a log file of the object to be detected for each category based on the category of the object to be detected, the size information of the object to be detected, and the position information of the object to be detected.
Illustratively, if the objects to be detected include doors and windows, two log files are generated, one recording the doors and one recording the windows. In response to a log viewing request, trigger buttons for the two categories are displayed in a list; when the first trigger button (door category) is triggered, all size information and size specifications of the doors are displayed, and when the second trigger button (window category) is triggered, all size information and size specifications of the windows are displayed.
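Generating one log file per category could, for example, be sketched as follows; the record fields, the window code and the file naming are assumptions, and JSON is used only as an illustrative format.

```python
import json
from collections import defaultdict

# Illustrative detection records (category, size and position per object).
records = [
    {"category": "door", "code": "PM", "size": "11x23 dm", "position": [120, 340]},
    {"category": "window", "code": "C", "size": "15x15 dm", "position": [420, 80]},
]

by_category = defaultdict(list)
for rec in records:
    by_category[rec["category"]].append(rec)

for category, recs in by_category.items():
    with open(f"log_{category}.json", "w", encoding="utf-8") as f:
        json.dump(recs, f, ensure_ascii=False, indent=2)
```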
In a possible implementation, after the log file for the object to be detected is generated, the log file corresponding to a log viewing request may be displayed in response to that request.
Illustratively, the user may open the display interface of the log file by clicking the object to be detected, or clicking a log viewing button.
In a possible embodiment, size information that does not comply with the corresponding size specification is highlighted in the log file. For example, the violated size specification and the corresponding size information of the target object may be highlighted, such as by changing the font color to red.
In a possible implementation, when any object to be detected listed in the log file is triggered, the display jumps to the position of that object in the target image, which allows the user to view the object quickly and conveniently.
Illustratively, a number identifier may be assigned to each object to be detected; when a number identifier is triggered, the display jumps to the position of the corresponding object to be detected.
According to the image detection method described above, the character information annotated on the target image containing the object to be detected is recognized, and the identification information of the object to be detected is matched against the recognized character information, so that the target character information matching the identification information and the position information of that target character information are determined; the size information of the object to be detected is then determined from the recognized character information based on the position information of the target character information. In this way, the size of the object to be detected on the target image is detected automatically, which improves the efficiency and accuracy of size detection. Furthermore, the normalization of the target image can be detected based on the size specification and size information corresponding to the object to be detected, which improves the efficiency and accuracy of normalization detection of the target image.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, an image detection apparatus corresponding to the image detection method is also provided in the embodiments of the present disclosure, and since the principle of the apparatus in the embodiments of the present disclosure for solving the problem is similar to the image detection method described above in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 2, a schematic diagram of an architecture of an image detection apparatus provided in an embodiment of the present disclosure is shown. The apparatus includes an acquisition module 201, an identification module 202, a first determining module 203 and a second determining module 204; wherein:
an obtaining module 201, configured to obtain a target image including an object to be detected;
the identification module 202 is configured to identify text information labeled on the target image;
the first determining module 203 is configured to match the identification information of the object to be detected with the recognized text information, and determine target text information matched with the identification information of the object to be detected and position information of the target text information;
the second determining module 204 is configured to determine, based on the position information of the target text message, size information of the object to be detected from the recognized text message;
in a possible implementation, the obtaining module 201 is further configured to obtain the target image according to the following steps:
acquiring an initial image comprising a plurality of image layers; different image layers of the initial image are used for displaying different objects in the initial image and the labeling information of the different objects;
determining a target layer containing the object to be detected from the plurality of layers according to the layer identifier of each layer;
and determining the target image based on the target image layer.
In one possible embodiment, the initial image is a CAD drawing;
the obtaining module 201, when determining the target image based on the target image layer, is configured to:
and removing other layers except the target layer in the initial CAD graph, and performing format conversion to obtain the target image.
In a possible implementation manner, the obtaining module 201, when determining the target image based on the target image layer, is configured to:
determining the position information of the central point of the target layer;
determining a target area in the target layer based on the position information of the central point; the target area is the minimum area which contains all the objects to be detected and the labeling information of the objects to be detected in the target image layer;
and determining the target image based on the target area in the target image layer.
In a possible implementation manner, the second determining module 204, when determining the size information of the object to be detected from the recognized text information based on the position information of the target text information, is configured to:
determining the size text information of the object to be detected from the recognized text information based on the position information of the target text information and preset relative position information;
and identifying the size character information and determining the size information of the object to be detected.
In a possible implementation, the second determining module 204, after determining the dimension information of the object to be detected, is further configured to:
and detecting the normalization of the target image based on the size specification corresponding to the object to be detected and the size information.
In a possible implementation manner, the second determining module 204, after detecting the normalization of the target image based on the size specification and the size information corresponding to the object to be detected, is further configured to:
determining a target object which does not meet the size specification in the object to be detected;
and marking the target object on the target image.
In a possible implementation, the second determining module 204 is further configured to:
and taking the position information of the target character information as the position information of the object to be detected, and generating a log file for the object to be detected based on the size information of the object to be detected and the position information of the object to be detected.
In a possible embodiment, the object to be detected comprises a plurality of classes of objects;
the second determining module 204, when generating the log file for the object to be detected, is configured to:
determining the category of the object to be detected based on the target character information;
generating a log file for the object to be detected of each category based on the category of the object to be detected, the size information of the object to be detected, and the position information of the object to be detected.
In a possible implementation, the second determining module 204, after generating the log file for the object to be detected, is further configured to:
and after responding to the log viewing request, displaying the log file corresponding to the log viewing request.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to fig. 3, a schematic structural diagram of a computer device 300 provided in an embodiment of the present disclosure includes a processor 301, a memory 302 and a bus 303. The memory 302 is used for storing execution instructions and includes an internal memory 3021 and an external memory 3022. The internal memory 3021 temporarily stores operation data of the processor 301 and data exchanged with the external memory 3022, such as a hard disk; the processor 301 exchanges data with the external memory 3022 through the internal memory 3021. When the computer device 300 runs, the processor 301 communicates with the memory 302 through the bus 303, so that the processor 301 executes the following instructions:
acquiring a target image containing an object to be detected;
identifying the character information marked on the target image;
matching the identification information of the object to be detected with the recognized character information, and determining target character information matched with the identification information of the object to be detected and position information of the target character information;
and determining the size information of the object to be detected from the recognized character information based on the position information of the target character information.
In a possible implementation, the processor 301 executes instructions that, in the method, further include acquiring the target image according to the following steps:
acquiring an initial image comprising a plurality of image layers; different image layers of the initial image are used for displaying different objects in the initial image and the labeling information of the different objects;
determining a target layer containing the object to be detected from the plurality of layers according to the layer identifier of each layer;
and determining the target image based on the target image layer.
In one possible embodiment, the processor 301 executes instructions in which the initial image is a CAD drawing of a computer-aided design;
the determining the target image based on the target image layer includes:
and removing other layers except the target layer in the initial CAD graph, and performing format conversion to obtain the target image.
In a possible implementation, in instructions executed by processor 301, the determining the target image based on the target image layer includes:
determining the position information of the central point of the target layer;
determining a target area in the target layer based on the position information of the central point; the target area is the minimum area which contains all the objects to be detected and the labeling information of the objects to be detected in the target image layer;
and determining the target image based on the target area in the target image layer.
In a possible implementation manner, the instructions executed by processor 301, the determining, from the recognized text information and based on the position information of the target text information, size information of the object to be detected includes:
determining the size text information of the object to be detected from the recognized text information based on the position information of the target text information and preset relative position information;
and identifying the size character information and determining the size information of the object to be detected.
In a possible implementation, the instructions executed by the processor 301, after determining the size information of the object to be detected, further include:
and detecting the normalization of the target image based on the size specification corresponding to the object to be detected and the size information.
In a possible implementation manner, after the processor 301 executes instructions to detect the normalization of the target image based on the size specification and the size information corresponding to the object to be detected, the method further includes:
determining a target object which does not meet the size specification in the object to be detected;
and marking the target object on the target image.
In a possible implementation, in the instructions executed by the processor 301, the method further includes:
and taking the position information of the target character information as the position information of the object to be detected, and generating a log file for the object to be detected based on the size information of the object to be detected and the position information of the object to be detected.
In a possible implementation, the processor 301 executes instructions, where the object to be detected includes a plurality of categories of objects;
the generating a log file for the object to be detected includes:
determining the category of the object to be detected based on the target character information;
generating a log file for the object to be detected of each category based on the category of the object to be detected, the size information of the object to be detected, and the position information of the object to be detected.
In a possible implementation, after generating the log file for the object to be detected, the processor 301 executes instructions that further include:
and after responding to the log viewing request, displaying the log file corresponding to the log viewing request.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the image detection method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product bears a program code, and instructions included in the program code may be used to execute the steps of the image detection method in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. An image detection method, comprising:
acquiring a target image containing an object to be detected;
identifying the character information marked on the target image;
matching the identification information of the object to be detected with the recognized character information, and determining target character information matched with the identification information of the object to be detected and position information of the target character information;
and determining the size information of the object to be detected from the recognized character information based on the position information of the target character information.
2. The method of claim 1, further comprising acquiring the target image according to the steps of:
acquiring an initial image comprising a plurality of image layers; different image layers of the initial image are used for displaying different objects in the initial image and the labeling information of the different objects;
determining a target layer containing the object to be detected from the plurality of layers according to the layer identifier of each layer;
and determining the target image based on the target image layer.
3. The method of claim 2, wherein the initial image is a computer-aided design (CAD) drawing;
the determining the target image based on the target image layer includes:
and removing other layers except the target layer in the initial CAD graph, and performing format conversion to obtain the target image.
4. The method according to claim 2 or 3, wherein the determining the target image based on the target image layer comprises:
determining the position information of the central point of the target layer;
determining a target area in the target layer based on the position information of the central point; the target area is the minimum area which contains all the objects to be detected and the labeling information of the objects to be detected in the target image layer;
and determining the target image based on the target area in the target image layer.
5. The method according to any one of claims 1 to 4, wherein the determining the size information of the object to be detected from the recognized text information based on the position information of the target text information comprises:
determining the size text information of the object to be detected from the recognized text information based on the position information of the target text information and preset relative position information;
and identifying the size character information and determining the size information of the object to be detected.
6. The method according to any one of claims 1 to 5, further comprising, after determining the dimensional information of the object to be detected:
and detecting the normalization of the target image based on the size specification corresponding to the object to be detected and the size information.
7. The method according to claim 6, wherein after detecting the normalization of the target image based on the dimensional specification and the dimensional information corresponding to the object to be detected, the method further comprises:
determining a target object which does not meet the size specification in the object to be detected;
and marking the target object on the target image.
8. The method of claim 5, further comprising:
and taking the position information of the target character information as the position information of the object to be detected, and generating a log file for the object to be detected based on the size information of the object to be detected and the position information of the object to be detected.
9. The method of claim 8, wherein the objects to be detected comprise a plurality of classes of objects;
the generating a log file for the object to be detected includes:
determining the category of the object to be detected based on the target character information;
generating a log file for the object to be detected of each category based on the category of the object to be detected, the size information of the object to be detected, and the position information of the object to be detected.
10. The method according to claim 8 or 9, wherein after generating the log file for the object to be detected, the method further comprises:
and after responding to the log viewing request, displaying the log file corresponding to the log viewing request.
11. An image detection apparatus, characterized by comprising:
the acquisition module is used for acquiring a target image containing an object to be detected;
the identification module is used for identifying the character information marked on the target image;
the first determining module is used for matching the identification information of the object to be detected with the recognized character information and determining target character information matched with the identification information of the object to be detected and position information of the target character information;
and the second determining module is used for determining the size information of the object to be detected from the recognized character information based on the position information of the target character information.
12. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is run, the machine-readable instructions when executed by the processor performing the steps of the image detection method according to any one of claims 1 to 10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the image detection method according to any one of claims 1 to 10.
CN202111136588.XA 2021-09-27 2021-09-27 Image detection method and device, computer equipment and storage medium Pending CN113869199A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111136588.XA CN113869199A (en) 2021-09-27 2021-09-27 Image detection method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111136588.XA CN113869199A (en) 2021-09-27 2021-09-27 Image detection method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113869199A true CN113869199A (en) 2021-12-31

Family

ID=78991341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136588.XA Pending CN113869199A (en) 2021-09-27 2021-09-27 Image detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113869199A (en)

Similar Documents

Publication Publication Date Title
US10824801B2 (en) Interactively predicting fields in a form
US10853638B2 (en) System and method for extracting structured information from image documents
CN111476227B (en) Target field identification method and device based on OCR and storage medium
JP5665125B2 (en) Image processing method and image processing system
Nurminen Algorithmic extraction of data in tables in PDF documents
US10783325B1 (en) Visual data mapping
CN111738252B (en) Text line detection method, device and computer system in image
CN114005126A (en) Table reconstruction method and device, computer equipment and readable storage medium
CN116168351A (en) Inspection method and device for power equipment
WO2023038722A1 (en) Entry detection and recognition for custom forms
CN113408323B (en) Extraction method, device and equipment of table information and storage medium
CN110929647B (en) Text detection method, device, equipment and storage medium
CN112149680A (en) Wrong word detection and identification method and device, electronic equipment and storage medium
CN108170838B (en) Topic evolution visualization display method, application server and computer readable storage medium
CN112149570A (en) Multi-person living body detection method and device, electronic equipment and storage medium
CN113869199A (en) Image detection method and device, computer equipment and storage medium
RU2641452C2 (en) Incomplete standards
CN115544620A (en) Method, device and equipment for analyzing door and window tables in drawing and storage medium
JP6582464B2 (en) Information input device and program
CN114706886A (en) Evaluation method and device, computer equipment and storage medium
CN112966671A (en) Contract detection method and device, electronic equipment and storage medium
US9064088B2 (en) Computing device, storage medium and method for analyzing step formatted file of measurement graphics
JP2021152696A (en) Information processor and program
CN114283437A (en) Legend identification method, device, equipment and storage medium
CN110968677B (en) Text addressing method and device, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination