CN116490901A - Device and method for analyzing marks included in facility plan - Google Patents
Device and method for analyzing marks included in facility plan
- Publication number
- CN116490901A (application CN202180059835.2A)
- Authority
- CN
- China
- Prior art keywords
- plan
- facility plan
- view
- facility
- dst
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/42—Document-oriented image-based pattern recognition based on the type of document
- G06V30/422—Technical drawings; Geographical maps
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/18—Extraction of features or characteristics of the image
- G06V30/18105—Extraction of features or characteristics of the image related to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/19173—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
A mark analysis device for marks included in a facility plan according to an embodiment of the present invention may perform: an operation of acquiring a plurality of facility plan views; an operation of detecting rectangles included in each of the facility plan views and arcs connected to the rectangles; an operation of specifying window and door regions from the rectangles and arcs; an operation of labeling pixels of the specified window regions with the window category and pixels of the specified door regions with the door category; and an operation of inputting the plurality of facility plan views and the data labeled in units of pixels to a neural network model designed based on a predetermined image segmentation algorithm to train the weights of the neural network model for deriving a correlation between the categories of windows and doors included in the plurality of facility plan views and the positions of the labeled pixels, thereby generating a neural network model that discriminates the positions and categories of the windows and doors included in a facility plan view according to the correlation.
Description
Technical Field
The present invention relates to a device and method for analyzing marks included in a facility plan.
Background
An existing method of acquiring three-dimensional space information is to photograph with a panoramic camera. This has the problem that someone must physically go to the relevant place to take the photograph, and the articles and the like captured at the actual place are exposed, which may violate privacy. In addition, photographing with a panoramic camera is costly, and the photographed image consists of two-dimensional image data, so physical information about the actual space cannot be extracted and utilized.

Another existing method of acquiring three-dimensional space information is to produce a three-dimensional modeling file manually. This requires a professional to perform the modeling personally, and thus suffers from high labor costs and long production times.
Accordingly, Korean Patent Publication No. 10-1638378 (a method and program for three-dimensional automatic modeling based on a two-dimensional floor plan) is known as a technology for automatically extracting three-dimensional space information from two-dimensional plan view information without directly photographing the three-dimensional space. Korean Patent Publication No. 10-1638378 performs three-dimensional modeling based on the spatial information and numerical values given by a two-dimensional plan view, and can therefore provide three-dimensional spatial information with high reliability.

However, the window and door marks in a plan view are both rectangular. To train a neural network that automatically distinguishes these window and door marks, labeling with a bounding box has limitations: a bounding box must be specified manually before labeling, and it can only be drawn as a non-rotatable, axis-aligned rectangle.

Because of these limitations, when the door and window marks are arranged diagonally on the plan view or are adjacent to other marks, they cannot be labeled accurately, and the accuracy of the neural network training cannot be improved.
Disclosure of Invention
Technical problem
An object of an embodiment of the present invention is to provide a technique that automatically distinguishes door and window marks on a plan view and labels the distinguished door and window regions in units of pixels, thereby improving the accuracy of the training used to distinguish windows from doors on a plan view.

However, the technical problems to be solved by the embodiments of the present invention are not limited to the problem described above, and other technical problems may be derived from the description below within a scope apparent to those of ordinary skill in the art.

Technical solution
The mark analysis device for marks included in a facility plan according to an embodiment of the present invention includes: one or more memories storing instructions that cause predetermined operations to be performed; and one or more processors operatively connected to the one or more memories and configured to execute the instructions. The operations performed by the processors may include: an operation of acquiring a plurality of facility plan views; an operation of detecting a rectangle included in each of the plurality of facility plan views and an arc connected to the rectangle; an operation of specifying window and door regions from the rectangle and the arc; an operation of labeling pixels of the specified window region with the window category and pixels of the specified door region with the door category; and an operation of inputting the plurality of facility plan views and the data labeled in units of pixels to a neural network model designed based on a predetermined image segmentation algorithm to train the weights of the neural network model for deriving a correlation between the categories of windows and doors included in the plurality of facility plan views and the positions of the labeled pixels, thereby generating a neural network model that discriminates the positions and categories of the windows and doors included in a facility plan view from the correlation.
Also, the detecting operation may include: an operation of turning every part of the facility plan view that is not black into white; and an operation of detecting the rectangle and the arc from line segments formed by the black or contours connecting the edges of the white areas.

Also, the detecting operation may include an operation of removing text included in the facility plan view.

Also, the operation of turning into white may include an operation of keeping the RGB information of pixels of the facility plan view whose RGB information is (0, 0, 0), and changing the RGB information of pixels of the facility plan view whose RGB information is not (0, 0, 0) to (255, 255, 255).

Also, the operation of specifying the window and door regions may include: an operation of detecting, among the rectangles, a first rectangle connected to the arc as a door region; and an operation of detecting, among the rectangles, a second rectangle not connected to the arc as a window region.

In the operation of detecting the door region, when there is a line segment connected to the first rectangle and forming a perpendicular to it, and the arc is connected to an end of the first rectangle and an end of the line segment, the first rectangle may be detected as the door region.

Also, the operation of specifying the window and door regions may include an operation of excluding a rectangle from the detection when the width of the rectangle is smaller than a first preset value or larger than a second preset value.

Also, the labeling operation may include an operation of labeling the pixels of all regions other than the windows and doors as the empty (null) category.
Also, the detecting operation may include: an operation of generating a first plan view in which text has been removed from the facility plan view by an OCR detection algorithm; and an operation of generating a second plan view in which the pixel information of the first plan view is transformed by equations 1 and 2 below:

[Equation 1]

dst(I) = round(max(0, min(α·src(I) − β, 255)))

(src(I): element values (x, y, z) of the pixel information before the change; α: 10; β: −350; dst(I): element values (x′, y′, z′) of the pixel information after the change)

[Equation 2]

Y = 0.5·R + 0.3334·G + 0.1667·B

(R: x′ of dst(I) obtained in Equation 1; G: y′ of dst(I) obtained in Equation 1; B: z′ of dst(I) obtained in Equation 1; Y: one-dimensional element value)

It may further include: an operation of generating a third plan view in which only the portions of the rectangles formed from the line segments constituting the second plan view that are larger or smaller than a preset width are shown in black; an operation of generating a fourth plan view in which the pixels of the first plan view whose hue, saturation, and value element values lie in 0 to 30, 80 to 220, and 150 to 225, respectively, are converted into white; an operation of generating a fifth plan view in which the black areas of the third plan view and the white areas of the fourth plan view are applied to the first plan view; and an operation of generating a sixth plan view in which the pixel information of the fifth plan view is transformed by equations 3 to 5 below:

[Equation 3]

dst(I) = round(max(0, min(α·src(I) − β, 255)))

(src(I): element values (x, y, z) of the pixel information before the change; α: 3; β: −350; dst(I): element values (x′, y′, z′) of the pixel information after the change)

[Equation 4]

Y = 0.5·R + 0.3334·G + 0.1667·B

(R: x′ of dst(I) obtained in Equation 3; G: y′ of dst(I) obtained in Equation 3; B: z′ of dst(I) obtained in Equation 3; Y: one-dimensional element value)

[Equation 5]

Y′ = (if Y < 40, Y′ = 0; if Y ≥ 40, Y′ = 255)

(Y: one-dimensional element value obtained in Equation 4)

It may further include an operation of detecting the rectangles by generating contours connecting the edges of the white areas of the sixth plan view.
Also, the detecting operation may include: an operation of generating a first plan view in which text has been removed from the facility plan view by an OCR detection algorithm; and an operation of generating a seventh plan view in which the pixel information of the second plan view is transformed by equations 6 to 8 below:

[Equation 6]

dst(I) = round(max(0, min(α·src(I) − β, 255)))

(src(I): element values (x, y, z) of the pixel information before the change; α: 57; β: −12500; dst(I): element values (x′, y′, z′) of the pixel information after the change)

[Equation 7]

Y = 0.5·R + 0.3334·G + 0.1667·B

(R: x′ of dst(I) obtained in Equation 6; G: y′ of dst(I) obtained in Equation 6; B: z′ of dst(I) obtained in Equation 6; Y: one-dimensional element value)

[Equation 8]

Y′ = (if Y = 0, Y′ = 0; if Y ≠ 0, Y′ = 255)

(Y: one-dimensional element value obtained in Equation 7)

It may further include: an operation of generating contours connecting the edges of the white areas of the seventh plan view; and an operation of approximating the contours according to the Douglas-Peucker algorithm, detecting among the approximated contours those corresponding to a convex hull, and detecting as an arc the case where the width of the convex hull is within a predetermined range.
Also, the operation of generating the neural network model may include an operation of setting the plurality of facility plan views to be input to the input layer of a neural network designed according to a Mask R-CNN algorithm, and setting the categories of windows and doors included in the plurality of facility plan views and the positions of the labeled pixels to be input to the output layer, to train the weights of the neural network for deriving the correlation between the categories of windows and doors included in the plurality of facility plan views and the positions of the labeled pixels.
According to an embodiment of the present invention, an apparatus including the neural network model generated by the above device may also be provided.
The mark analysis method according to an embodiment of the present invention, executed by the mark analysis device described above, may include: a step of acquiring a plurality of facility plan views; a step of detecting rectangles included in each of the facility plan views and arcs connected to the rectangles; a step of specifying window and door regions from the rectangles and arcs; a step of labeling pixels of the specified window regions with the window category and pixels of the specified door regions with the door category; and a step of inputting the plurality of facility plan views and the pixel-level labeled data to a neural network model designed based on a predetermined image segmentation (image segmentation) algorithm to train the weights of the neural network model for deriving the correlation between the window and door categories included in the plurality of facility plan views and the labeled pixel positions, thereby generating a neural network model that discriminates the positions and categories of the windows and doors included in a facility plan view according to the correlation.
Technical effects
According to the embodiments of the present invention, a model can be generated that automatically distinguishes door and window marks on a plan view and labels the distinguished door and window regions in units of pixels, so that windows and doors can be accurately distinguished on a plan view.

When the image segmentation model generated according to an embodiment of the present invention is used together with the technology of Korean Patent Publication No. 10-1638378 (a method and program for three-dimensional automatic modeling based on a two-dimensional floor plan), the doors and windows of a two-dimensional plan view can be distinguished more accurately, enabling efficient three-dimensional modeling.
In addition, various effects directly or indirectly understood through the present specification may be provided.
Drawings
Fig. 1 is an example diagram of a facility plan.
Fig. 2 is a functional block diagram of a mark analysis device according to an embodiment of the present invention.

Fig. 3 to 5 are exemplary diagrams of operations in which the mark analysis device according to an embodiment of the present invention transforms a facility plan view to detect and label doors and windows.
Fig. 6 is an exemplary diagram showing the result of distinguishing doors and windows in a facility plan view by a neural network model generated by the mark analysis device according to one embodiment of the present invention.

Fig. 7 is an exemplary diagram of a three-dimensional map obtained by three-dimensional modeling from a two-dimensional plan view using a neural network model generated by the mark analysis device according to one embodiment of the present invention together with the technology of Korean Patent Publication No. 10-1638378.

Fig. 8 is a flowchart of a mark analysis method according to one embodiment of the present invention.
Detailed Description
The advantages and features of the present invention, and the methods of achieving them, will become apparent from the embodiments described below in detail in conjunction with the accompanying drawings. The present invention is not, however, limited to the embodiments disclosed below and may be embodied in various different forms; these embodiments are provided only so that the disclosure is complete and so that the scope of the invention is fully conveyed to those skilled in the art, and the invention is defined only by the appended claims.

In describing the embodiments of the present invention, detailed descriptions of well-known functions or configurations are omitted where they might unnecessarily obscure the substance of the invention. The terms used below are defined in consideration of their functions in the embodiments of the present invention and may vary according to the intention or custom of users, operators, and the like; their definitions should therefore be based on the content of this specification as a whole.
The functional blocks shown in the drawings and described below are only examples that can be implemented. In other implementations, other functional blocks may be used without departing from the spirit and scope of the detailed description. Further, although one or more functional blocks of the present invention are shown as separate blocks, one or more functional blocks of the present invention may be a combination of various hardware and software components that perform the same function.
In addition, the expression that a component "includes" certain elements is open-ended: it merely indicates the presence of those elements and is not to be construed as excluding additional elements.

In addition, when a component is described as being connected or coupled to another component, it may be directly connected or coupled to that other component, but it should be understood that other components may exist in between.

The expressions "first" and "second" and the like are used merely to distinguish a plurality of components and do not limit the order of or other relationships between the components.
Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is an example diagram of a facility plan.
Referring to fig. 1, the marks for windows and doors in a facility plan view each have a rectangular shape. To train a neural network that automatically distinguishes such window and door marks, a bounding box can be used for labeling, but a bounding box can only be drawn as a non-rotatable rectangle. Consequently, when the door and window marks on the plan view are arranged diagonally or are adjacent to other marks, they cannot be labeled accurately, and the accuracy of the neural network training cannot be improved.

Embodiments of the present invention therefore provide a technique that automatically distinguishes door and window marks on a plan view and labels the distinguished door and window regions in units of pixels, to improve the accuracy of the training used to distinguish windows from doors on the plan view.
Fig. 2 is a functional block diagram of the mark analysis device 100 according to an embodiment of the present invention.

Referring to fig. 2, the mark analysis device 100 according to one embodiment may include a memory 110, a processor 120, an input interface 130, a display part 140, and a communication interface 150.
The memory 110 may include a training data DB 111, a neural network model 113, and an instruction DB 115.
The training data DB 111 may include a plurality of image files of facility plan views. A facility plan view may be acquired through an external server, an external DB, or from images on a network. Here, a facility plan view may consist of a plurality of pixels (for example, M×N pixels arranged in a matrix with M pixels horizontally and N pixels vertically), and each pixel may include pixel information consisting of RGB element values (x, y, z) representing the intrinsic colors R (Red), G (Green), and B (Blue), or of HSV information representing hue, saturation, and value.

The neural network model 113 may be a neural network model that discriminates the categories and positions of the door and window marks included in an input facility plan view. The neural network model may be generated by the operations of the processor 120 described below and stored in the memory 110.
The instruction DB 115 may store instructions capable of causing operations of the processor 120 to be performed. For example, the instruction DB 115 may store computer code that causes operations corresponding to the operations of the processor 120 described below to be performed.
The processor 120 may control the overall operation of the components of the mark analysis device 100, that is, the memory 110, the input interface 130, the display part 140, and the communication interface 150. The processor 120 may include a mark discrimination module 121, a labeling module 123, a training module 125, and a control module 127. The processor 120 may execute the instructions stored in the memory 110 to drive these modules, and operations performed by the mark discrimination module 121, the labeling module 123, the training module 125, and the control module 127 may be understood as operations performed by the processor 120.

The mark discrimination module 121 may specify the window regions and door regions included in each facility plan view for the plurality of facility plan views stored in the training data DB 111.
The mark discrimination module 121 may detect the text included in a facility plan view with a text detection algorithm (for example, an OCR detection algorithm), remove it, and then turn every part of the facility plan view that is not black into white. For example, the mark discrimination module 121 may keep the RGB information of pixels whose RGB information is (0, 0, 0) and change the RGB information of pixels whose RGB information is not (0, 0, 0) to (255, 255, 255).
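A minimal NumPy sketch of this whitening step is shown below; the function name and the H×W×3 RGB array layout are our assumptions, not part of the patent.

```python
import numpy as np

def whiten_non_black(plan_rgb: np.ndarray) -> np.ndarray:
    """Keep only pure-black pixels (RGB (0, 0, 0)); turn everything else white."""
    out = np.full_like(plan_rgb, 255)        # start from an all-white canvas
    black = np.all(plan_rgb == 0, axis=-1)   # True where R = G = B = 0
    out[black] = 0                           # restore the black line work
    return out
```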
After that, the mark discrimination module 121 may detect rectangles and arcs from the line segments formed by the black pixels or from contours connecting the edges of the white areas. Here, when the width of a rectangle is smaller than a first preset value or larger than a second preset value, the mark discrimination module 121 may judge that the rectangle is not a door or window and exclude it from the detection targets. The range of the preset values may be determined from the widths of the door and window marks included in the facility plan views to be used for training; since the mark width varies with the image size of a facility plan view, the values may be determined according to the image size of the facility plan views to be used for training.

When there is a first rectangle connected to an arc among the rectangles, and there is a line segment connected to the first rectangle and forming a perpendicular to it, and the arc is connected to an end of the first rectangle and an end of that line segment, the mark discrimination module 121 may discriminate the first rectangle as a door region. The mark discrimination module 121 may discriminate a second rectangle not connected to any arc as a window region.

The above operation of the mark discrimination module 121 is suitable when the facility plan view is a high-quality original. The operation by which the mark discrimination module 121 detects doors and windows from a facility plan view containing heavy noise will be described later with reference to figs. 3 to 5.
The labeling module 123 may label the pixels of the window regions specified by the mark discrimination module 121 with the window category and the pixels of the door regions specified by the mark discrimination module 121 with the door category. Also, the labeling module 123 may label the pixels of all regions other than windows and doors with the empty (null) category. The labeling module 123 may apply RGB information of the same color to pixel regions corresponding to the same category and RGB information of different colors to pixel regions corresponding to different categories. For example, the labeling module 123 may change the pixel information so that the pixel regions of the facility plan view where windows are located become yellow, the pixel regions where doors are located become red, and the remaining pixel regions become black, thereby labeling each category (e.g., window, door, remaining region).
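This per-pixel coloring can be sketched as follows; the class-to-color mapping follows the example above, while the function and mask names are hypothetical.

```python
import numpy as np

# Display colours per category; the patent fixes the colours (window =
# yellow, door = red, rest = black) but not these names.
CLASS_COLOURS = {
    "window": (255, 255, 0),
    "door":   (255, 0, 0),
    "null":   (0, 0, 0),
}

def paint_label_mask(shape, window_mask, door_mask):
    """Build the per-pixel label image used as the training target."""
    h, w = shape
    label = np.zeros((h, w, 3), dtype=np.uint8)   # default: null / black
    label[window_mask] = CLASS_COLOURS["window"]
    label[door_mask] = CLASS_COLOURS["door"]
    return label
```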
The training module 125 may input the plurality of facility plan views and the pixel-level labeled data to a neural network model designed based on an image segmentation algorithm, train the weights of the neural network model to derive the correlation between the window and door categories included in the plurality of facility plan views and the labeled pixel positions, and thereby generate a neural network model that discriminates the positions and categories of the windows and doors included in a facility plan view according to that correlation.

For example, the training module 125 may be configured so that the plurality of facility plan views are input to the input layer of a neural network designed according to the Mask R-CNN algorithm, an image segmentation algorithm, and the window/door/background categories and the labeled pixel positions are input to the output layer, to train the weights of the neural network for deriving the correlation between the window and door categories included in the plurality of facility plan views and the labeled pixel positions.
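A training loop of this kind could look like the sketch below, which uses torchvision's off-the-shelf Mask R-CNN as a stand-in for the patent's network; the `data_loader`, assumed to yield images together with per-instance boxes, labels, and masks derived from the labeled plan views, is not shown.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Three classes: background (0), window (1), door (2).
model = maskrcnn_resnet50_fpn(num_classes=3)
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

for images, targets in data_loader:   # assumed to exist; see lead-in
    loss_dict = model(images, targets)    # training mode returns the losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```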
The control module 127 may input a facility plan view to the trained neural network model to specify the door and window regions, and may three-dimensionally model the information of the two-dimensional facility plan view according to the spatial information and numerical values given by the technology of Korean Patent Publication No. 10-1638378, to provide three-dimensional spatial information for the two-dimensional facility plan view.

The input interface 130 may receive input facility plan views to be used for training or testing.
The display part 140 may include a hardware configuration including a display panel to output an image.
The communication interface 150 is capable of transmitting and receiving information by communicating with an external device. To this end, the communication interface 150 may include a wireless communication module or a wired communication module.
A more specific operation of the above-described mark discriminating module 121 is described below with reference to fig. 3 to 5.
Fig. 3 to 5 are exemplary diagrams of operations in which the mark analysis device according to an embodiment of the present invention transforms a facility plan view to detect and label doors and windows.
Referring to fig. 3 (a), the mark discrimination module 121 may generate a first plan view in which text has been removed from the facility plan view by an OCR detection algorithm.
Referring to fig. 3 (b), the mark discrimination module 121 may generate a second plan view in which the pixel information of the first plan view is transformed by equations 1 and 2 below, in order to convert the first plan view into black and white.

[Equation 1]

dst(I) = round(max(0, min(α·src(I) − β, 255)))

(src(I): element values (x, y, z) of the pixel information before the change; α: 10; β: −350; dst(I): element values (x′, y′, z′) of the pixel information after the change)

[Equation 2]

Y = 0.5·R + 0.3334·G + 0.1667·B

(R: x′ of dst(I) obtained in Equation 1; G: y′ of dst(I) obtained in Equation 1; B: z′ of dst(I) obtained in Equation 1; Y: one-dimensional element value)
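Equations 1 and 2 (and their later variants with different constants) amount to a scale-and-clip step followed by a weighted grayscale conversion. A sketch follows under the assumption of NumPy float arrays, with `first_plan` standing in for the text-free first plan view loaded elsewhere.

```python
import numpy as np

def scale_clip(src: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    """Equations 1/3/6: dst = round(max(0, min(alpha*src - beta, 255)))."""
    return np.rint(np.clip(alpha * src.astype(np.float64) - beta, 0, 255))

def weighted_gray(dst: np.ndarray) -> np.ndarray:
    """Equations 2/4/7: Y = 0.5*R + 0.3334*G + 0.1667*B."""
    r, g, b = dst[..., 0], dst[..., 1], dst[..., 2]
    return 0.5 * r + 0.3334 * g + 0.1667 * b

# Second plan view: alpha = 10, beta = -350 (the negative beta effectively
# adds 350 before clipping to the 0..255 range).
second_plan = weighted_gray(scale_clip(first_plan, alpha=10, beta=-350))
```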
Looking at the transformed second plan view of fig. 3 (b), some of the black line segments are broken because noise was introduced into the image of the facility plan view: pixel information that was originally black was transformed during uploading/downloading/compression and the like, so its RGB information takes values other than (0, 0, 0). The mark discrimination module 121 can remove this noise information through the following operations.
Referring to fig. 3 (c), the mark discrimination module 121 may generate a third plan view in which the portions of the rectangles, formed from the line segments constituting the second plan view, that are larger or smaller than a preset width are shown in black. The third plan view is used to remove, as outliers, areas significantly larger or smaller than the widths of the door and window areas.

Referring to fig. 3 (d), the mark discrimination module 121 may generate a fourth plan view in which the pixels of the first plan view whose hue, saturation, and value element values are 0 to 30, 80 to 220, and 150 to 225, respectively, are converted into white. The mark discrimination module 121 may perform this operation based on the HSV information of the first plan view, which can be derived by transforming the RGB information. The fourth plan view distinguishes colors around the edge regions where color changes sharply.
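Reading the three ranges as hue, saturation, and value bands, this step can be sketched with OpenCV's `inRange`; `first_plan_bgr`, an assumed BGR-ordered image of the first plan view, is our placeholder name.

```python
import cv2
import numpy as np

hsv = cv2.cvtColor(first_plan_bgr, cv2.COLOR_BGR2HSV)
# H in 0..30, S in 80..220, V in 150..225 -> pixels to be forced to white
mask = cv2.inRange(hsv, np.array([0, 80, 150]), np.array([30, 220, 225]))
fourth_plan = first_plan_bgr.copy()
fourth_plan[mask > 0] = (255, 255, 255)
```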
Referring to fig. 4 (a), the mark discrimination module 121 may generate a fifth plan view in which the black areas of the third plan view and the white areas of the fourth plan view are applied to the first plan view. Applying the third plan view of fig. 3 (c) removes, as outliers, areas whose width is extreme compared to the widths of the door and window areas. Applying the fourth plan view of fig. 3 (d) exploits the fact that noise usually occurs in pixel regions where color changes sharply: the colors of the facility plan view are divided into black and white along the sharp color-change edges, and synthesizing this into the first plan view as in fig. 4 (a) increases the color contrast of the original and attenuates the influence of noise.
Referring to fig. 4 (b), the mark discrimination module 121 may generate a sixth plan view in which pixel information of the fifth plan view is converted by the following equations 3 to 5.
[Equation 3]

dst(I) = round(max(0, min(α·src(I) − β, 255)))

(src(I): element values (x, y, z) of the pixel information before the change; α: 3; β: −350; dst(I): element values (x′, y′, z′) of the pixel information after the change)

[Equation 4]

Y = 0.5·R + 0.3334·G + 0.1667·B

(R: x′ of dst(I) obtained in Equation 3; G: y′ of dst(I) obtained in Equation 3; B: z′ of dst(I) obtained in Equation 3; Y: one-dimensional element value)

[Equation 5]

Y′ = (if Y < 40, Y′ = 0; if Y ≥ 40, Y′ = 255)

(Y: one-dimensional element value obtained in Equation 4)
Whereas equation 1 turns every part other than black white, equation 3 polarizes the colors toward the white and black sides, equation 4 converts the two polarized colors to grayscale, and equation 5 converts the polarized grayscale to black and white. It can therefore be confirmed that the black line segments in the generated sixth plan view of fig. 4 (b) are all transformed without breaks. Here, the mark discrimination module 121 may additionally apply a morphological erosion (morphological erode) operation that shrinks the white portions of the sixth plan view to minimize noise.
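Reusing `scale_clip` and `weighted_gray` from the earlier sketch, equations 3 to 5 plus the optional erosion could be expressed as follows, with `fifth_plan` the synthesized image of fig. 4 (a); this is again an assumed sketch, not the patent's code.

```python
import cv2
import numpy as np

# Equations 3 and 4 with alpha = 3, beta = -350, then the equation 5
# binarisation: Y < 40 -> 0 (black), Y >= 40 -> 255 (white).
y = weighted_gray(scale_clip(fifth_plan, alpha=3, beta=-350))
sixth_plan = np.where(y < 40, 0, 255).astype(np.uint8)

# Optional morphological erosion to shrink the white areas and suppress
# residual speckle noise.
sixth_plan = cv2.erode(sixth_plan, np.ones((3, 3), np.uint8), iterations=1)
```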
Referring to figs. 4 (c) and 4 (d), the mark discrimination module 121 may generate contours connecting the edges of the white areas of the sixth plan view to detect the rectangular areas that may correspond to window or door marks in the facility plan view.

When the figure formed by a contour is rectangular, the mark discrimination module 121 may discriminate whether the rectangle corresponds to a window or door region based on the width of the rectangle or on its horizontal/vertical ratio.

For example, when applied to a facility plan view with an image width of 923 and an image height of 676 (the numbers stand in for usable ratios, so units are omitted), white regions may be excluded from fig. 4 (b) under the following conditions, and the quadrangular regions corresponding to doors or windows may be detected as in fig. 4 (c); a code sketch of this filter follows the list.
a. Exclude cases where the width of the figure formed by the contour is greater than 1,000
b. Exclude cases where the minimum of the width and height of the quadrangle formed by the contour is greater than 10
c. Exclude cases where the width of the figure formed by the contour is abnormally small (e.g., 40 or less)
d. Exclude cases where the width of the minimum-unit quadrangle formed by the contour and the width of the figure formed by the contour differ beyond an error range
e. Keep cases where the width-to-height ratio of the minimum-unit quadrangle formed by the contour is x:1 (x being 0.5 to 2)
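A sketch of this filter is given below; treating the text's "width" as contour area and taking the quoted thresholds as fixed are both assumptions on our part.

```python
import cv2

def keep_as_door_or_window(contour, tol=0.15):
    """Apply exclusion rules (a)-(e) from the list above (thresholds are
    the ones quoted for a 923 x 676 plan image)."""
    area = cv2.contourArea(contour)
    x, y, w, h = cv2.boundingRect(contour)
    if area > 1000:                       # (a) figure too large
        return False
    if min(w, h) > 10:                    # (b) thinnest side too thick
        return False
    if area <= 40:                        # (c) abnormally small figure
        return False
    if abs(w * h - area) > tol * w * h:   # (d) contour and box disagree
        return False
    return 0.5 <= w / h <= 2.0            # (e) bounding-box aspect ratio
```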
Referring to fig. 5 (a), the mark discrimination module 121 may generate a seventh plan view in which pixel information of the second plan view is converted by the following equations 6 to 8.
[Equation 6]

dst(I) = round(max(0, min(α·src(I) − β, 255)))

(src(I): element values (x, y, z) of the pixel information before the change; α: 57; β: −12500; dst(I): element values (x′, y′, z′) of the pixel information after the change)

[Equation 7]

Y = 0.5·R + 0.3334·G + 0.1667·B

(R: x′ of dst(I) obtained in Equation 6; G: y′ of dst(I) obtained in Equation 6; B: z′ of dst(I) obtained in Equation 6; Y: one-dimensional element value)

[Equation 8]

Y′ = (if Y = 0, Y′ = 0; if Y ≠ 0, Y′ = 255)

(Y: one-dimensional element value obtained in Equation 7)
Equation 6 polarizes the colors of the pixel information with reference to the hue and saturation of the facility plan view, equation 7 converts the two polarized colors to grayscale, and equation 8 converts the grayscale to black and white.
Referring to fig. 5 (b), the mark discrimination module 121 may generate contours connecting the edges of the white areas of the seventh plan view, approximate the jagged figures of the contours according to the Douglas-Peucker algorithm (for example, with an epsilon corresponding to 0.02 = 1/50 of the total length of the figure formed by the contour), and detect as an arc an approximated contour corresponding to a convex hull (Convex hull) whose width is within a predetermined range. For example, a contour whose convex hull has a width greater than 200 and less than 800 and whose approximation has 10 or fewer points is detected as an arc.
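An OpenCV sketch of this arc test follows; reading "width" as contour area is again our assumption.

```python
import cv2

def detect_arcs(contours):
    """Douglas-Peucker approximation (epsilon = 1/50 of the perimeter),
    then keep convex hulls of moderate size with few contour points."""
    arcs = []
    for cnt in contours:
        eps = 0.02 * cv2.arcLength(cnt, True)
        approx = cv2.approxPolyDP(cnt, eps, True)
        hull = cv2.convexHull(approx)
        if 200 < cv2.contourArea(hull) < 800 and len(approx) <= 10:
            arcs.append(cnt)
    return arcs
```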
Referring to fig. 5 (c), the mark discrimination module 121 may detect, among the rectangles detected in fig. 4 (c), the rectangles adjacent to the arcs of fig. 5 (b) as doors and the remaining rectangles as windows, and the labeling module 123 may label based on the detected information.
Referring to fig. 5 (d), the training module 125 inputs the plurality of facility plan views (fig. 1) and the pixel-level labeled data (fig. 5 (d)) to a neural network model designed based on an image segmentation algorithm to train the weights of the neural network model for deriving the correlation between the window and door categories included in the plurality of facility plan views and the labeled pixel positions, thereby generating a neural network model that discriminates the positions and categories of the windows and doors included in a facility plan view according to the correlation.
Fig. 6 is an exemplary diagram showing the result of distinguishing doors and windows in a facility plan view by a neural network model generated by the mark analysis device 100 according to one embodiment of the present invention.

Referring to fig. 6, the positions of the windows and doors included in a new plan view input to the neural network model can be detected in units of pixels, and their categories can be detected with high accuracy. Such a neural network model can be combined with the technology of Korean Patent Publication No. 10-1638378, as in fig. 7.
Fig. 7 is an exemplary diagram of a three-dimensional map obtained by three-dimensional modeling from a two-dimensional plan view using a neural network model generated by the mark analysis device 100 according to one embodiment of the present invention together with the technology of Korean Patent Publication No. 10-1638378.

Referring to fig. 7, the mark analysis device 100 according to one embodiment of the present invention generates a model that automatically distinguishes door and window marks on a plan view and labels the distinguished door and window regions in units of pixels, so that doors and windows can be accurately distinguished on the plan view; combined with the technology of Korean Patent Publication No. 10-1638378 (a method and program for three-dimensional automatic modeling based on a two-dimensional floor plan), the doors and windows of a two-dimensional plan view (fig. 7 (a)) can be distinguished more accurately, enabling efficient three-dimensional modeling (fig. 7 (b), fig. 7 (c)).
Fig. 8 is a flowchart of a mark analysis method according to one embodiment of the present invention. The steps of the mark analysis method of fig. 8 are performed by the mark analysis device 100 illustrated in fig. 2; each step is described below.

The input interface 130 acquires a plurality of facility plan views (S810). The mark discrimination module 121 detects the rectangles included in each of the plurality of facility plan views and the arcs connected to the rectangles (S820). The mark discrimination module 121 specifies window regions and door regions from the rectangles and arcs (S830). The labeling module 123 labels the pixels of the specified window regions with the window category and the pixels of the specified door regions with the door category (S840). The training module 125 inputs the plurality of facility plan views and the pixel-level labeled data to a neural network model designed based on an image segmentation algorithm to train the weights of the neural network model for deriving the correlation between the window and door categories included in the plurality of facility plan views and the labeled pixel positions, thereby generating a neural network model that discriminates the positions and categories of the windows and doors included in a facility plan view according to the correlation (S850).

Since the process by which each component performs the respective steps has been described above with reference to figs. 1 to 7, duplicate description is omitted.
The embodiments of the invention described above may be implemented by various means. For example, embodiments of the invention may be implemented in hardware, firmware (firmware), software, a combination thereof, or the like.
Where implemented in hardware, the methods according to embodiments of the invention may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), processors, controllers, microcontrollers, microprocessors, and the like.
In the case of implementation in firmware or software, the methods according to embodiments of the present invention may be implemented by modules, steps, or functions that perform the functions or operations described above. A computer program storing the software code may be stored in a computer-readable storage medium or a memory unit and driven by a processor. The memory unit may be located inside or outside the processor and may exchange data with the processor by various means known in the art.
Moreover, combinations of blocks of the block diagrams and steps of the flowchart illustrations of the invention can also be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute via the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in the block diagrams or flowchart step(s). These computer program instructions may also be stored in a computer-usable or computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-usable or computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the block diagrams' blocks or steps of the flowchart. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the block diagrams and flowchart block or blocks.
Furthermore, the blocks or steps may represent a portion of a module, segment, or code that comprises one or more executable instructions for implementing the specified logical function(s). In addition, it should be noted that in several alternative embodiments, the functions noted in the blocks or steps may occur out of the order. For example, two blocks or steps shown in succession may, in fact, be executed substantially concurrently, or the blocks or steps may sometimes be executed in the reverse order, depending upon the functionality involved.
As described above, those skilled in the art will understand that the present invention may be embodied in other specific forms without changing its technical spirit or essential characteristics. The embodiments described above are therefore to be understood as illustrative in all respects and not restrictive. The scope of the invention is indicated by the appended claims rather than by the foregoing detailed description, and all changes and modifications that come within the meaning and range of equivalency of the claims are to be embraced within the scope of the invention.
Claims (14)
1. A mark analysis device for analyzing marks included in a facility plan, comprising:
one or more memories storing instructions that cause predetermined operations to be performed; and
one or more processors operatively coupled to the one or more memories and configured to execute the instructions,
the operations performed by the processor include:
an operation of acquiring a plurality of facility plan views;
an operation of detecting a rectangle included in each of the plurality of facility plan views and an arc connected to the rectangle;
an operation of specifying window and door regions from the rectangle and the arc;
an operation of labeling pixels of the specified window region with the window category and labeling pixels of the specified door region with the door category; and
an operation of inputting the plurality of facility plan views and the data labeled in units of pixels to a neural network model designed based on a predetermined image segmentation algorithm to train the weights of the neural network model for deriving a correlation between the categories of windows and doors included in the plurality of facility plan views and the positions of the labeled pixels, thereby generating a neural network model that discriminates the positions and categories of the windows and doors included in a facility plan view from the correlation.
2. The mark analysis device of claim 1, wherein the detecting operation comprises:
an operation of turning every part of the facility plan view that is not black into white; and
an operation of detecting the rectangle and the arc from line segments formed by the black or contours connecting the edges of the white areas.
3. The mark analysis device of claim 2, wherein the detecting operation further comprises:
an operation of removing text included in the facility plan view.
4. The mark analysis device of claim 2, wherein the operation of turning into white comprises:
an operation of keeping the RGB information of pixels of the facility plan view whose RGB information is (0, 0, 0), and changing the RGB information of pixels of the facility plan view whose RGB information is not (0, 0, 0) to (255, 255, 255).
5. The mark analysis device of claim 1, wherein the operation of specifying the window and door regions comprises:
an operation of detecting, among the rectangles, a first rectangle connected to the arc as a door region; and
an operation of detecting, among the rectangles, a second rectangle not connected to the arc as a window region.
6. The mark analysis device of claim 5, wherein:
in the operation of detecting the door region, when there is a line segment connected to the first rectangle and forming a perpendicular to it, and the arc is connected to an end of the first rectangle and an end of the line segment, the first rectangle is detected as the door region.
7. The mark analysis device of claim 5, wherein the operation of specifying the window and door regions further comprises:
an operation of excluding a rectangle from the detection when the width of the rectangle is smaller than a first preset value or larger than a second preset value.
8. The mark analysis device of claim 1, wherein the labeling operation comprises:
an operation of labeling the pixels of all regions other than the windows and doors as the empty category.
9. The mark analysis device of claim 1, wherein the detecting operation comprises:
an operation of generating a first plan view in which text has been removed from the facility plan view by an OCR detection algorithm;
an operation of generating a second plan view in which pixel information of the first plan view is converted by the following equations 1 and 2;
[Equation 1]

dst(I) = round(max(0, min(α·src(I) − β, 255)))

(src(I): element values (x, y, z) of the pixel information before the change; α: 10; β: −350; dst(I): element values (x′, y′, z′) of the pixel information after the change)

[Equation 2]

Y = 0.5·R + 0.3334·G + 0.1667·B

(R: x′ of dst(I) obtained in Equation 1; G: y′ of dst(I) obtained in Equation 1; B: z′ of dst(I) obtained in Equation 1; Y: one-dimensional element value)
an operation of generating a third plan view in which only the portions of the rectangles formed from the line segments constituting the second plan view that are larger or smaller than a preset width are shown in black;
an operation of generating a fourth plan view in which the pixels of the first plan view whose hue, saturation, and value element values lie in 0 to 30, 80 to 220, and 150 to 225, respectively, are converted into white;
an operation of generating a fifth plan view in which the black areas of the third plan view and the white areas of the fourth plan view are applied to the first plan view;
an operation of generating a sixth plan view in which pixel information of the fifth plan view is transformed by the following equations 3 to 5;
[Equation 3]

dst(I) = round(max(0, min(α·src(I) − β, 255)))

(src(I): element values (x, y, z) of the pixel information before the change; α: 3; β: −350; dst(I): element values (x′, y′, z′) of the pixel information after the change)

[Equation 4]

Y = 0.5·R + 0.3334·G + 0.1667·B

(R: x′ of dst(I) obtained in Equation 3; G: y′ of dst(I) obtained in Equation 3; B: z′ of dst(I) obtained in Equation 3; Y: one-dimensional element value)

[Equation 5]

Y′ = (if Y < 40, Y′ = 0; if Y ≥ 40, Y′ = 255)

(Y: one-dimensional element value obtained in Equation 4)
and an operation of detecting the rectangle by generating contours connecting the edges of the white areas of the sixth plan view.
10. The mark analysis device of claim 1, wherein the detecting operation comprises:
an operation of generating a first plan view in which text has been removed from the facility plan view by an OCR detection algorithm;
an operation of generating a seventh plan view in which pixel information of the second plan view is transformed by the following equations 6 to 8;
[Equation 6]

dst(I) = round(max(0, min(α·src(I) − β, 255)))

(src(I): element values (x, y, z) of the pixel information before the change; α: 57; β: −12500; dst(I): element values (x′, y′, z′) of the pixel information after the change)

[Equation 7]

Y = 0.5·R + 0.3334·G + 0.1667·B

(R: x′ of dst(I) obtained in Equation 6; G: y′ of dst(I) obtained in Equation 6; B: z′ of dst(I) obtained in Equation 6; Y: one-dimensional element value)

[Equation 8]

Y′ = (if Y = 0, Y′ = 0; if Y ≠ 0, Y′ = 255)

(Y: one-dimensional element value obtained in Equation 7)
an operation of generating a contour connecting edges of the white area of the seventh plan view; and
and an operation of approximating the contours according to the Douglas-Peucker algorithm, detecting among the approximated contours a contour corresponding to a convex hull (Convex hull), and detecting as an arc the case where the width of the convex hull is within a preset range.
11. The mark analysis device of claim 1, wherein the operation of generating the neural network model comprises:
an operation of setting the plurality of facility plan views to be input to an input layer of a neural network designed according to a Mask R-CNN algorithm, and setting the categories of windows and doors included in the plurality of facility plan views and the positions of the labeled pixels to be input to an output layer, to train the weights of the neural network for deriving the correlation between the categories of windows and doors included in the plurality of facility plan views and the positions of the labeled pixels.
12. An apparatus comprising a neural network model generated by the device for analyzing marks included in a facility plan according to any one of claims 1 to 11.
13. A method for analyzing marks included in a facility plan, performed by a device for analyzing marks included in a facility plan, the method comprising:
a step of acquiring a plurality of facility plan views;
a step of detecting rectangles included in each of the facility plan views and arcs connected to the rectangles;
a step of specifying window and door regions from the detected rectangles and arcs;
a step of labeling the pixels of the specified window regions with the window category and the pixels of the specified door regions with the door category; and
a step of inputting the plurality of facility plan views and the data labeled in units of pixels into a neural network model designed based on a predetermined image segmentation algorithm, and training weighting values of the neural network model for deriving a correlation between the categories of the windows and doors included in the plurality of facility plan views and the positions of the labeled pixels, thereby generating a neural network model that discriminates, from the correlation, the positions and categories of the windows and doors included in a facility plan view.
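The labeling step of claim 13 amounts to building a per-pixel class mask. A minimal sketch, assuming the specified regions arrive as (x, y, w, h) boxes (the patent derives them from the detected rectangles and arcs; the class codes are hypothetical):

```python
import numpy as np

WINDOW, DOOR = 1, 2  # hypothetical class codes, 0 = background

def label_plan(height, width, window_regions, door_regions):
    # Label every pixel inside a specified window region with the
    # window category and every pixel inside a specified door region
    # with the door category; remaining pixels stay background.
    mask = np.zeros((height, width), dtype=np.uint8)
    for x, y, w, h in window_regions:
        mask[y:y + h, x:x + w] = WINDOW
    for x, y, w, h in door_regions:
        mask[y:y + h, x:x + w] = DOOR
    return mask
```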
14. A computer program stored on a computer-readable storage medium, the computer program causing a processor to perform the method of claim 13.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2020-0091781 | 2020-07-23 | ||
KR1020200091781A KR102208694B1 (en) | 2020-07-23 | 2020-07-23 | Apparatus and method for analyzing mark in facility floor plan |
PCT/KR2021/009480 WO2022019675A1 (en) | 2020-07-23 | 2021-07-22 | Symbol analysis device and method included in facility floor plan |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116490901A (en) | 2023-07-25
Family
ID=74239299
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202180059835.2A Withdrawn CN116490901A (en) | 2020-07-23 | 2021-07-22 | Device and method for analyzing marks included in facility plan |
Country Status (6)
Country | Link |
---|---|
US (1) | US20230222829A1 (en) |
EP (1) | EP4184351A1 (en) |
JP (1) | JP2023535084A (en) |
KR (3) | KR102208694B1 (en) |
CN (1) | CN116490901A (en) |
WO (1) | WO2022019675A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102208694B1 (en) * | 2020-07-23 | 2021-01-28 | 주식회사 어반베이스 | Apparatus and method for analyzing mark in facility floor plan |
KR20230174318A (en) | 2022-06-17 | 2023-12-28 | 단국대학교 산학협력단 | Method for calculating an approximate estimate from architectural drawings using an artificial intelligence model |
KR20230174317A (en) | 2022-06-17 | 2023-12-28 | 단국대학교 산학협력단 | Method for generating an artificial intelligence model for recognizing structural members in architectural drawings |
KR20240037716A (en) * | 2022-09-15 | 2024-03-22 | 한양대학교 산학협력단 | Method and apparatus for automatically calculating window set information using artificial neural network |
US11928395B1 (en) * | 2023-04-14 | 2024-03-12 | Hubstar International Limited | Floorplan drawing conversion and analysis for space management |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101638378B1 (en) | 2014-11-28 | 2016-07-11 | 주식회사 어반베이스 | Method and program for modeling 3-dimension structure by 2-dimension floor plan |
KR102145726B1 (en) * | 2016-03-22 | 2020-08-19 | (주) 아키드로우 | Method and apparatus for detection door image using machine learning algorithm |
KR102294421B1 (en) * | 2016-04-08 | 2021-08-26 | (주) 아키드로우 | Apparatus and Method of Processing for Interior Information |
US10346723B2 (en) * | 2016-11-01 | 2019-07-09 | Snap Inc. | Neural network for object detection in images |
EP3506211B1 (en) * | 2017-12-28 | 2021-02-24 | Dassault Systèmes | Generating 3d models representing buildings |
KR102208694B1 (en) * | 2020-07-23 | 2021-01-28 | 주식회사 어반베이스 | Apparatus and method for analyzing mark in facility floor plan |
2020
- 2020-07-23 KR KR1020200091781A patent/KR102208694B1/en active IP Right Grant

2021
- 2021-01-19 KR KR1020210007497A patent/KR20220012789A/en active IP Right Grant
- 2021-01-19 KR KR1020210007496A patent/KR20220012788A/en active IP Right Grant
- 2021-07-22 WO PCT/KR2021/009480 patent/WO2022019675A1/en active Application Filing
- 2021-07-22 JP JP2023504673A patent/JP2023535084A/en not_active Withdrawn
- 2021-07-22 CN CN202180059835.2A patent/CN116490901A/en not_active Withdrawn
- 2021-07-22 EP EP21846177.0A patent/EP4184351A1/en not_active Withdrawn

2023
- 2023-01-19 US US18/156,853 patent/US20230222829A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
KR20220012788A (en) | 2022-02-04 |
WO2022019675A1 (en) | 2022-01-27 |
KR102208694B1 (en) | 2021-01-28 |
KR20220012789A (en) | 2022-02-04 |
US20230222829A1 (en) | 2023-07-13 |
JP2023535084A (en) | 2023-08-15 |
KR102208694B9 (en) | 2022-03-11 |
EP4184351A1 (en) | 2023-05-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20230725 |