CN111210442B - Drawing image positioning and correcting method and device and electronic equipment - Google Patents


Info

Publication number
CN111210442B
CN111210442B (application CN202010002108.XA)
Authority
CN
China
Prior art keywords
sub
drawing image
image
point set
transformation matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010002108.XA
Other languages
Chinese (zh)
Other versions
CN111210442A (en)
Inventor
黄深能
胡浩
利啟东
张超
黄聿
杨超龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Bozhilin Robot Co Ltd
Original Assignee
Guangdong Bozhilin Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bozhilin Robot Co Ltd filed Critical Guangdong Bozhilin Robot Co Ltd
Priority to CN202010002108.XA priority Critical patent/CN111210442B/en
Publication of CN111210442A publication Critical patent/CN111210442A/en
Application granted granted Critical
Publication of CN111210442B publication Critical patent/CN111210442B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/755Deformable models or variational models, e.g. snakes or active contours
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Abstract

The invention relates to a drawing image positioning and correcting method and device and electronic equipment. The method comprises the following steps: calculating, with a deep learning model for image segmentation, the centroid of the target element of a first drawing image and the centroid of the target element of a second drawing image; calculating, with a ransac algorithm and according to these centroids, a transformation matrix between the first drawing image and the second drawing image; and aligning the first drawing image and the second drawing image through the transformation matrix, then performing a subtraction operation on the aligned images to obtain the difference items between them. Because positioning and correcting between drawings are performed according to a transformation matrix derived from a deep learning model, the method improves the efficiency of positioning and correcting between drawing images and avoids the low efficiency of manual proofreading and its risk of omissions.

Description

Drawing image positioning and correcting method and device and electronic equipment
Technical Field
The invention relates to the field of images, in particular to a drawing image positioning and correcting method, a drawing image positioning and correcting device and electronic equipment.
Background
In the real estate industry, drawings are modified and proofread frequently. Construction drawings are large, dense with content, and crowded with annotation lines, so manual proofreading requires considerable effort to search every region of a drawing for difference items, and when there are many drawings or the engineer is fatigued, the probability of missed detections rises sharply. Moreover, if a floor is built from a drawing that has not been fully proofread, the result differs from the original plan, causing disputes between the drafter and the builder and slowing the construction of the whole building. In general, manual proofreading is inefficient, carries a high risk of omissions, and consumes a great deal of manpower and money. In addition, in the prior art most drawings are corrected based on corner-point information; however, because drawings contain little texture information, corner matching accuracy is low and the alignment effect is poor, and corner-based correction easily falls into a local optimum, so its iterative calculation performs poorly.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a drawing image positioning and correcting method and device and an electronic device that solve the above problems.
A first aspect of the present application provides a drawing image positioning and correcting method, including:
calculating by using a deep learning model of image segmentation to obtain the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image;
calculating by adopting a ransac algorithm according to the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image to obtain a transformation matrix between the first drawing image and the second drawing image; and
aligning the first drawing image and the second drawing image through the transformation matrix, and then performing a subtraction operation on the aligned first drawing image and the aligned second drawing image to obtain the difference items between the first drawing image and the second drawing image.
Preferably, the first drawing image and the second drawing image are architectural drawing images, the target element is a shear wall, and the calculating by using the deep learning model of image segmentation to obtain the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image includes:
inputting the first drawing image into the deep learning model of the image segmentation to obtain a first shear wall image, and inputting the second drawing image into the deep learning model of the image segmentation to obtain a second shear wall image;
calculating to obtain the mass center of the shear wall in the first drawing image through the image moment of the first shear wall image, and calculating to obtain the mass center of the shear wall in the second drawing image through the image moment of the second shear wall image; and
a first set of points is formed by the centroids of all shear walls in the first shear wall image, and a second set of points is formed by the centroids of all shear walls in the second shear wall image.
Preferably, the inputting the first drawing image into the deep learning model for image segmentation to obtain a first shear wall image includes:
taking the shear wall in the first drawing image as a segmentation target of the first drawing image;
manually dividing and calibrating the first drawing image according to the shear wall in the first drawing image to obtain a first division label;
inputting the first drawing image and the first segmentation label into a deep learning neural network for training to obtain a deep learning model of image segmentation, wherein the deep learning model of image segmentation comprises a coding layer, a decoding layer and a convolutional layer which are sequentially connected; and
and inputting the first drawing image into the deep learning model for image segmentation to be segmented, wherein the obtained segmented image is a first shear wall image.
Preferably, the calculating the centroid of the shear wall in the first drawing image through the image moment of the first shear wall image includes:
by the formula

$$x_1=\frac{\sum_i\sum_j i\,V(i,j)}{\sum_i\sum_j V(i,j)},\qquad y_1=\frac{\sum_i\sum_j j\,V(i,j)}{\sum_i\sum_j V(i,j)}$$

calculating the centroid of the shear wall in the first drawing image, wherein V(i, j) is the gray value of the first shear wall image at pixel point (i, j), i is the abscissa value of the first shear wall image in the pixel coordinate system, j is the ordinate value of the first shear wall image in the pixel coordinate system, x_1 is the abscissa value of the centroid of the shear wall in the first shear wall image, and y_1 is the ordinate value of the centroid of the shear wall in the first shear wall image.
Preferably, the obtaining of the transformation matrix between the first drawing image and the second drawing image by calculating the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image according to a ransac algorithm includes:
dividing the first point set into four sub-point sets, and dividing the second point set into four sub-point sets, wherein each sub-point set of the first point set corresponds to one sub-point set of the second point set, the four sub-point sets of the first point set are respectively a first sub-point set, a second sub-point set, a third sub-point set and a fourth sub-point set, and the four sub-point sets of the second point set are respectively a fifth sub-point set, a sixth sub-point set, a seventh sub-point set and an eighth sub-point set;
obtaining four sub-transformation matrixes and a weight coefficient of each sub-transformation matrix by using a ransac algorithm according to each sub-point set of the first point set and the corresponding sub-point set of the second point set in sequence; and
calculating, according to the four sub-transformation matrices and the weight coefficient of each sub-transformation matrix, a transformation matrix between the first drawing image and the second drawing image.
Preferably, the obtaining four sub-transformation matrices and a weight coefficient of each sub-transformation matrix according to each sub-point set of the first point set and a sub-point set of the second point set corresponding to the first point set by using a ransac algorithm in sequence includes:
a) Calculating a first sub-transformation matrix between the first sub-point set and the fifth sub-point set under the current iteration times, and taking the first sub-transformation matrix as the transformation matrix;
b) According to the formula B_pre = A_1·H_1, calculating the set of points obtained by mapping the first sub-point set to its positions in the second drawing image under the current iteration number, to obtain a first target point set, wherein B_pre is the first target point set, A_1 is the first sub-point set, and H_1 is the first sub-transformation matrix;
c) For each point in the first target point set, sequentially calculating the point in the fifth sub-point set closest to it and the distance between them, and defining that distance as the distance of the point; if the distance of the point is smaller than a preset distance threshold, retaining the point in the first target point set, otherwise deleting it from the first target point set and adding a point randomly selected from the fifth sub-point set in its place; then accumulating the distances of all points in the first target point set to obtain the distance sum under the current iteration number;
d) If the distance sum under the current iteration times is smaller than the distance sum under the previous iteration times, the preset distance threshold value is reduced, otherwise, the preset distance threshold value is increased, and the adjusted preset distance threshold value is used for the next iteration;
e) Repeating the steps (a) - (d) to obtain the distance sum corresponding to all points in the first target point set until the specified iteration number is reached or the distance sum under the current iteration number is smaller than a specified value, and taking the distance sum of all points in the first target point set as the distance sum of the first sub-transformation matrix;
f) Respectively calculating, by means of steps (a)-(e), the distance sum of a second sub-transformation matrix between the second sub-point set and the sixth sub-point set, the distance sum of a third sub-transformation matrix between the third sub-point set and the seventh sub-point set, and the distance sum of a fourth sub-transformation matrix between the fourth sub-point set and the eighth sub-point set, and calculating the weight coefficient of the first sub-transformation matrix according to the formula α_1 = 1 - D_all1/max(D_all1, D_all2, D_all3, D_all4), the weight coefficient of the second sub-transformation matrix according to the formula α_2 = 1 - D_all2/max(D_all1, D_all2, D_all3, D_all4), the weight coefficient of the third sub-transformation matrix according to the formula α_3 = 1 - D_all3/max(D_all1, D_all2, D_all3, D_all4), and the weight coefficient of the fourth sub-transformation matrix according to the formula α_4 = 1 - D_all4/max(D_all1, D_all2, D_all3, D_all4).
Preferably, the calculating, according to the four sub-transformation matrices and the weight coefficient of each sub-transformation matrix, the transformation matrix between the first drawing image and the second drawing image includes:
according to the formula H = α_1·H_1 + α_2·H_2 + α_3·H_3 + α_4·H_4, calculating the transformation matrix between the first drawing image and the second drawing image, wherein H_1, H_2, H_3 and H_4 are respectively the first, second, third and fourth sub-transformation matrices, and α_1, α_2, α_3 and α_4 are respectively their weight coefficients; and
substituting the transformation matrix into step (a) and repeating steps (a)-(f) until the specified iteration number is reached or the distance sum of all the sub-transformation matrices under the current iteration number is less than the specified value.
Preferably, the method further comprises:
according to the formula
Figure BDA0002353864990000051
Adjusting the preset distance threshold, wherein D all Is the sum of the distances at the current iteration number, D all_last For the sum of distances at the previous iteration number, D thesh And the preset distance threshold value is obtained.
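The adjustment rule of this claim can be sketched in a few lines. Since the patent's exact formula survives only as an image, the multiplicative factors `shrink` and `grow` are illustrative assumptions, and `adjust_threshold` is a hypothetical helper name, not code from the patent:

```python
def adjust_threshold(d_thresh, d_all, d_all_last, shrink=0.9, grow=1.1):
    # Tighten the inlier threshold when the current distance sum improved
    # on the previous iteration; loosen it otherwise.  The factors 0.9 and
    # 1.1 are assumed here: the patent only states the direction of change.
    if d_all < d_all_last:
        return d_thresh * shrink
    return d_thresh * grow
```

Loosening the threshold after a worsening iteration is what lets the ransac loop escape a local optimum, per the summary at the end of the description.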
A second aspect of the present application provides a drawing image positioning and collating apparatus, the apparatus comprising:
the centroid determining module is used for calculating the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image by using the deep learning model of image segmentation;
the transformation matrix calculation module is used for calculating a transformation matrix between the first drawing image and the second drawing image by adopting a ransac algorithm according to the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image; and
the checking module is used for aligning the first drawing image with the second drawing image through the transformation matrix and then performing a subtraction operation on the aligned first drawing image and the aligned second drawing image to obtain the difference items between the first drawing image and the second drawing image.
A third aspect of the present application provides an electronic device, which includes a processor and a memory, where the processor is configured to implement the above method for positioning and calibrating a drawing image when executing a computer program stored in the memory.
According to the above scheme, the centroids of the target elements of two drawing images are calculated with a deep learning model for image segmentation, a transformation matrix between the two drawing images is calculated from those centroids with a ransac algorithm, and positioning and proofreading between the drawings are performed on the basis of this transformation matrix, which improves the efficiency of positioning and proofreading between drawing images and removes the low efficiency of manual proofreading, its risk of omissions, and its heavy consumption of manpower and money. Meanwhile, when the transformation matrix between two drawings is solved, a multi-task iterative solution is adopted and the preset distance threshold of the ransac algorithm is dynamically adjusted, which alleviates the poor iterative calculation caused by the solution becoming trapped in a local optimum.
Drawings
Fig. 1 is a flowchart of a method for positioning and calibrating a drawing image according to an embodiment of the present invention.
FIG. 2 is a block diagram of an apparatus for positioning and calibrating drawing images according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an electronic device according to an embodiment of the invention.
Description of the main elements
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention, and the described embodiments are merely some of the embodiments of the present invention, rather than all of them. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without creative effort, shall fall within the protection scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Preferably, the drawing image positioning and correcting method is applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The electronic device may be a desktop computer, a notebook computer, a tablet computer, a cloud server, or other computing device. The device can be in man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
Example 1
FIG. 1 is a flowchart of a method for positioning and calibrating a drawing image according to an embodiment of the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
Referring to fig. 1, the drawing image positioning and correcting method specifically includes the following steps:
step S1, calculating by using a deep learning model of image segmentation to obtain the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image.
In this embodiment, the first drawing image and the second drawing image are both architectural drawing images, and the target element is a shear wall. The step of calculating and obtaining the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image by using the deep learning model of image segmentation comprises the following steps:
(S11) inputting the first drawing image into the deep learning model for image segmentation to obtain a first shear wall image, and inputting the second drawing image into the deep learning model for image segmentation to obtain a second shear wall image;
(S12) calculating the mass center of the shear wall in the first drawing image according to the image moment of the first shear wall image, and calculating the mass center of the shear wall in the second drawing image according to the image moment of the second shear wall image; and
(S13) forming a first point set by the mass centers of all the shear walls in the first shear wall image, and forming a second point set by the mass centers of all the shear walls in the second shear wall image.
In this embodiment, the inputting the first drawing image into the deep learning model for image segmentation to obtain a first shear wall image includes: taking the shear wall in the first drawing image as a segmentation target of the first drawing image; manually dividing and calibrating the first drawing image according to the shear wall in the first drawing image to obtain a first division label; inputting the first drawing image and the first segmentation label into a deep learning neural network for training to obtain a deep learning model of image segmentation, wherein the deep learning model of image segmentation comprises a coding layer, a decoding layer and a convolutional layer which are sequentially connected; and inputting the first drawing image into the deep learning model for image segmentation to be segmented, wherein the obtained segmented image is a first shear wall image.
In this embodiment, the inputting the second drawing image into the deep learning model for image segmentation to obtain a second shear wall image includes: taking the shear wall in the second drawing image as a segmentation target of the second drawing image; manually dividing and calibrating the second drawing image according to the shear wall in the second drawing image to obtain a second division label; inputting the second drawing image and the second segmentation label into a deep learning neural network for training to obtain a deep learning model of the image segmentation; and inputting the second drawing image into the deep learning model for image segmentation to be segmented, and obtaining a segmented image which is a second shear wall image.
In the present embodiment, the image moment is a 1 st order moment. The step of calculating the centroid of the shear wall in the first drawing image through the image moment of the first shear wall image comprises: the centroid of the first drawing image is calculated by the following formula (1).
$$x_1=\frac{\sum_i\sum_j i\,V(i,j)}{\sum_i\sum_j V(i,j)},\qquad y_1=\frac{\sum_i\sum_j j\,V(i,j)}{\sum_i\sum_j V(i,j)}\tag{1}$$

wherein V(i, j) is the gray value of the first shear wall image at pixel point (i, j), i is the abscissa value of the first shear wall image in the pixel coordinate system, j is the ordinate value of the first shear wall image in the pixel coordinate system, x_1 is the abscissa value of the centroid of the shear wall in the first shear wall image, and y_1 is the ordinate value of the centroid of the shear wall in the first shear wall image.
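The first-moment centroid of formula (1) can be sketched in a few lines of numpy. `centroid` is an illustrative helper, not code from the patent, and mapping i to array rows and j to array columns is an assumption about the pixel-coordinate convention:

```python
import numpy as np

def centroid(mask):
    # Intensity-weighted average of pixel indices: the 1st-order image
    # moments of the shear-wall image V(i, j), normalized by the 0th-order
    # moment (the total gray mass), as in formula (1).
    V = np.asarray(mask, dtype=float)
    i = np.arange(V.shape[0])[:, None]  # abscissa values
    j = np.arange(V.shape[1])[None, :]  # ordinate values
    total = V.sum()
    x1 = (i * V).sum() / total
    y1 = (j * V).sum() / total
    return x1, y1
```

Applied to each segmented shear-wall region in turn, these centroids form the first and second point sets of step (S13).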
In this embodiment, the obtaining of the centroid of the shear wall in the second drawing image by calculating the image moment of the second shear wall image includes: calculating the centroid of the second drawing image through the following formula (2).

$$x_2=\frac{\sum_i\sum_j i\,U(i,j)}{\sum_i\sum_j U(i,j)},\qquad y_2=\frac{\sum_i\sum_j j\,U(i,j)}{\sum_i\sum_j U(i,j)}\tag{2}$$

wherein U(i, j) is the gray value of the second shear wall image at pixel point (i, j), i is the abscissa value of the second shear wall image in the pixel coordinate system, j is the ordinate value of the second shear wall image in the pixel coordinate system, x_2 is the abscissa value of the centroid of the shear wall in the second shear wall image, and y_2 is the ordinate value of the centroid of the shear wall in the second shear wall image.
And S2, calculating by adopting a ransac algorithm according to the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image to obtain a transformation matrix between the first drawing image and the second drawing image.
In this embodiment, the obtaining of the transformation matrix between the first drawing image and the second drawing image by calculating the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image by using a ransac algorithm includes:
(S21) dividing the first set of points into four sets of sub-points, and dividing the second set of points into four sets of sub-points, wherein each set of sub-points of the first set of points corresponds to one set of sub-points of the second set of points;
(S22) obtaining four sub-transformation matrixes and a weight coefficient of each sub-transformation matrix by utilizing a ransac algorithm according to each sub-point set of the first point set and the corresponding sub-point set of the second point set in sequence; and
and (S23) calculating to obtain a transformation matrix between the first drawing image and the second drawing image according to the four sub-transformation matrixes and the weight coefficient of each transformation matrix.
In a specific embodiment, the dividing the first point set into four sub-point sets and the second point set into four sub-point sets, wherein each sub-point set of the first point set corresponds to one sub-point set of the second point set, includes: randomly dividing the first point set into a first sub-point set, a second sub-point set, a third sub-point set and a fourth sub-point set, and randomly dividing the second point set into four parts, namely a fifth sub-point set, a sixth sub-point set, a seventh sub-point set and an eighth sub-point set, wherein each sub-point set of the first point set contains the same number of points as its corresponding sub-point set of the second point set; the first sub-point set corresponds to the fifth sub-point set, the second sub-point set corresponds to the sixth sub-point set, the third sub-point set corresponds to the seventh sub-point set, and the fourth sub-point set corresponds to the eighth sub-point set.
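The random partition of step (S21) can be sketched as follows. This is a minimal numpy sketch that assumes the point count is divisible by four (the patent does not say how a remainder is handled), and `split_into_four` is an illustrative name:

```python
import numpy as np

def split_into_four(points, rng):
    # Randomly partition a centroid point set (an N x 2 array) into four
    # equally sized sub-point sets by shuffling the row indices.
    points = np.asarray(points)
    idx = rng.permutation(len(points))
    return [points[part] for part in np.array_split(idx, 4)]
```

The k-th sub-point set of the first point set is then paired by position with the k-th sub-point set of the second, which is how the first/fifth, second/sixth, third/seventh and fourth/eighth pairings above arise.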
In this embodiment, the sequentially obtaining four sub-transformation matrices and a weight coefficient of each sub-transformation matrix according to each sub-point set of the first point set and a sub-point set of the second point set corresponding to the first point set by using a ransac algorithm includes:
a) Calculating a first sub-transformation matrix between the first sub-point set and the fifth sub-point set under the current iteration times, and taking the first sub-transformation matrix as the transformation matrix;
b) According to the formula B_pre = A_1·H_1, calculating the set of points obtained by mapping the first sub-point set to its positions in the second drawing image under the current iteration number, to obtain a first target point set, wherein B_pre is the first target point set, A_1 is the first sub-point set, and H_1 is the first sub-transformation matrix;
c) For each point in the first target point set, sequentially calculating the point in the fifth sub-point set closest to it and the distance between them, and defining that distance as the distance of the point; if the distance of the point is smaller than the preset distance threshold, retaining the point in the first target point set, otherwise deleting it from the first target point set and adding a point randomly selected from the fifth sub-point set in its place; then accumulating the distances of all points in the first target point set to obtain the distance sum under the current iteration number;
d) If the distance sum under the current iteration number is smaller than the distance sum under the previous iteration number, reducing the preset distance threshold, otherwise increasing it, and using the adjusted preset distance threshold for the next iteration; the preset distance threshold is adjusted according to a formula (reproduced in the original only as an image) that decreases the threshold when D_all < D_all_last and increases it otherwise, wherein D_all is the distance sum under the current iteration number, D_all_last is the distance sum under the previous iteration number, and D_thresh is the preset distance threshold;
e) Repeating the steps (a) - (d) to obtain the distance sum corresponding to all points in the first target point set until the specified iteration number is reached or the distance sum under the current iteration number is smaller than a specified value, and taking the distance sum of all points in the first target point set as the distance sum of the first sub-transformation matrix;
f) Respectively calculating, by means of steps (a)-(e), the distance sum of a second sub-transformation matrix between the second sub-point set and the sixth sub-point set, the distance sum of a third sub-transformation matrix between the third sub-point set and the seventh sub-point set, and the distance sum of a fourth sub-transformation matrix between the fourth sub-point set and the eighth sub-point set, and calculating the weight coefficient of the first sub-transformation matrix according to the formula α_1 = 1 - D_all1/max(D_all1, D_all2, D_all3, D_all4), the weight coefficient of the second sub-transformation matrix according to the formula α_2 = 1 - D_all2/max(D_all1, D_all2, D_all3, D_all4), the weight coefficient of the third sub-transformation matrix according to the formula α_3 = 1 - D_all3/max(D_all1, D_all2, D_all3, D_all4), and the weight coefficient of the fourth sub-transformation matrix according to the formula α_4 = 1 - D_all4/max(D_all1, D_all2, D_all3, D_all4).
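Steps (b) and (c) above can be sketched with numpy. `map_points` and `distance_sum` are illustrative helpers: representing the centroids as homogeneous row vectors for B_pre = A_1·H_1 is an assumption about the convention, and the outlier handling is simplified (points beyond the threshold are dropped from the sum rather than replaced by random points as in the full step):

```python
import numpy as np

def map_points(A, H):
    # Step (b): map a sub-point set into the second drawing image,
    # B_pre = A1 . H1, using homogeneous row-vector coordinates.
    pts = np.hstack([np.asarray(A, dtype=float), np.ones((len(A), 1))])
    mapped = pts @ H
    return mapped[:, :2] / mapped[:, 2:3]

def distance_sum(B_pre, B, d_thresh):
    # Step (c): for each mapped point, the distance to its nearest
    # neighbour in the corresponding sub-point set; distances at or above
    # the preset threshold are treated as outliers and excluded here.
    B = np.asarray(B, dtype=float)
    d = np.linalg.norm(B_pre[:, None, :] - B[None, :, :], axis=2).min(axis=1)
    return float(d[d < d_thresh].sum())
```

Iterating these two steps while adjusting `d_thresh` as in step (d) yields the distance sum of each sub-transformation matrix used in step (f).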
In this embodiment, the calculating of the transformation matrix between the first drawing image and the second drawing image according to the four sub-transformation matrices and the weight coefficient of each sub-transformation matrix includes: calculating the transformation matrix between the first drawing image and the second drawing image according to the formula H = α1·H1 + α2·H2 + α3·H3 + α4·H4, wherein H1, H2, H3 and H4 are respectively the first sub-transformation matrix, the second sub-transformation matrix, the third sub-transformation matrix and the fourth sub-transformation matrix, α1 is the weight coefficient of the first sub-transformation matrix, α2 is the weight coefficient of the second sub-transformation matrix, α3 is the weight coefficient of the third sub-transformation matrix, and α4 is the weight coefficient of the fourth sub-transformation matrix; and substituting the transformation matrix into step (a) and repeating steps (a)-(f) until the specified iteration count is reached or the distance sums of all the sub-transformation matrices at the current iteration count are less than the specified value.
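The weighted fusion of the four sub-transformation matrices can be illustrated as follows; `combine_subtransforms` is a hypothetical helper name, and note that by the patent's weight formula the weights do not sum to one and the worst-performing sub-matrix receives weight zero.

```python
import numpy as np

def combine_subtransforms(Hs, d_sums):
    """Weights per the patent: alpha_k = 1 - D_all_k / max(D_all_1..4),
    then H = sum(alpha_k * H_k) over the four sub-transformation matrices."""
    d = np.asarray(d_sums, dtype=float)
    alphas = 1.0 - d / d.max()
    H = sum(a * Hk for a, Hk in zip(alphas, Hs))
    return H, alphas
```

The combined H is then fed back into step (a) for the next round of multi-task iteration.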
And S3, aligning the first drawing image and the second drawing image through the transformation matrix, and performing subtraction operation on the aligned first drawing image and the aligned second drawing image to obtain a difference item between the first drawing image and the second drawing image.
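Step S3, warping one drawing into the other's frame and then subtracting, can be sketched with plain NumPy. A production system would more likely use OpenCV's `warpPerspective` and `absdiff`; the nearest-neighbour inverse warp below is only an illustration.

```python
import numpy as np

def align_and_diff(img1, img2, H):
    """Warp img1 into img2's frame with a 3x3 transform H acting on
    (x, y, 1) pixel coordinates, then subtract to expose difference items."""
    h, w = img2.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ coords                  # back-project each output pixel
    src = np.rint(src[:2] / src[2]).astype(int)      # dehomogenise, nearest neighbour
    valid = (0 <= src[0]) & (src[0] < img1.shape[1]) & \
            (0 <= src[1]) & (src[1] < img1.shape[0])
    aligned = np.zeros_like(img2)
    aligned[ys.ravel()[valid], xs.ravel()[valid]] = img1[src[1][valid], src[0][valid]]
    diff = np.abs(aligned.astype(int) - img2.astype(int)).astype(img2.dtype)
    return aligned, diff
```

Non-zero regions of `diff` are the candidate difference items to display to the proofreader.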
In this embodiment, the method further includes: and displaying the aligned first drawing image and the aligned second drawing image, and the difference item between the first drawing image and the second drawing image.
According to the scheme, the centroid of the target elements of the two drawing images is calculated according to the deep learning model for image segmentation, the transformation matrix between the two drawing images is calculated according to the target centroid of the two drawing images by adopting a ransac algorithm, and positioning and proofreading between the drawings are carried out based on the transformation matrix of the two drawing images, so that the efficiency of positioning and proofreading between the drawing images is improved, and the problems of low manual proofreading efficiency, risk of proofreading omission and consumption of a large amount of manpower and financial resources are solved. Meanwhile, when the transformation matrix between two drawings is solved, a multi-task iteration solving method is adopted and the range of the preset distance threshold value in the ransac algorithm is dynamically adjusted, so that the problem of poor iterative calculation effect caused by the fact that a local optimal solution is trapped in the solving process of the transformation matrix is solved.
Example 2
FIG. 2 is a block diagram of a drawing image positioning and correcting apparatus 30 according to an embodiment of the present invention.
In some embodiments, the drawing image positioning and correction device 30 is implemented in an electronic device. The drawing image positioning and collating device 30 may include a plurality of functional modules composed of program code segments. The program code for each of the program segments in the drawing image positioning and correcting apparatus 30 may be stored in a memory and executed by at least one processor to perform the drawing image positioning and correcting functions.
In this embodiment, the drawing image positioning and correcting device 30 may be divided into a plurality of functional modules according to the functions it performs. Referring to FIG. 2, the drawing image positioning and correcting apparatus 30 may include a centroid determining module 301, a transformation matrix calculating module 302, a correcting module 303, and a display module 304. A module referred to herein is a series of computer program segments that is stored in the memory and can be executed by at least one processor to perform a fixed function. The functions of the modules are described in detail in the following paragraphs.
The centroid determining module 301 obtains the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image by using the deep learning model of image segmentation.
In this embodiment, the first drawing image and the second drawing image are both architectural drawing images, and the target element is a shear wall. The centroid determining module 301 obtains the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image by using the deep learning model of image segmentation, including:
(S11) inputting the first drawing image into the deep learning model for image segmentation to obtain a first shear wall image, and inputting the second drawing image into the deep learning model for image segmentation to obtain a second shear wall image;
(S12) calculating the mass center of the shear wall in the first drawing image according to the image moment of the first shear wall image, and calculating the mass center of the shear wall in the second drawing image according to the image moment of the second shear wall image; and
(S13) forming a first point set by the mass centers of all the shear walls in the first shear wall image, and forming a second point set by the mass centers of all the shear walls in the second shear wall image.
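As an illustrative sketch (not the patent's code), the point-set construction of (S13) can be done by extracting connected components from the binary segmentation mask and taking one centroid per shear wall. Here the centroid is the unweighted pixel mean, which coincides with the 1st-order-moment centroid for a binary mask; the BFS and 4-connectivity are assumptions.

```python
import numpy as np
from collections import deque

def wall_centroids(mask):
    """Split a binary shear-wall mask into connected components
    (4-connectivity, BFS) and return one (x, y) centroid per wall."""
    seen = np.zeros(mask.shape, dtype=bool)
    centroids = []
    for j0, i0 in zip(*np.nonzero(mask)):
        if seen[j0, i0]:
            continue
        q, pix = deque([(j0, i0)]), []
        seen[j0, i0] = True
        while q:                                  # flood-fill one wall
            j, i = q.popleft()
            pix.append((j, i))
            for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nj, ni = j + dj, i + di
                if 0 <= nj < mask.shape[0] and 0 <= ni < mask.shape[1] \
                        and mask[nj, ni] and not seen[nj, ni]:
                    seen[nj, ni] = True
                    q.append((nj, ni))
        pix = np.array(pix, dtype=float)
        centroids.append((pix[:, 1].mean(), pix[:, 0].mean()))  # (x, y)
    return np.array(centroids)
```

Applying this to the first and second shear wall images yields the first and second point sets.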
In this embodiment, the inputting of the first drawing image into the deep learning model for image segmentation to obtain a first shear wall image includes: taking the shear wall in the first drawing image as the segmentation target of the first drawing image; manually segmenting and annotating the first drawing image according to the shear wall in the first drawing image to obtain a first segmentation label; inputting the first drawing image and the first segmentation label into a deep learning neural network for training to obtain the deep learning model for image segmentation, wherein the deep learning model for image segmentation includes an encoding layer, a decoding layer and a convolutional layer connected in sequence; and inputting the first drawing image into the deep learning model for image segmentation, the resulting segmented image being the first shear wall image.
In this embodiment, the inputting of the second drawing image into the deep learning model for image segmentation to obtain a second shear wall image includes: taking the shear wall in the second drawing image as the segmentation target of the second drawing image; manually segmenting and annotating the second drawing image according to the shear wall in the second drawing image to obtain a second segmentation label; inputting the second drawing image and the second segmentation label into a deep learning neural network for training to obtain the deep learning model for image segmentation; and inputting the second drawing image into the deep learning model for image segmentation, the resulting segmented image being the second shear wall image.
In the present embodiment, the image moment is a 1st-order moment. Calculating the centroid of the shear wall in the first drawing image from the image moment of the first shear wall image includes: calculating the centroid of the first drawing image through the following formula (1).
x1 = Σᵢ Σⱼ i·V(i, j) / Σᵢ Σⱼ V(i, j),  y1 = Σᵢ Σⱼ j·V(i, j) / Σᵢ Σⱼ V(i, j)    (1)
Wherein V(i, j) is the gray value of the first shear wall image at pixel (i, j), i is the abscissa value of the first shear wall image in the pixel coordinate system, j is the ordinate value of the first shear wall image in the pixel coordinate system, x1 is the abscissa value of the centroid of the shear wall in the first shear wall image, and y1 is the ordinate value of the centroid of the shear wall in the first shear wall image.
In this embodiment, the calculating of the centroid of the shear wall in the second drawing image from the image moment of the second shear wall image includes: calculating the centroid of the second drawing image through the following formula (2).
x2 = Σᵢ Σⱼ i·U(i, j) / Σᵢ Σⱼ U(i, j),  y2 = Σᵢ Σⱼ j·U(i, j) / Σᵢ Σⱼ U(i, j)    (2)
Wherein U(i, j) is the gray value of the second shear wall image at pixel (i, j), i is the abscissa value of the second shear wall image in the pixel coordinate system, j is the ordinate value of the second shear wall image in the pixel coordinate system, x2 is the abscissa value of the centroid of the shear wall in the second shear wall image, and y2 is the ordinate value of the centroid of the shear wall in the second shear wall image.
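The grey-weighted moment centroid of formulas (1) and (2) can be computed as follows; this is a minimal sketch, and `shear_wall_centroid` is a hypothetical helper name.

```python
import numpy as np

def shear_wall_centroid(wall_img):
    """Centroid from 0th- and 1st-order image moments: the abscissa is
    sum(i * V(i, j)) / sum(V(i, j)) and the ordinate sum(j * V) / sum(V),
    with i the pixel column (abscissa) and j the pixel row (ordinate)."""
    js, is_ = np.nonzero(wall_img)            # NumPy indexes as (row j, col i)
    v = wall_img[js, is_].astype(float)
    m00 = v.sum()                             # 0th-order moment
    return (is_ * v).sum() / m00, (js * v).sum() / m00
```

One such centroid per segmented wall populates the first and second point sets.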
The transformation matrix calculation module 302 calculates a transformation matrix between the first drawing image and the second drawing image by using a ransac algorithm according to the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image.
In this embodiment, the transformation matrix calculating module 302 calculating the transformation matrix between the first drawing image and the second drawing image by using a ransac algorithm according to the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image includes:
(S21) dividing the first set of points into four sets of sub-points, and dividing the second set of points into four sets of sub-points, wherein each set of sub-points of the first set of points corresponds to one set of sub-points of the second set of points;
(S22) obtaining four sub-transformation matrixes and a weight coefficient of each sub-transformation matrix by utilizing a ransac algorithm according to each sub-point set of the first point set and the corresponding sub-point set of the second point set in sequence; and
(S23) calculating a transformation matrix between the first drawing image and the second drawing image according to the four sub-transformation matrices and the weight coefficient of each sub-transformation matrix.
In a specific embodiment, the dividing of the first point set into four sub-point sets and the second point set into four sub-point sets, where each sub-point set of the first point set corresponds to one sub-point set of the second point set, includes: randomly dividing the first point set into a first sub-point set, a second sub-point set, a third sub-point set and a fourth sub-point set, and randomly dividing the second point set into four parts, namely a fifth sub-point set, a sixth sub-point set, a seventh sub-point set and an eighth sub-point set, wherein the number of points in each sub-point set of the first point set is the same as the number of points in the corresponding sub-point set of the second point set; the first sub-point set corresponds to the fifth sub-point set, the second sub-point set corresponds to the sixth sub-point set, the third sub-point set corresponds to the seventh sub-point set, and the fourth sub-point set corresponds to the eighth sub-point set.
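The random four-way partition above can be sketched as follows; it is applied separately to the first and the second point set, and the even-divisibility assumption is for illustration only.

```python
import numpy as np

def split_into_four(points, rng):
    """Randomly divide one centroid point set into four sub-point sets.
    Assumes len(points) is divisible by 4 for equal-sized parts."""
    idx = rng.permutation(len(points))
    return [points[chunk] for chunk in np.array_split(idx, 4)]
```

Each sub-point set of the first point set is then paired with the like-indexed sub-point set of the second point set for the per-task RANSAC fits.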
In this embodiment, the sequentially obtaining four sub-transformation matrices and a weight coefficient of each sub-transformation matrix according to each sub-point set of the first point set and a sub-point set of the second point set corresponding to the first point set by using a ransac algorithm includes:
a) Calculating a first sub-transformation matrix between the first sub-point set and the fifth sub-point set under the current iteration times, and taking the first sub-transformation matrix as the transformation matrix;
b) According to the formula B_pre = A1·H1, calculating the point set of positions of the first sub-point set mapped to the second drawing image at the current iteration count to obtain a first target point set, wherein B_pre is the first target point set, A1 is the first sub-point set, and H1 is the first sub-transformation matrix;
c) For each point in the first target point set, sequentially finding the point in the fifth sub-point set closest to it and the distance between the two, and defining that distance as the distance of the point; if the distance of the point is smaller than the preset distance threshold, the point is kept in the first target point set, otherwise the point is deleted from the first target point set and a point randomly selected from the fifth sub-point set is added to the first target point set; the distances of all points in the first target point set are accumulated to obtain the distance sum at the current iteration count;
d) If the distance sum at the current iteration count is smaller than the distance sum at the previous iteration count, the preset distance threshold is reduced; otherwise, the preset distance threshold is increased, and the adjusted preset distance threshold is used for the next iteration. The preset distance threshold is adjusted according to a formula relating D_all, D_all_last and D_thresh (the formula is reproduced only as an image in the source), wherein D_all is the distance sum at the current iteration count, D_all_last is the distance sum at the previous iteration count, and D_thresh is the preset distance threshold;
e) Repeating the steps (a) - (d) to obtain the distance sum corresponding to all points in the first target point set until the specified iteration number is reached or the distance sum under the current iteration number is smaller than a specified value, and taking the distance sum of all points in the first target point set as the distance sum of the first sub-transformation matrix;
f) Using steps (a)-(e), respectively calculating the distance sum of a second sub-transformation matrix between the second sub-point set and the sixth sub-point set, the distance sum of a third sub-transformation matrix between the third sub-point set and the seventh sub-point set, and the distance sum of a fourth sub-transformation matrix between the fourth sub-point set and the eighth sub-point set; then calculating the weight coefficient of the first sub-transformation matrix according to the formula α1 = 1 - D_all1 / max(D_all1, D_all2, D_all3, D_all4), the weight coefficient of the second sub-transformation matrix according to the formula α2 = 1 - D_all2 / max(D_all1, D_all2, D_all3, D_all4), the weight coefficient of the third sub-transformation matrix according to the formula α3 = 1 - D_all3 / max(D_all1, D_all2, D_all3, D_all4), and the weight coefficient of the fourth sub-transformation matrix according to the formula α4 = 1 - D_all4 / max(D_all1, D_all2, D_all3, D_all4).
In this embodiment, the calculating of the transformation matrix between the first drawing image and the second drawing image according to the four sub-transformation matrices and the weight coefficient of each sub-transformation matrix includes: calculating the transformation matrix between the first drawing image and the second drawing image according to the formula H = α1·H1 + α2·H2 + α3·H3 + α4·H4, wherein H1, H2, H3 and H4 are respectively the first sub-transformation matrix, the second sub-transformation matrix, the third sub-transformation matrix and the fourth sub-transformation matrix, α1 is the weight coefficient of the first sub-transformation matrix, α2 is the weight coefficient of the second sub-transformation matrix, α3 is the weight coefficient of the third sub-transformation matrix, and α4 is the weight coefficient of the fourth sub-transformation matrix; and substituting the transformation matrix into step (a) and repeating steps (a)-(f) until the specified iteration count is reached or the distance sums of all the sub-transformation matrices at the current iteration count are less than the specified value.
The correcting module 303 aligns the first drawing image and the second drawing image through the transformation matrix, and performs a subtraction operation on the aligned first drawing image and second drawing image to obtain the difference items between the first drawing image and the second drawing image.
In this embodiment, the display module 304 is configured to display the aligned first drawing image and the aligned second drawing image, and a difference item between the first drawing image and the second drawing image.
According to the scheme, the centroid of the target elements of the two drawing images is calculated according to the deep learning model for image segmentation, the transformation matrix between the two drawing images is calculated according to the target centroid of the two drawing images by adopting a ransac algorithm, and positioning and proofreading between the drawings are carried out based on the transformation matrix of the two drawing images, so that the efficiency of positioning and proofreading between the drawing images is improved, and the problems of low manual proofreading efficiency, risk of proofreading omission and consumption of a large amount of manpower and financial resources are solved. Meanwhile, when the transformation matrix between two drawings is solved, a multi-task iteration solving method is adopted and the range of the preset distance threshold value in the ransac algorithm is dynamically adjusted, so that the problem of poor iterative calculation effect caused by the fact that a local optimal solution is trapped in the solving process of the transformation matrix is solved.
Example 3
Fig. 3 is a schematic diagram of an electronic device 6 according to an embodiment of the invention.
The electronic device 6 comprises a memory 61, a processor 62 and a computer program 63 stored in the memory 61 and executable on the processor 62. The processor 62, when executing the computer program 63, implements the steps in the above-described embodiment of the drawing image positioning and correcting method, such as the steps S1 to S3 shown in fig. 1. Alternatively, the processor 62 executes the computer program 63 to implement the functions of the modules/units in the above-mentioned embodiment of the drawing image positioning and checking apparatus, such as the modules 301 to 304 in fig. 2.
Illustratively, the computer program 63 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 62 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 63 in the electronic device 6. For example, the computer program 63 may be divided into the centroid determining module 301, the transformation matrix calculating module 302, the correcting module 303 and the display module 304 in FIG. 2; the specific functions of each module are described in embodiment 2.
In this embodiment, the electronic device 6 may be a server, a computer, or the like. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the electronic device 6 and does not constitute a limitation of the electronic device 6, which may include more or fewer components than those shown, or some components may be combined, or different components may be used; for example, the electronic device 6 may further include an input-output device, a network access device, a bus, etc.
The Processor 62 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor 62 may be any conventional processor or the like, the processor 62 being the control center for the electronic device 6, with various interfaces and lines connecting the various parts of the overall electronic device 6.
The memory 61 may be used for storing the computer program 63 and/or modules/units, and the processor 62 may implement various functions of the electronic device 6 by running or executing the computer programs and/or modules/units stored in the memory 61 and calling data stored in the memory 61. The memory 61 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the electronic device 6, and the like. In addition, the memory 61 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid-state storage device.
The integrated modules/units of the electronic device 6, if implemented in the form of software functional modules and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, may implement the steps of the above-described method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the several embodiments provided in the present invention, it should be understood that the disclosed electronic device and method may be implemented in other manners. For example, the above-described embodiments of the electronic device are merely illustrative, and for example, the division of the modules is only one logical functional division, and there may be other divisions when the actual implementation is performed.
In addition, each functional module in each embodiment of the present invention may be integrated into the same processing module, or each module may exist alone physically, or two or more modules may be integrated into the same module. The integrated module can be realized in a hardware mode, and can also be realized in a mode of hardware and a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is to be understood that the word "comprising" does not exclude other modules or steps, and the singular does not exclude the plural. Several modules or electronic devices recited in the electronic device claims may also be implemented by one and the same module or electronic device by means of software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (8)

1. A drawing image positioning and correcting method is characterized by comprising the following steps:
calculating to obtain the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image by using a deep learning model of image segmentation;
calculating by adopting a ransac algorithm according to the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image to obtain a transformation matrix between the first drawing image and the second drawing image; and
aligning the first drawing image and the second drawing image through the transformation matrix, and performing subtraction operation on the aligned first drawing image and the aligned second drawing image to obtain a difference item between the first drawing image and the second drawing image;
the first drawing image and the second drawing image are building drawing images, the target element is a shear wall, and the calculating of the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image by using the deep learning model of image segmentation comprises the following steps:
inputting the first drawing image into the deep learning model of the image segmentation to obtain a first shear wall image, and inputting the second drawing image into the deep learning model of the image segmentation to obtain a second shear wall image;
calculating to obtain the centroid of the shear wall in the first drawing image through the image moment of the first shear wall image, and calculating to obtain the centroid of the shear wall in the second drawing image through the image moment of the second shear wall image; and
forming a first point set by the mass centers of all the shear walls in the first shear wall image, and forming a second point set by the mass centers of all the shear walls in the second shear wall image;
the step of calculating the centroid of the shear wall in the first drawing image through the image moment of the first shear wall image comprises:
by the formula
Figure FDA0003980167180000011
Figure FDA0003980167180000012
Figure FDA0003980167180000013
Figure FDA0003980167180000014
Figure FDA0003980167180000015
Calculating to obtain the centroid of the first drawing image, wherein V(i, j) is the gray value of the first shear wall image at pixel (i, j), i is the abscissa value of the first shear wall image in the pixel coordinate system, j is the ordinate value of the first shear wall image in the pixel coordinate system, x1 is the abscissa value of the centroid of the shear wall in the first shear wall image, and y1 is the ordinate value of the centroid of the shear wall in the first shear wall image.
2. The drawing image positioning and correcting method of claim 1, wherein the inputting the first drawing image into the deep learning model for image segmentation to obtain a first shear wall image comprises:
taking the shear wall in the first drawing image as a segmentation target of the first drawing image;
manually dividing and calibrating the first drawing image according to the shear wall in the first drawing image to obtain a first division label;
inputting the first drawing image and the first segmentation label into a deep learning neural network for training to obtain a deep learning model of image segmentation, wherein the deep learning model of image segmentation comprises a coding layer, a decoding layer and a convolutional layer which are sequentially connected; and
and inputting the first drawing image into the deep learning model for image segmentation to be segmented, wherein the obtained segmented image is a first shear wall image.
3. The drawing image positioning and correcting method according to claim 1, wherein the obtaining of the transformation matrix between the first drawing image and the second drawing image by using a ransac algorithm according to the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image comprises:
dividing the first point set into four sub-point sets, and dividing the second point set into four sub-point sets, wherein each sub-point set of the first point set corresponds to one sub-point set of the second point set, the four sub-point sets of the first point set are respectively a first sub-point set, a second sub-point set, a third sub-point set and a fourth sub-point set, and the four sub-point sets of the second point set are respectively a fifth sub-point set, a sixth sub-point set, a seventh sub-point set and an eighth sub-point set;
obtaining four sub-transformation matrixes and a weight coefficient of each sub-transformation matrix by using a ransac algorithm according to each sub-point set of the first point set and the corresponding sub-point set of the second point set in sequence; and
and calculating to obtain a transformation matrix between the first drawing image and the second drawing image according to the four sub-transformation matrixes and the weight coefficient of each transformation matrix.
4. The method as claimed in claim 3, wherein said obtaining four sub-transform matrices and the weight coefficient of each sub-transform matrix by using a ransac algorithm according to each sub-point set of the first point set and the corresponding sub-point set of the second point set in turn comprises:
a) Calculating a first sub-transformation matrix between the first sub-point set and the fifth sub-point set under the current iteration times, and taking the first sub-transformation matrix as the transformation matrix;
b) According to the formula B_pre = A1·H1, calculating the point set of positions of the first sub-point set mapped to the second drawing image at the current iteration count to obtain a first target point set, wherein B_pre is the first target point set, A1 is the first sub-point set, and H1 is the first sub-transformation matrix;
c) For each point in the first target point set, sequentially finding the point in the fifth sub-point set closest to it and the distance between the two, and defining that distance as the distance of the point; if the distance of the point is smaller than the preset distance threshold, the point is kept in the first target point set, otherwise the point is deleted from the first target point set and a point randomly selected from the fifth sub-point set is added to the first target point set; the distances of all points in the first target point set are accumulated to obtain the distance sum at the current iteration count;
d) if the distance sum at the current iteration is smaller than the distance sum at the previous iteration, decreasing the preset distance threshold, otherwise increasing the preset distance threshold, the adjusted preset distance threshold being used for the next iteration;
e) repeating steps (a)-(d) until the specified iteration number is reached or the distance sum at the current iteration is smaller than a specified value, and taking the distance sum of all points in the first target point set as the distance sum of the first sub-transformation matrix;
f) calculating, by means of steps (a)-(e), the distance sum of a second sub-transformation matrix between the second sub-point set and the sixth sub-point set, the distance sum of a third sub-transformation matrix between the third sub-point set and the seventh sub-point set, and the distance sum of a fourth sub-transformation matrix between the fourth sub-point set and the eighth sub-point set, respectively; and calculating the weight coefficient of the first sub-transformation matrix according to the formula α_1 = 1 - D_all1 / max(D_all1, D_all2, D_all3, D_all4), the weight coefficient of the second sub-transformation matrix according to the formula α_2 = 1 - D_all2 / max(D_all1, D_all2, D_all3, D_all4), the weight coefficient of the third sub-transformation matrix according to the formula α_3 = 1 - D_all3 / max(D_all1, D_all2, D_all3, D_all4), and the weight coefficient of the fourth sub-transformation matrix according to the formula α_4 = 1 - D_all4 / max(D_all1, D_all2, D_all3, D_all4).
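Steps (a)-(f) above can be sketched in Python as follows. This is a minimal, hedged sketch, not the patented implementation: the function names are invented, the threshold-update factors 0.9/1.1 are assumptions, and the random replacement of outlier points in step (c) is simplified to charging each outlier the threshold distance.

```python
import numpy as np

def distance_sum(A, B, H, n_iters=10, d_thresh=5.0, stop_value=1e-3):
    """Steps (a)-(e): map sub-point set A onto the second drawing image
    via sub-transformation matrix H, match each mapped point to its
    nearest neighbour in B, and accumulate the matched distances."""
    d_sum_prev = np.inf
    d_sum = np.inf
    for _ in range(n_iters):
        # step (b): B_pre = A . H in homogeneous coordinates
        A_h = np.hstack([A, np.ones((len(A), 1))])
        B_pre = A_h @ H.T
        B_pre = B_pre[:, :2] / B_pre[:, 2:3]
        # step (c): distance of each mapped point to its nearest neighbour in B
        dists = np.linalg.norm(B_pre[:, None, :] - B[None, :, :], axis=2)
        nearest = dists.min(axis=1)
        kept = nearest[nearest < d_thresh]
        # simplification: outliers are charged the threshold distance
        # instead of being replaced by random points of B as in the claim
        d_sum = kept.sum() + d_thresh * (len(nearest) - len(kept))
        # step (d): shrink the threshold when improving, grow it otherwise
        d_thresh *= 0.9 if d_sum < d_sum_prev else 1.1
        # step (e): stop at the iteration limit or below the specified value
        if d_sum < stop_value:
            break
        d_sum_prev = d_sum
    return d_sum

def weight_coefficients(d_sums):
    """Step (f): alpha_i = 1 - D_all_i / max(D_all_1, ..., D_all_4)."""
    m = max(d_sums)
    return [1.0 - d / m for d in d_sums]
```

When the two point sets already coincide under the identity matrix the distance sum collapses to zero, and the sub-matrix with the largest distance sum receives weight 0.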
5. The drawing image positioning and correcting method according to claim 4, wherein calculating the transformation matrix between the first drawing image and the second drawing image according to the four sub-transformation matrices and the weight coefficient of each sub-transformation matrix comprises:
calculating the transformation matrix between the first drawing image and the second drawing image according to the formula H = α_1·H_1 + α_2·H_2 + α_3·H_3 + α_4·H_4, wherein H_1, H_2, H_3 and H_4 are respectively the first sub-transformation matrix, the second sub-transformation matrix, the third sub-transformation matrix and the fourth sub-transformation matrix, α_1 is the weight coefficient of the first sub-transformation matrix, α_2 is the weight coefficient of the second sub-transformation matrix, α_3 is the weight coefficient of the third sub-transformation matrix, and α_4 is the weight coefficient of the fourth sub-transformation matrix; and
substituting the transformation matrix into step (a), and repeating steps (a)-(f) until the specified iteration number is reached or the distance sums of all the sub-transformation matrices at the current iteration are smaller than the specified value.
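The combination step of claim 5 can be sketched as below. Note this assumes the garbled formula denotes a weighted sum of the four sub-matrices (the printed text fuses the terms together); the function name is invented for illustration.

```python
import numpy as np

def combine_sub_matrices(sub_mats, alphas):
    """Weighted sum H = alpha_1*H_1 + ... + alpha_4*H_4 of the four 3x3
    sub-transformation matrices. The weights from claim 4 do not
    necessarily sum to 1, so no normalisation is applied here."""
    return sum(a * np.asarray(Hi) for a, Hi in zip(alphas, sub_mats))
```

With four identical sub-matrices and weights summing to 1, the combination reproduces the common matrix unchanged.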
6. The drawing image positioning and correcting method of claim 4, further comprising:
adjusting the preset distance threshold according to the formula shown in image FDA0003980167180000031 (not reproduced in this text), wherein D_all is the distance sum at the current iteration, D_all_last is the distance sum at the previous iteration, and D_thresh is the preset distance threshold.
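Since the exact adjustment formula of claim 6 survives only as an unreproduced image, the following sketch reflects only the behaviour stated in step (d) of claim 4: the threshold decreases when the distance sum improves and increases otherwise. The multiplicative factor 0.9 is an assumption, not taken from the patent.

```python
def adjust_threshold(d_all, d_all_last, d_thresh, factor=0.9):
    """Decrease D_thresh when the current distance sum D_all improves on
    the previous one D_all_last, increase it otherwise. The exact patented
    formula is an image not reproduced in the text; factor is assumed."""
    return d_thresh * factor if d_all < d_all_last else d_thresh / factor
```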
7. A drawing image positioning and correcting device for implementing the drawing image positioning and correcting method according to any one of claims 1-6, comprising:
a centroid determining module, configured to calculate the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image by using an image segmentation deep learning model;
a transformation matrix calculation module, configured to calculate a transformation matrix between the first drawing image and the second drawing image by using a RANSAC algorithm according to the centroid of the target element of the first drawing image and the centroid of the target element of the second drawing image; and
a checking module, configured to align the first drawing image and the second drawing image through the transformation matrix, and then perform a subtraction operation on the aligned first drawing image and the aligned second drawing image to obtain the difference items between the first drawing image and the second drawing image.
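The subtraction step of the checking module can be sketched as follows. This assumes the first image has already been warped into alignment with the second (e.g. using the transformation matrix with OpenCV's `cv2.warpPerspective`); the function name and the difference threshold of 30 are assumptions for illustration.

```python
import numpy as np

def difference_items(img1_aligned, img2, thresh=30):
    """Subtract the aligned first drawing image from the second and mark
    pixels whose absolute difference exceeds a threshold as differences.
    Signed int16 avoids uint8 wrap-around during subtraction."""
    diff = np.abs(img1_aligned.astype(np.int16) - img2.astype(np.int16))
    return (diff > thresh).astype(np.uint8) * 255
```

A pixel present in only one of the two drawings then shows up as a 255-valued entry in the returned mask.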
8. An electronic device, comprising a processor and a memory, wherein the processor is configured to implement the drawing image positioning and correcting method according to any one of claims 1 to 6 when executing a computer program stored in the memory.
CN202010002108.XA 2020-01-02 2020-01-02 Drawing image positioning and correcting method and device and electronic equipment Active CN111210442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010002108.XA CN111210442B (en) 2020-01-02 2020-01-02 Drawing image positioning and correcting method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111210442A CN111210442A (en) 2020-05-29
CN111210442B true CN111210442B (en) 2023-02-03

Family

ID=70787183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010002108.XA Active CN111210442B (en) 2020-01-02 2020-01-02 Drawing image positioning and correcting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111210442B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932491B (en) * 2020-06-23 2022-02-08 联宝(合肥)电子科技有限公司 Component detection method, device and storage medium
CN114727064B (en) * 2022-04-02 2022-11-25 清华大学 Construction safety macroscopic monitoring system and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9811906B1 (en) * 2016-07-15 2017-11-07 Siemens Healthcare Gmbh Method and data processing unit for segmenting an object in a medical image
CN108460552A (en) * 2018-01-26 2018-08-28 中国地质大学(武汉) A kind of steel warehouse control system based on machine vision and PLC
CN108615236A (en) * 2018-05-08 2018-10-02 上海商汤智能科技有限公司 A kind of image processing method and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on gesture segmentation in depth images and HOG-SVM gesture recognition methods; LE Vanbang et al.; Computer Applications and Software (《计算机应用与软件》); 2016-12-15 (No. 12); full text *

Also Published As

Publication number Publication date
CN111210442A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
US8725734B2 (en) Sorting multiple records of data using ranges of key values
CN111210442B (en) Drawing image positioning and correcting method and device and electronic equipment
CN110751620B (en) Method for estimating volume and weight, electronic device, and computer-readable storage medium
CN112906865B (en) Neural network architecture searching method and device, electronic equipment and storage medium
WO2022193872A1 (en) Method and apparatus for determining spatial relationship, computer device, and storage medium
CN110163095B (en) Loop detection method, loop detection device and terminal equipment
CN115035017A (en) Cell density grouping method, device, electronic apparatus and storage medium
CN107729944B (en) Identification method and device of popular pictures, server and storage medium
CN111815748A (en) Animation processing method and device, storage medium and electronic equipment
US20220207892A1 (en) Method and device for classifing densities of cells, electronic device using method, and storage medium
CN110717405A (en) Face feature point positioning method, device, medium and electronic equipment
CN113610856B (en) Method and device for training image segmentation model and image segmentation
CN112784818B (en) Identification method based on grouping type active learning on optical remote sensing image
CN113468972B (en) Handwriting track segmentation method for handwriting recognition of complex scene and computer product
CN114187598B (en) Handwriting digital recognition method, handwriting digital recognition equipment and computer readable storage medium
CN114821140A (en) Image clustering method based on Manhattan distance, terminal device and storage medium
CN113791425A (en) Radar P display interface generation method and device, computer equipment and storage medium
CN114881913A (en) Image defect detection method and device, electronic equipment and storage medium
CN113869455A (en) Unsupervised clustering method and device, electronic equipment and medium
CN112381458A (en) Project evaluation method, project evaluation device, equipment and storage medium
CN113553884A (en) Gesture recognition method, terminal device and computer-readable storage medium
CN111178630A (en) Load prediction method and device
CN115205423B (en) Handwriting method and system based on electronic equipment
CN114782355B (en) Gastric cancer digital pathological section detection method based on improved VGG16 network
CN115035322A (en) Image feature extraction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant