CN115034577A - Electromechanical product neglected loading detection method based on virtual-real edge matching - Google Patents


Info

Publication number
CN115034577A
CN115034577A (application CN202210556536.6A)
Authority
CN
China
Prior art keywords
matching
virtual
image
edge
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210556536.6A
Other languages
Chinese (zh)
Inventor
杜福洲
吕能斌
王瑶伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Beijing Xinghang Electromechanical Equipment Co Ltd
Original Assignee
Beihang University
Beijing Xinghang Electromechanical Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University, Beijing Xinghang Electromechanical Equipment Co Ltd filed Critical Beihang University
Priority to CN202210556536.6A priority Critical patent/CN115034577A/en
Publication of CN115034577A publication Critical patent/CN115034577A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The application provides an electromechanical product neglected loading detection method based on virtual-real edge matching. The method comprises the following steps: calibrating a camera, acquiring multi-viewpoint virtual views of the three-dimensional model by virtual-space imaging, and constructing a template by extracting edge gradients; inputting an image to be detected, and performing recognition, coarse positioning, and positioning optimization of the assembled part based on virtual-real edge matching; mapping the matched template contour onto the image and searching for virtual-real corresponding points based on region and edge features; calculating the shape similarity and matching rate of the virtual and real point sets based on shape-context matching; and comprehensively judging neglected loading from the part recognition score, the similarity, and the matching rate. Because neglected loading detection is driven by the model and performed by vision, dependence on real sample data is reduced and detection efficiency is improved.

Description

Electromechanical product neglected loading detection method based on virtual-real edge matching
Technical Field
The application relates to the technical field of assembly manufacturing, in particular to a neglected loading detection method for an electromechanical product based on virtual-real edge matching.
Background
In aerospace product assembly, the product structure is complex, the types of items to be inspected are diverse, and the inspection environment is changeable. Neglected loading (missing-part) quality problems in the assembly process are therefore often detected by traditional manual visual inspection, which is inefficient and depends on the inspector's level of expertise. Research on and application of visual detection technology have raised the level of automated inspection, which is gradually becoming an important means of product assembly quality detection. In vision-based assembly quality anomaly detection, target recognition and positioning are the key steps, and the corresponding methods fall roughly into two categories: traditional image-processing methods and deep-learning methods. Traditional image-processing methods are fast, but because they lack stability and robustness in complex environments, they often require complex algorithm design and parameter tuning. Deep-learning methods require a large number of samples for training and detect neglected loading by building a neural network model; however, for products made in single-piece or small-batch production, and in industrial settings that do not support large-scale data collection, a deep-learning model cannot achieve a good detection effect.
Disclosure of Invention
In view of the defects and shortcomings of the prior art, the invention aims to provide an electromechanical product neglected loading detection method based on virtual-real edge matching that solves the problem of neglected loading detection quality with no or few sample data and reduces the dependence on real scene data in the detection process. The method is implemented by the following technical scheme and comprises the following steps:
s101, calibrating a camera, acquiring a multi-view virtual view of the three-dimensional model based on virtual space imaging, and constructing a template by extracting edge gradients.
The camera is calibrated by a camera calibration method to solve for the intrinsic parameter matrix K, which is applied to an OpenGL-based virtual imaging process; virtual views under multiple viewpoints are obtained by placing the three-dimensional model of the assembled part and the virtual camera in different relative positions and poses. The virtual imaging process operates on the 3D points of the three-dimensional model: in OpenGL, a homogeneous model-space coordinate $\tilde{p}_m$ is transformed into a homogeneous view-volume coordinate $\tilde{p}_v$, modeled as

$$\tilde{p}_v = P(K)\,T'\,G\,\tilde{p}_m \tag{1}$$
Here G is the transformation matrix from the model coordinate system to the global coordinate system, and T' is the transformation between the global coordinate system and the camera coordinate system; the model coordinate system is the model's design coordinate system, and the global coordinate system is constructed with the model's geometric center as origin. The three-dimensional model is transformed into the camera coordinate system through the matrices T' and G, view and projection transformation is performed with the projection matrix P(K), and the final rendered image is obtained after clipping and viewport transformation. Here P(K) is a matrix determined by the camera intrinsic parameters, as shown below.
$$P(K)=\begin{bmatrix} \dfrac{2f_x}{w} & 0 & 1-\dfrac{2c_x}{w} & 0\\ 0 & \dfrac{2f_y}{h} & \dfrac{2c_y}{h}-1 & 0\\ 0 & 0 & -\dfrac{Z_f+Z_n}{Z_f-Z_n} & -\dfrac{2Z_fZ_n}{Z_f-Z_n}\\ 0 & 0 & -1 & 0 \end{bmatrix} \tag{2}$$

where $f_x$, $f_y$, $c_x$, $c_y$ are the focal lengths and principal point taken from K.
Here $Z_n$ and $Z_f$ are the near and far clipping planes of OpenGL, and w and h are the width and height of the image.
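As a sketch of the intrinsics-derived projection just described, the 4×4 matrix can be assembled as follows, assuming K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]; sign conventions vary between OpenGL implementations, and the helper name is illustrative, not the patent's:

```python
import numpy as np

def opengl_projection(K, w, h, z_near, z_far):
    """Build a 4x4 OpenGL-style projection matrix P(K) from camera
    intrinsics. One common sign convention; a sketch, not the
    patent's exact matrix."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    P = np.zeros((4, 4))
    P[0, 0] = 2.0 * fx / w                              # scale x by focal length / image width
    P[0, 2] = 1.0 - 2.0 * cx / w                        # shift by principal point
    P[1, 1] = 2.0 * fy / h
    P[1, 2] = 2.0 * cy / h - 1.0
    P[2, 2] = -(z_far + z_near) / (z_far - z_near)      # depth mapping between clip planes
    P[2, 3] = -2.0 * z_far * z_near / (z_far - z_near)
    P[3, 2] = -1.0                                      # perspective divide by -z (camera looks down -z)
    return P
```

Multiplying this P(K) with T' and G, as in the imaging model above, maps a homogeneous model point into the view volume.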
The poses of the three-dimensional model and the virtual camera are parameterized by the longitude $\varphi$, the latitude $\delta$, the radius r, and the angle $\theta$ of rotation of the virtual camera about its own $z_c$ axis. As shown in fig. 2, views under multiple viewpoints are obtained by setting these four parameters, providing image data for template construction. Template construction is completed by extracting edge gradients, with the following main steps:
1) Apply Gaussian filtering to the image, extract the gradients of the three channels of the RGB image with the Sobel operator, and compute the gradient direction in the channel with the maximum gradient magnitude:

$$\hat{C}(x)=\operatorname*{arg\,max}_{C\in\{R,G,B\}}\left\lVert\frac{\partial C}{\partial x}\right\rVert \tag{3}$$

$$ori(x)=ori\!\left(\frac{\partial\hat{C}}{\partial x}(x)\right) \tag{4}$$

In the above formulas, C denotes a channel of the image, $ori(\cdot)$ denotes the gradient direction, and x denotes the pixel position.
2) Extract candidate feature points: retain pixels whose gradient magnitude exceeds a threshold as feature points, and sample the retained feature point set.
3) Quantize the gradient directions of the filtered feature point set: directions are folded into the range 0-180° and divided into n ranges, each represented by a quantization value.
4) Store the feature point descriptors: each feature point is characterized by its x and y coordinates and its quantization value. The template images under the multiple viewpoints are processed in turn by the above operations and finally stored as a template file.
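The quantization step of the template construction above can be sketched in a few lines, assuming n = 8 bins labeled 1 to 8 (consistent with the embodiment described later); the function name and degree units are illustrative assumptions:

```python
import numpy as np

def quantize_orientations(angles_deg, n_bins=8):
    """Fold gradient directions into [0, 180) (direction, not sign)
    and map each to one of n_bins quantization values 1..n_bins."""
    folded = np.mod(angles_deg, 180.0)        # 185 deg and 5 deg are the same direction
    width = 180.0 / n_bins                    # 22.5 deg per bin for n_bins = 8
    return (folded // width).astype(int) + 1  # bin labels 1..n_bins
```

Each template feature point would then store its (x, y) coordinates together with this quantized value.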
S102, inputting the image to be detected, and performing recognition, coarse positioning, and positioning optimization of the assembled part based on virtual-real edge matching.
The input image to be inspected undergoes gradient direction quantization as described in S101, followed by gradient direction spreading, in which the gradient directions within a certain neighborhood are represented jointly. In the template matching process, the similarity between the template image and the corresponding input image region is computed sequentially in a sliding-window manner, accelerated by building an image pyramid and by linearized memory optimization. The similarity evaluation function is:

$$\varepsilon(I,O,c)=\sum_{r\in p}\;\max_{t\in\mathcal{R}(c+r)}\left|\cos\big(ori(O,r)-ori(I,t)\big)\right| \tag{5}$$

where $\varepsilon$ denotes the similarity between the template image and the detection region, $ori(O,r)$ is the gradient direction at position r of the template image O, $ori(I,t)$ is the gradient direction at point t of the input image I, T = O(p) denotes the template, p denotes the template feature point set, and $\mathcal{R}(c+r)$ denotes the neighborhood of radius τ centered at c + r.
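A minimal sketch of the similarity evaluation just described: for each template feature, take the best absolute-cosine orientation agreement within a τ-neighborhood of the shifted position, and average over the features. Function and argument names are assumptions, and the real method additionally accelerates with pyramids and linearized memory:

```python
import numpy as np

def similarity(template_feats, image_ori, c, tau=2):
    """template_feats: list of ((rx, ry), ori_deg) feature offsets and
    directions; image_ori: 2D array of gradient directions in degrees;
    c: (x, y) window position. Returns a score normalized to [0, 1]."""
    h, w = image_ori.shape
    score = 0.0
    for (rx, ry), ori_t in template_feats:
        px, py = c[0] + rx, c[1] + ry
        best = 0.0
        for dy in range(-tau, tau + 1):          # search the tau-neighborhood
            for dx in range(-tau, tau + 1):
                x, y = px + dx, py + dy
                if 0 <= x < w and 0 <= y < h:
                    d = np.deg2rad(ori_t - image_ori[y, x])
                    best = max(best, abs(np.cos(d)))  # |cos| ignores gradient sign
        score += best
    return score / max(len(template_feats), 1)
```

Sliding c over the image and thresholding this score yields the potential target regions used for coarse positioning.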
A score is computed for each region during matching, and potential target regions are extracted by thresholding: regions above the threshold are judged to contain the target and are ranked by score, the highest-scoring region being taken as the positioning result; otherwise positioning fails. On the basis of this coarse positioning, positioning optimization is performed with a point-to-plane ICP algorithm: sub-pixel edge extraction, construction of the ICP objective function, and solution of the transformation matrix between the two point sets are carried out in turn on the image to be inspected, fine-tuning the positioning of the assembled part.
S103, mapping the matched template contour onto the image, and searching for virtual-real corresponding points based on region and edge features.
Extract the contour of the matched template part, map it onto the image, build search lines along the contour normals, construct foreground/background color statistical histograms from the search lines, and search for the virtual-real edge corresponding point pairs. First, corresponding point pairs are searched with a region-based method. Define

$$P(\Phi^-|F)=\prod_{x\in\Phi^-}P_f(x),\qquad P(\Phi^-|B)=\prod_{x\in\Phi^-}P_b(x) \tag{6}$$

as the probabilities that, for pixel x, the $\Phi^-$ neighborhood belongs to the foreground and the background respectively, where $P_f(x)$ and $P_b(x)$ are the foreground and background prior probabilities computed from the color histograms, and $\Phi^-$ denotes the step length by which pixel x extends along the search line toward the background. Similarly, the foreground/background probabilities of the $\Phi^+$ region are $P(\Phi^+|F)$ and $P(\Phi^+|B)$, where $\Phi^+$ is the step length by which pixel x extends along the search line toward the foreground. The probabilities that pixel x belongs to the contour, the foreground, and the background are then computed as:
$$P(x|C)=P(\Phi^-|B)\,P(\Phi^+|F) \tag{7}$$
$$P(x|F)=P(\Phi^-|F)\,P(\Phi^+|F) \tag{8}$$
$$P(x|B)=P(\Phi^-|B)\,P(\Phi^+|B) \tag{9}$$
if P (x | C) > P (x | F) and P (x | C) > P (x | B), the pixel is considered to be the corresponding contour point. Further, corresponding point pairs are filtered through edge information, Canny edge detection is carried out on the image, whether the position of a pixel point x is an edge or not is calculated, if the position of the pixel point x is the edge, the pixel point x is reserved, otherwise, the pixel point x is not considered, and therefore a virtual-real two-group point set is established.
S104, calculating the similarity and the matching rate based on shape context matching.
The virtual and real point sets, $p_i \in P$ and $q_j \in Q$, are expressed by histograms in a log-polar coordinate system, and the matching cost function is established as:

$$C_{ij}=C(p_i,q_j)=\frac{1}{2}\sum_{k=1}^{K}\frac{\left[h_i(k)-h_j(k)\right]^2}{h_i(k)+h_j(k)} \tag{10}$$

where $h_i(k)$ and $h_j(k)$ are the histograms of points $p_i$ and $q_j$ respectively, and k indexes the histogram bins. A similarity score is obtained by computing the cost for all points and establishing a one-to-one matching relation; it is built from the cost matrix, and the smaller the value, the more similar the shapes. Further, point pairs whose Euclidean distance after shape matching exceeds a threshold are treated as mismatches, and the matching rate is obtained as the ratio of successfully matched points to all matched points.
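The χ² histogram cost just described can be sketched directly; the helper name is hypothetical:

```python
import numpy as np

def shape_context_cost(h_i, h_j):
    """Chi-squared matching cost between two log-polar shape-context
    histograms; smaller means more similar."""
    h_i = np.asarray(h_i, dtype=float)
    h_j = np.asarray(h_j, dtype=float)
    denom = h_i + h_j
    mask = denom > 0                    # skip bins empty in both histograms
    return 0.5 * np.sum((h_i[mask] - h_j[mask]) ** 2 / denom[mask])
```

Evaluating this cost for all (i, j) pairs fills the cost matrix from which the one-to-one matching relation is established.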
S105, comprehensively judging neglected loading through the part recognition score, similarity, and matching rate.
In the neglected loading judgment process, as shown in fig. 3, the matching score of the target in the image is computed by edge matching; if it exceeds a threshold, a potential target is considered to exist. Shape matching then checks whether the similarity is below a threshold; if so, the similarity condition is met. Finally the matching rate is computed; if it exceeds a threshold, the assembly is considered correct. If all three conditions hold simultaneously, the assembly is normal; otherwise neglected loading is detected. Denoting correct assembly by 0 and neglected loading by 1, the judgment condition is:

$$D=\begin{cases}0, & O>O_{th}\ \text{and}\ S<S_{th}\ \text{and}\ R>R_{th}\\ 1, & \text{otherwise}\end{cases} \tag{11}$$

Here $O_{th}$, $S_{th}$, and $R_{th}$ denote the target recognition threshold, the shape similarity threshold, and the matching rate threshold respectively. This completes the whole neglected loading detection process.
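The three-threshold judgment above reduces to a small predicate; names are illustrative:

```python
def judge_missing(o_score, s_score, r_score, o_th, s_th, r_th):
    """0 = correctly assembled, 1 = neglected loading. A part is
    accepted only if the recognition score exceeds O_th, the shape
    similarity cost stays below S_th (smaller is more similar), and
    the matching rate exceeds R_th."""
    ok = o_score > o_th and s_score < s_th and r_score > r_th
    return 0 if ok else 1
```

Any single failed condition, for example a high shape cost on an occluded or absent part, flags the assembly as neglected loading.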
The invention provides an electromechanical product neglected loading detection method based on virtual-real edge matching, which has the following beneficial effects compared with the prior art:
the method is based on machine vision, reduces the strength of manual detection, improves the detection efficiency, and reduces the situations of false detection and missed detection. The part identification and positioning are carried out based on the edge gradient characteristics, the robustness is realized on parts without/with few textures, further, the comprehensive constraint based on the region and the edge is utilized when the virtual and real matching corresponding points are established, and the accuracy of searching the virtual and real corresponding points is improved. The missing package judging process is completed through three indexes of part identification, shape similarity and matching rate, and the missing package detection rate and accuracy are improved. In addition, the method does not need a real sample to participate in template construction, and has applicability to assembly scenes without/with few samples.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic view of virtual space imaging according to the present invention;
FIG. 3 is a flow chart of the missing-loading determination of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention.
The invention provides an electromechanical product neglected loading detection method based on virtual-real edge matching, which comprises the following steps:
s101, calibrating a camera, acquiring a multi-view virtual view of the three-dimensional model based on virtual space imaging, and constructing a template by extracting edge gradients.
In the embodiment of the invention, the camera is calibrated with Zhang's calibration method to solve for the intrinsic parameter matrix K, which is applied to the OpenGL-based virtual imaging process; virtual views under multiple viewpoints are obtained by placing the three-dimensional model of the assembled part and the virtual camera in different relative positions and poses. The virtual imaging process operates on the 3D points of the three-dimensional model: in OpenGL, a homogeneous model-space coordinate $\tilde{p}_m$ is transformed into a homogeneous view-volume coordinate $\tilde{p}_v$, modeled as

$$\tilde{p}_v = P(K)\,T'\,G\,\tilde{p}_m \tag{1}$$
Here G is the transformation matrix from the model coordinate system to the global coordinate system, and T' is the transformation between the global coordinate system and the camera coordinate system; the model coordinate system is the model's design coordinate system, and the global coordinate system is constructed with the model's geometric center as origin. The three-dimensional model is transformed into the camera coordinate system through the matrices T' and G, view and projection transformation is performed with the projection matrix P(K), and the final rendered image is obtained after clipping and viewport transformation. Here P(K) is a matrix determined by the camera intrinsic parameters, as shown below.
$$P(K)=\begin{bmatrix} \dfrac{2f_x}{w} & 0 & 1-\dfrac{2c_x}{w} & 0\\ 0 & \dfrac{2f_y}{h} & \dfrac{2c_y}{h}-1 & 0\\ 0 & 0 & -\dfrac{Z_f+Z_n}{Z_f-Z_n} & -\dfrac{2Z_fZ_n}{Z_f-Z_n}\\ 0 & 0 & -1 & 0 \end{bmatrix} \tag{2}$$

where $f_x$, $f_y$, $c_x$, $c_y$ are the focal lengths and principal point taken from K.
Here $Z_n$ and $Z_f$ are the near and far clipping planes of OpenGL, and w and h are the width and height of the image.
The poses of the three-dimensional model and the virtual camera are parameterized by the longitude $\varphi$, the latitude $\delta$, the radius r, and the angle $\theta$ of rotation of the virtual camera about its own $z_c$ axis. As shown in fig. 2, views under multiple viewpoints are obtained by setting these four parameters, providing image data for template construction.
In this embodiment, the viewpoints are set according to the actual detection requirements of the assembled parts: the viewpoint distribution, the number of viewpoints, and the visible range of the parts in the assembly scene are considered when setting the four parameters of the virtual camera pose.
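As an illustrative sketch of how a camera position on the view sphere might be derived from the longitude/latitude/radius parameters above (the axis convention is an assumption, and the in-plane roll θ, applied separately, is omitted):

```python
import math

def camera_position(lon_deg, lat_deg, radius):
    """Place the virtual camera on a sphere of the given radius around
    the model origin, from longitude and latitude in degrees."""
    lon = math.radians(lon_deg)
    lat = math.radians(lat_deg)
    x = radius * math.cos(lat) * math.cos(lon)   # equator point at lon = 0 lies on +x
    y = radius * math.cos(lat) * math.sin(lon)
    z = radius * math.sin(lat)                   # poles along +/- z
    return (x, y, z)
```

Sampling a grid of (longitude, latitude) values at one or more radii, each combined with several roll angles θ, would enumerate the multi-viewpoint template views.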
After the multi-viewpoint imaging views of the three-dimensional model are obtained, template construction is carried out based on edge gradient features, with the following main steps:
1) Apply Gaussian filtering to the image, extract the gradients of the three channels of the RGB image with the Sobel operator, and compute the gradient direction in the channel with the maximum gradient magnitude:

$$\hat{C}(x)=\operatorname*{arg\,max}_{C\in\{R,G,B\}}\left\lVert\frac{\partial C}{\partial x}\right\rVert \tag{3}$$

$$ori(x)=ori\!\left(\frac{\partial\hat{C}}{\partial x}(x)\right) \tag{4}$$

Here C denotes a channel of the image, $ori(\cdot)$ denotes the gradient direction, and x denotes the pixel position.
2) Extract candidate feature points: retain pixels whose gradient magnitude exceeds a threshold as feature points, and sample the retained feature point set.
3) Quantize the gradient directions of the filtered feature point set: directions are folded into the range 0-180° and divided into n ranges, each represented by a quantization value.
4) Store the feature point descriptors: each feature point is characterized by its x and y coordinates and its quantization value. The template images under the multiple viewpoints are processed in turn by the above operations and finally stored as template files.
In the embodiment of the invention, the number of required feature points is set according to the image resolution and part size, and the candidate feature points are filtered by sampling on the Euclidean distance between feature points. When quantizing, the gradient direction is divided into 8 ranges, represented by the quantization values 1 to 8.
S102, inputting the image to be detected, and performing recognition, coarse positioning, and positioning optimization of the assembled part based on virtual-real edge matching.
The input image to be inspected undergoes gradient direction quantization as described in S101, followed by gradient direction spreading, i.e. joint representation by the gradient directions within a certain neighborhood; in this embodiment a 3 × 3 neighborhood is adopted. In the template matching stage, the similarity between the template image and the corresponding input image region is computed sequentially in a sliding-window manner, accelerated by building an image pyramid and by linearized memory optimization. The matching similarity evaluation function is:

$$\varepsilon(I,O,c)=\sum_{r\in p}\;\max_{t\in\mathcal{R}(c+r)}\left|\cos\big(ori(O,r)-ori(I,t)\big)\right| \tag{5}$$

where $\varepsilon$ denotes the similarity between the template image and the detection region, $ori(O,r)$ is the gradient direction at position r of the template image O, $ori(I,t)$ is the gradient direction at point t of the input image I, T = O(p) denotes the template, p denotes the template feature point set, and $\mathcal{R}(c+r)$ denotes the neighborhood of radius τ centered at c + r.
A score is computed for each region during matching, and potential target regions are extracted by thresholding: regions above the threshold are judged to contain the target and are ranked by score, the highest-scoring region being taken as the positioning result; otherwise positioning fails. On the basis of this coarse positioning, positioning optimization is performed with a point-to-plane ICP algorithm: sub-pixel edge extraction, construction of the ICP objective function, and solution of the transformation matrix between the two point sets are carried out in turn on the image to be inspected, fine-tuning the positioning of the assembled part.
In the embodiment of the invention, the part matching recognition threshold is determined according to the material, appearance characteristics, and occlusion conditions of the part.
S103, mapping the matched template contour to an image, and searching virtual and real corresponding points based on the region and the edge features.
Extract the contour of the matched template part, map it onto the image, build search lines along the contour normals, construct foreground/background color statistical histograms from the search lines, and search for the virtual-real edge corresponding point pairs. First, corresponding point pairs are searched with a region-based method. Define

$$P(\Phi^-|F)=\prod_{x\in\Phi^-}P_f(x),\qquad P(\Phi^-|B)=\prod_{x\in\Phi^-}P_b(x) \tag{6}$$

as the probabilities that, for pixel x, the $\Phi^-$ neighborhood belongs to the foreground and the background respectively, where $P_f(x)$ and $P_b(x)$ are the foreground and background prior probabilities computed from the color histograms, and $\Phi^-$ denotes the step length by which pixel x extends along the search line toward the background. Similarly, the foreground/background probabilities of the $\Phi^+$ region are $P(\Phi^+|F)$ and $P(\Phi^+|B)$, where $\Phi^+$ is the step length by which pixel x extends along the search line toward the foreground. In this example, the extension step of both $\Phi^-$ and $\Phi^+$ is 3 pixels. The probabilities that pixel x belongs to the contour, the foreground, and the background are then computed as:
$$P(x|C)=P(\Phi^-|B)\,P(\Phi^+|F) \tag{7}$$
$$P(x|F)=P(\Phi^-|F)\,P(\Phi^+|F) \tag{8}$$
$$P(x|B)=P(\Phi^-|B)\,P(\Phi^+|B) \tag{9}$$
if P (x | C) > P (x | F) and P (x | C) > P (x | B), the pixel is considered as the corresponding contour point. Further, corresponding point pairs are filtered through edge information, Canny edge detection is carried out on the image, whether the position x of a pixel point is an edge or not is calculated, if the position x of the pixel point is the edge, the edge is reserved, otherwise, the point is not considered, and therefore a virtual-real two-group point set is established.
In this embodiment, the number of search lines is determined according to the number of contour points, and the construction of the color statistical histogram is determined according to the appearance characteristics of the part.
S104, calculating the similarity and the matching rate based on shape context matching.
The virtual and real point sets, $p_i \in P$ and $q_j \in Q$, are expressed by histograms in a log-polar coordinate system, and the matching cost function is established as:

$$C_{ij}=C(p_i,q_j)=\frac{1}{2}\sum_{k=1}^{K}\frac{\left[h_i(k)-h_j(k)\right]^2}{h_i(k)+h_j(k)} \tag{10}$$

where $h_i(k)$ and $h_j(k)$ are the histograms of points $p_i$ and $q_j$ respectively, and k indexes the histogram bins. A similarity score is obtained by computing the cost for all points and establishing a one-to-one matching relation; it is built from the cost matrix, and the smaller the value, the more similar the shapes. Further, point pairs whose Euclidean distance after shape matching exceeds a threshold are treated as mismatches, and the matching rate is obtained as the ratio of successfully matched points to all matched points.
In this embodiment, the Euclidean distance threshold used to judge mismatched points is computed as one tenth (1/10) of the largest distance in the matching point set.
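The embodiment's mismatch rule (threshold = 1/10 of the largest pair distance) and the resulting matching rate can be sketched as follows; the helper name and pair format are assumptions:

```python
import math

def matching_rate(pairs):
    """pairs: list of (p, q) matched 2D points. Pairs whose Euclidean
    distance exceeds 1/10 of the largest pair distance are treated as
    mismatches; rate = successful matches / all matches."""
    dists = [math.dist(p, q) for p, q in pairs]
    if not dists:
        return 0.0
    thresh = max(dists) / 10.0            # embodiment's 1/10-of-max rule
    good = sum(1 for d in dists if d <= thresh)
    return good / len(dists)
```

This rate is the R value compared against the matching rate threshold in the final judgment.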
S105, comprehensively judging neglected loading through the part recognition score, similarity, and matching rate.
In the neglected loading judgment process, as shown in fig. 3, the matching score of the target in the image is computed by edge matching; if it exceeds a threshold, a potential target is considered to exist. Shape matching then checks whether the similarity is below a threshold; if so, the similarity condition is met. Finally the matching rate is computed; if it exceeds a threshold, the assembly is considered correct. If all three conditions hold simultaneously, the assembly is normal; otherwise neglected loading is detected. Denoting correct assembly by 0 and neglected loading by 1, the judgment condition is:

$$D=\begin{cases}0, & O>O_{th}\ \text{and}\ S<S_{th}\ \text{and}\ R>R_{th}\\ 1, & \text{otherwise}\end{cases} \tag{11}$$

Here $O_{th}$, $S_{th}$, and $R_{th}$ denote the part recognition threshold, the shape similarity threshold, and the matching rate threshold respectively. This completes the whole neglected loading detection process.
In this embodiment, the part identification threshold, the shape similarity threshold, and the matching rate threshold are determined according to the size and appearance characteristics of the assembled part.
The foregoing is only a preferred embodiment of the present invention and is not intended to be limiting. It should be understood that those skilled in the art may make modifications, adaptations, or changes without departing from the principle of the invention, or may combine the above technical features in a suitable manner; such modifications, variations, combinations, or adaptations made using the teachings of the invention, without the exercise of more than ordinary skill in the art, are deemed to lie within the scope and spirit of the invention.

Claims (6)

1. An electromechanical product neglected loading detection method based on virtual-real edge matching, characterized by comprising the following steps:
calibrating a camera, acquiring a multi-view virtual view of the three-dimensional model based on virtual space imaging, and constructing a template by extracting edge gradients;
inputting an image to be detected, and identifying, roughly positioning and optimizing positioning of an assembly part based on virtual and real edge matching;
mapping the matched template contour to an image, and searching virtual and real corresponding points based on the region and the edge;
calculating similarity and matching rate based on shape context matching;
and comprehensively judging neglected loading through the part recognition score, similarity, and matching rate.
2. The method for detecting the neglected loading of the electromechanical product based on the virtual-real edge matching as claimed in claim 1, wherein: when virtual-space imaging is performed to obtain the multi-view virtual views of the three-dimensional model, the views are obtained by setting different relative poses of the camera and the model, parameterized by longitude φ, latitude δ, radius r, and the rotation angle θ of the virtual camera about its own z c axis. When the template is constructed based on edge-gradient features, the method comprises the following steps:
1) Perform Gaussian filtering on the image, extract the gradients of the three RGB channels with the Sobel operator, and take the gradient direction from the channel with the maximum gradient magnitude:
Ĉ(x) = argmax C∈{R,G,B} ‖∂C/∂x‖ (1)
ori(I, x) = ori(Ĉ(x), x) (2)
In the above formula, C represents a certain channel of the image, ori (·) represents the gradient direction, and x represents the pixel position.
2) Extract candidate feature points: retain pixels whose gradient magnitude exceeds a threshold as feature points, and sample the retained feature point set.
3) Quantize the gradient directions of the filtered feature point set: directions are normalized to 0-180° and divided into n ranges, each represented by a quantization value.
4) Store the feature point descriptors: each feature point is characterized by its x and y coordinates and its quantization value. The template images under the multiple viewing angles are processed by the above operations in turn and finally stored as a template file.
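Steps 1)–4) can be sketched as below. This is a simplified illustration under stated assumptions: `np.gradient` stands in for the Sobel operator named in the claim, Gaussian pre-filtering and the feature-point sampling step are omitted, and the function names and the magnitude threshold are invented for the example.

```python
import numpy as np

def quantize_orientation(deg, n_bins=8):
    """Map a gradient direction (degrees) to one of n_bins values on [0, 180)."""
    return int((deg % 180.0) // (180.0 / n_bins))

def build_template(rgb, mag_thresh=10.0, n_bins=8):
    """Per-channel gradients; keep the direction from the channel with the
    largest magnitude (eq. (1)-(2) style); threshold; store (x, y, bin)."""
    grads = []
    for c in range(3):
        gy, gx = np.gradient(rgb[..., c].astype(float))  # stand-in for Sobel
        grads.append((gx, gy))
    mags = [np.hypot(gx, gy) for gx, gy in grads]
    best = np.argmax(np.stack(mags), axis=0)   # channel with max gradient magnitude
    feats = []
    h, w = best.shape
    for y in range(h):
        for x in range(w):
            c = best[y, x]
            gx, gy = grads[c][0][y, x], grads[c][1][y, x]
            if np.hypot(gx, gy) > mag_thresh:  # keep strong-edge pixels only
                ori = np.degrees(np.arctan2(gy, gx))
                feats.append((x, y, quantize_orientation(ori, n_bins)))
    return feats
```

A real implementation would store one such feature list per virtual viewpoint in the template file.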
3. The method for detecting the neglected loading of the electromechanical product based on the virtual-real edge matching as claimed in claim 1, wherein: when the assembled part is identified, the gradient directions of the input image to be detected are quantized and then spread, so that each location jointly represents the gradient directions within a certain neighborhood. During template matching, the similarity between the template image and the corresponding input-image region is computed in a sliding-window manner, accelerated by building an image pyramid and by linearized memory optimization. The matching similarity evaluation function is:
ε(I, T, c) = Σ r∈P max t∈R(c+r) |cos(ori(O, r) − ori(I, t))| (3)
where ε denotes the similarity between the template image and the detection region, ori(O, r) denotes the gradient direction at position r of the template image O, ori(I, t) denotes the gradient direction at point t of the input image I, T = (O, P) denotes the template, and P denotes the template feature point set.
R(c+r) = [c+r − τ/2, c+r + τ/2] × [c+r − τ/2, c+r + τ/2] (4)
The above expression denotes the neighborhood region centered at c + r with τ as the radius.
The score of each region is calculated during matching, and potential target regions are extracted by thresholding: a target is judged to exist in regions whose score exceeds the threshold; these regions are sorted by score and the highest-scoring one is taken as the coarse positioning result, otherwise positioning fails. On the basis of the coarse positioning, the pose is refined with a point-to-plane ICP algorithm: sub-pixel edge extraction is performed on the image to be detected, the ICP objective function is established, and the transformation matrix between the two point sets is solved.
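The similarity function of this claim can be sketched as follows. This is an unoptimized illustration (no pyramid, no linearized memory, raw directions rather than quantized lookup tables); the function signature and the neighborhood radius parameter `tau` are invented for the example.

```python
import numpy as np

def similarity(template_feats, image_ori_deg, c, tau=2):
    """For each template feature (rx, ry, direction) at offset r from the
    anchor c, take the best |cos| of the direction difference within the
    tau-neighborhood of c + r in the input gradient-direction map (degrees),
    then average over the template features."""
    H, W = image_ori_deg.shape
    score = 0.0
    for (rx, ry, ori_t) in template_feats:
        px, py = c[0] + rx, c[1] + ry
        best = 0.0
        for dy in range(-tau, tau + 1):          # scan the neighborhood R(c+r)
            for dx in range(-tau, tau + 1):
                x, y = px + dx, py + dy
                if 0 <= x < W and 0 <= y < H:
                    diff = np.radians(image_ori_deg[y, x] - ori_t)
                    best = max(best, abs(np.cos(diff)))
        score += best
    return score / max(len(template_feats), 1)
```

Sliding this over all anchor positions c and thresholding the score yields the potential target regions described above.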
4. The method for detecting the neglected loading of the electromechanical product based on the virtual-real edge matching as claimed in claim 1, wherein: the outer contour of the matched template part is extracted and mapped onto the image, search lines are established along the contour normals, foreground/background colour statistical histograms are constructed over the search lines, and virtual-real edge corresponding point pairs are searched. First, corresponding point pairs are searched with the region-based method. Define P(Φ−|F) and P(Φ−|B) as the probabilities that, in the neighborhood of pixel x, the segment Φ− belongs to the foreground and to the background, where P f (x) and P b (x) are the foreground and background prior probabilities computed from the colour histograms, and Φ− denotes the step length by which pixel x extends along the search line toward the background. Similarly, the foreground/background probabilities of the Φ+ segment are P(Φ+|F) and P(Φ+|B), where Φ+ denotes the step length by which pixel x extends along the search line toward the foreground. The probabilities that pixel x belongs to the contour, the foreground, and the background are then calculated as:
P(x|C)=P(Φ - |B)P(Φ + |F) (5)
P(x|F)=P(Φ - |F)P(Φ + |F) (6)
P(x|B)=P(Φ - |B)P(Φ + |B) (7)
if P (x | C) > P (x | F) and P (x | C) > P (x | B), the pixel is considered to be the corresponding contour point. And further, filtering corresponding point pairs through edge information, performing Canny edge detection on the image, calculating whether the position x of the pixel point is an edge, if so, reserving, otherwise, not considering, and establishing a virtual-real two-group point set.
5. The method for detecting the neglected loading of the electromechanical product based on the virtual-real edge matching as claimed in claim 1, wherein: the virtual and real point sets p i ∈ P and q j ∈ Q are expressed by histograms in a log-polar coordinate system, and the matching cost function is established as:
C ij = (1/2) Σ k [h i (k) − h j (k)]² / [h i (k) + h j (k)] (8)
wherein h is i (k),h j (k) Are respectively a point p i ,q j K represents the established histogram index, similarity scores are obtained by calculating all points and establishing a one-to-one matching relation, the values are established based on the cost matrix, and the smaller the value is, the more similar the representation is. Further, point pairs with Euclidean distances of the shape matching points larger than a certain threshold value are regarded as mismatching points, and then the ratio of the successful matching points to all the matching points is calculated to obtain the matching rate.
6. The method for detecting the neglected loading of the electromechanical product based on the virtual-real edge matching as claimed in claim 1, wherein: the matching score of the target in the image is calculated based on edge matching; if the score is greater than a threshold, a potential target is considered to exist. Based on shape matching, the similarity is compared with a threshold; if it is smaller, the similarity condition is met. The matching rate is calculated; if it is greater than a threshold, the correspondence is judged correct. If all three conditions hold simultaneously the assembly is normal; otherwise a neglected loading is detected. Denoting correct assembly by a missing-part indicator of 0, the judgment condition is:
missing = 0, if O > O th and S < S th and R > R th ; missing = 1, otherwise
where O, S, and R denote the edge-matching score, the shape similarity, and the matching rate, respectively.
O th , S th , and R th respectively denote the target recognition threshold, the shape similarity threshold, and the matching rate threshold. The neglected-loading condition of the part is judged using the joint constraint of these three indicators.
CN202210556536.6A 2022-05-09 2022-05-09 Electromechanical product neglected loading detection method based on virtual-real edge matching Pending CN115034577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210556536.6A CN115034577A (en) 2022-05-09 2022-05-09 Electromechanical product neglected loading detection method based on virtual-real edge matching

Publications (1)

Publication Number Publication Date
CN115034577A true CN115034577A (en) 2022-09-09

Family

ID=83120317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210556536.6A Pending CN115034577A (en) 2022-05-09 2022-05-09 Electromechanical product neglected loading detection method based on virtual-real edge matching

Country Status (1)

Country Link
CN (1) CN115034577A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116486126A (en) * 2023-06-25 2023-07-25 合肥联宝信息技术有限公司 Template determination method, device, equipment and storage medium
CN116486126B (en) * 2023-06-25 2023-10-27 合肥联宝信息技术有限公司 Template determination method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination