CN116625249A - Workpiece automatic detection method and device based on 2D and 3D vision and related medium thereof - Google Patents

Workpiece automatic detection method and device based on 2D and 3D vision and related medium thereof

Info

Publication number
CN116625249A
Authority
CN
China
Prior art keywords
information
data
workpiece
feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310624896.XA
Other languages
Chinese (zh)
Inventor
丁克
丁兢
淳豪
李翔
马洁
王丰
叶闯
庞旭芳
林锦辉
胡财荣
刘芊伟
陆俊君
张敏
王凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Xianyang Technology Co ltd
Original Assignee
Foshan Xianyang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Xianyang Technology Co ltd
Priority to CN202310624896.XA
Publication of CN116625249A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an automatic workpiece detection method based on 2D and 3D vision, together with a corresponding device and related medium. The method comprises: acquiring 2D information and 3D information of a workpiece according to control instructions; preprocessing the 2D information and the 3D information to obtain an information processing result, the information processing result comprising standard workpiece data and workpiece data to be detected; performing feature extraction on the workpiece data to be detected by using a feature detection algorithm to obtain feature information data; performing feature comparison between the feature information data and the standard workpiece data to obtain a comparison deviation value; and judging whether the comparison deviation value is within a set error threshold: if so, the workpiece is judged to be qualified; if not, the workpiece is judged to be unqualified and a manual recheck is prompted. According to the application, after the 2D information and the 3D information are processed, feature extraction is performed by the feature detection algorithm, so that the vision detection system can describe the geometric form and the surface features of the workpiece at the same time.

Description

Workpiece automatic detection method and device based on 2D and 3D vision and related medium thereof
Technical Field
The application relates to the technical field of information detection, in particular to a 2D and 3D vision-based workpiece automatic detection method and device and a related medium thereof.
Background
In modern manufacturing, the demand for automatic inspection and quality control of workpieces is increasing. Conventional workpiece inspection methods typically use a single 2D vision system or a single 3D vision system, but both have limitations. A 2D vision system can only provide surface information of the workpiece and struggles to capture its three-dimensional shape and depth, so detection accuracy is limited for complex workpieces or workpieces with geometric variation. A 3D vision system can acquire the three-dimensional shape and depth information of the workpiece, but for workpieces with surface features such as texture and color, its feature extraction and comparative analysis perform poorly. A solution that can describe both the geometric form and the surface features of a workpiece is therefore needed.
Disclosure of Invention
The embodiment of the application provides a workpiece automatic detection method and device based on 2D and 3D vision, and a related medium thereof, aiming to solve the prior-art problems that workpiece detection systems rely on a single vision modality and cannot describe the geometric form and the surface features of a workpiece at the same time.
In a first aspect, an embodiment of the present application provides a method for automatically detecting a workpiece based on 2D and 3D vision, including:
respectively acquiring 2D information and 3D information of the workpiece according to the control instruction;
preprocessing the 2D information and the 3D information to obtain an information processing result; the information processing result comprises standard workpiece data and workpiece data to be detected;
extracting features of the workpiece data to be detected by using a feature detection algorithm to obtain feature information data;
performing feature comparison on the feature information data and the standard workpiece data to obtain a comparison deviation value;
judging whether the comparison deviation value is within a set error threshold: if so, judging the workpiece to be qualified; if not, judging the workpiece to be unqualified and prompting that a manual recheck is needed.
In a second aspect, an embodiment of the present application provides an automatic workpiece detection device based on 2D and 3D vision, including:
the information acquisition unit is used for respectively acquiring 2D information and 3D information of the workpiece according to the control instruction;
the information processing unit is used for preprocessing the 2D information and the 3D information to obtain an information processing result; the information processing result comprises standard workpiece data and workpiece data to be detected;
the information extraction unit is used for carrying out feature extraction on the workpiece data to be detected by utilizing a feature detection algorithm to obtain feature information data;
the information comparison unit is used for performing feature comparison between the feature information data and the standard workpiece data to obtain a comparison deviation value;
the information judging unit is used for judging whether the comparison deviation value is within a set error threshold: if so, judging the workpiece to be qualified; if not, judging the workpiece to be unqualified and prompting that a manual recheck is needed.
In a third aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the 2D and 3D vision-based workpiece automatic detection method of the first aspect when the computer program is executed.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium stores a computer program, where the computer program when executed by a processor implements the 2D and 3D vision-based workpiece automatic detection method of the first aspect.
The embodiment of the application provides an automatic workpiece detection method based on 2D and 3D vision, which comprises: acquiring 2D information and 3D information of a workpiece according to control instructions; preprocessing the 2D information and the 3D information to obtain an information processing result, the information processing result comprising standard workpiece data and workpiece data to be detected; performing feature extraction on the workpiece data to be detected by using a feature detection algorithm to obtain feature information data; performing feature comparison between the feature information data and the standard workpiece data to obtain a comparison deviation value; and judging whether the comparison deviation value is within a set error threshold: if so, the workpiece is judged to be qualified; if not, the workpiece is judged to be unqualified and a manual recheck is prompted. According to the application, after the 2D information and the 3D information are processed, feature extraction is performed by the feature detection algorithm, so that the vision detection system can describe the geometric form and the surface features of the workpiece at the same time.
The embodiment of the application also provides a 2D and 3D vision-based workpiece automatic detection device, computer equipment and a storage medium, which have the same beneficial effects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a 2D and 3D vision-based workpiece automatic detection method according to an embodiment of the present application;
fig. 2 is another flow chart of a 2D and 3D vision-based workpiece automatic detection method according to an embodiment of the present application;
fig. 3 is a schematic block diagram of a workpiece automatic detection device based on 2D and 3D vision according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1, fig. 1 is a schematic flow chart of automatic workpiece detection based on 2D and 3D vision according to an embodiment of the present application, which specifically includes: steps S101 to S105.
S101, respectively acquiring 2D information and 3D information of a workpiece according to a control instruction;
S102, preprocessing the 2D information and the 3D information to obtain an information processing result; the information processing result comprises standard workpiece data and workpiece data to be detected;
S103, performing feature extraction on the workpiece data to be detected by using a feature detection algorithm to obtain feature information data;
S104, performing feature comparison between the feature information data and the standard workpiece data to obtain a comparison deviation value;
S105, judging whether the comparison deviation value is within a set error threshold: if so, judging the workpiece to be qualified; if not, judging the workpiece to be unqualified and prompting that a manual recheck is needed.
As shown in fig. 2, in step S101, 2D information and 3D information of the workpiece are acquired according to a control instruction input by a user. The 2D information of the workpiece is acquired with a 2D vision device, such as a camera or other image acquisition device, and may be a surface image or an image sequence of the workpiece; the 3D information of the workpiece is acquired with a 3D vision device, such as a laser scanner or a structured-light scanner, and may be point cloud data of the workpiece. The 2D information and the 3D information are used for subsequent feature extraction and comparative analysis; combining the two improves the accuracy and efficiency of automatic workpiece detection.
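For illustration only (this is not part of the claimed method), a minimal Python sketch of loading such 2D image data and 3D point cloud data for the subsequent steps might look as follows; the file names are hypothetical, and the OpenCV and Open3D libraries are assumed to be available:

    import cv2                      # OpenCV, assumed available for 2D images
    import open3d as o3d            # Open3D, assumed available for point clouds

    # Hypothetical acquisition results saved by the 2D camera and the 3D scanner.
    image_2d = cv2.imread("workpiece_surface.png")            # surface image
    cloud_3d = o3d.io.read_point_cloud("workpiece_scan.ply")  # point cloud data

    print(image_2d.shape)        # height x width x channels of the 2D image
    print(len(cloud_3d.points))  # number of points in the 3D scan
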
In one embodiment, before the step S101, the method includes:
judging whether an automatic detection system is used for the first time: if so, respectively acquiring the IP address and the port of the camera shooting the workpiece, and establishing a TCP connection between the automatic detection system and the camera by using the IP address and the port; if not, establishing the TCP connection between the automatic detection system and the camera automatically.
In this embodiment, when the automatic detection system is started, the system determines whether the user is using the device for the first time by checking the stored system settings and configuration information. If first use is determined, the automatic detection system may require the user to provide the IP address and port of the camera that shoots the workpiece; the user may input these through a configuration interface or another interaction mode, and the system may then use a network communication protocol and library functions, such as Socket programming, to establish a TCP connection with the camera according to the IP address and port provided, so that the automatic detection system can communicate data with the camera. If it is not the first use, the camera IP address and port stored previously need not be re-input by the user, and the system can automatically establish the TCP connection from the stored configuration, simplifying the operation flow and improving ease of use.
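A minimal sketch of this connection step, assuming Python's standard socket module and purely hypothetical address values, might be:

    import socket

    def connect_camera(ip: str, port: int, timeout_s: float = 5.0) -> socket.socket:
        # Open a TCP connection to an industrial camera at ip:port.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # IPv4, TCP
        sock.settimeout(timeout_s)
        sock.connect((ip, port))
        return sock

    # First use: the operator supplies the address via the configuration
    # interface; later runs reuse the stored configuration automatically.
    stored_config = {"ip": "192.168.1.10", "port": 8500}  # hypothetical values
    camera_socket = connect_camera(stored_config["ip"], stored_config["port"])
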
In one embodiment, the step S101 includes:
activating the camera to operate by using the control instruction to respectively obtain a 2D image and a 3D image; mapping the texture information on the 2D image onto the 3D image, and performing pixel point association synchronization to obtain image synchronization information; respectively adjusting the image brightness, contrast, hue and saturation of the image synchronization information to obtain color correction parameters; extracting corner and edge features on the 2D image and the 3D image respectively, and performing feature alignment by using a local feature description matching algorithm to obtain a feature alignment result; performing transformation matrix calculation of rotation, translation and affine transformation according to the feature alignment result to obtain feature alignment parameters; compensating and superposing the color correction parameters and the feature alignment parameters to obtain an image processing result; wherein the image processing result includes the 2D information and the 3D information.
In this embodiment, according to the received control instruction, the automatic detection system activates the camera so that it starts to operate, and a 2D image and a 3D image of the workpiece are respectively acquired; the 2D image can be captured by the image sensor of the camera, and the 3D image can be obtained by a method such as laser scanning or structured-light scanning. The texture information on the 2D image is mapped onto the 3D image, and pixel point association synchronization is performed to obtain image synchronization information, providing a basis for subsequent feature alignment and image processing. The brightness, contrast, hue and saturation of the images are adjusted to obtain color correction parameters; adjusting these improves the quality and consistency of the images and ensures consistent color information between them, which facilitates subsequent feature extraction and analysis. Corner and edge features are extracted from the 2D image and the 3D image respectively, and feature alignment is performed with a local feature description matching algorithm to obtain a feature alignment result; by extracting and matching the corner and edge features in the workpiece images, alignment of the 2D image and the 3D image can be achieved. The transformation matrices of rotation, translation and affine transformation are then calculated from the feature alignment result to obtain feature alignment parameters, which describe the rotation, translation and affine transformation relationship between the 2D and 3D images. Finally, by superposing and compensating the color correction parameters and the feature alignment parameters, the image-processed 2D information and 3D information are obtained with better accuracy and consistency, providing a reliable data basis for subsequent workpiece judgment.
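The corner/edge extraction, local feature matching and transformation matrix estimation described above might be sketched as follows, under the illustrative assumption (not stated in the application) that the 3D scan has first been rendered to a 2D intensity view so that both inputs can share one keypoint detector; ORB features and RANSAC-based affine estimation stand in here for whichever local feature description matching algorithm is actually employed:

    import cv2
    import numpy as np

    # Hypothetical inputs: the 2D camera image and a 2D intensity rendering
    # of the 3D scan, so one detector can process both.
    img_2d = cv2.imread("workpiece_surface.png", cv2.IMREAD_GRAYSCALE)
    img_3d_view = cv2.imread("scan_intensity.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)            # corner-like local features
    kp1, des1 = orb.detectAndCompute(img_2d, None)
    kp2, des2 = orb.detectAndCompute(img_3d_view, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches[:50]])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:50]])

    # One 2x3 matrix covering rotation, translation and the affine component.
    M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
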
In step S102, preprocessing the obtained 2D information and 3D information, where the purpose of preprocessing is to eliminate noise, enhance image features, adjust the size and resolution of an image, so as to better extract and analyze features of a workpiece, and obtain an information processing result after preprocessing; the information processing result comprises standard workpiece data and workpiece data to be detected. The standard workpiece data are characteristic description and reference data of known qualified workpieces and are used for comparing and judging the known qualified workpieces with the workpieces to be detected so as to realize accuracy and qualification judgment of the workpieces.
In one embodiment, the step S102 includes:
respectively carrying out image denoising and image enhancement processing on the 2D information to obtain 2D preprocessing data; performing color space conversion processing on the 2D preprocessing data, and performing 2D feature extraction to obtain first detection data; respectively carrying out filtering treatment and downsampling on the 3D information to obtain 3D preprocessing data; carrying out data registration processing on the 3D preprocessing data, and carrying out 3D feature extraction to obtain second detection data; and carrying out feature fusion on the first detection data and the second detection data to obtain the workpiece data to be detected.
In this embodiment, the image denoising process may employ a filtering or denoising algorithm to reduce noise interference in the image, and the image enhancement process may strengthen the features and details of the image by adjusting parameters such as contrast, brightness and sharpness. After image denoising and image enhancement, 2D preprocessing data are obtained; these are optimized image data with better quality and usability. The 2D preprocessing data are converted into a suitable color space, and 2D feature extraction is then performed to extract feature information such as corners, edges and textures in the image, yielding the first detection data. The acquired 3D information is filtered to remove noise and smooth the surfaces; the filtering may employ algorithms such as mean filtering or Gaussian filtering. Downsampling is then performed to reduce the density and number of points in the 3D data and thereby reduce computational complexity. The filtered and downsampled 3D information undergoes data registration, in which multiple 3D data sets are aligned to eliminate deviations caused by differing viewing angles and deformation; 3D feature extraction then extracts feature information such as the shape, curvature and normals of the workpiece surface, yielding the second detection data. Finally, the first detection data and the second detection data undergo feature fusion, combining the 2D and 3D feature information into the final workpiece data to be detected; the feature fusion may employ methods such as weighted fusion or a feature fusion algorithm, so as to exploit the complementary advantages of the 2D and 3D information and improve the accuracy and reliability of automatic workpiece detection.
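A condensed, purely illustrative Python sketch of such a preprocessing pipeline (the concrete filters, voxel size and correspondence distance are assumptions; OpenCV and Open3D are assumed available):

    import cv2
    import open3d as o3d

    # --- 2D branch: denoise, enhance, convert color space ---
    img = cv2.imread("workpiece_surface.png")
    img = cv2.GaussianBlur(img, (5, 5), 0)                  # image denoising
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0).apply(l)             # contrast enhancement
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)              # color space conversion

    # --- 3D branch: filter, downsample, register against a reference scan ---
    cloud = o3d.io.read_point_cloud("workpiece_scan.ply")
    cloud, _ = cloud.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    cloud = cloud.voxel_down_sample(voxel_size=0.5)         # reduce point density
    reference = o3d.io.read_point_cloud("reference_scan.ply")
    icp = o3d.pipelines.registration.registration_icp(
        cloud, reference, max_correspondence_distance=1.0)  # data registration
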
In step S103, the workpiece data to be detected includes the image-processed and feature-fused 2D and 3D information, which has better consistency. A feature detection algorithm is used to extract features from the workpiece data to be detected; this may be a conventional computer vision algorithm, such as Harris corner detection, SIFT or SURF, or a deep-learning-based algorithm, such as a convolutional neural network (CNN). The feature detection algorithm can identify key feature points in an image, and it is used here to extract the feature information in the workpiece data to be detected for subsequent feature comparison and workpiece judgment. By selecting a suitable feature detection algorithm, the method can adapt to different types of workpieces and detection requirements, improving the applicability and flexibility of the system.
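As a small illustration of the classical algorithms named above (the parameter values are assumptions, not taken from the application):

    import cv2
    import numpy as np

    gray = cv2.imread("workpiece_surface.png", cv2.IMREAD_GRAYSCALE)

    # Harris corner response: large positive values mark corner-like points.
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = np.argwhere(harris > 0.01 * harris.max())

    # SIFT keypoints with 128-dimensional local descriptors.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
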
In one embodiment, the step S103 includes:
respectively constructing the characteristics of corner points, edges, textures and color histograms of the workpiece data to be detected by adopting a three-dimensional point cloud recognition algorithm so as to acquire a first numerical descriptor; respectively carrying out local pixel intensity analysis, gradient direction detection and texture feature extraction according to the first numerical descriptor so as to obtain a second numerical descriptor; performing feature matching by using the second numerical descriptor to search for similar feature descriptors within it, and obtaining a feature matching result after successful matching; and solving the perspective transformation pose according to the feature matching result to evaluate the rotation and translation of the camera, so as to obtain the feature information data.
In this embodiment, a three-dimensional point cloud recognition algorithm is adopted to perform feature extraction on the workpiece data to be detected; such an algorithm targets point cloud data and can recognize features such as corners, edges, textures and color histograms in the point cloud. Feature construction of corners, edges, textures and color histograms is performed on the workpiece data to be detected by the three-dimensional point cloud recognition algorithm to obtain a first numerical descriptor, which is a series of numerical descriptors describing the key features of the workpiece. According to the first numerical descriptor, local pixel intensity analysis, gradient direction detection and texture feature extraction are performed on the workpiece data to be detected to obtain a second numerical descriptor, which further extracts the local feature information of the workpiece so that its features can be described in more detail. Feature matching is then performed with the second numerical descriptor, searching for similar feature descriptors within it to obtain a feature matching result; a matching algorithm such as nearest-neighbor matching, the RANSAC algorithm or the PPF algorithm can be used to find feature points similar to known features in the workpiece data to be detected. According to the feature matching result, the perspective transformation pose is solved to evaluate the rotation and translation of the camera; the pose solving can use a pose estimation algorithm, such as the PnP algorithm, to determine the pose of the workpiece in the camera coordinate system. From the pose solving result, the final feature information data are obtained; these help the automatic detection system evaluate the features and pose of the workpiece more accurately and improve the precision and efficiency of automatic workpiece detection.
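The PnP-based pose recovery mentioned above might be sketched as below, with placeholder correspondences and hypothetical camera intrinsics (OpenCV assumed available; real inputs would come from the feature matching result):

    import cv2
    import numpy as np

    # Placeholder data: N matched reference points on the workpiece (3D, from
    # the point cloud) and their corresponding 2D image projections.
    object_points = np.random.rand(6, 3).astype(np.float32)
    image_points = np.random.rand(6, 2).astype(np.float32)
    K = np.array([[800, 0, 320],
                  [0, 800, 240],
                  [0, 0, 1]], dtype=np.float32)   # hypothetical intrinsics

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the recovered pose
    # rvec/tvec give the rotation and translation evaluated from the matches.
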
In step S104, the feature information data includes the feature descriptors extracted from the workpiece data to be detected, describing the key features of the workpiece, while the standard workpiece data are predetermined comparison data with known feature information. Feature comparison between the feature information data and the standard workpiece data estimates the degree of difference between the features of the workpiece data to be detected and those of the standard workpiece data based on a similarity measure, such as the Euclidean distance or a correlation coefficient; the comparison yields a comparison deviation value representing the degree of difference between the workpiece data to be detected and the standard workpiece data.
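Both similarity measures could be computed as in the following sketch, using hypothetical feature vectors (NumPy assumed available):

    import numpy as np

    def euclidean_distance(f1: np.ndarray, f2: np.ndarray) -> float:
        return float(np.linalg.norm(f1 - f2))

    def correlation(f1: np.ndarray, f2: np.ndarray) -> float:
        return float(np.corrcoef(f1, f2)[0, 1])

    detected = np.array([10.2, 45.1, 0.98])   # hypothetical feature vector
    standard = np.array([10.0, 45.0, 1.00])   # corresponding reference vector
    print(euclidean_distance(detected, standard))
    print(correlation(detected, standard))
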
In one embodiment, the step S104 includes:
performing relative difference statistics on the size data of the feature information data and of the standard workpiece data to obtain a first difference value; performing relative difference statistics on the angle data of the feature information data and of the standard workpiece data to obtain a second difference value; performing relative difference statistics on the shape data of the feature information data and of the standard workpiece data to obtain a third difference value; performing relative difference statistics on the color data of the feature information data and of the standard workpiece data to obtain a fourth difference value; and performing a weighted average of the first difference value, the second difference value, the third difference value and the fourth difference value to obtain the comparison deviation value.
In this embodiment, the feature information data includes the feature descriptors extracted from the workpiece data to be detected. Relative difference statistics are computed in turn over the size data, angle data, shape data and color data of the feature information data and the standard workpiece data; the difference statistics may be based on statistical methods, such as mean, variance and correlation, to calculate the degree of difference between the two in each aspect. Through these statistics, a first, second, third and fourth difference value are obtained, representing the differences in size, angle, shape and color respectively. A weighted average of the four difference values then yields the comparison deviation value, where the weighting assigns each difference value a weight according to the importance of the corresponding factor.
In step S105, the comparison deviation value represents the degree of difference between the workpiece data to be detected and the standard workpiece data. In implementation, an error threshold must be preset; it represents the maximum allowable degree of difference, beyond which the workpiece is considered unqualified. The qualification of the workpiece is judged by comparing the comparison deviation value with the error threshold: if the comparison deviation value is within the set error threshold, the workpiece is judged to be qualified; if it exceeds the set error threshold, the workpiece is judged to be unqualified. When a workpiece is judged unqualified, the system issues a corresponding prompt indicating that a manual recheck is needed; the prompt can be realized through interface display, audible reminder or similar means, so that operators can perform the necessary manual recheck in time, guiding timely manual intervention and ensuring the reliability and stability of product quality.
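Steps S104 and S105 might together be sketched as below; the attribute values, weights and error threshold are all hypothetical and would in practice be tuned per workpiece type:

    def relative_difference(measured: float, standard: float) -> float:
        # Relative deviation of a measured attribute from its standard value.
        return abs(measured - standard) / abs(standard)

    # Hypothetical measured vs. standard attribute values for one workpiece.
    measured = {"size": 50.5, "angle": 90.8, "shape": 0.97, "color": 0.99}
    standard = {"size": 50.0, "angle": 90.0, "shape": 1.00, "color": 1.00}
    weights = {"size": 0.4, "angle": 0.3, "shape": 0.2, "color": 0.1}  # sum to 1

    # Weighted average of the four difference values (step S104).
    deviation = sum(weights[k] * relative_difference(measured[k], standard[k])
                    for k in weights)

    ERROR_THRESHOLD = 0.02   # hypothetical limit

    # Qualification judgment (step S105).
    if deviation <= ERROR_THRESHOLD:
        print("qualified workpiece")
    else:
        print("unqualified workpiece - manual recheck required")
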
In an embodiment, the 2D information includes a gray scale image and a color image for providing two-dimensional information of the workpiece; the 3D information comprises point cloud data and a three-dimensional model and is used for providing three-dimensional information of the workpiece.
In this embodiment, the 2D information mainly includes a gray scale image and a color image. A first use of the gray scale image is to obtain brightness information of the workpiece surface; brightness changes can reflect the height fluctuation or texture features of the surface, and a Canny edge detection algorithm can be used to extract the contour edge information of the surface so as to obtain its shape and geometric features. A second use of the gray scale image is to obtain texture information of the workpiece surface, for which a gray-level co-occurrence matrix (GLCM) texture analysis method can be used; this is a statistical method for describing the texture features of a gray scale image that captures texture information by calculating the frequency and distribution of gray-value pairs between pixels. Typically, the GLCM is the co-occurrence matrix of pixel pairs in a specific direction, and the texture features of the image are then represented by statistical measures such as contrast, correlation, energy and entropy. A first use of the color image is to provide richer surface information; color changes can reflect different materials, coatings or surface states of the workpiece surface. Color space conversion (RGB to HSV) can be used to separate the color information, after which the color distribution and texture features of the surface can be obtained by methods such as color distribution analysis and color texture extraction. A second use of the color image is to obtain texture information of the workpiece surface; the color and texture information in the color image can be combined to extract richer texture features, and a histogram of oriented gradients (HOG, a method for extracting image features that is mainly used to describe and identify shape and texture features) can be applied to analyze the texture information in the color image and obtain the texture features of the surface. The 3D information mainly includes point cloud data and a three-dimensional model, and the geometric form information of the workpiece can be extracted by processing the acquired point cloud data: filtering and denoising the point cloud removes noise and unnecessary points to obtain cleaner point cloud data; aligning multiple point cloud data sets yields a more complete and consistent workpiece geometry; and surface reconstruction from the point cloud data generates a smooth three-dimensional model, making the geometric form of the workpiece more intuitive and easier to analyze.
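For illustration, the GLCM statistics and HOG features named above could be computed with scikit-image (assumed available; the spellings graycomatrix/graycoprops follow scikit-image 0.19 and later):

    import cv2
    from skimage.feature import graycomatrix, graycoprops, hog

    gray = cv2.imread("workpiece_surface.png", cv2.IMREAD_GRAYSCALE)

    # GLCM over gray-value pairs at distance 1 in the horizontal direction.
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]   # texture statistics
    energy = graycoprops(glcm, "energy")[0, 0]

    # HOG: gradient-orientation histograms describing shape/texture structure.
    hog_features = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))
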
In summary, by comprehensively using the 2D information and the 3D information, the application achieves higher precision and accuracy, improves the efficiency of workpiece detection while reducing detection cost, and can be widely applied in fields such as industrial production with good practical value; it is not limited by the shape and size of the workpiece and can be applied to workpieces of different shapes and sizes.
Referring to fig. 3 in combination, fig. 3 is a schematic block diagram of a workpiece automatic detection device based on 2D and 3D vision, and the workpiece automatic detection device 300 based on 2D and 3D vision according to an embodiment of the present application includes:
an information acquisition unit 301 configured to acquire 2D information and 3D information of a workpiece according to control instructions, respectively;
an information processing unit 302, configured to pre-process the 2D information and the 3D information to obtain an information processing result; the information processing result comprises standard workpiece data and workpiece data to be detected;
an information extraction unit 303, configured to perform feature extraction on the workpiece data to be detected by using a feature detection algorithm, so as to obtain feature information data;
the information comparison unit 304 is configured to perform feature comparison on the feature information data and the standard workpiece data to obtain a comparison deviation value;
an information judging unit 305, configured to judge whether the comparison deviation value is within a set error threshold: if so, judge the workpiece to be qualified; if not, judge the workpiece to be unqualified and prompt that a manual recheck is needed.
In this embodiment, the information acquisition unit 301 acquires the 2D information and the 3D information of the workpiece respectively according to the control instruction; the information processing unit 302 preprocesses the 2D information and the 3D information to obtain an information processing result, the information processing result comprising standard workpiece data and workpiece data to be detected; the information extraction unit 303 performs feature extraction on the workpiece data to be detected by using a feature detection algorithm to obtain feature information data; the information comparison unit 304 performs feature comparison between the feature information data and the standard workpiece data to obtain a comparison deviation value; and the information judging unit 305 judges whether the comparison deviation value is within a set error threshold: if so, the workpiece is judged to be qualified; if not, the workpiece is judged to be unqualified and a manual recheck is prompted.
In an embodiment, before the information obtaining unit 301, the method includes:
the establishing unit is used for judging whether the automatic detection system is used for the first time: if so, respectively acquiring the IP address and the port of the camera shooting the workpiece, and establishing a TCP connection between the automatic detection system and the camera by using the IP address and the port; if not, establishing the TCP connection between the automatic detection system and the camera automatically.
In an embodiment, the information obtaining unit 301 includes:
the activating unit is used for activating the camera to operate by utilizing the control instruction to respectively obtain a 2D image and a 3D image;
the mapping unit is used for mapping the texture information on the 2D image onto the 3D image and carrying out pixel point association synchronization to obtain image synchronization information;
the correction unit is used for respectively adjusting the image brightness, contrast, hue and saturation of the image synchronization information to obtain color correction parameters;
the alignment unit is used for extracting corner points and edge features on the 2D image and the 3D image respectively, and performing feature alignment by utilizing a local feature description matching algorithm to obtain feature alignment results;
the transformation unit is used for carrying out transformation matrix calculation of rotation, translation and affine transformation according to the characteristic alignment result to obtain characteristic alignment parameters;
the superposition unit is used for compensating and superposing the color correction parameters and the characteristic alignment parameters to obtain an image processing result; wherein the image processing result includes the 2D information and 3D information.
In one embodiment, the information processing unit 302 includes:
the enhancement unit is used for respectively carrying out image denoising and image enhancement processing on the 2D information to obtain 2D preprocessing data;
the color unit is used for performing color space conversion processing on the 2D preprocessing data and performing 2D feature extraction to obtain first detection data;
the filtering unit is used for respectively carrying out filtering processing and downsampling on the 3D information to obtain 3D preprocessing data;
the registration unit is used for carrying out data registration processing on the 3D preprocessing data and carrying out 3D feature extraction to obtain second detection data;
and the fusion unit is used for carrying out feature fusion on the first detection data and the second detection data to obtain the workpiece data to be detected.
In an embodiment, the information extraction unit 303 includes:
the construction unit is used for respectively carrying out characteristic construction of corner points, edges, textures and color histograms on the workpiece data to be detected by adopting a three-dimensional point cloud recognition algorithm so as to acquire a first numerical descriptor;
the analysis unit is used for respectively performing local pixel intensity analysis, gradient direction detection and texture feature extraction according to the first numerical descriptor, so as to obtain a second numerical descriptor;
the matching unit is used for performing feature matching with the second numerical descriptor to search for similar feature descriptors within it, obtaining a feature matching result after successful matching;
and the pose unit is used for solving the perspective transformation pose according to the feature matching result to evaluate the rotation and translation of the camera, so as to obtain the feature information data.
In an embodiment, the information comparing unit 304 includes:
the size unit is used for performing relative difference statistics on the size data of the feature information data and of the standard workpiece data to obtain a first difference value;
the angle unit is used for performing relative difference statistics on the angle data of the feature information data and of the standard workpiece data to obtain a second difference value;
the shape unit is used for performing relative difference statistics on the shape data of the feature information data and of the standard workpiece data to obtain a third difference value;
the color unit is used for performing relative difference statistics on the color data of the feature information data and of the standard workpiece data to obtain a fourth difference value;
and the weighting unit is used for performing a weighted average of the first difference value, the second difference value, the third difference value and the fourth difference value to obtain the comparison deviation value.
In an embodiment, the 2D information includes a gray scale image and a color image for providing two-dimensional information of the workpiece; the 3D information comprises point cloud data and a three-dimensional model and is used for providing three-dimensional information of the workpiece.
Since the embodiments of the apparatus portion and the embodiments of the method portion correspond to each other, the embodiments of the apparatus portion are referred to the description of the embodiments of the method portion, and are not repeated herein.
The embodiment of the present application also provides a computer readable storage medium having a computer program stored thereon, which when executed can implement the steps provided in the above embodiments. The storage medium may include: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
The embodiment of the application also provides a computer device, which can comprise a memory and a processor, wherein the memory stores a computer program, and the processor can realize the steps provided by the embodiment when calling the computer program in the memory. Of course, the computer device may also include various network interfaces, power supplies, and the like.
In this specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others; for the same or similar parts among the embodiments, reference may be made between them. As for the system disclosed in an embodiment, since it corresponds to the method disclosed in the embodiment, its description is relatively brief, and the relevant points can be found in the description of the method section. It should be noted that various modifications and adaptations of the application can be made by those of ordinary skill in the art without departing from the principles of the application, and such modifications and adaptations are intended to fall within the scope of the application as defined by the appended claims.
It should also be noted that in this specification, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. The automatic workpiece detection method based on 2D and 3D vision is characterized by comprising the following steps of:
respectively acquiring 2D information and 3D information of the workpiece according to the control instruction;
preprocessing the 2D information and the 3D information to obtain an information processing result; the information processing result comprises standard workpiece data and workpiece data to be detected;
extracting features of the workpiece data to be detected by using a feature detection algorithm to obtain feature information data;
performing feature comparison between the feature information data and the standard workpiece data to obtain a comparison deviation value;
judging whether the comparison deviation value is within a set error threshold: if so, judging the workpiece to be qualified; if not, judging the workpiece to be unqualified and prompting that a manual recheck is needed.
2. The 2D and 3D vision-based workpiece automatic detection method according to claim 1, comprising, before the 2D information and the 3D information of the workpiece are acquired respectively according to control instructions:
judging whether an automatic detection system is used for the first time: if so, respectively acquiring the IP address and the port of a camera shooting the workpiece, and establishing a TCP connection between the automatic detection system and the camera by using the IP address and the port; if not, establishing the TCP connection between the automatic detection system and the camera automatically.
3. The 2D and 3D vision-based workpiece automatic detection method according to claim 1, wherein the acquiring 2D information and 3D information of the workpiece according to the control instruction respectively includes:
activating the camera to operate by utilizing the control instruction to respectively obtain a 2D image and a 3D image;
mapping the texture information on the 2D image to the 3D image, and carrying out pixel point association synchronization to obtain image synchronization information;
respectively adjusting the image brightness, contrast, hue and saturation of the image synchronization information to obtain color correction parameters;
extracting corner points and edge features on the 2D image and the 3D image respectively, and carrying out feature alignment by utilizing a local feature description matching algorithm to obtain feature alignment results;
performing transformation matrix calculation of rotation, translation and affine transformation according to the feature alignment result to obtain feature alignment parameters;
compensating and superposing the color correction parameters and the characteristic alignment parameters to obtain an image processing result; wherein the image processing result includes the 2D information and 3D information.
4. The method for automatically detecting workpieces based on 2D and 3D vision according to claim 1, wherein the preprocessing the 2D information and the 3D information to obtain information processing results comprises:
respectively carrying out image denoising and image enhancement processing on the 2D information to obtain 2D preprocessing data;
performing color space conversion processing on the 2D preprocessing data, and performing 2D feature extraction to obtain first detection data;
respectively carrying out filtering treatment and downsampling on the 3D information to obtain 3D preprocessing data;
carrying out data registration processing on the 3D preprocessing data, and carrying out 3D feature extraction to obtain second detection data;
and carrying out feature fusion on the first detection data and the second detection data to obtain the workpiece data to be detected.
5. The method for automatically detecting workpieces based on 2D and 3D vision according to claim 1, wherein the feature extraction of the workpiece data to be detected by using a feature detection algorithm to obtain feature information data comprises:
respectively constructing the characteristics of corner points, edges, textures and color histograms of the workpiece data to be detected by adopting a three-dimensional point cloud recognition algorithm so as to acquire a first numerical descriptor;
respectively carrying out local pixel intensity analysis, gradient direction detection and texture feature extraction according to the first numerical descriptor so as to obtain a second numerical descriptor;
performing feature matching by using the second numerical descriptor to search for similar feature descriptors within it, and obtaining a feature matching result after successful matching;
and solving the perspective transformation pose according to the feature matching result to evaluate the rotation and translation of the camera, so as to obtain the feature information data.
6. The method for automatically detecting workpieces based on 2D and 3D vision according to claim 1, wherein the feature comparison between the feature information data and the standard workpiece data to obtain a comparison deviation value comprises:
performing relative difference statistics on the size data of the feature information data and of the standard workpiece data to obtain a first difference value;
performing relative difference statistics on the angle data of the feature information data and of the standard workpiece data to obtain a second difference value;
performing relative difference statistics on the shape data of the feature information data and of the standard workpiece data to obtain a third difference value;
performing relative difference statistics on the color data of the feature information data and of the standard workpiece data to obtain a fourth difference value;
and performing a weighted average of the first difference value, the second difference value, the third difference value and the fourth difference value to obtain the comparison deviation value.
7. The 2D and 3D vision-based workpiece automatic detection method according to claim 1, wherein the 2D information includes a gray scale image and a color image for providing two-dimensional information of the workpiece; the 3D information comprises point cloud data and a three-dimensional model and is used for providing three-dimensional information of the workpiece.
8. 2D and 3D vision-based workpiece automatic detection device, characterized by comprising:
the information acquisition unit is used for respectively acquiring 2D information and 3D information of the workpiece according to the control instruction;
the information processing unit is used for preprocessing the 2D information and the 3D information to obtain an information processing result; the information processing result comprises standard workpiece data and workpiece data to be detected;
the information extraction unit is used for carrying out feature extraction on the workpiece data to be detected by utilizing a feature detection algorithm to obtain feature information data;
the information comparison unit is used for performing feature comparison between the feature information data and the standard workpiece data to obtain a comparison deviation value;
the information judging unit is used for judging whether the comparison deviation value is within a set error threshold: if so, judging the workpiece to be qualified; if not, judging the workpiece to be unqualified and prompting that a manual recheck is needed.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the 2D and 3D vision-based workpiece automatic detection method as claimed in any one of claims 1 to 7 when the computer program is executed by the processor.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the 2D and 3D vision-based workpiece automatic detection method according to any of claims 1 to 7.
CN202310624896.XA 2023-05-30 2023-05-30 Workpiece automatic detection method and device based on 2D and 3D vision and related medium thereof Pending CN116625249A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310624896.XA CN116625249A (en) 2023-05-30 2023-05-30 Workpiece automatic detection method and device based on 2D and 3D vision and related medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310624896.XA CN116625249A (en) 2023-05-30 2023-05-30 Workpiece automatic detection method and device based on 2D and 3D vision and related medium thereof

Publications (1)

Publication Number Publication Date
CN116625249A true CN116625249A (en) 2023-08-22

Family

ID=87636357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310624896.XA Pending CN116625249A (en) 2023-05-30 2023-05-30 Workpiece automatic detection method and device based on 2D and 3D vision and related medium thereof

Country Status (1)

Country Link
CN (1) CN116625249A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118071814A (en) * 2024-04-17 2024-05-24 山东工程职业技术大学 A precision device size measurement method and measuring device based on machine vision
CN118376617A (en) * 2024-06-21 2024-07-23 志豪微电子(惠州)有限公司 IPM module detection device and IPM module detection method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination