CN116563131A - Method for detecting assembly state of tiny parts of complex electromechanical product based on vision - Google Patents

Method for detecting assembly state of tiny parts of complex electromechanical product based on vision

Info

Publication number
CN116563131A
CN116563131A (application CN202310213374.0A)
Authority
CN
China
Prior art keywords: image, target, images, class, type
Prior art date
Legal status
Pending
Application number
CN202310213374.0A
Other languages
Chinese (zh)
Inventor
夏梦铭
刘庭煜
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202310213374.0A priority Critical patent/CN116563131A/en
Publication of CN116563131A publication Critical patent/CN116563131A/en
Pending legal-status Critical Current

Classifications

    • G06T 5/80
    • G06F 17/11 — Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F 17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06N 3/08 — Neural networks; learning methods
    • G06T 3/4038 — Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 5/40 — Image enhancement or restoration by the use of histogram techniques
    • G06T 5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06V 10/764 — Arrangements for image or video recognition or understanding using classification, e.g. of video objects
    • G06T 2207/20221 — Image fusion; image merging
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Abstract

The invention belongs to the field of electromechanical product assembly and relates to a vision-based method for detecting the assembly state of tiny parts of a complex electromechanical product. The method comprises the following steps: image acquisition: capturing a plurality of images that together completely cover the complex electromechanical product; performing tilt correction on the original images and stitching them; acquiring the image of each assembly unit; detecting the state of the first-class target objects, which are 10-50 mm in size and are the parent parts of the second-class target objects, by inputting the cropped assembly-unit image into a trained target detection model; and extracting the images of the first-class target objects, extracting the images of the second-class target objects according to their positions on the first-class target images, and detecting the assembly state of the second-class target objects, which are tiny parts less than 10 mm in diameter. The invention reduces mis-assembly and missed assembly of parts on complex electromechanical products and improves assembly quality.

Description

Method for detecting assembly state of tiny parts of complex electromechanical product based on vision
Technical Field
The invention belongs to the field of electromechanical product assembly, and particularly relates to a vision-based detection method for the assembly state of tiny parts of a complex electromechanical product.
Background
Many processes in electromechanical product manufacturing are automated, but in complex, narrow spaces where robotic arms are difficult to operate, manual assembly is still required, so missed or incorrect assembly is hard to avoid; error-proofing inspection during the assembly process is therefore very important. Traditional manual visual inspection is easily affected by the subjective factors of the inspectors, so the reliability and stability of the results cannot be guaranteed, which in turn affects the assembly quality of the product. In recent years, image recognition technology has gradually been applied to assembly quality inspection, but current inspection methods are mostly aimed at products or structural members that are small in size and simple in structure. For complex electromechanical products 3-5 meters in size or larger, existing assembly state detection methods cannot quickly and accurately detect the parts on the assembly. Taking a radar array face as an example, the size span of the parts on the face is large: the plane size of an assembly unit is about 30 cm × 40 cm, while the apparent diameter of the assembled filter fixing pieces is only 6-10 mm. Larger targets can be recognized by existing, relatively mature algorithms, whereas recognition of small targets has always been a difficulty in object detection.
Disclosure of Invention
The invention aims to provide a vision-based method for detecting the assembly state of tiny parts of a complex electromechanical product, which reduces mis-assembly and missed assembly of parts on the complex electromechanical product and improves assembly quality.
The technical solution for realizing the purpose of the invention is as follows: a vision-based method for detecting the assembly state of tiny parts of a complex electromechanical product, comprising the following steps:
Step (1): image acquisition: capture a plurality of images that together completely cover the complex electromechanical product;
Step (2): perform tilt correction on the original images acquired in step (1), and perform image stitching;
Step (3): acquisition of the assembly unit image: input the complete product image into a trained target detection model, obtain the image position of each assembly unit, and crop out that partial image;
Step (4): acquisition of the first-class target object image: the first-class target objects are 10-50 mm in size and are the parent parts of the second-class target objects; input the assembly unit image cropped in step (3) into a trained target detection model and detect the state of the first-class target objects;
Step (5): detection of the assembly state of the second-class target objects: extract the images of the first-class target objects, extract the images of the second-class target objects according to their position information on the first-class target object images, and detect the assembly state of the second-class target objects, which are tiny parts less than 10 mm in diameter.
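A minimal sketch of step (5), not part of the patent text: the second-class regions can be cut out of a first-class target crop using known relative positions. The fractional offsets, the window size, and the function name are illustrative assumptions.

```python
# Hypothetical relative positions of the fixing pieces on a filter crop, as fractions of its size.
FASTENER_OFFSETS = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.9, 0.9)]
WINDOW = 0.15  # half-size of each crop window, as a fraction of the filter image size

def extract_fastener_crops(filter_img):
    """Cut one small patch per expected fixing-piece position from a filter image (a NumPy array)."""
    h, w = filter_img.shape[:2]
    crops = []
    for fx, fy in FASTENER_OFFSETS:
        cx, cy = int(fx * w), int(fy * h)
        dx, dy = int(WINDOW * w), int(WINDOW * h)
        crops.append(filter_img[max(cy - dy, 0):min(cy + dy, h),
                                max(cx - dx, 0):min(cx + dx, w)])
    return crops
```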
Further, in step (1), a plurality of image acquisition devices are used for image acquisition, and the distances from the devices to the plane being captured are kept consistent; images acquired by adjacent devices have an overlapping area, which accounts for 1/6 to 1/5 of the image.
Further, the parameters of the acquisition devices are determined according to the target detection accuracy Pv, where Pv is the ratio of the number of pixels Pn occupied by the object to be detected on the image to its actual dimension h; the target detection accuracy Pv should be greater than 3 pixels per millimeter.
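As an illustrative calculation (assumed numbers, not values from the invention): with Pv = Pn / h and the requirement Pv > 3 pixels per millimeter, a camera imaging an assumed 400 mm-wide assembly unit needs at least 1200 pixels across that dimension. A minimal sketch:

```python
import math

def required_pixels(field_of_view_mm: float, accuracy_px_per_mm: float = 3.0) -> int:
    """Minimum pixels along one axis so that P_v = P_n / h meets the accuracy requirement."""
    return math.ceil(field_of_view_mm * accuracy_px_per_mm)

print(required_pixels(400.0))  # -> 1200 pixels for a hypothetical 400 mm field of view
```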
Further, the specific method of tilt correction in step (2) is as follows:
Step (21): detect the straight lines in the image by Hough transform, based on the original image;
Step (22): calculate the inclination angle of each straight line and compute the average of these angles;
Step (23): rotate the original image by the obtained average inclination angle, so that the outer frame of the complex electromechanical product is approximately horizontal or vertical in the image.
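A minimal sketch of steps (21)-(23) with OpenCV's probabilistic Hough transform; the Canny thresholds, the Hough parameters, and the near-horizontal filter are illustrative assumptions, not values from the invention.

```python
import cv2
import numpy as np

def correct_tilt(image):
    """Estimate the dominant tilt from Hough lines and rotate the image to compensate."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=120,
                            minLineLength=image.shape[1] // 4, maxLineGap=10)
    if lines is None:
        return image  # no lines detected; leave the image unchanged
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    # keep near-horizontal lines only, so vertical frame edges do not skew the mean
    angles = [a for a in angles if abs(a) < 30]
    if not angles:
        return image
    mean_angle = float(np.mean(angles))
    h, w = image.shape[:2]
    # rotate by the mean angle to deskew (flip the sign if your angle convention differs)
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), mean_angle, 1.0)
    return cv2.warpAffine(image, rot, (w, h), flags=cv2.INTER_LINEAR)
```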
Further, the specific method of image stitching in step (2) is as follows:
Step (24): extract image feature points, using SURF as the feature point extraction algorithm;
Step (25): match the image feature points, further screening the feature points in the two pictures to obtain better image matching points;
Step (26): image registration: obtain the matching point set of the two images to be stitched from step (25), and transform the two images into the same coordinate system;
an optimal homography matrix H, of size 3 × 3, is sought with the RANSAC algorithm; the equation is:
s·[x', y', 1]^T = H·[x, y, 1]^T, with H = [[h11, h12, h13], [h21, h22, h23], [h31, h32, h33]]
where (x, y) is a corner point position in the target image, (x', y') is the corresponding corner point position in the scene image, and s is a scale parameter; the RANSAC algorithm randomly draws 4 samples from the matched data set, ensuring that the 4 samples are not collinear, and computes a homography matrix; all data are then tested with this model, and the number of data points satisfying the model and the projection error, i.e. the cost function, are computed; the optimal model is the one whose cost function is minimal;
the cost function is computed as:
cost = Σ_i [ (x'_i − (h11·x_i + h12·y_i + h13) / (h31·x_i + h32·y_i + h33))^2 + (y'_i − (h21·x_i + h22·y_i + h23) / (h31·x_i + h32·y_i + h33))^2 ]
Step (27): image fusion: transform the images with the homography matrix obtained from image registration, and then stitch the transformed images.
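A minimal sketch of steps (24)-(27), assuming OpenCV built with the contrib xfeatures2d module for SURF; the ratio-test threshold, RANSAC reprojection threshold, canvas size, and plain overwrite blending are simplifying assumptions.

```python
import cv2
import numpy as np

def stitch_pair(scene, target):
    """Register `target` onto `scene` with SURF features and a RANSAC homography, then combine."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib (non-free)
    k1, d1 = surf.detectAndCompute(cv2.cvtColor(target, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = surf.detectAndCompute(cv2.cvtColor(scene, cv2.COLOR_BGR2GRAY), None)

    # screen raw matches with Lowe's ratio test to keep better correspondences (step 25)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.7 * n.distance]

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # 4-point samples drawn internally (step 26)

    h, w = scene.shape[:2]
    canvas = cv2.warpPerspective(target, H, (w * 2, h))    # generous canvas; crop black borders later
    canvas[0:h, 0:w] = scene                               # naive overwrite in the overlap (step 27)
    return canvas
```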
Further, the training of the target detection model in step (3) is specifically:
acquire a plurality of complete images of the complex electromechanical product;
label each complete image of the complex electromechanical product, the labeled content being the image position of each assembly unit in the form (Px, Py, m, n), where Px and Py are the coordinates of the center point of the assembly unit on the image, and m and n are the ratios of the horizontal and vertical dimensions of the labeling box to the horizontal and vertical dimensions of the image respectively;
based on the labeled complete images of the complex electromechanical product, train an assembly unit target detection model.
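As a small illustration of the (Px, Py, m, n) labeling form (the image size and box coordinates below are hypothetical, not data from the invention):

```python
def to_center_format(box, img_w, img_h):
    """Convert a pixel box (x_min, y_min, x_max, y_max) into (Px, Py, m, n):
    center-point coordinates plus box width/height as fractions of the image size."""
    x_min, y_min, x_max, y_max = box
    px = (x_min + x_max) / 2.0
    py = (y_min + y_max) / 2.0
    m = (x_max - x_min) / img_w
    n = (y_max - y_min) / img_h
    return px, py, m, n

# hypothetical assembly-unit box on a 4000 x 3000 px stitched product image
print(to_center_format((1200, 800, 2000, 1400), 4000, 3000))  # -> (1600.0, 1100.0, 0.2, 0.2)
```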
Further, the training of the target detection model in step (4) is specifically:
using the cropped assembly unit images, label the first-class target objects, the labeled content being the image position of each first-class target object in the form (Px, Py, m, n), where Px and Py are the coordinates of the center point of the first-class target object on the image, and m and n are the ratios of the horizontal and vertical dimensions of the labeling box to the horizontal and vertical dimensions of the image respectively;
based on the labeled assembly unit images, train a first-class target object detection model.
Furthermore, the target detection models all adopt a convolutional neural network as the network backbone.
Further, the detection of the assembly state of the second-class target objects further comprises:
acquiring a plurality of images of correctly installed, incorrectly installed and uninstalled second-class target objects as training samples;
extracting key point information of the second-class target objects; training a classification model, and classifying the image at each second-class target object position according to the key point information; extracting key point information means extracting image features pixel by pixel, implemented as follows:
perform size normalization on the second-class target object image;
compute and accumulate the histograms of gradient directions of local regions of the image, i.e. HOG features, as the key point information; the image is first divided into small connected regions, then the gradient or edge direction histogram of all pixels in each connected region unit is extracted, and finally the histograms are combined to form the key point information; the directional gradients are computed as:
Gx(x, y) = H(x+1, y) − H(x−1, y)
Gy(x, y) = H(x, y+1) − H(x, y−1)
where Gx(x, y) is the horizontal gradient at pixel (x, y), Gy(x, y) is the vertical gradient at pixel (x, y), and H(x, y) is the value of pixel (x, y) after gamma normalization;
the classification of the images according to the key point information is implemented as follows:
after the image key point information is extracted, the images are classified with a support vector machine, whose basic model is a maximum-margin linear classifier in the feature space; the problem is finally converted into a convex quadratic programming problem and solved.
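A minimal sketch of the HOG-plus-SVM classifier described above, assuming grayscale crops and the scikit-image / scikit-learn libraries; the normalization size, HOG cell parameters, and class encoding are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def hog_keypoints(patch, size=(64, 64)):
    """Size-normalize a grayscale fastener patch and return its HOG descriptor (the key point information)."""
    patch = resize(patch, size, anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def train_state_classifier(patches, labels):
    """patches: grayscale screw-position crops; labels: 0 = correct, 1 = incorrect, 2 = not installed."""
    features = np.stack([hog_keypoints(p) for p in patches])
    clf = SVC(kernel="linear")  # maximum-margin linear classifier, solved internally as a convex QP
    clf.fit(features, labels)
    return clf
```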
The method is applied to a large radar array face, where the first-class target objects are filters and the second-class target objects are the filter fixing pieces, namely fastening screws and positioning screws.
Compared with the prior art, the invention has the following notable advantages:
the image of the complex electromechanical product only needs to be collected once, so operation is simple and convenient; the product image is tilt-corrected, which reduces false detections caused by placement deviations of the acquisition equipment; the targets to be detected are recognized by a convolutional neural network, which improves recognition speed and accuracy; and the second class of detection targets is detected through the relation between parts, which improves the detection capability for small-size targets.
Drawings
FIG. 1 is a flow chart of the detection of the assembly state of tiny parts on the large radar array face according to the invention.
FIG. 2 is a diagram of an exemplary complex electromechanical product to be tested according to the present invention.
FIG. 3 is a diagram showing an example of the assembly unit to be tested according to the present invention.
FIG. 4 is an image of a correctly assembled part to be inspected according to the present invention.
FIG. 5 is an image of an incorrectly assembled part to be inspected according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a method for detecting the assembly state of tiny parts of a complex electromechanical product based on vision according to an embodiment of the present application includes: taking a plurality of partial pictures of the complex electromechanical product, ensuring complete coverage of the whole product, and correcting the pictures; stitching the corrected partial photos of the large radar array face into a complete image of the complex electromechanical product; sending the complete product image into a pre-trained target detection model, detecting the assembly units, extracting those partial images, and proceeding to the next operation; sending each cropped assembly unit image into a pre-trained target detection model, detecting the first-class target objects in it, extracting those partial images, and proceeding to the next operation; and extracting the images of the second-class target objects according to the relative positions of the second-class and first-class target objects, and detecting the assembly state of the second-class target objects.
In this embodiment, the complex electromechanical product to be detected is exemplified by a large-scale radar array, and an example diagram is shown in fig. 2. In the assembly process, the clamping tool (number 2 in the figure) places the radar array surface (number 1 in the figure) perpendicular to the ground, and the object to be detected is a radar assembly unit (number 3 in the figure) and parts inside the radar assembly unit.
The internal structure of the radar mounting unit is shown in fig. 3, and targets to be detected in the radar mounting unit are a filter (No. 4 in the figure), a fastening screw (No. 5 in the figure) and a positioning screw (No. 6 in the figure).
The first class of detection targets is the filters, and the second class of detection targets is the filter fixing pieces, namely the fastening screws and positioning screws. FIG. 4 shows a sample image of an example filter with its fixing pieces correctly installed, and FIG. 5 shows a sample image with the fixing pieces incorrectly installed; the red box marks the position where a part is incorrectly installed.
In a specific implementation process, the acquisition of the original image of the radar array further comprises:
determining the resolution, interface, sensor type, etc. of the acquisition equipment according to the detection accuracy;
when multi-device acquisition is adopted, the distances between a plurality of image acquisition devices and a plane to be acquired are kept consistent.
In a specific implementation process, the inclination correction method for the original image comprises the following steps:
detecting straight lines in the image through Hough transformation based on the original image;
calculating the inclination angle of each straight line and calculating the average value of the inclination angles;
the original image is rotated by the calculated tilt angle average. Thereby obtaining a corrected image.
In a specific implementation process, the image stitching operation includes:
and extracting image characteristic points. The extracted features are used for matching the images, and the features are considered to be selected as the basis of registration according to the characteristics of the images to be registered;
and matching the image characteristic points. The purpose of image feature point matching is to further screen feature points in two pictures, and more excellent image matching points are obtained;
and (5) image registration. Obtaining a matching point set of two images to be spliced through the operation of the previous step, and then converting the two images to the same coordinate;
and (5) image fusion. The method comprises the steps of transforming the image by a homography matrix obtained by image registration, and then splicing the transformed image.
In the implementation process, three aspects are mainly considered in the selection of the feature points:
first, the features selected must exist in both images to be registered, since only features that can be extracted from both images can be used for matching;
second, a sufficient number of features must be extractable from both images at the same time;
third, the selected features must have good uniqueness, which facilitates the subsequent feature matching operation.
In the implementation process, as many sample images as possible should be collected for training to ensure model accuracy. Once enough samples are available, the network structure and weights are adjusted so that the network output matches the expected values. Provided the sample data are of good quality and well balanced across classes, the scale of the sample data determines the accuracy of the trained neural network: in general, the larger the sample set, the higher the accuracy.
In a specific implementation process, detecting the assembly state of the fastening screws and positioning screws further comprises:
acquiring a plurality of images of correctly installed and incorrectly installed fastening screws and positioning screws as training samples;
extracting the key point information of the fastening screws and positioning screws;
training a classification model and classifying the images at the fastening screw and positioning screw positions according to the key point information.
In a specific implementation process, extracting key point information means extracting image features pixel by pixel; one possible implementation is as follows:
compute and accumulate the histograms of gradient directions of local regions of the image; specifically, the image is first divided into small connected regions, then the gradient or edge direction histogram of the pixels in each connected region unit is computed, and finally the histograms are combined to form the key point information.
In a specific implementation process, one possible implementation manner of classifying the images according to the key point information is as follows:
after the image key point information of the fastening screw and the positioning screw is extracted, the image is classified by using a support vector machine, a basic model of the support vector machine is defined as a linear classifier with the largest interval on a feature space, and finally, the problem is converted into a convex quadratic programming problem to be solved. The images are distinguished through the classifier, so that the aim of detecting the assembly state of the parts is fulfilled.

Claims (10)

1. A method for detecting the assembly state of tiny parts of a complex electromechanical product based on vision, characterized by comprising the following steps:
Step (1): image acquisition: capturing a plurality of images that together completely cover the complex electromechanical product;
Step (2): performing tilt correction on the original images acquired in step (1), and performing image stitching;
Step (3): acquisition of the assembly unit image: inputting the complete product image into a trained target detection model, obtaining the image position of each assembly unit, and cropping out that partial image;
Step (4): acquisition of the first-class target object image: the first-class target objects are 10-50 mm in size and are the parent parts of the second-class target objects; inputting the assembly unit image cropped in step (3) into a trained target detection model and detecting the state of the first-class target objects;
Step (5): detection of the assembly state of the second-class target objects: extracting the images of the first-class target objects, extracting the images of the second-class target objects according to their position information on the first-class target object images, and detecting the assembly state of the second-class target objects, wherein the second-class target objects are tiny parts less than 10 mm in diameter.
2. The method of claim 1, wherein in step (1), a plurality of image acquisition devices are used for image acquisition, and the distances from the devices to the plane being captured are kept consistent; images acquired by adjacent devices have an overlapping area, which accounts for 1/6 to 1/5 of the image.
3. The method of claim 2, wherein the parameters of the acquisition devices are determined according to the target detection accuracy Pv, the target detection accuracy Pv being the ratio of the number of pixels Pn occupied by the object to be detected on the image to its actual dimension h, and the target detection accuracy Pv is greater than 3 pixels per millimeter.
4. The method of claim 3, wherein the tilt correction in step (2) is performed as follows:
Step (21): detecting the straight lines in the image by Hough transform, based on the original image;
Step (22): calculating the inclination angle of each straight line and computing the average of these angles;
Step (23): rotating the original image by the obtained average inclination angle, so that the outer frame of the complex electromechanical product is approximately horizontal or vertical in the image.
5. The method of claim 4, wherein the image stitching in step (2) is performed as follows:
Step (24): extracting image feature points, using SURF as the feature point extraction algorithm;
Step (25): matching the image feature points, further screening the feature points in the two pictures to obtain better image matching points;
Step (26): image registration: obtaining the matching point set of the two images to be stitched from step (25), and transforming the two images into the same coordinate system;
an optimal homography matrix H, of size 3 × 3, is sought with the RANSAC algorithm; the equation is:
s·[x', y', 1]^T = H·[x, y, 1]^T, with H = [[h11, h12, h13], [h21, h22, h23], [h31, h32, h33]]
wherein (x, y) is a corner point position in the target image, (x', y') is the corresponding corner point position in the scene image, and s is a scale parameter; the RANSAC algorithm randomly draws 4 samples from the matched data set, ensuring that the 4 samples are not collinear, and computes a homography matrix; all data are then tested with this model, and the number of data points satisfying the model and the projection error, i.e. the cost function, are computed; the optimal model is the one whose cost function is minimal;
the cost function is computed as:
cost = Σ_i [ (x'_i − (h11·x_i + h12·y_i + h13) / (h31·x_i + h32·y_i + h33))^2 + (y'_i − (h21·x_i + h22·y_i + h23) / (h31·x_i + h32·y_i + h33))^2 ]
Step (27): image fusion: transforming the images with the homography matrix obtained from image registration, and then stitching the transformed images.
6. The method of claim 5, wherein the training of the target detection model in step (3) is specifically:
acquiring a plurality of complete images of the complex electromechanical product;
labeling each complete image of the complex electromechanical product, the labeled content being the image position of each assembly unit in the form (Px, Py, m, n), wherein Px and Py are the coordinates of the center point of the assembly unit on the image, and m and n are the ratios of the horizontal and vertical dimensions of the labeling box to the horizontal and vertical dimensions of the image respectively;
based on the labeled complete images of the complex electromechanical product, training an assembly unit target detection model.
7. The method of claim 6, wherein the training of the target detection model in step (4) is specifically:
using the cropped assembly unit images, labeling the first-class target objects, the labeled content being the image position of each first-class target object in the form (Px, Py, m, n), wherein Px and Py are the coordinates of the center point of the first-class target object on the image, and m and n are the ratios of the horizontal and vertical dimensions of the labeling box to the horizontal and vertical dimensions of the image respectively;
based on the labeled assembly unit images, training a first-class target object detection model.
8. The method of claim 7, wherein the target detection models all adopt a convolutional neural network as the network backbone.
9. The method of claim 8, wherein the detection of the assembly state of the second-class target objects further comprises:
acquiring a plurality of images of correctly installed, incorrectly installed and uninstalled second-class target objects as training samples;
extracting key point information of the second-class target objects; training a classification model, and classifying the image at each second-class target object position according to the key point information; extracting key point information means extracting image features pixel by pixel, implemented as follows:
performing size normalization on the second-class target object image;
computing and accumulating the histograms of gradient directions of local regions of the image, i.e. HOG features, as the key point information; the image is first divided into small connected regions, then the gradient or edge direction histogram of all pixels in each connected region unit is extracted, and finally the histograms are combined to form the key point information; the directional gradients are computed as:
Gx(x, y) = H(x+1, y) − H(x−1, y)
Gy(x, y) = H(x, y+1) − H(x, y−1)
wherein Gx(x, y) is the horizontal gradient at pixel (x, y), Gy(x, y) is the vertical gradient at pixel (x, y), and H(x, y) is the value of pixel (x, y) after gamma normalization;
the classification of the images according to the key point information is implemented as follows:
after the image key point information is extracted, the images are classified with a support vector machine, whose basic model is a maximum-margin linear classifier in the feature space; the problem is finally converted into a convex quadratic programming problem and solved.
10. Use of the method according to any of claims 1-9 for a large radar array face, wherein the first-class target objects are filters and the second-class target objects are the filter fixing pieces, namely fastening screws and positioning screws.
CN202310213374.0A 2023-03-07 2023-03-07 Method for detecting assembly state of tiny parts of complex electromechanical product based on vision Pending CN116563131A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310213374.0A CN116563131A (en) 2023-03-07 2023-03-07 Method for detecting assembly state of tiny parts of complex electromechanical product based on vision

Publications (1)

Publication Number Publication Date
CN116563131A true CN116563131A (en) 2023-08-08

Family

ID=87497184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310213374.0A Pending CN116563131A (en) 2023-03-07 2023-03-07 Method for detecting assembly state of tiny parts of complex electromechanical product based on vision

Country Status (1)

Country Link
CN (1) CN116563131A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination