CN114218692A - Similar part identification system, medium and method based on deep learning and model simulation - Google Patents
Similar part identification system, medium and method based on deep learning and model simulation
- Publication number
- CN114218692A (application CN202111395775.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- simulation
- pose
- model
- identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/17—Mechanical parametric or variational design
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Evolutionary Computation (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computer Hardware Design (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Computational Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a similar part identification system, medium and method based on deep learning and model simulation. The method comprises the following steps: obtaining the rest pose of a part's three-dimensional model, generating simulation images, obtaining the minimum-bounding-box coordinates of the part in each image, generating a training set for the similar-part identification network, training a YOLOv4 neural network, constructing an optimal identification view-angle set from the test-set results, controlling a camera to collect on-site part images, and, according to the confidence of the recognition result, deciding whether to adjust the pose of the part under the camera view angle with a motion turntable and then re-identify it. The method generates the YOLOv4 training set from simulation images of the three-dimensional model, so the training set is self-generated and the time for producing training samples is shortened; the motion-turntable scheme effectively solves the problem that a part's distinguishing features are easily occluded under a single view angle; and the recognition results are highly accurate and of practical value.
Description
Technical Field
The invention relates to identification of similar parts, in particular to a similar part identification system, medium and method based on deep learning and model simulation.
Background
Industrial production frequently calls for the classification and identification of multi-variety, multi-batch, highly similar parts, and machine vision technology has long been applied to this need. As an important component of machine vision, image classification and recognition technology has been continuously researched and developed, producing both theoretical innovation and significant breakthroughs in practical applications. For example, patent document CN112132783A discloses a part identification method based on digital image processing. Machine vision is already used in many industries to classify and sort target products; however, conventional machine vision algorithms struggle with the lighting, noise, and focusing problems of complex industrial environments.
Applying deep-learning target recognition algorithms within a machine vision system can solve many problems that traditional vision algorithms find difficult. Tian Dajie et al. (Machine Tool & Hydraulics, 2019, 47(1): 36-40, 60) proposed an optimized SSD detection method in which a DenseNet structure replaces the VGG backbone, improving the robustness of SSD on small targets and addressing silicon wafer micro-crack detection; however, its detection speed is too slow to meet real-time requirements. Commonly used deep-learning detectors include Faster R-CNN, SSD, and the YOLO series. YOLOv4, one of the strongest target detection algorithms, consolidates many earlier research results and achieves an excellent balance of detection accuracy and efficiency, so the classification and detection of similar parts here is based on YOLOv4.
However, the training sets required by deep learning models often contain thousands of sample images. Manual labeling is accurate but time- and labor-consuming. Moreover, industrial production frequently involves customized, multi-variety part classification and identification tasks; building a single training set for every possible part at once lowers the model's identification accuracy, making it difficult to meet actual production requirements.
Disclosure of Invention
Purpose of the invention: to address the shortcomings of the prior art, a similar part identification system, medium and method based on deep learning and model simulation are provided. The training set is generated automatically, which shortens the time needed to produce training samples, enables quick response to classification and identification tasks for different batches of similar parts, and yields high-precision part identification results.
The technical scheme is as follows: a similar part identification method based on deep learning and model simulation comprises the following steps:
step 1: simulating the free falling body of the rigid body part by using a Bullet physical engine, and acquiring the static pose of the three-dimensional model of the part to be identified in a virtual space;
step 2: simulating static postures of the parts observed from different viewing angles under light by using an OpenGL library, and storing simulation images of all the parts at different viewing angles;
and step 3: acquiring the minimum bounding box coordinate of the part in the simulation image by using an OpenCV (open circuit vehicle) library according to the color discrimination of the part in the simulation image and the surrounding environment;
and 4, step 4: making an image data set by using the simulation image, the part name and the bounding box position information, and taking the image data set as a training set of a YOLO4 algorithm for training;
and 5: sampling and extracting similar part simulation images under all the visual angles according to a certain interval angle to serve as an algorithm test set, and storing the optimal identification visual angle image of each similar part according to a test result;
step 6: collecting an image of a part to be detected on site, and inputting the image into a trained YOLO4 algorithm;
and 7: if the confidence coefficient of the recognition result is greater than a preset value, outputting the recognition result; otherwise, calculating an affine transformation matrix between the current image and the optimal identification view angle image of the similar part, converting the calculation result into a rotation angle, controlling the motion rotary table to adjust the position and the pose of the part through the upper computer, repeating the step 6, and outputting the identification result.
Further, step 1 comprises the steps of:
step 1.1: importing a three-dimensional model of a part to be identified and setting an initial height h0;
Step 1.2: setting parameters including gravity, elasticity and rigid body attributes in a virtual space by using functions in a Bullet library;
step 1.3: controlling the part model to simulate a free falling body, judging whether the pose is a conventional placing pose in part detection after the part model is static, and resetting the initial height to repeat the steps if the pose is not the conventional placing pose in part detection;
step 1.4: repeating the operation until the rest poses M_i of all parts are obtained, where i = 1, 2, …, m indexes the parts.
Further, step 2 comprises the steps of:
step 2.1: importing the static pose of the part obtained in the step 1;
step 2.2: using the OpenGL library, simulate the lighting of the part-identification site and set the virtual camera view angle;
step 2.3: rotating the resting part about the virtual Z axis of the model space in 1° increments, saving the part simulation image P_ij under the current virtual-camera view angle, with i = 1, 2, …, m and j = 1, 2, …, n, where i indexes the parts and j indexes the simulated images generated for each part;
step 2.4: and repeating the operation until the simulation images of all the parts are obtained.
Further, step 3 comprises the steps of:
step 3.1: utilizing the distinction between the color of the three-dimensional model of the part and the background color to binarize the simulation image and extracting the edge characteristics of the binarized image;
step 3.2: solving the minimum bounding box of the part in the simulation image produced by step 3.1 with the OpenCV library, and returning the top-left pixel coordinate (X_ijt, Y_ijt) and bottom-right pixel coordinate (X_ijd, Y_ijd) of the bounding box, with i = 1, 2, …, m and j = 1, 2, …, n, where i indexes the parts and j indexes the simulated images generated for each part.
Further, step 4 comprises the steps of:
step 4.1: placing the generated simulation image into a JPEGImages folder by using a VOC data set format;
step 4.2: randomly distributing a training set, a test set and a verification set catalog to generate ImageSets folders;
step 4.3: automatically generating an XML document from the simulation image information, the part name and the bounding-box position, and placing it in the Annotations folder;
step 4.4: generating labels document information according to the corresponding XML document;
step 4.5: the steps are circulated until all training set information is collected;
step 4.6: feeding the data set into the YOLOv4 network for training.
Further, step 5 comprises the steps of:
step 5.1: sampling the simulation images P_ik generated in step 2 every 5° as the test set, with i = 1, 2, …, m and k ∈ {j};
step 5.2: feeding the test set of step 5.1 into the trained YOLOv4 algorithm;
step 5.3: according to the confidence T_ik of the test results of step 5.2, saving the optimal identification view-angle image P_z of each part, where z ∈ {i}.
Further, step 7 comprises the steps of:
step 7.1: judging whether the confidence T_0 of the current part's recognition result output by the YOLOv4 algorithm exceeds 0.95; if T_0 > 0.95, the recognition result is output directly;
step 7.2: if T_0 < 0.95, computing the affine transformation between the current image and the part's optimal view-angle image P_0, and outputting the affine transformation matrix M_0;
step 7.3: converting the affine matrix M_0 into the angle θ_0 by which the in-plane motion turntable must be adjusted;
step 7.4: having the host computer drive the motion turntable to adjust the part's pose to the optimal view angle, then repeating step 6 to output the recognition result.
A computer readable storage medium comprising one or more programs for execution by one or more processors, the one or more programs including instructions for performing any of the methods described above.
A similar part identification system based on deep learning and model simulation, suited to the above method, comprises:
a rest pose acquisition module: for simulating the pose of the three-dimensional model when the part is naturally placed;
a part rendering module: for rendering the naturally placed part model into simulation images close to the real environment and for building the optimal view-angle image set;
an image data set module: for automatically locating the part in each simulation image and automatically generating the data set required to train the YOLOv4 algorithm;
a motion turntable module: controlled by the host computer, for adjusting hard-to-identify parts to an identifiable optimal view angle for secondary recognition;
an output module: for feeding the recognition result back to the system.
Advantageous effects: compared with the prior art, the invention has the following notable advantages:
(1) the method utilizes the simulation image of the three-dimensional model to generate the training set of the YOLO4 algorithm, thereby realizing the self-generation of the training set and greatly shortening the time for manufacturing the training sample;
(2) the method can quickly deal with the classification and identification tasks of similar parts in different batches, so that the labor cost and the process response time are greatly reduced;
(3) by means of the motion turntable, the method effectively solves the problem that a part's distinguishing features are easily occluded under a single view angle;
(4) experimental results show that, under 10000 iterations, traditional template matching, the YOLOv4 algorithm at a single view angle, and the proposed method achieve identification accuracies of 65%, 78% and 96% respectively, so the proposed method yields high-precision identification of similar parts.
Drawings
FIG. 1 is a flow chart of the steps of the method of the present invention;
FIG. 2 is an illustration of similar parts in the present invention;
FIG. 3 is a diagram of the hardware for the system of the present invention.
Detailed Description
The technical scheme of the invention is further explained by combining the attached drawings.
As shown in fig. 1, the present invention provides a similar part identification method based on deep learning and three-dimensional model simulation:
step 1: simulating a free falling process of the rigid body part by using a Bullet physical engine so as to obtain a static pose of a three-dimensional model of the part to be identified in a virtual space;
step 2: simulating static postures of the parts observed from different viewing angles under light by using an OpenGL library, and storing simulation images of all the parts at different viewing angles;
Step 3: using the color contrast between the part and its surroundings in the simulation image, obtain the minimum-bounding-box coordinates of the part with the OpenCV library;
Step 4: build an image data set from the simulation images, part names and bounding-box positions, and use it as the training set for the YOLOv4 algorithm;
Step 5: sample the similar-part simulation images at a fixed interval angle over all view angles as the algorithm's test set, and save each similar part's optimal identification view-angle image according to the test results;
Step 6: collect an image of the part to be identified on site and feed it to the trained YOLOv4 algorithm;
Step 7: if the confidence of the recognition result exceeds a preset value, output the result; otherwise, compute the affine transformation matrix between the current image and the part's optimal identification view-angle image, convert it to a rotation angle, have the host computer drive the motion turntable to adjust the part's pose, and repeat step 6 to output the recognition result.
The following further describes the specific implementation steps of the similar part identification method provided by the present invention.
Step 1: and acquiring the static pose of the three-dimensional model of the part to be identified in the virtual space.
Step 1.1, add a rigid ground plane in the virtual space, set the world coordinate origin on its surface, import the three-dimensional model of the part to be identified, and set an initial height h_0;
Step 1.2, set parameters such as gravity, elasticity and rigid-body attributes with the relevant Bullet library functions, and initialize the space;
Step 1.3, let the part's three-dimensional model fall freely in the virtual space; after it comes to rest, judge whether its pose is a conventional placement pose for part inspection, and align the world coordinate center with the projected centroid of the model on the ground plane; if the pose does not match a conventional placement, reset the initial height and repeat the above steps.
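The pose check in step 1.3 can be sketched as follows. The patent does not give its exact criterion, so this is an assumed heuristic: the part counts as conventionally placed if its local +Z axis stays within a tolerance of the world +Z axis after the simulated fall (i.e. it has not toppled). The quaternion layout (x, y, z, w) matches what a physics engine such as Bullet typically returns.

```python
import numpy as np

def is_conventional_pose(quat_xyzw, tol_deg=10.0):
    """Heuristic sketch of the step 1.3 check (an assumption, not the
    patent's exact criterion): the part's local +Z axis must lie within
    tol_deg of the world +Z axis, i.e. the part did not topple over
    during the simulated free fall."""
    x, y, z, w = quat_xyzw
    # Third column of the rotation matrix = image of the local Z axis.
    local_z = np.array([
        2.0 * (x * z + w * y),
        2.0 * (y * z - w * x),
        1.0 - 2.0 * (x * x + y * y),
    ])
    cos_angle = np.clip(local_z @ np.array([0.0, 0.0, 1.0]), -1.0, 1.0)
    return bool(np.degrees(np.arccos(cos_angle)) <= tol_deg)

# Identity quaternion: part upright after the fall.
print(is_conventional_pose((0.0, 0.0, 0.0, 1.0)))
# 90-degree rotation about X: part lying on its side.
print(is_conventional_pose((np.sin(np.pi / 4), 0.0, 0.0, np.cos(np.pi / 4))))
```

When the check fails, the simulation would reset the drop height h_0 and rerun the fall, as step 1.3 describes.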
step 2: and generating part simulation images at different viewing angles.
Step 2.1, importing the static pose of the part obtained in the step 1;
Step 2.2, using the OpenGL library, simulate the lighting of the part-identification site and set the virtual camera view angle;
Step 2.3, rotate the resting part about the virtual Z axis of the model space in 1° increments, saving the part simulation image P_ij under the current virtual-camera view angle, with i = 1, 2, …, m and j = 1, 2, …, n, where i indexes the parts and j indexes the simulated images generated for each part;
Step 2.4, repeat until the simulation images of all parts are obtained.
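The view enumeration of steps 2.3–2.4 can be sketched as below. Only the Z-axis rotation matrices and the view index j are produced here; the actual OpenGL lighting and rendering of each view is omitted, so this is a minimal illustration of the sampling scheme rather than the rendering pipeline itself.

```python
import numpy as np

def z_rotations(step_deg=1.0):
    """Yield (j, R) pairs: view index j and the model-space rotation R
    about the virtual Z axis, sampled every step_deg degrees as in
    step 2.3 (1-degree increments give 360 views per part)."""
    for j, ang in enumerate(np.arange(0.0, 360.0, step_deg)):
        t = np.radians(ang)
        yield j, np.array([[np.cos(t), -np.sin(t), 0.0],
                           [np.sin(t),  np.cos(t), 0.0],
                           [0.0,        0.0,       1.0]])

views = list(z_rotations())
print(len(views))  # 360 simulated views per part at 1-degree spacing
```

Each rotation would be applied to the rest pose from step 1 before rendering the image P_ij for that view.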
and step 3: the minimum bounding box of the part in the simulation image is obtained.
Step 3.1, binarize the simulation image using the contrast between the color of the part's three-dimensional model and the background color, and extract the edge features of the binarized image;
Step 3.2, solve the minimum bounding box of the part in the image produced by step 3.1 with the OpenCV library, returning the top-left pixel coordinate (X_ijt, Y_ijt) and bottom-right pixel coordinate (X_ijd, Y_ijd) of the bounding box, with i = 1, 2, …, m and j = 1, 2, …, n, where i indexes the parts and j indexes the simulated images generated for each part.
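Steps 3.1–3.2 can be sketched in plain NumPy (the patent uses OpenCV, where `cv2.threshold` plus `cv2.boundingRect` would do the same job). Pixels whose color differs from the known background color by more than a threshold are treated as part pixels, and the bounding box is the extent of those pixels; the threshold value here is an illustrative assumption.

```python
import numpy as np

def min_bounding_box(image_rgb, bg_color, thresh=30):
    """Return ((X_t, Y_t), (X_d, Y_d)): top-left and bottom-right pixel
    coordinates of the part's minimum axis-aligned bounding box, found
    by binarizing against the background color as in step 3.1."""
    diff = np.abs(image_rgb.astype(int) - np.array(bg_color, dtype=int))
    mask = diff.sum(axis=2) > thresh          # binarized image
    ys, xs = np.nonzero(mask)
    return (int(xs.min()), int(ys.min())), (int(xs.max()), int(ys.max()))

# Synthetic render: white background, a gray "part" occupying rows
# 10..19 and columns 25..39.
img = np.full((64, 64, 3), 255, dtype=np.uint8)
img[10:20, 25:40] = (90, 90, 90)
print(min_bounding_box(img, bg_color=(255, 255, 255)))
```

Because the simulated background color is chosen to contrast with the part model, a simple global threshold is sufficient here, which is what makes the labeling fully automatic.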
And 4, step 4: an image data set was created using the simulation image, part name, and bounding box position information, and model training was performed as a training set for the YOLO4 algorithm.
Step 4.1, using the VOC data set format, place the generated part renderings into the JPEGImages folder;
step 4.2, randomly distributing training set, test set and verification set catalogues to generate ImageSets folders;
Step 4.3, automatically generate an XML document from the simulation image information, the part name and the bounding-box position, and place it in the Annotations folder;
step 4.4, generating labels document information according to the corresponding XML document;
4.5, circulating the steps until all training set information is collected;
Step 4.6, feed the data set into the YOLOv4 network for training;
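The automatic annotation generation of step 4.3 can be sketched with the standard library. The field names follow the Pascal VOC convention; the patent does not spell out its exact schema, so the structure below (and the sample names) are assumptions for illustration.

```python
import xml.etree.ElementTree as ET

def make_voc_annotation(filename, part_name, box, img_size=(640, 480)):
    """Build a Pascal-VOC-style XML annotation string for one simulated
    image (step 4.3): image metadata plus one <object> with the
    bounding box (xmin, ymin, xmax, ymax) from step 3."""
    xmin, ymin, xmax, ymax = box
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(img_size[0])
    ET.SubElement(size, "height").text = str(img_size[1])
    obj = ET.SubElement(root, "object")
    ET.SubElement(obj, "name").text = part_name
    bnd = ET.SubElement(obj, "bndbox")
    for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                        (xmin, ymin, xmax, ymax)):
        ET.SubElement(bnd, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")

# Hypothetical file and class names, with a box from the bounding-box step.
xml_str = make_voc_annotation("part_01_000.jpg", "gear_a", (25, 10, 39, 19))
print(xml_str)
```

Writing one such file per simulated image, then deriving the labels files from them, completes the self-generated training set without any manual labeling.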
and 5: sampling and extracting similar part simulation images under all the visual angles according to a certain interval angle to serve as an algorithm test set, and storing the optimal identification visual angle image of each similar part according to a test result:
Step 5.1, sample the simulation images P_ik generated in step 2 every 5° as the test set, with i = 1, 2, …, m and k ∈ {j};
Step 5.2, bringing the test set in the step 5.1 into a trained YOLO4 algorithm;
Step 5.3, according to the confidence T_ik of the test results of step 5.2, save the optimal identification view-angle image P_z of each part, where z ∈ {i}.
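Step 5.3 reduces to an argmax over the sampled views. A minimal sketch, using made-up confidence values purely for illustration:

```python
def best_view(confidences_by_angle):
    """Pick the optimal identification view angle for one part
    (step 5.3): the 5-degree-sampled view whose detection confidence
    T_ik is highest. Maps view angle (deg) -> confidence."""
    return max(confidences_by_angle, key=confidences_by_angle.get)

# Illustrative confidences for one part's sampled views.
T_ik = {0: 0.41, 5: 0.38, 10: 0.77, 15: 0.96, 20: 0.88}
print(best_view(T_ik))  # 15
```

The image rendered at the winning angle is what gets stored as P_z and later used as the affine-alignment target in step 7.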
Step 6, collecting the image of the part to be measured on site, and inputting the image into a trained YOLO4 algorithm;
step 7, judging that the confidence coefficient of the recognition result is greater than a preset value; if so, outputting a recognition result; otherwise, calculating an affine transformation matrix between the current image and the optimal identification view angle image of the similar part, converting the calculation result into a rotation angle, controlling the motion turntable by the upper computer to adjust the position and the attitude of the part, and repeating the operation in the step 6 to output the identification result.
Step 7.1, judge whether the confidence T_0 of the current part's recognition result output by the YOLOv4 algorithm exceeds 0.95; if T_0 > 0.95, output the recognition result directly;
Step 7.2, if T_0 < 0.95, compute the affine transformation between the current image and the part's optimal view-angle image P_0 using the affine transformation API of the OpenCV open-source library, and output the affine transformation matrix M_0;
Step 7.3, since the part is placed approximately at the center of the turntable, neglect the translation components t_x0 and t_y0 and convert the affine matrix M_0 into the angle θ_0 by which the in-plane motion turntable must be adjusted;
Step 7.4, have the host computer drive the motion turntable to adjust the part's pose to the optimal view angle, then repeat step 6 to output the recognition result;
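The matrix-to-angle conversion of step 7.3 can be sketched as follows. With translation ignored, the 2×2 linear part of M_0 is treated as a pure rotation [[cos θ, −sin θ], [sin θ, cos θ]], so θ_0 falls out of an arctangent of two matrix entries; this assumes negligible scale and shear between the two views, which holds when only the turntable angle differs.

```python
import numpy as np

def turntable_angle(affine_2x3):
    """Recover the in-plane rotation angle theta_0 (degrees) from a
    2x3 affine matrix M_0 (step 7.3), ignoring the translation column
    and treating the linear part as a pure rotation."""
    return float(np.degrees(np.arctan2(affine_2x3[1, 0], affine_2x3[0, 0])))

# A 30-degree rotation with some translation, as OpenCV might estimate it.
theta = np.radians(30.0)
M0 = np.array([[np.cos(theta), -np.sin(theta), 3.0],
               [np.sin(theta),  np.cos(theta), -1.5]])
print(round(turntable_angle(M0), 3))  # 30.0
```

The host computer then sends this angle to the turntable controller to bring the part to the optimal view before re-running recognition.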
Experiment and verification: for the classification and identification task of 10 pairs of similar parts, three approaches were compared: the traditional template-matching identification method, the YOLOv4 algorithm at a single view angle, and the method of this embodiment. The hyperparameters of the YOLOv4 model were set as follows: learning rate 0.001, maximum of 10000 iterations, batch size 256. The three approaches achieved identification accuracies of 65%, 78% and 96% respectively on the 10 pairs of similar parts, so the proposed method yields the highest-precision identification results.
Training was completed on a Windows 10 platform with VS2017, calling the Darknet framework from C++ together with open-source libraries such as OpenCV, OpenGL and Bullet. The training computer had an Intel Core i5 series CPU, 16 GB of memory and a 1060MQ graphics card. This embodiment targets the classification and identification of the highly similar parts shown in fig. 2, whose main distinguishing regions lie in local details, so the distinguishable information is easily lost at certain observation angles.
FIG. 3 is a schematic diagram of the system hardware. It mainly comprises: an industrial camera, controlled by the host computer and responsible for collecting on-site part images; a support, carrying the system hardware; the part to be identified, placed manually on the motion turntable; and the motion turntable, controlled by the host computer, which carries the part and adjusts its pose under the camera view angle. The rotary table consists of a drive system and a control system: the host computer communicates with an Arduino Uno R3 development board over a serial port (Rx/Tx) and transmits the commanded rotation angle. A program on the board converts this information and toggles the pin carrying the stepper motor's pulse signal, each rising edge being one pulse, with a high level of 5 V and a low level of 0 V. A 24 V DC driver then completes the high-precision control of the rotary table.
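The angle-to-pulse conversion implied by the turntable control can be sketched as below. The patent only states that the host computer sends the rotation angle over serial and the Arduino toggles the step pin; the motor parameters here (200 full steps per revolution, 1/16 microstepping, direct drive) are illustrative assumptions, not values from the patent.

```python
def pulses_for_angle(angle_deg, steps_per_rev=200, microstep=16, gear_ratio=1.0):
    """Convert a turntable adjustment angle into a stepper pulse count.
    steps_per_rev, microstep and gear_ratio are assumed example values;
    each emitted pulse corresponds to one rising edge on the step pin."""
    pulses_per_rev = steps_per_rev * microstep * gear_ratio
    return round(abs(angle_deg) / 360.0 * pulses_per_rev)

print(pulses_for_angle(30.0))   # pulses for a 30-degree turn
```

The sign of the angle would separately select the direction pin level; only the pulse count is computed here.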
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The embodiments of the present invention are not described in detail, but are known in the art, and can be implemented by referring to the known techniques.
Claims (9)
1. A similar part identification method based on deep learning and model simulation is characterized by comprising the following steps:
step 1: simulating the free falling body of the rigid body part by using a Bullet physical engine, and acquiring the static pose of the three-dimensional model of the part to be identified in a virtual space;
step 2: simulating static postures of the parts observed from different viewing angles under light by using an OpenGL library, and storing simulation images of all the parts at different viewing angles;
Step 3: using the color contrast between the part and its surroundings in the simulation image, obtain the minimum-bounding-box coordinates of the part with the OpenCV library;
Step 4: build an image data set from the simulation images, part names and bounding-box positions, and use it as the training set for the YOLOv4 algorithm;
Step 5: sample the similar-part simulation images at a fixed interval angle over all view angles as the algorithm's test set, and save each similar part's optimal identification view-angle image according to the test results;
Step 6: collect an image of the part to be identified on site and feed it to the trained YOLOv4 algorithm;
Step 7: if the confidence of the recognition result exceeds a preset value, output the result; otherwise, compute the affine transformation matrix between the current image and the part's optimal identification view-angle image, convert it to a rotation angle, have the host computer drive the motion turntable to adjust the part's pose, and repeat step 6 to output the recognition result.
2. The method for identifying similar parts based on deep learning and model simulation as claimed in claim 1, wherein the step 1 comprises the following steps:
step 1.1: importing a three-dimensional model of a part to be identified and setting an initial height h0;
Step 1.2: setting parameters including gravity, elasticity and rigid body attributes in a virtual space by using functions in a Bullet library;
step 1.3: controlling the part model to simulate a free falling body, judging whether the pose is a conventional placing pose in part detection after the part model is static, and resetting the initial height to repeat the steps if the pose is not the conventional placing pose in part detection;
step 1.4: repeating the operation until the rest poses M_i of all parts are obtained, where i = 1, 2, …, m indexes the parts.
3. The method for identifying similar parts based on deep learning and model simulation as claimed in claim 1, wherein the step 2 comprises the following steps:
step 2.1: importing the static pose of the part obtained in the step 1;
step 2.2: using the OpenGL library, simulate the lighting of the part-identification site and set the virtual camera view angle;
step 2.3: rotating the resting part about the virtual Z axis of the model space in 1° increments, saving the part simulation image P_ij under the current virtual-camera view angle, with i = 1, 2, …, m and j = 1, 2, …, n, where i indexes the parts and j indexes the simulated images generated for each part;
step 2.4: and repeating the operation until the simulation images of all the parts are obtained.
4. The method for identifying similar parts based on deep learning and model simulation as claimed in claim 1, wherein the step 3 comprises the following steps:
step 3.1: utilizing the contrast between the color of the part's three-dimensional model and the background color to binarize the simulation image, and extracting the edge features of the binarized image;
step 3.2: solving the minimum bounding box of the part in the simulation image produced by step 3.1 with the OpenCV library, and returning the top-left pixel coordinate (X_ijt, Y_ijt) and bottom-right pixel coordinate (X_ijd, Y_ijd) of the bounding box, with i = 1, 2, …, m and j = 1, 2, …, n, where i indexes the parts and j indexes the simulated images generated for each part.
5. The method for identifying similar parts based on deep learning and model simulation as claimed in claim 1, wherein the step 4 comprises the following steps:
step 4.1: placing the generated simulation image into a JPEGImages folder by using a VOC data set format;
step 4.2: randomly distributing a training set, a test set and a verification set catalog to generate ImageSets folders;
step 4.3: automatically generating an XML document according to the simulation image information, the part name and the bounding box position information, and putting the XML document into an options folder;
step 4.4: generating labels document information according to the corresponding XML document;
step 4.5: the steps are circulated until all training set information is collected;
step 4.6: bringing the data set into the YOLO4 algorithm for network training.
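The annotation step 4.3 can be sketched as building a Pascal-VOC-style XML document from the simulation image name, part name, and bounding-box corners. The element names follow the standard VOC convention; the image size default is an assumption, and writing the file into the Annotations folder alongside JPEGImages/ImageSets/labels is as the claim describes.

```python
import xml.etree.ElementTree as ET

def voc_annotation(filename, part_name, box, size=(640, 480)):
    """Build a minimal Pascal-VOC annotation (step 4.3) for one part
    instance; box is ((xmin, ymin), (xmax, ymax)) from step 3.2."""
    (xmin, ymin), (xmax, ymax) = box
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    sz = ET.SubElement(root, "size")
    ET.SubElement(sz, "width").text = str(size[0])
    ET.SubElement(sz, "height").text = str(size[1])
    obj = ET.SubElement(root, "object")
    ET.SubElement(obj, "name").text = part_name
    bb = ET.SubElement(obj, "bndbox")
    for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                        (xmin, ymin, xmax, ymax)):
        ET.SubElement(bb, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")
```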
6. The method for identifying similar parts based on deep learning and model simulation as claimed in claim 1, wherein the step 5 comprises the following steps:
step 5.1: sampling the simulation images Pik generated in step 2 every 5 degrees as the test set, i = 1, 2, …, m, k ∈ j;
step 5.2: the test set in step 5.1 is brought into the trained YOLO4 algorithm;
step 5.3: according to the confidence Tik of the test results of step 5.2, saving the identified optimal view-angle image Pz of each part, where z ∈ i.
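The best-view selection of claim 6 reduces to subsampling the 360 views every 5 degrees, scoring each with the trained detector, and keeping the highest-confidence view. In the sketch below, `detect` is a placeholder for the trained YOLO4 inference call, which the patent does not detail.

```python
def best_view(images, detect, step=5):
    """Sample every `step`-th view (step 5.1), score each with the
    detector (step 5.2), and return the index and confidence of the
    optimal view-angle image P_z (step 5.3)."""
    scores = [(detect(images[k]), k) for k in range(0, len(images), step)]
    best_conf, best_k = max(scores)
    return best_k, best_conf
```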
7. The method for identifying similar parts based on deep learning and model simulation as claimed in claim 1, wherein the step 7 comprises the following steps:
step 7.1: judging whether the confidence T0 of the recognition result of the current part output by the YOLO4 algorithm is greater than 0.95; if T0 is greater than 0.95, directly outputting the recognition result;
step 7.2: if the confidence T0 of the recognition result of the current part is less than 0.95, performing affine transformation calculation between the current image and the optimal view-angle image P0 of the part, and outputting the affine transformation matrix M0;
step 7.3: converting the affine transformation matrix M0 into the angle θ0 by which the in-plane motion turntable is to be adjusted;
step 7.4: controlling the motion turntable through the upper computer to drive the part to adjust its pose to the optimal view angle, and repeating the operation of step 6 to output the recognition result.
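Steps 7.1 to 7.3 can be sketched as a threshold check followed by reading the in-plane rotation angle off the affine matrix M0. The patent does not say how M0 is estimated or decomposed; OpenCV's `cv2.estimateAffinePartial2D` on matched feature points is one plausible source for M0, and the `atan2` decomposition below is a common way to recover the rotation angle from a 2x3 affine of the form [R | t].

```python
import numpy as np

def needs_adjustment(confidence, threshold=0.95):
    """Step 7.1: a recognition below the confidence threshold triggers
    the turntable adjustment path."""
    return confidence < threshold

def turntable_angle(M0):
    """Step 7.3: recover the in-plane rotation angle theta_0 (degrees)
    from affine matrix M0 = [R | t], with R a scaled 2x2 rotation."""
    return float(np.degrees(np.arctan2(M0[1, 0], M0[0, 0])))
```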
8. A computer readable storage medium comprising one or more programs for execution by one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-7.
9. A similar part identification system based on deep learning and model simulation, the system comprising:
a static pose acquisition module: the three-dimensional model pose simulation system is used for simulating the pose of a three-dimensional model when parts are naturally placed;
a part rendering image module: the system comprises a static pose acquisition module, a simulation image generation module and a visual angle collection module, wherein the static pose acquisition module is used for acquiring a part model which is naturally placed and generating a simulation image close to a real environment and an optimal visual angle collection;
an image data set module: used for automatically identifying the position of a part in the simulation image and automatically generating the image data set required for training the YOLO4 algorithm;
a motion turntable module: controlled by the upper computer and used for adjusting parts that are difficult to identify to the identifiable optimal view angle, facilitating secondary identification;
an output module: for feeding back the recognition result to the system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111395775.XA CN114218692A (en) | 2021-11-23 | 2021-11-23 | Similar part identification system, medium and method based on deep learning and model simulation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111395775.XA CN114218692A (en) | 2021-11-23 | 2021-11-23 | Similar part identification system, medium and method based on deep learning and model simulation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114218692A true CN114218692A (en) | 2022-03-22 |
Family
ID=80697991
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111395775.XA Pending CN114218692A (en) | 2021-11-23 | 2021-11-23 | Similar part identification system, medium and method based on deep learning and model simulation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114218692A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116681698A (en) * | 2023-07-28 | 2023-09-01 | 斯德拉马机械(太仓)有限公司 | Spring automatic assembly quality detection method and system |
CN116681698B (en) * | 2023-07-28 | 2023-10-10 | 斯德拉马机械(太仓)有限公司 | Spring automatic assembly quality detection method and system |
CN116740549A (en) * | 2023-08-14 | 2023-09-12 | 南京凯奥思数据技术有限公司 | Vehicle part identification method and system |
CN116740549B (en) * | 2023-08-14 | 2023-11-07 | 南京凯奥思数据技术有限公司 | Vehicle part identification method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109816725B (en) | Monocular camera object pose estimation method and device based on deep learning | |
CN109870983B (en) | Method and device for processing tray stack image and system for warehousing goods picking | |
US20180260669A1 (en) | Image processing apparatus, image processing method, template generation apparatus, object recognition processor, and object recognition processing program | |
CN110246127A (en) | Workpiece identification and localization method and system, sorting system based on depth camera | |
CN114218692A (en) | Similar part identification system, medium and method based on deep learning and model simulation | |
CN111612880B (en) | Three-dimensional model construction method based on two-dimensional drawing, electronic equipment and storage medium | |
JP2012532382A (en) | Object recognition using 3D model | |
CN111482967B (en) | Intelligent detection and grabbing method based on ROS platform | |
CN108818537B (en) | Robot industry sorting method based on cloud deep learning | |
CN109816634B (en) | Detection method, model training method, device and equipment | |
Zhang et al. | Texture-less object detection and 6D pose estimation in RGB-D images | |
CN113989944A (en) | Operation action recognition method, device and storage medium | |
Bickel et al. | Detection and classification of symbols in principle sketches using deep learning | |
Buls et al. | Generation of synthetic training data for object detection in piles | |
Oumer et al. | Appearance learning for 3D pose detection of a satellite at close-range | |
Álvarez et al. | Junction assisted 3d pose retrieval of untextured 3d models in monocular images | |
Filax et al. | Data for Image Recognition Tasks: An Efficient Tool for Fine-Grained Annotations. | |
Byambaa et al. | 6D pose estimation of transparent objects using synthetic data | |
CN117769724A (en) | Synthetic dataset creation using deep-learned object detection and classification | |
Piciarelli et al. | An augmented reality system for technical staff training | |
Liang | Mechanical parts pose detection system based on orb key frame matching algorithm | |
Kim et al. | Deep representation of industrial components using simulated images | |
Jiang et al. | 6D pose annotation and pose estimation method for weak-corner objects under low-light conditions | |
Dong et al. | A Method for Target Detection Based on Synthetic Samples of Digital Twins | |
CN111311721A (en) | Image data set processing method, system, storage medium, program and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||