CN112873200A - Depth camera calibration device for the field of industrial robot grabbing - Google Patents

Depth camera calibration device for the field of industrial robot grabbing

Info

Publication number
CN112873200A
CN112873200A
Authority
CN
China
Prior art keywords
camera
frames
assembly
real
prior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110039639.0A
Other languages
Chinese (zh)
Inventor
王福杰
任斌
秦毅
郭芳
姜鸣
胡耀华
姚智伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan University of Technology
Original Assignee
Dongguan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan University of Technology filed Critical Dongguan University of Technology
Priority to CN202110039639.0A
Publication of CN112873200A
Legal status: Withdrawn

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/1653 Programme controls characterised by the control loop parameters identification, estimation, stiffness, accuracy, error analysis
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/04 Viewing devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1687 Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a depth camera calibration device for the field of industrial robot grabbing. The detection system comprises a workbench, angle-adjustable cameras, a light source, a vision controller and other components; the vision controller processes and analyzes the images acquired from the cameras to identify the objects in them, and then distinguishes assembly parts from redundant objects. The detection method comprises the following steps: acquire multi-angle images of the assembly area; preprocess the images and input them into a trained target detection network, which extracts features and predicts the position and type of each object; then judge whether each object is redundant, mark the positions of redundant objects and raise an alarm. The invention can assist manual work by detecting redundant material in a specific area from multiple angles in real time during assembly; it offers high detection accuracy, strong real-time performance and flexible use, and can reduce the introduction of redundant material during assembly and enhance product reliability.

Description

Depth camera calibration device for the field of industrial robot grabbing
Technical Field
The invention relates to the technical field of camera depth calibration, and in particular to a depth camera calibration device for the field of industrial robot grabbing.
Background
Redundant material refers to any substance present in a product, whether introduced from outside or generated internally, that is unrelated to the product's specified state. In the assembly of large-scale equipment requiring high reliability and high safety, the complex structure of the equipment and the many production and assembly processes involved make the introduction of redundant material very likely. For example, workers may bring in screws, washers, hair, rag residue and other objects through improper operation, and processes such as welding and machining may introduce weld spatter, metal debris and the like. If redundant material remains inside the equipment, it leaves serious hidden safety hazards: it can impair the normal operation of high-reliability equipment and may even cause faults and safety accidents.
After long-term development, the current methods for detecting and controlling redundant material mainly include visual and auditory inspection, endoscopic inspection, X-ray fluoroscopy, ultrasonic inspection, the motra detection method and particle impact noise detection, and the inspection steps have become more numerous and stricter. However, most automatic detection methods are suitable only for fully assembled objects, so inspection during assembly is usually performed manually, which still suffers from strong human-factor influence and a tendency toward missed inspections.
During mechanical assembly, a computer identifies the objects in the assembly-area image information acquired by the detection cameras and judges whether redundant material is present. Visual detection of redundant material is, in essence, a target detection task. The current mainstream target detection algorithms are based on deep learning and use convolutional neural networks to extract features, giving them strong resistance to interference and high detection accuracy.
Disclosure of Invention
To solve the problems described in the background art, the invention provides a depth camera calibration device for the field of industrial robot grabbing. The device uses several angle-adjustable cameras and a light source to photograph the assembly area from multiple angles, then applies a YOLOv3 network to perform target detection on the acquired images, identifying the objects in the assembly area and screening out redundant objects.
The technical solution adopted by the invention is as follows:
A depth camera calibration device for the field of industrial robot grabbing
The device comprises a workbench, angle-adjustable cameras, an annular light source, an alarm indicator lamp, a display screen and a vision controller. Two vertical profiles are fixed on the two sides of the workbench, and a horizontal profile is connected between them. An assembly body is placed on the table surface of the workbench, and the annular light source is arranged directly above it, mounted on the horizontal profile through a light-source adjusting rod. Angle-adjustable cameras are arranged above-left of, above-right of and directly above the assembly body; the cameras above-left and above-right are mounted on the two vertical profiles respectively, and the camera directly above the assembly body is mounted on the horizontal profile. The vision controller is arranged in a control cabinet below the workbench and is connected to the display screen arranged above the table surface and to the alarm indicator lamp arranged at the top of the workbench.
Each angle-adjustable camera comprises a camera, a camera support and a drive mechanism consisting of a worm and a worm-gear camera connecting plate. The camera support is fixed on a vertical or horizontal profile through bolts; worm support frames are mounted on the two sides of the back of the camera support, and the worm is connected between them. The worm-gear camera connecting plate meshes with the bottom of the worm, the helical teeth of the worm engaging the gear teeth on the top face of the connecting plate. The camera support below the worm is provided with an arc-shaped slot, and the connecting plate is connected to the camera on the front side of the camera support by a screw passing through the slot; driven by the worm, the connecting plate rotates along the arc-shaped slot, rotating the camera and thereby adjusting its angle.
Hexagonal sockets are formed in both ends of the worm; the worm is rotated by turning a socket with a hex key.
The vision controller displays the detection result on the display screen and controls the alarm indicator lamp to give an alarm when redundant material is detected in the assembly body.
Second, a detection method using the above depth camera calibration device for the field of industrial robot grabbing comprises the following steps:
s1: turning on the annular light source and the cameras, and acquiring images of the assembly body during mechanical assembly by the three cameras from a plurality of angles;
s2: preprocessing and labeling all images acquired by a vision controller in a historical assembly process, and dividing the labeled images into a training set and a verification set;
s3: performing clustering analysis according to the size of an object in the training set image, and setting the number of optimal prior frames and the size of the optimal prior frames through clustering analysis;
s4: inputting the training set subjected to the cluster analysis in the step S3 into a YOLOv3 target detection network for training, and verifying on the verification set to obtain an assembly part and common redundancy identification model capable of identifying the type and the position of an object;
s5: the vision controller preprocesses the detection image received from the camera in real time and inputs the preprocessed detection image into the assembly part and common redundancy identification model in the step S4, and the type and the position of an object in the detection image are obtained through prediction;
s6: and the vision controller judges whether the object in the detection image belongs to the part used by the current assembly body or not according to the prediction result of the step S5, if not, the object is determined to be a surplus object, the position of the surplus object is marked, and an alarm indicator lamp is used for alarming.
The preprocessing operations in steps S2 and S5 comprise cropping the image to a width-to-height ratio of 1:1 and linearly scaling it to a uniform size of 1024 × 1024.
Step S2 specifically comprises: preprocessing all images acquired by the vision controller during historical assembly processes; labeling the type number, center coordinates, width and height of every object in the preprocessed images; constructing a data set from all preprocessed and labeled images; and randomly dividing the data set into a training set and a validation set in a 4:1 ratio;
the type number of the object is marked according to the type number in the object list, the object list comprises all types of objects which can be identified in the vision controller and the corresponding type numbers, and the objects which can be identified comprise assembly parts and redundancy which does not belong to the assembly parts.
Step S3 specifically comprises: extracting the widths and heights of all objects in all training-set images of step S2 as real frames (ground-truth boxes), setting K prior frames (anchor boxes) as cluster centers, increasing the prior-frame number K one by one starting from 1, and performing K-means clustering on all real frames to obtain the shortest total distance DK corresponding to K prior frames; the larger K is, the smaller DK becomes, but the greater the computational cost of the model prediction process.
A prior-frame number K satisfying the following condition, which indicates that the shortest total distance is changing slowly, is taken as the optimal prior-frame number; the K prior frames are then taken as the optimal prior frames, and the prior-frame size corresponding to each optimal prior frame is taken as an optimal prior-frame size:
|D(K-1) - D(K+1)| < minimum distance threshold
where K is the number of prior frames, and DK is the shortest total distance obtained by K-means clustering of all real frames when the number of prior frames is K;
The K-means clustering of all real frames proceeds as follows: compute, for each real frame, its distance to the nearest prior frame, called the real frame's intermediate distance; sum the intermediate distances of all real frames to obtain the total distance; and adjust the prior-frame sizes repeatedly to minimize the total distance, taking the minimum as the shortest total distance for the current prior-frame number K (i.e., the overlap between real frames and prior frames is maximal). The distance in the cluster analysis is computed as
distance = 1 - IOU
where IOU is the degree of overlap between a real frame and a prior frame, i.e., the ratio of their overlapping area to the area of their union when their centers coincide, and distance is the intermediate distance of the real frame. The purpose of the cluster analysis is to choose the prior-frame number K and the prior-frame sizes that minimize the total distance, i.e., maximize the overlap between real frames and prior frames, which effectively improves the localization accuracy of the detection model.
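For concreteness, the following is a minimal Python sketch of this prior-frame (anchor-box) clustering under the definitions above: the 1 - IOU distance with box centers aligned, and K increased from 1 until the shortest total distance changes slowly. The function names, iteration cap and threshold value are illustrative assumptions, not taken from the patent.

    import numpy as np

    def iou_wh(box, prior):
        # IOU of two boxes whose centers coincide, using only (width, height)
        inter = min(box[0], prior[0]) * min(box[1], prior[1])
        union = box[0] * box[1] + prior[0] * prior[1] - inter
        return inter / union

    def kmeans_priors(boxes, k, iters=100):
        # boxes: (N, 2) array of real-frame widths and heights; k <= N required
        priors = boxes[np.random.choice(len(boxes), k, replace=False)]
        for _ in range(iters):
            # intermediate distance of each real frame: 1 - IOU to its nearest prior
            dist = np.array([[1 - iou_wh(b, p) for p in priors] for b in boxes])
            nearest = dist.argmin(axis=1)
            new_priors = np.array([boxes[nearest == i].mean(axis=0)
                                   if np.any(nearest == i) else priors[i]
                                   for i in range(k)])
            if np.allclose(new_priors, priors):
                break
            priors = new_priors
        shortest_total = dist.min(axis=1).sum()  # D_K: sum of intermediate distances
        return priors, shortest_total

    def optimal_k(boxes, k_max=12, threshold=1.0):
        # increase K from 1 and stop when |D(K-1) - D(K+1)| < threshold
        D = {k: kmeans_priors(boxes, k)[1] for k in range(1, k_max + 1)}
        for k in range(2, k_max):
            if abs(D[k - 1] - D[k + 1]) < threshold:
                return k
        return k_max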
In step S4, the object position comprises the object size and the object center; the optimal prior frames are used to restrict the object size predicted by the target detection network to a limited range. The limited range is defined as follows: a preset multiple is set; each prior-frame size is enlarged and reduced by this multiple to obtain an enlarged prior frame and a reduced prior frame, and the region between them is taken as the limited range of the object size.
Step S6 specifically comprises: the vision controller judges redundancy based on the object types predicted in step S5. Objects that are not assembly parts, such as hair, rag residue and wire, are directly judged to be redundant. Among objects that are assembly parts, any part not in the current assembly-parts list is judged to be redundant; the current assembly-parts list contains the types and corresponding type numbers of all assembly parts used in the current assembly.
The invention has the following beneficial effects:
1) Information is acquired through vision sensors, so the device adapts well to different application scenarios. The convolutional neural network, trained on large amounts of data, extracts features and classifies objects with high recognition accuracy and strong robustness, meets real-time detection requirements, and can detect redundant material in the target region from multiple angles in real time during mechanical assembly.
2) The invention can assist manual real-time detection of redundant material in a specific area during assembly; it offers high detection accuracy, strong real-time performance and flexible use, further reduces the introduction of redundant material, and enhances product reliability.
Drawings
FIG. 1 is a schematic diagram of the construction of the detection system of the present invention;
FIG. 2 is a schematic view of the structure and operation of the angle-adjustable camera according to the present invention, wherein (a), (b) and (c) show the angle-adjustable camera at three different angles;
FIG. 3 is a flow chart of the detection method of the present invention;
FIG. 4 is a flow chart of the present invention for training an assembled part and a common redundancy recognition model.
Reference numerals: 1, alarm indicator lamp; 2, display screen; 3, vertical profile; 4, annular light source (and its support); 6, control cabinet; 9, horizontal profile; 10, angle-adjustable camera; 11, assembly body; 12, workbench; 13, vision controller; 14, camera; 15, camera support; 17, worm; 18, worm support frame; 20, worm-gear camera connecting plate.
Detailed Description
In order that the objects and advantages of the invention will be more clearly understood, the invention is further described below with reference to examples; it should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and do not limit the scope of the present invention.
It should be noted that in the description of the present invention, the terms of direction or positional relationship indicated by the terms "upper", "lower", "left", "right", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, which are only for convenience of description, and do not indicate or imply that the device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connection" are to be construed broadly; for example, a connection may be fixed, detachable or integral; mechanical or electrical; direct or indirect through an intermediate medium; or internal between two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific situation.
Referring to fig. 1, the device comprises a workbench 12, angle-adjustable cameras, an annular light source 4, an alarm indicator lamp 1, a display screen 2 and a vision controller 13. Two vertical profiles 3 are fixed on the two sides of the workbench 12, and a horizontal profile 9 is connected between them. An assembly body 11 is placed on the table surface of the workbench 12, and the annular light source 4 is arranged directly above it, mounted on the horizontal profile 9 through a light-source adjusting rod. Angle-adjustable cameras are arranged above-left of, above-right of and directly above the assembly body 11: the angle-adjustable cameras 10 above-left and above-right are mounted on the two vertical profiles 3 respectively, and the camera directly above the assembly body 11 is mounted on the horizontal profile 9, directly above the annular light source 4. The vision controller 13 is arranged in the control cabinet 6 below the workbench 12 and is connected to the display screen 2 above the table surface and to the alarm indicator lamp 1 at the top of the workbench 12. The vision controller 13 displays the detection result on the display screen 2 and triggers the alarm indicator lamp 1 when redundant material is detected in the assembly body 11.
The horizontal profile 9 is connected to the vertical profiles 3 by angle brackets, and its height can be adjusted freely. The annular light source 4 makes the light in the working area uniform, eliminates shadows and improves the stability of the detection system; it can be moved freely in the horizontal and vertical directions by adjusting the light-source adjusting rod.
As shown in fig. 2, each angle-adjustable camera 10 comprises a camera 14, a camera support 15 and a drive mechanism consisting of a worm 17 and a worm-gear camera connecting plate 20. The camera support 15 is bolted to a vertical profile 3 or the horizontal profile 9; worm support frames 18 are mounted on both sides of the back of the camera support 15, with the worm 17 connected between them. The worm-gear camera connecting plate 20 meshes with the bottom of the worm 17, the helical teeth of the worm 17 engaging the gear teeth on the top face of the connecting plate 20. The camera support 15 below the worm 17 has an arc-shaped slot, and the connecting plate 20 is fastened to the camera 14 on the front of the camera support 15 by a screw passing through the slot. Driven by the worm 17, the connecting plate 20 rotates along the arc-shaped slot, rotating the camera 14 and thereby adjusting its angle. Hexagonal sockets are formed in both ends of the worm 17, and turning a socket with a hex key rotates the worm 17.
The angle-adjustable camera can rotate freely over a wide angular range, making it easy to set different shooting angles; because the worm-gear drive has a large transmission ratio, the rotation is highly precise.
In the invention, the height and angle of the annular light source 4 and of the cameras 14 can be adjusted freely, making it easy to adapt to different working requirements. Several angle-adjustable cameras 10 arranged at multiple positions around the assembly area acquire image information from different angles in an all-around manner, making the inspection procedure stricter and more reliable.
The specific embodiment is as follows:
the visual detection method of the excess in the mechanical assembly comprises the following steps:
step 1: and turning on the light source and the cameras, and acquiring image signals of the area where the assembly body is located during mechanical assembly by the plurality of cameras from a plurality of angles, converting the image signals into electric signals of digital quantity and sending the electric signals to the vision controller.
Step 2: the vision controller, connected to the cameras through an interface circuit, acquires a large number of images from historical assembly processes. After preprocessing, the objects in the images are identified manually and each object's type number, center coordinates, width and height are labeled to construct a data set, which is then randomly divided into a training set and a validation set in a 4:1 ratio.
The type number of each object is labeled according to the type numbers in the object list, which contains all object types the vision controller can identify and their corresponding type numbers; the identifiable objects comprise assembly parts and redundancy that does not belong to the assembly parts. The center coordinates of each object are labeled in a coordinate system whose origin is the upper-left corner of the image.
Step 3: perform cluster analysis on the sizes of the objects in the training-set images, and set the optimal number and sizes of the prior frames through the cluster analysis.
First extract the widths and heights of all objects in all training-set images of step 2 as real frames, then set K prior frames as cluster centers, increase the prior-frame number K one by one starting from 1, and perform K-means clustering on all real frames to obtain the shortest total distance DK corresponding to K prior frames; the larger K is, the smaller DK becomes, but the greater the computational cost of the model prediction process.
A prior-frame number K satisfying the following condition, which indicates that the shortest total distance is changing slowly, is taken as the optimal prior-frame number; the K prior frames are then taken as the optimal prior frames, and their sizes as the optimal prior-frame sizes:
|D(K-1) - D(K+1)| < minimum distance threshold
where K is the number of prior frames, and DK is the shortest total distance obtained by K-means clustering of all real frames when the number of prior frames is K.
The K-means clustering of all real frames proceeds as in the disclosure above (see the sketch following that passage): each real frame's intermediate distance is its distance to the nearest prior frame, computed as distance = 1 - IOU, where IOU is the ratio of the overlapping area of the real frame and the prior frame to the area of their union when their centers coincide; the intermediate distances of all real frames are summed to give the total distance, and the prior-frame sizes are adjusted repeatedly to minimize it, the minimum being the shortest total distance for the current K. Minimizing the total distance maximizes the overlap between real frames and prior frames, which effectively improves the localization accuracy of the detection model.
Step 4: train the YOLOv3 target detection network on the training set to obtain the assembly-part and common-redundancy identification model, specifically comprising the following steps:
4.1) set the training parameters, such as batch size, number of iterations, learning rate, and the number and sizes of the prior frames obtained by clustering in step 3;
4.2) apply data enhancement operations such as random linear scaling, flipping, brightness adjustment, contrast adjustment and hue adjustment to the images to be input into the network model, improving the model's generalization ability and its recognition accuracy on small objects;
4.3) the YOLOv3 model extracts feature maps with the convolutional neural network Darknet-53 and then performs regression analysis on feature maps of different sizes to predict the types and positions of objects;
4.4) comparing the prediction result with the labeled real result, and calculating a loss value according to a loss function;
4.5) reversely propagating and updating the model parameters according to the loss value;
4.6) repeat steps 4.2-4.5 and stop training when the maximum number of iterations is reached; take the model with the minimum average loss over the whole process as the final output model, and verify the model's overall performance on the validation set. A schematic training-loop sketch follows.
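The following PyTorch-style sketch shows only the shape of steps 4.1-4.6. The tiny placeholder network, dummy data and cross-entropy loss stand in for the actual YOLOv3 model with Darknet-53 backbone and its multi-part loss, which the patent does not spell out; all names here are illustrative assumptions.

    import torch
    from torch import nn, optim

    # placeholder detector standing in for YOLOv3 with a Darknet-53 backbone
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    loss_fn = nn.CrossEntropyLoss()                      # stands in for the YOLOv3 loss
    optimizer = optim.SGD(model.parameters(), lr=1e-3)   # 4.1) training parameters
    max_iters, best_loss, best_state = 50, float("inf"), None

    for it in range(max_iters):
        # 4.2) data enhancement (random scaling, flips, brightness, contrast, hue)
        # would be applied here; a dummy 256 x 256 batch is used for brevity
        images = torch.rand(4, 3, 256, 256)
        targets = torch.randint(0, 10, (4,))
        preds = model(images)                # 4.3) feature extraction and prediction
        loss = loss_fn(preds, targets)       # 4.4) compare prediction with labels
        optimizer.zero_grad()
        loss.backward()                      # 4.5) backpropagate and update parameters
        optimizer.step()
        if loss.item() < best_loss:          # 4.6) keep the model with minimum loss
            best_loss = loss.item()
            best_state = {k: v.clone() for k, v in model.state_dict().items()}

    model.load_state_dict(best_state)  # final output model; verify on the validation set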
The trained model mainly recognizes two types of objects:
(1) all possible assembly parts;
(2) hair, rag residues, metal wires and other common excess materials.
In step 4, the object position comprises the object size and the object center. The final object size is obtained from the optimal prior-frame sizes of step 3: the object sizes predicted by the model are each restricted to a limited range around an optimal prior-frame size; sizes within a limited range are output as candidate results, and the candidate with the highest confidence among all outputs is taken as the final object size.
The limited range is defined as follows: a preset multiple is set; each prior-frame size is enlarged and reduced by this multiple to obtain an enlarged prior frame and a reduced prior frame, and the region between them is taken as the limited range of the object size.
Step 5: the vision controller receives the images transmitted back by the cameras in real time, preprocesses them, inputs them into the assembly-part and common-redundancy identification model of step 4, and predicts the type and specific position of each object in every image. The target detection network runs on a GPU, which excels at parallel computation, ensuring real-time detection.
Step 6: the vision controller judges from the predicted object types whether each object is redundant, marks the positions of redundant objects and raises an alarm. The judgment has two steps (a sketch follows the list):
6.1) for identified assembly parts, the vision controller checks each part against a preset parts list, which contains the type numbers of all assembly parts used in the current assembly; any part not in the list is judged to be redundant;
6.2) identified common redundancy, such as hair, rag residue and wire, is judged to be redundant directly.
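A minimal Python sketch of this two-step judgment; the class names and type numbers are invented for illustration:

    # illustrative class names and type numbers; a real system would load these
    COMMON_REDUNDANCY = {"hair", "rag_residue", "metal_wire"}   # step 6.2
    CURRENT_PARTS_LIST = {3, 7, 12}   # type numbers of parts used in the current assembly

    def is_redundant(object_class, type_number):
        if object_class in COMMON_REDUNDANCY:          # 6.2) common redundancy: flag directly
            return True
        return type_number not in CURRENT_PARTS_LIST   # 6.1) part not in the preset list

    # a detected washer whose type number is not in the current parts list:
    print(is_redundant("washer", 9))   # True -> mark its position and raise the alarm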
The preprocessing in steps 2 and 5 comprises:
(1) cropping the image to a width-to-height ratio of 1:1;
(2) linearly scaling the image to a size of 1024 × 1024.
A code sketch of this preprocessing is given below.
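A minimal OpenCV-based sketch of the two preprocessing operations; the patent names no library and does not say where the 1:1 crop is taken, so the center crop here is an assumption.

    import cv2

    def preprocess(image):
        # image: H x W x 3 numpy array, e.g. from cv2.imread()
        # (1) crop to a 1:1 width-to-height ratio (center crop assumed)
        h, w = image.shape[:2]
        side = min(h, w)
        top, left = (h - side) // 2, (w - side) // 2
        square = image[top:top + side, left:left + side]
        # (2) linearly scale to the uniform 1024 x 1024 input size
        return cv2.resize(square, (1024, 1024), interpolation=cv2.INTER_LINEAR)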
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A depth camera calibration device for the field of industrial robot grabbing, characterized by comprising a workbench (12), angle-adjustable cameras, an annular light source (4), an alarm indicator lamp (1), a display screen (2) and a vision controller (13), wherein two vertical profiles (3) are fixed on the two sides of the workbench (12) and a horizontal profile (9) is connected between the two vertical profiles (3); an assembly body (11) is placed on the table surface of the workbench (12); the annular light source (4) is arranged directly above the assembly body (11) and is mounted on the horizontal profile (9) through a light-source adjusting rod; angle-adjustable cameras are arranged above-left of, above-right of and directly above the assembly body (11), the angle-adjustable cameras (10) above-left and above-right of the assembly body (11) being mounted on the two vertical profiles (3) respectively, and the angle-adjustable camera directly above the assembly body (11) being mounted on the horizontal profile (9) directly above the annular light source (4); the vision controller (13) is arranged in a control cabinet (6) below the workbench (12) and is connected to the display screen (2) arranged above the table surface and to the alarm indicator lamp (1) arranged at the top of the workbench (12); each angle-adjustable camera (10) comprises a camera (14), a camera support (15) and a drive mechanism consisting of a worm (17) and a worm-gear camera connecting plate (20), wherein the camera support (15) is fixed on a vertical profile (3) or the horizontal profile (9) through bolts; worm support frames (18) are mounted on both sides of the back of the camera support (15) with the worm (17) connected between them; the worm-gear camera connecting plate (20) meshes with the bottom of the worm (17), the helical teeth of the worm (17) engaging the gear teeth on the top face of the connecting plate (20); and the camera support (15) below the worm (17) is provided with an arc-shaped slot, the connecting plate (20) being connected to the camera (14) on the front side of the camera support (15) by a screw passing through the slot, so that, driven by the worm (17), the connecting plate (20) rotates along the arc-shaped slot, rotating the camera (14) and thereby adjusting its angle.
2. The depth camera calibration device for the field of industrial robot grabbing as claimed in claim 1, characterized in that hexagonal sockets are formed in both ends of the worm (17), and the worm (17) is rotated by turning a socket with a hex key.
3. The depth camera calibration device for the field of industrial robot grabbing as claimed in claim 1, characterized in that the vision controller (13) displays the detection result on the display screen (2) and controls the alarm indicator lamp (1) to give an alarm when redundant material is detected in the assembly body (11).
4. A detection method using the depth camera calibration device for the field of industrial robot grabbing as claimed in any one of claims 1-3, characterized by comprising the following steps:
s1: turning on the annular light source (4) and the cameras (14), and acquiring images of the mechanical assembly assembling body (11) by the three cameras (14) from multiple angles;
s2: preprocessing and labeling all images acquired by a vision controller in a historical assembly process, and dividing the labeled images into a training set and a verification set;
s3: performing clustering analysis according to the size of an object in the training set image, and setting the number of optimal prior frames and the size of the optimal prior frames through clustering analysis;
s4: inputting the training set subjected to the cluster analysis in the step S3 into a YOLOv3 target detection network for training, and verifying on the verification set to obtain an assembly part and common redundancy identification model capable of identifying the type and the position of an object;
s5: the vision controller preprocesses the detection image received from the camera in real time and inputs the preprocessed detection image into the assembly part and common redundancy identification model in the step S4, and the type and the position of an object in the detection image are obtained through prediction;
s6: and the vision controller judges whether the object in the detection image belongs to the part used by the current assembly body or not according to the prediction result of the step S5, if not, the object is determined to be a surplus object, the position of the surplus object is marked, and an alarm indicator lamp is used for alarming.
5. The depth camera calibration device for the field of industrial robot grabbing as claimed in claim 4, wherein the preprocessing operations in steps S2 and S5 comprise cropping the image and linearly scaling it.
6. The depth camera calibration device for the field of industrial robot grabbing as claimed in claim 4, wherein step S2 specifically comprises: preprocessing all images acquired by the vision controller during historical assembly processes; labeling the type number, center coordinates, width and height of every object in the preprocessed images; constructing a data set from all preprocessed and labeled images; and randomly dividing the data set into a training set and a validation set in a 4:1 ratio;
the type number of the object is marked according to the type number in the object list, the object list comprises all types of objects which can be identified in the vision controller and the corresponding type numbers, and the objects which can be identified comprise assembly parts and redundancy which does not belong to the assembly parts.
7. The depth camera calibration device for the field of industrial robot grabbing as claimed in claim 4, wherein step S3 specifically comprises: extracting the widths and heights of all objects in all training-set images of step S2 as real frames, setting K prior frames as cluster centers, increasing the prior-frame number K one by one starting from 1, and performing K-means clustering on all real frames to obtain the shortest total distance DK corresponding to K prior frames.
8. The depth camera calibration device for the field of industrial robot grabbing as claimed in claim 7, wherein a prior-frame number K satisfying the following condition is taken as the optimal prior-frame number, the corresponding K prior frames are taken as the optimal prior frames, and the prior-frame size corresponding to each optimal prior frame is taken as an optimal prior-frame size:
|D(K-1) - D(K+1)| < minimum distance threshold
wherein K is the number of prior frames, and DK is the shortest total distance obtained by K-means clustering of all real frames when the number of prior frames is K.
9. The depth camera calibration device for the field of industrial robot grabbing as claimed in claim 7, wherein the K-means clustering of all real frames proceeds as follows: computing, for each real frame, its distance to the nearest prior frame to obtain the real frame's intermediate distance; summing the intermediate distances of all real frames to obtain the total distance; and adjusting the prior-frame sizes to minimize the total distance, the minimum being taken as the shortest total distance for the current prior-frame number K, i.e., the overlap between real frames and prior frames being maximal, wherein the distance in the cluster analysis is computed as:
distance = 1 - IOU
wherein IOU represents the degree of overlap between a real frame and a prior frame, and distance is the intermediate distance of the real frame.
10. The depth camera calibration device for the field of industrial robot grabbing as claimed in claim 4, wherein step S6 specifically comprises: the vision controller performing redundancy judgment based on the object types predicted in step S5, whereby objects that are not assembly parts, such as hair, rag residue and wire, are directly judged to be redundant, and, among objects that are assembly parts, any part not in the current assembly-parts list is judged to be redundant, the current assembly-parts list containing the types and corresponding type numbers of all assembly parts used in the current assembly.
CN202110039639.0A 2021-01-13 2021-01-13 Depth camera calibration device for the field of industrial robot grabbing Withdrawn CN112873200A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110039639.0A CN112873200A (en) 2021-01-13 2021-01-13 Depth camera calibration device for the field of industrial robot grabbing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110039639.0A CN112873200A (en) 2021-01-13 2021-01-13 Depth camera calibration device for the field of industrial robot grabbing

Publications (1)

Publication Number Publication Date
CN112873200A (en) 2021-06-01

Family

ID=76044890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110039639.0A Withdrawn CN112873200A (en) 2021-01-13 2021-01-13 A degree of depth camera calibration device for industrial robot snatchs field

Country Status (1)

Country Link
CN (1) CN112873200A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113601028A (en) * 2021-08-14 2021-11-05 李涛 Intelligent laser edge trimming device of machining cutter
CN113752680A (en) * 2021-09-10 2021-12-07 金华市鑫辉自动化设备有限公司 Multipurpose gloves thermoprint equipment based on visual detection


Similar Documents

Publication Publication Date Title
CN110385282B (en) System and method for visually detecting excess based on deep learning in mechanical assembly
CN112873200A (en) Depth camera calibration device for the field of industrial robot grabbing
CN108765378B (en) Machine vision detection method for workpiece contour flash bulge under guidance of G code
CN105157603B (en) A kind of line laser sensor
CN104977305A (en) Welding quality analysis device based on infrared vision and analysis method thereof
CN110220481B (en) Handheld visual detection equipment and pose detection method thereof
CN115631138A (en) Zirconium alloy plate laser cutting quality monitoring method and device
WO2003093761A1 (en) Method and instrument for measuring bead cutting shape of electric welded tube
EP3775854B1 (en) System for the detection of defects on a surface of at least a portion of a body and method thereof
US20240095949A1 (en) Machine vision detection method, detection apparatus and detection system thereof
Xu et al. Real‐time image capturing and processing of seam and pool during robotic welding process
CN114792323A (en) Steel pipe deformation detection method and system based on image processing
CN107703513A (en) A kind of novel non-contact contact net relative position detection method based on image procossing
EP2295931A1 (en) Device and method for assessing the shape of an object
EP3798622A1 (en) Systems and methods for inspecting pipelines using a robotic imaging system
CN116772723A (en) Weld quality detection method based on structured light imaging
CN107154033A (en) A kind of high ferro contact net rotation ears vertical openings pin missing detection method and system
CN114211168A (en) Method for correcting plane welding seam track based on image subtraction
CN117817223B (en) Welding seam identification method for robot welding
CN111967323A (en) Electric power live working safety detection method based on deep learning algorithm
CN112686838B (en) Rapid detection device and detection method for ship anchor chain flash welding system
US20130345850A1 (en) Procedure for controlling the shape of a complex metal profile obtained by a series of successive bendings of a sheet metal on a panel bender
EP3757939B1 (en) Method and apparatus for checking the production quality of cables that are provided with a protective sheath, in particular electrical cables
CN112051272A (en) High-pressure gas cylinder inner surface defect detection system based on machine vision
CN115255565A (en) Global pattern recognition based narrow gap welding notch edge vision sensing detection method and application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210601