CN111768369A - Steel plate corner point and edge point positioning method, workpiece grabbing method and production line - Google Patents


Info

Publication number
CN111768369A
Authority
CN
China
Prior art keywords
picture
steel plate
point
points
corner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010486614.0A
Other languages
Chinese (zh)
Other versions
CN111768369B (en)
Inventor
曾德天 (Zeng Detian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Shibite Robot Co Ltd
Original Assignee
Hunan Shibite Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Shibite Robot Co Ltd filed Critical Hunan Shibite Robot Co Ltd
Priority to CN202010486614.0A
Publication of CN111768369A
Application granted
Publication of CN111768369B
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a steel plate corner point and edge point positioning method, a workpiece grabbing method and a production line. The steel plate corner point and edge point positioning method comprises the following steps: designing a learning network for detecting corner points and edge points in a specific scene; feeding preset synthetic scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model; and acquiring corner point pictures and edge point pictures of the real scene and feeding this real picture data into the learning network for training, which further improves the accuracy and robustness of the model and generates the final detection model. The technical scheme of the invention helps improve the positioning precision of the corner points and edge points of the steel plate.

Description

Steel plate corner point and edge point positioning method, workpiece grabbing method and production line
Technical Field
The invention relates to the technical field of corner point and edge point positioning, and in particular to a steel plate corner point and edge point positioning method and a workpiece grabbing method.
Background
In the traditional image processing method, a picture taken by a camera is processed: the corner points and edge points are located in the picture, and a coordinate-system conversion then determines the position of the steel plate in the robot coordinate system. However, corner and edge positioning is strongly affected by varying illumination intensity, so the traditional method locates these points inaccurately, with a detection success rate of only about 60-80%.
Disclosure of Invention
The main object of the invention is to provide a steel plate corner point and edge point positioning method that improves the accuracy with which the corner points and edge points of a steel plate are detected.
To achieve this object, the steel plate corner point and edge point positioning method provided by the invention comprises the following steps:
designing a learning network for detecting corner points and edge points in a specific scene;
feeding preset synthetic scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model;
acquiring corner point pictures and edge point pictures of a real scene and feeding this real picture data into the learning network for training, so that the accurate pre-trained model is refined into the final detection model.
Optionally, the step of designing a learning network for detecting corner points and edge points in a specific scene comprises:
a feature extraction module extracts features bottom-up and top-down through convolution modules, merges the two sets of features, and applies a further convolution to the merged features to eliminate the aliasing effect;
a region proposal module is trained to nominate coarse potential target regions, so as to preliminarily locate a rough target region;
a final prediction module further refines the specific positions of the corner points and edge points by training a network that combines convolutional and fully connected layers;
the feature extraction module, the region proposal module and the final prediction module are trained end-to-end as a whole, and the three modules are used in series during prediction.
Optionally, before the step of feeding the preset synthetic scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model, the method further comprises:
pre-training the learning network on a currently published data set to generate a pre-trained model.
Optionally, the step of synthesizing the scene picture data comprises:
acquiring a scene picture taken by a camera;
synthesizing a plurality of composite pictures from the scene picture;
and labeling the corner points or edge points on each composite picture.
Optionally, the step of synthesizing a plurality of composite pictures from the scene picture comprises:
generating a first composite picture according to a first illumination intensity and a first illumination angle;
generating a second composite picture according to a second illumination intensity and a second illumination angle;
generating a third composite picture according to a third illumination intensity and a third illumination angle;
wherein the first illumination intensity is greater than the second illumination intensity and less than the third illumination intensity;
and the first illumination angle is greater than the second illumination angle and less than the third illumination angle.
Optionally, the step of synthesizing a plurality of composite pictures according to different working scenes further comprises:
generating a fourth composite picture according to a fourth illumination intensity and a fourth illumination angle;
generating a fifth composite picture according to a fifth illumination intensity and a fifth illumination angle;
wherein the first illumination intensity is greater than the fifth illumination intensity and less than the fourth illumination intensity;
and the first illumination angle is smaller than the fifth illumination angle and larger than the fourth illumination angle.
Optionally, the step of generating the corner point pictures and edge point pictures of the real scene comprises:
collecting a plurality of real corner point pictures and edge point pictures;
and marking the corresponding corner points on each corner point picture and the corresponding edge points on each edge point picture.
Optionally, before the model is trained, it is preloaded into video memory to increase retrieval and training speed.
The invention also provides a workpiece grabbing method which includes the steel plate corner point and edge point positioning method, wherein the steel plate corner point and edge point positioning method comprises the following steps:
designing a learning network for detecting corner points and edge points in a specific scene;
feeding preset synthetic scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model;
acquiring corner point pictures and edge point pictures of a real scene and feeding this real picture data into the learning network for training, so that the accurate pre-trained model is refined into the final detection model.
The invention also provides a workpiece production line which uses the workpiece grabbing method, wherein the workpiece grabbing method includes the steel plate corner point and edge point positioning method, and the steel plate corner point and edge point positioning method comprises the following steps:
designing a learning network for detecting corner points and edge points in a specific scene;
feeding preset synthetic scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model;
acquiring corner point pictures and edge point pictures of a real scene and feeding this real picture data into the learning network for training, so that the accurate pre-trained model is refined into the final detection model.
In the technical scheme of the invention, a learning network for detecting corner points and edge points is designed for a specific scene; preset synthetic scene picture data are then fed into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model; corner point pictures and edge point pictures of a real scene are then acquired and fed into the learning network for training, so that the accurate pre-trained model is refined into the final detection model. Because a large number of scene pictures are synthesized for the learning network to learn from, the network can learn the detection conditions of many working scenes; the synthetic pictures make up for the shortage of high-quality (and hard-to-obtain) real learning pictures, and the network can extract common features shared by the two kinds of scene, which plays a very important role in strengthening the generalization ability of the corner point and edge point detection model. By generating a large amount of synthetic data and training on it, the accuracy of the model can already exceed 90%; after training on real pictures, the accurate pre-trained model is refined into the final detection model, and the detection accuracy of the learning network model is further greatly improved.
Drawings
To illustrate the embodiments of the invention or the prior-art technical solutions more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; a person skilled in the art can obtain other drawings from these structures without creative effort.
FIG. 1 is a schematic flow chart of an embodiment of the steel plate corner point and edge point positioning method of the invention;
FIGS. 2 to 4 show detection results produced by the final detection model.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
It should be noted that all directional indications (such as up, down, left, right, front and back) in the embodiments are used only to explain the relative positions and motion of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In addition, descriptions involving "first", "second" and the like are for description only and must not be understood as indicating or implying relative importance or implicitly indicating the number of technical features; a feature defined as "first" or "second" may thus explicitly or implicitly include at least one such feature. Throughout the text, "and/or" covers three schemes: taking "A and/or B" as an example, it covers scheme A alone, scheme B alone, and the scheme in which A and B hold simultaneously. Technical solutions of different embodiments may be combined with each other, but only insofar as a person skilled in the art can realize the combination; when technical solutions contradict each other or a combination cannot be realized, the combination is deemed not to exist and falls outside the protection scope of the invention.
The invention mainly provides a steel plate corner point and edge point positioning method, applied chiefly within a workpiece grabbing method; by improving the accuracy of the model it improves the accuracy with which the robot identifies and locates corner points and edge points, and thereby helps improve the precision with which the mechanical arm grabs workpieces.
The specific steps of the steel plate corner point and edge point positioning method are mainly described below.
Referring to figs. 1 to 4, in an embodiment of the invention, the steel plate corner point and edge point positioning method comprises the following steps:
S100, designing a learning network for detecting corner points and edge points in a specific scene;
S200, feeding preset synthetic scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model;
S300, acquiring corner point pictures and edge point pictures of the real scene and feeding this real picture data into the learning network for training, so that the accurate pre-trained model is refined into the final detection model.
Specifically, in this embodiment the corner point and edge point detection network designed for a specific scene can take many forms, and the specific scene can be set according to actual requirements; one design of such a learning network is given below.
The step of designing a learning network for detecting corner points and edge points in a specific scene comprises:
a feature extraction module extracts features bottom-up and top-down through convolution modules, merges the two sets of features, and applies a further convolution to the merged features to eliminate the aliasing effect; a region proposal module is trained to nominate coarse potential target regions, so as to preliminarily locate a rough target region; a final prediction module further refines the specific positions of the corner points and edge points by training a network that combines convolutional and fully connected layers; the feature extraction module, the region proposal module and the final prediction module are trained end-to-end as a whole, and the three modules are used in series during prediction.
In concrete use of the learning network, the feature extraction module first extracts the scene features by combining the features extracted bottom-up and top-down and then processes the merged features once more; the region proposal module is trained to nominate coarse potential target regions, and the final prediction module further refines the specific positions of corner points and edge points by training a network that combines convolutional and fully connected layers. A preliminary learning network model is thus formed, as sketched below.
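As a concrete illustration, the following is a minimal sketch of the feature-merging step described above, written in PyTorch. The channel widths, the three-level pyramid and the class name FPNMerge are illustrative assumptions rather than the patent's exact network; the sketch only shows the bottom-up/top-down merge and the trailing convolution that eliminates the aliasing effect.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FPNMerge(nn.Module):
        """Merge bottom-up features with a top-down pass; a trailing 3x3
        convolution on each merged map suppresses upsampling aliasing."""
        def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
            super().__init__()
            # 1x1 lateral convs bring every bottom-up level to a common width
            self.laterals = nn.ModuleList(
                nn.Conv2d(c, out_channels, 1) for c in in_channels)
            # 3x3 convs applied after merging ("the convolution operation again")
            self.smooth = nn.ModuleList(
                nn.Conv2d(out_channels, out_channels, 3, padding=1)
                for _ in in_channels)

        def forward(self, feats):                    # finest resolution first
            maps = [lat(f) for lat, f in zip(self.laterals, feats)]
            for i in range(len(maps) - 2, -1, -1):   # top-down: coarse to fine
                maps[i] = maps[i] + F.interpolate(
                    maps[i + 1], size=maps[i].shape[-2:], mode="nearest")
            return [s(m) for s, m in zip(self.smooth, maps)]

    # smoke test with dummy bottom-up features
    feats = [torch.randn(1, 256, 64, 64),
             torch.randn(1, 512, 32, 32),
             torch.randn(1, 1024, 16, 16)]
    merged = FPNMerge()(feats)   # three 256-channel maps, aliasing smoothed

In a full pipeline these merged maps would feed the region proposal module and the convolution-plus-fully-connected prediction head in series.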
After the learning network is established, synthetic pictures are used to train it. The pictures can be synthesized in many ways, for example according to various working conditions (which may include light intensity, light angle, the cleanliness of the workpiece surface and so on) and from pictures of existing real working conditions. The synthesis of such pictures is illustrated below.
To improve the realism of the synthetic scene pictures, the step of synthesizing the scene picture data comprises:
acquiring a scene picture taken by a camera; synthesizing a plurality of composite pictures from the scene picture; and labeling the corner points or edge points on each composite picture. That is, the camera takes a number of real-scene pictures, and synthetic scene pictures are generated from them. During simulation, several scene pictures can be synthesized from the same real-scene picture by adjusting the scene conditions, for example one or more of the illumination intensity, the illumination angle and the cleanliness of the workpiece surface.
In this way, by simulating the working conditions shown in a real picture and adjusting the working-condition parameters during synthesis, synthetic scene pictures covering many different working conditions can be derived from a real-scene basis. This greatly enriches the working scenes and improves both the robustness of the learning network model and its detection precision. It is worth noting that a large number of scene pictures cannot be obtained before the whole system enters actual production, and a good network model cannot be trained on so little data; because real-scene pictures are hard to acquire, the synthetic pictures make up for this shortage. Although a synthetic picture is not as realistic as a real one, the network can still learn and extract common features shared by the two kinds of scene, which plays an important role in strengthening the generalization ability of the corner point and edge point detection model. By generating a large amount of synthetic data and training on these data alone, the accuracy of the model can reach as high as 90%.
Two cases of synthesizing scene pictures are described in detail below (a code sketch follows the second case):
the first condition is as follows: according to the scene picture, the step of artificially synthesizing a plurality of synthesized pictures comprises the following steps:
generating a first composite picture according to the first illumination intensity and the first illumination angle;
generating a second composite picture according to the second illumination intensity and the second illumination angle;
generating a third composite picture according to the third illumination intensity and the third illumination angle;
wherein the first illumination intensity is greater than the second illumination intensity and less than the third illumination intensity;
the first illumination angle is greater than the second illumination angle and less than the third illumination angle.
In this embodiment, a first synthesized picture is synthesized with the first illumination intensity and the first illumination angle, and based on the first synthesized picture, a third synthesized picture whose illumination intensity and illumination angle are both greater than those of the first synthesized picture may be synthesized, or a second synthesized picture whose illumination intensity and illumination angle are both less than those of the first synthesized picture may be synthesized. Of course, in some embodiments, it is also contemplated to increase the cleanliness of the workpiece surface, such as the first cleanliness, the second cleanliness, and the third cleanliness may be set for the first synthetic picture, the second synthetic picture, and the third synthetic picture, respectively. And the first cleanliness is greater than the second cleanliness and less than the third cleanliness. In some embodiments, the texture condition of the surface of the workpiece can be further considered, the texture conditions of a plurality of workpieces are combined into a picture to form a comprehensive texture picture, and the number of integrated textures in the comprehensive texture picture can be processed according to different conditions. For example, a first texture number, a second texture number, and a third texture number may be set corresponding to the first composite picture, the second composite picture, and the third composite picture, respectively. The first texture number is larger than the second texture number and smaller than the third texture number.
Case two: the step of synthesizing a plurality of composite pictures according to different working scenes further comprises:
generating a fourth composite picture according to a fourth illumination intensity and a fourth illumination angle;
generating a fifth composite picture according to a fifth illumination intensity and a fifth illumination angle;
wherein the first illumination intensity is greater than the fifth illumination intensity and less than the fourth illumination intensity;
and the first illumination angle is smaller than the fifth illumination angle and larger than the fourth illumination angle.
In this embodiment, a first composite picture is synthesized with the first illumination intensity and the first illumination angle; on this basis, a fourth composite picture whose illumination intensity is greater and whose illumination angle is smaller than those of the first can be synthesized, or a fifth composite picture whose illumination intensity is smaller and whose illumination angle is greater. Of course, in some embodiments the cleanliness of the workpiece surface can also be varied: a first, fourth and fifth cleanliness may be set for the first, fourth and fifth composite pictures respectively, with the first cleanliness greater than the fifth and less than the fourth. In some embodiments the texture of the workpiece surface can further be considered: the textures of several workpieces are combined into one comprehensive texture picture, and the number of textures integrated into it can be varied. For example, a first, fourth and fifth texture number may be set for the first, fourth and fifth composite pictures respectively, with the first texture number greater than the fifth and less than the fourth.
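A minimal sketch of how such ordered illumination variants might be derived from a single real photo is given below. The gain-plus-directional-ramp model, the 0.4 ramp weight and the sample (intensity, angle) pairs are assumptions made for illustration; the patent does not specify the synthesis renderer.

    import numpy as np

    def synthesize_illumination(img, intensity, angle_deg):
        """Toy model: a global gain simulates illumination intensity and a
        directional brightness ramp simulates the illumination angle."""
        h, w = img.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        t = np.radians(angle_deg)
        ramp = xx * np.cos(t) + yy * np.sin(t)    # distance along light direction
        ramp = (ramp - ramp.min()) / (ramp.max() - ramp.min() + 1e-9) - 0.5
        out = img.astype(np.float32) * intensity * (1.0 + 0.4 * ramp[..., None])
        return np.clip(out, 0, 255).astype(np.uint8)

    photo = np.random.randint(0, 255, (480, 640, 3), np.uint8)  # stand-in scene photo
    # first/second/third composite pictures, with intensity and angle ordered
    # second < first < third as in case one above
    composites = [synthesize_illumination(photo, i, a)
                  for i, a in [(1.0, 45.0), (0.6, 25.0), (1.4, 65.0)]]

The labelled corner or edge coordinates carry over unchanged from the source photo, since this model alters only the shading.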
After the accurate pre-trained model has been generated, corner point pictures and edge point pictures of the real scene are acquired and this real picture data is fed into the learning network for training, so that the accurate pre-trained model is refined into the final detection model. In this embodiment, the step of generating the corner point pictures and edge point pictures of the real scene comprises: collecting a plurality of real corner point pictures and edge point pictures, then marking the corresponding corner points on each corner point picture and the corresponding edge points on each edge point picture.
Finally, the fully trained model predicts corner points and edge points on real pictures. Before the system enters production, real pictures are hard to obtain; they are also more complex, with varied backgrounds and interference. After a small batch of real-scene pictures has been collected and labelled, training of the previous network continues so that the network is exposed to real-scene data; back-propagation corrects the parameters in the model, further improving its accuracy and robustness on real-scene pictures, as in the sketch below.
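The sketch below shows one form this continued training could take, assuming, purely as an illustration, that the learning network is a torchvision keypoint detector (whose FPN backbone, region proposal stage and conv-plus-fc head mirror the three-module series described above) and that real_scene_loader yields the small set of labelled real pictures; the checkpoint path and schedule are likewise assumptions.

    import torch
    from torchvision.models.detection import keypointrcnn_resnet50_fpn

    # stand-in detector; two classes (background / steel plate), four keypoints
    model = keypointrcnn_resnet50_fpn(num_classes=2, num_keypoints=4)
    model.load_state_dict(torch.load("pretrain_synthetic.pth"))  # synthetic-data model (assumed path)

    # small learning rate: correct the parameters rather than overwrite them
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

    model.train()
    for epoch in range(20):                         # illustrative schedule
        for images, targets in real_scene_loader:   # assumed loader of labelled real pictures
            loss_dict = model(images, targets)      # proposal + head detection losses
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()                         # back-propagation corrects the model
            optimizer.step()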
In this embodiment, a learning network for detecting corner points and edge points is designed for a specific scene; preset synthetic scene picture data are then fed into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model; corner point pictures and edge point pictures of a real scene are then acquired and fed into the learning network for training, so that the accurate pre-trained model is refined into the final detection model. Because a large number of scene pictures are synthesized for the network to learn from, it can learn the detection conditions of many working scenes; the synthetic pictures make up for the shortage of high-quality learning pictures, and the network can extract common features from the two kinds of scene, which plays an important role in strengthening the generalization ability of the corner point and edge point detection model. By generating a large amount of synthetic data and training on it, the accuracy of the model can exceed 90%; after training on real pictures, the accurate pre-trained model is refined into the final detection model and the detection accuracy is further greatly improved, approaching 100% as shown in figs. 2 to 4.
In some embodiments, to improve the detection accuracy of the learning network, the method further comprises, before the step of feeding preset synthetic scene picture data into the learning network for training so that back-propagation optimizes the parameters and generates an accurate pre-trained model: pre-training the learning network on a currently published data set to generate a pre-trained model. Pre-training on a public data set gives the learning network a high-quality initialization and yields better initial network parameters.
In some embodiments, for efficiency of training and detection, the model is preloaded into video memory before training to increase retrieval and training speed. In this embodiment, to accelerate detection, the learning network model can be preloaded into video memory before training, or the pictures to be learned can be stored there in advance so that the learning network can call them quickly; a sketch of such preloading is given below. Furthermore, besides preloading, network pruning can be used: the networks before and after certain branches are removed, overall performance is compared, useless network structures are discarded, and the network is simplified. Unnecessary computation is thus reduced and detection is accelerated. To make the overall detection time of the model meet the requirements of industrial production, the detection time is shortened to within 3 seconds.
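With a PyTorch model, for example, the preloading and warm-up might look like the following sketch; the checkpoint path, dummy-frame size and detector choice are assumptions carried over from the fine-tuning sketch above.

    import torch
    from torchvision.models.detection import keypointrcnn_resnet50_fpn

    model = keypointrcnn_resnet50_fpn(num_classes=2, num_keypoints=4)
    model.load_state_dict(torch.load("final_detector.pth"))   # assumed path

    device = torch.device("cuda")
    model = model.eval().to(device)     # preload the weights into video memory
    with torch.no_grad():
        dummy = [torch.zeros(3, 1024, 1024, device=device)]  # placeholder frame
        model(dummy)                    # warm-up pass; later detections start fast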
The invention further provides a workpiece grabbing method which includes the steel plate corner point and edge point positioning method; for the specific steps of the positioning method, refer to the embodiments above. Since the workpiece grabbing method adopts all the technical solutions of all those embodiments, it has at least all their beneficial effects, which are not repeated here.
The workpiece grabbing method comprises the following steps:
calling a camera to photograph the corner points and edge points of the steel plate, locating the corner points and edge points of the steel plate in the photographed image, and acquiring the coordinates of the corner points and edge points in the image;
converting the coordinates of the corner points and edge points of the steel plate in the image into the robot coordinate system, and matching the steel plate nesting diagram into the robot coordinate system to obtain the corner point and edge point coordinates of each workpiece;
calculating the maximum rotation angle through which the sucker is allowed to rotate when it sucks a workpiece, according to the maximum size from the sucker's centre point to its edge, the corner point and edge point coordinates of each workpiece, and the positions of the vacuum columns;
comparing this maximum allowed rotation angle with the preset rotation angle given in the nesting diagram; once the maximum allowed rotation angle is determined to be greater than or equal to the preset rotation angle, the mechanical arm grabs the workpiece.
Specifically, in this embodiment the workpiece grabbing method is based on a nesting diagram, which may be provided by a customer or a third party. In blanking without nesting, parts of the material are left unused or vacant, causing great waste; nesting places small parts of different shapes together so that as much of the limited material area as possible is used for production, raising material utilization and reducing scrap. The nesting diagram is thus the drawing in which the parts are laid out by nesting. During processing, the steel plate is first cut into the required workpieces according to the nesting diagram, and the mechanical arm then grabs the workpieces from the steel plate, based on the nesting diagram, according to the workpiece grabbing method.
A camera is called to photograph the corner points and edge points of the steel plate; the corner points and edge points are located in the photographed image, and their coordinates in the image are acquired.
In this embodiment, the camera first photographs the corner points and edge points of the steel plate; the photographed image is processed, and the corner points and edge points are located and labelled. The edge points and corner points can be labelled in many ways, manually or by a mature algorithm. The coordinates of the corner points and edge points in the image are defined in the coordinate system established for the image. In this embodiment the robot holds an internal reference (intrinsic) matrix, with which the corner points and edge points in the image are converted into the camera coordinate system. The positioning of the steel plate corner points and edge points is explained in detail in the foregoing embodiments.
The coordinates of the corner points and edge points of the steel plate in the image are converted into the robot coordinate system, and the steel plate nesting diagram is matched into the robot coordinate system to obtain the corner point and edge point coordinates of each workpiece.
In this embodiment there are various ways to convert the coordinates of corner points and edge points in the image into the robot coordinate system; one example is described below. The corner points and edge points in the image are first converted into the camera coordinate system through the internal reference (intrinsic) matrix and then into the robot base coordinate system through the external reference (extrinsic) matrix, as described in the following embodiments and the sketch below. Once the corner point and edge point coordinates of the steel plate have been determined and combined with the nesting diagram, the robot can obtain the specific position of every workpiece on the steel plate, because the positions of all workpieces are fixed in the nesting diagram; the true grabbing position of each workpiece is thus obtained.
The step of converting the coordinates of the corner points and edge points of the steel plate in the image into the robot coordinate system comprises: placing a marking plate, calling the camera to photograph it, and calculating the conversion matrix R_c_to_m from the camera to the marking plate;
binding a laser pen to the end of the mechanical-arm sucker, walking three points on the marking plate, and recording the (x, y) values of the three points on the PLC display; the first point represents the origin, the second a point in the X direction, and the third a point in the Y direction;
calculating unit vectors e in X and Y directions by subtraction and normalization with the origin coordinatesxAnd eyAnd cross product is made to the two unit vectors to obtain the unit vector e in the Z directionz;ex、ey、 ezAnd the recorded origin coordinate oo(x, y, z) constitutes a transformation matrix R _ m _ to _ f of the marking plate coordinate system to the robot coordinate system:
Figure BDA0002519331750000101
the external reference matrix R _ c _ to _ f is a dot product of R _ m _ to _ f and R _ c _ to _ m.
In the coordinate-conversion process, the marking plate is first put in place, the camera photographs it, and the conversion matrix R_c_to_m from the camera to the marking plate is calculated. Meanwhile, a laser pen bound to the end of the mechanical-arm sucker walks three points on the marking plate, and the (x, y) values of the three points on the PLC display are recorded, as in the sketch below.
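A minimal numeric sketch of this construction follows; the sample coordinates are invented for illustration, and the three points are treated as 3-D (the PLC's (x, y) readings plus a constant z).

    import numpy as np

    def marker_to_robot(origin, x_point, y_point):
        """Build R_m_to_f from the three recorded points: the origin, a point
        in the marking plate's X direction and a point in its Y direction."""
        o = np.asarray(origin, float)
        ex = np.asarray(x_point, float) - o
        ex /= np.linalg.norm(ex)                 # unit vector e_x
        ey = np.asarray(y_point, float) - o
        ey /= np.linalg.norm(ey)                 # unit vector e_y
        ez = np.cross(ex, ey)                    # e_z by cross product
        R = np.eye(4)
        R[:3, 0], R[:3, 1], R[:3, 2], R[:3, 3] = ex, ey, ez, o
        return R

    # invented readings (mm): origin, X-direction point, Y-direction point
    R_m_to_f = marker_to_robot([500, 200, 0], [600, 200, 0], [500, 300, 0])
    # external matrix exactly as in the text: R_c_to_f = R_m_to_f @ R_c_to_m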
The maximum rotation angle through which the sucker is allowed to rotate when it sucks a workpiece is calculated from the maximum size from the sucker's centre point to its edge, the corner point and edge point coordinates of each workpiece, and the positions of the vacuum columns.
This maximum allowed rotation angle is compared with the preset rotation angle given in the nesting diagram; once the maximum rotation angle is determined to be greater than or equal to the preset rotation angle, the mechanical arm grabs the workpiece.
Because of its large diameter, the sucker of the mechanical arm may hit the vacuum columns at the edge of the plate during rotation. The maximum size from the sucker's centre point to its edge can vary; in this embodiment the longest distance is 0.71 m, though in some embodiments it may be 0.6-0.8 m. For each vacuum column, the centre point of the column in the horizontal plane is taken as its position. A region to be detected, indicated by the rectangular frame in fig. 2, is defined on the steel plate; in general it is the region where the workpieces are laid out. If the centre coordinates of a part lie inside this rectangular region, the maximum (threshold) angle through which the sucker can rotate while sucking the part must be estimated online: that is, how many degrees the sucker can rotate at most without colliding with a vacuum column. If the rotation angle given in the nesting diagram is larger than this threshold angle, i.e. the rotation needed to grab the workpiece exceeds what is actually allowed, the sucker must not grab the workpiece in violation of the constraint; the workpiece is then at risk of collision and cannot be grabbed directly, so the optimal rotation angle and the magnet (suction) matrix must be recalculated until the maximum allowed rotation angle is greater than or equal to the preset rotation angle required in the nesting diagram, after which the workpiece is grabbed. Of course, if the calculated maximum allowed rotation angle is already greater than the rotation angle specified in the nesting diagram, the workpiece can be grabbed at once.
The calculation considers the specific positions of the workpiece and the vacuum columns in the robot coordinate system together with the position of the mechanical arm and the maximum size of the sucker: if the positions reached by the sucker's edge during the arm's rotation do not interfere with any vacuum column, the workpiece can be grabbed directly at the current rotation angle; if they do interfere, the current rotation angle is unreasonable and the grabbing route must be recalculated, as in the sketch below.
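The following sketch shows one way the threshold angle could be estimated online. The sweep model (a single extreme corner at radius r_max, a circular clearance around each column centre, the corner starting along the +x axis) and all coordinates are assumptions for illustration, not the patent's exact formula; only the 0.71 m size is quoted from this embodiment.

    import numpy as np

    def max_safe_rotation(center, r_max, columns, clearance,
                          requested_deg, step_deg=0.5):
        """Sweep the gripper's farthest corner (radius r_max from the suction
        centre) and return the largest collision-free rotation angle."""
        center = np.asarray(center, float)
        columns = np.asarray(columns, float)
        safe = 0.0
        for theta in np.arange(0.0, requested_deg + step_deg, step_deg):
            t = np.radians(theta)
            corner = center + r_max * np.array([np.cos(t), np.sin(t)])
            if np.min(np.linalg.norm(columns - corner, axis=1)) <= clearance:
                break                  # the corner would hit a vacuum column here
            safe = theta
        return safe

    # r_max = 0.71 m: the longest centre-to-edge size quoted in this embodiment
    allowed = max_safe_rotation(center=[1.2, 0.8], r_max=0.71,
                                columns=[[1.9, 0.9], [1.0, 1.6]],
                                clearance=0.05, requested_deg=90.0)
    grab_directly = allowed >= 30.0    # preset rotation angle from the nesting diagram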
In this embodiment, a camera is called to photograph the corner points and edge points of the steel plate, the corner points and edge points in the photographed image are located, and their coordinates in the image are acquired; these coordinates are converted into the robot coordinate system through the internal and external reference matrices, and the steel plate nesting diagram is matched into the robot coordinate system to obtain the corner point and edge point coordinates of each workpiece. The maximum rotation angle through which the sucker is allowed to rotate when sucking a workpiece is then calculated from the maximum size from the sucker's centre point to its edge, the corner point and edge point coordinates of each workpiece, and the positions of the vacuum columns; this maximum allowed rotation angle is compared with the preset rotation angle given in the nesting diagram, and once it is determined to be greater than or equal to the preset rotation angle the mechanical arm grabs the workpiece. The actual working condition of the workpiece and the positions of the vacuum columns and the mechanical arm are thus fully considered, ensuring that the mechanical arm avoids the vacuum columns (obstacle avoidance) while grabbing, that the sucker grabs the workpiece stably and reliably, and that the robustness of the system and the speed of workpiece recognition and grabbing are improved; the grabbing accuracy is required to reach 100%.
The invention further provides a workpiece production line which uses the workpiece grabbing method; for the specific scheme of the workpiece grabbing method, refer to the embodiments above. Since the production line adopts all the technical solutions of all those embodiments, it has at least all their beneficial effects, which are not repeated here.
The above description is only a preferred embodiment of the invention and is not intended to limit its scope; all modifications and equivalents made on the basis of the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are included in the protection scope of the invention.

Claims (10)

1. A steel plate corner point and edge point positioning method, characterized by comprising the following steps:
designing a learning network for detecting corner points and edge points in a specific scene;
feeding preset synthetic scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model;
acquiring corner point pictures and edge point pictures of a real scene and feeding this real picture data into the learning network for training, so that the accurate pre-trained model is refined into the final detection model.
2. The steel plate corner point and edge point positioning method of claim 1, wherein the step of designing a learning network for detecting corner points and edge points in a specific scene comprises:
a feature extraction module extracts features bottom-up and top-down through convolution modules, merges the two sets of features, and applies a further convolution to the merged features to eliminate the aliasing effect;
a region proposal module is trained to nominate coarse potential target regions, so as to preliminarily locate a rough target region;
a final prediction module further refines the specific positions of the corner points and edge points by training a network that combines convolutional and fully connected layers;
the feature extraction module, the region proposal module and the final prediction module are trained end-to-end as a whole, and the three modules are used in series during prediction.
3. The steel plate corner point and edge point positioning method of claim 1, wherein before the step of feeding the preset synthetic scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model, the method further comprises:
pre-training the learning network on a currently published data set to generate a pre-trained model.
4. The steel plate corner point and edge point positioning method of any one of claims 1 to 3, wherein the step of synthesizing the scene picture data comprises:
acquiring a scene picture taken by a camera;
synthesizing a plurality of composite pictures from the scene picture;
and labeling the corner points or edge points on each composite picture.
5. The steel plate corner point and edge point positioning method of claim 4, wherein the step of synthesizing a plurality of composite pictures from the scene picture comprises:
generating a first composite picture according to a first illumination intensity and a first illumination angle;
generating a second composite picture according to a second illumination intensity and a second illumination angle;
generating a third composite picture according to a third illumination intensity and a third illumination angle;
wherein the first illumination intensity is greater than the second illumination intensity and less than the third illumination intensity;
and the first illumination angle is greater than the second illumination angle and less than the third illumination angle.
6. The steel plate corner point and edge point positioning method of claim 5, wherein the step of synthesizing a plurality of composite pictures according to different working scenes further comprises:
generating a fourth composite picture according to a fourth illumination intensity and a fourth illumination angle;
generating a fifth composite picture according to a fifth illumination intensity and a fifth illumination angle;
wherein the first illumination intensity is greater than the fifth illumination intensity and less than the fourth illumination intensity;
and the first illumination angle is smaller than the fifth illumination angle and larger than the fourth illumination angle.
7. The steel plate corner point and edge point positioning method of any one of claims 1 to 3, wherein the step of generating the corner point pictures and edge point pictures of the real scene comprises:
collecting a plurality of real corner point pictures and edge point pictures;
and marking the corresponding corner points on each corner point picture and the corresponding edge points on each edge point picture.
8. The steel plate corner point and edge point positioning method of any one of claims 1 to 3, wherein before the model is trained, it is preloaded into video memory to increase retrieval and training speed.
9. A workpiece grabbing method, characterized by comprising the steel plate corner point and edge point positioning method according to any one of claims 1 to 8.
10. A workpiece production line, characterized in that the workpiece grabbing method as claimed in claim 9 is used.
CN202010486614.0A 2020-06-01 2020-06-01 Steel plate corner point and edge point positioning method, workpiece grabbing method and production line Active CN111768369B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010486614.0A CN111768369B (en) 2020-06-01 2020-06-01 Steel plate corner point and edge point positioning method, workpiece grabbing method and production line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010486614.0A CN111768369B (en) 2020-06-01 2020-06-01 Steel plate corner point and edge point positioning method, workpiece grabbing method and production line

Publications (2)

Publication Number Publication Date
CN111768369A (en) 2020-10-13
CN111768369B CN111768369B (en) 2023-08-25

Family

ID=72719750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010486614.0A Active CN111768369B (en) 2020-06-01 2020-06-01 Steel plate corner point and edge point positioning method, workpiece grabbing method and production line

Country Status (1)

Country Link
CN (1) CN111768369B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114078162A (en) * 2022-01-19 2022-02-22 湖南视比特机器人有限公司 Truss sorting method and system for workpiece after steel plate cutting
CN114463751A (en) * 2022-01-19 2022-05-10 湖南视比特机器人有限公司 Corner positioning method and device based on neural network and detection algorithm

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1944731A2 (en) * 2007-01-12 2008-07-16 Seiko Epson Corporation Method and apparatus for detecting objects in an image
EP1988506A2 (en) * 2007-05-03 2008-11-05 Panasonic Electric Works Europe AG Method for automatically determining testing areas, testing method and testing system
JP2013152128A (en) * 2012-01-25 2013-08-08 Hitachi Ltd Surface inspection method and apparatus therefor
US20140025607A1 (en) * 2012-07-18 2014-01-23 Jinjun Wang Confidence Based Vein Image Recognition and Authentication
CN108510062A (en) * 2018-03-29 2018-09-07 东南大学 A kind of robot irregular object crawl pose rapid detection method based on concatenated convolutional neural network
CN108731588A (en) * 2017-04-25 2018-11-02 宝山钢铁股份有限公司 A kind of machine vision steel plate length and diagonal line measuring device and method
CN108764248A (en) * 2018-04-18 2018-11-06 广州视源电子科技股份有限公司 Image feature point extraction method and device
CN109636772A (en) * 2018-10-25 2019-04-16 同济大学 The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning
CN110298292A (en) * 2019-06-25 2019-10-01 东北大学 Detection method is grabbed when the high-precision real of rule-based object polygon Corner Detection
CN110443130A (en) * 2019-07-01 2019-11-12 国网湖南省电力有限公司 A kind of electric distribution network overhead wire abnormal state detection method
US20200160178A1 * 2018-11-16 2020-05-21 Nvidia Corporation Learning to generate synthetic datasets for training neural networks


Also Published As

Publication number Publication date
CN111768369B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN111761575B (en) Workpiece, grabbing method thereof and production line
WO2020124988A1 (en) Vision-based parking space detection method and device
JP3768174B2 (en) Work take-out device
CN109903331B (en) Convolutional neural network target detection method based on RGB-D camera
CN103886107B (en) Robot localization and map structuring system based on ceiling image information
CN111553949B (en) Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning
CN112906797A (en) Plane grabbing detection method based on computer vision and deep learning
CN112509063A (en) Mechanical arm grabbing system and method based on edge feature matching
CN111862201A (en) Deep learning-based spatial non-cooperative target relative pose estimation method
CN111151463A (en) Mechanical arm sorting and grabbing system and method based on 3D vision
CN111768369A (en) Steel plate corner point and edge point positioning method, workpiece grabbing method and production line
CN110490936A (en) Scaling method, device, equipment and the readable storage medium storing program for executing of vehicle camera
CN115816460B (en) Mechanical arm grabbing method based on deep learning target detection and image segmentation
CN114140439A (en) Laser welding seam feature point identification method and device based on deep learning
CN114882109A (en) Robot grabbing detection method and system for sheltering and disordered scenes
CN113245235B (en) Commodity classification method and device based on 3D vision
CN113034600A (en) Non-texture planar structure industrial part identification and 6D pose estimation method based on template matching
CN113313116B (en) Underwater artificial target accurate detection and positioning method based on vision
CN114241269B (en) A collection card vision fuses positioning system for bank bridge automatic control
CN114580559A (en) Speed measuring method based on monocular vision system
EP3825804A1 (en) Map construction method, apparatus, storage medium and electronic device
CN114193440A (en) Robot automatic grabbing system and method based on 3D vision
CN113537079A (en) Target image angle calculation method based on deep learning
CN110599407B (en) Human body noise reduction method and system based on multiple TOF cameras in downward inclination angle direction
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant