CN114619429B - Mechanical arm control method based on recognition template - Google Patents


Info

Publication number
CN114619429B
Authority
CN
China
Prior art keywords: identification, distance, picture, template, recognition
Prior art date
Legal status
Active
Application number
CN202210434587.1A
Other languages
Chinese (zh)
Other versions
CN114619429A (en)
Inventor
詹友军
何志雄
尹以茳
李志滔
Current Assignee
Guangdong Tiantai Robot Co Ltd
Original Assignee
Guangdong Tiantai Robot Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Tiantai Robot Co Ltd filed Critical Guangdong Tiantai Robot Co Ltd
Priority to CN202210434587.1A
Publication of CN114619429A
Application granted
Publication of CN114619429B
Legal status: Active


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/0093Programme-controlled manipulators co-operating with conveyor means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1669Programme controls characterised by programming, planning systems for manipulators characterised by special application, e.g. multi-arm co-operation, assembly, grasping
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention provides a mechanical arm control method based on an identification template, applied to a mechanical arm used for carrying, comprising the following steps. Step S1: periodically shoot the inlet of a conveying belt to acquire a first picture, judge whether an object appears in the first picture, and if so, acquire the distance identification mark of the object on the conveying belt at that moment. Step S2: track and shoot the distance identification mark, and when the mark appears at a first specified distance point, acquire a second picture. Step S3: identify the second picture and judge whether the object in it is the target to be grabbed; if so, generate a grabbing instruction from the identification result. Step S4: when the distance identification mark appears at a second specified distance point, trigger the mechanical arm to execute the grabbing instruction and grab the target. Because the object transported on the conveying belt is identified multiple times, the number of identification opportunities increases and the identification of the target to be grabbed improves.

Description

Mechanical arm control method based on recognition template
Technical Field
The invention relates to the technical field of mechanical arm control, in particular to a mechanical arm control method based on an identification template.
Background
With the continuous development of robot and mechanical arm technology, mechanical arms have been put into industrial production, but grabbing a target part in a complex production environment remains difficult. In particular, in conveyor-belt production there are many different components on the belt, and different mechanical arms are required to grasp different components.
At present, there are two main reasons why a mechanical arm has difficulty identifying parts on a conveying belt in a complex environment:
1. The parts are placed on the conveying belt at varying angles, so that when an identification template is applied mechanically, the identification points of the template correspond to the wrong positions on the part.
2. The parts move along with the belt body while the camera periodically acquires pictures of the conveying belt and identifies them to judge whether a part to be grabbed is present. Consequently, when a part appears at the edge of a picture, the change in its pixel distance prevents the identification template stored in the mechanical arm from identifying it effectively.
Disclosure of Invention
In view of the above drawbacks, the present invention provides a mechanical arm control method based on an identification template, so as to eliminate the influence of shooting angle and part placement on template identification and improve the identification rate of the mechanical arm for parts to be grabbed.
To achieve this purpose, the invention adopts the following technical scheme: a mechanical arm control method based on an identification template, applied to a mechanical arm used for carrying, comprising the following steps:
a distance identification mark is arranged on the belt body of the conveying belt and moves along with the conveying of the belt body, and a plurality of specified distance points which do not move with the belt body are arranged at intervals along the conveying belt;
step S1: periodically shooting an inlet of a conveying belt, acquiring a first picture, judging whether the first picture has an object, and if so, acquiring a distance identification mark of the object on the conveying belt at the moment;
step S2: tracking and shooting the distance identification mark, and acquiring a current frame picture of the current tracking and shooting video as a second picture when the distance identification mark appears at a first specified distance point after tracking and shooting;
step S3: identifying the second picture, judging whether an object in the second picture is a target to be captured or not, if so, generating a capturing instruction according to an identification result, and continuing tracking shooting on the distance identification mark;
step S4: and when the distance identification mark appears at a second specified distance point after tracking shooting, triggering the mechanical arm to execute a grabbing instruction, and grabbing the object to be grabbed.
Preferably, a plurality of first specified distance points are provided;
when the target to be grabbed is identified in the second picture acquired at one first specified distance point, second pictures need not be acquired at the subsequent first specified distance points.
Preferably, the step of determining whether the object exists in the first picture is as follows:
extracting an identification frame at the conveying-belt inlet in the first picture by adopting a One-Stage algorithm; if the identification frame exists, an object is present in the first picture; if it does not, no object is present in the first picture.
Preferably, the step of determining that the distance identifier appears at the first predetermined distance point includes:
acquiring a frame of the tracking-shot video, acquiring the pixel distance of the distance identification mark in that frame, obtaining the distance between the camera and the mark from the linear relation between pixel distance and real distance, and judging whether this distance falls within a distance threshold point; if so, the distance identification mark has reached the first specified distance point.
Preferably, before the mechanical arm identifies the picture, the method further includes the following steps:
step S31: manufacturing an identification template according to the target to be grabbed, performing edge expansion on the identification template and its corresponding mask image, and setting the angle and scale-range attributes of the template to obtain identification templates at 360 rotation angles; extracting features from the 360 identification templates and storing them in a configuration file;
step S32: training the identification template to obtain its identification features and writing them into the configuration file.
Preferably, in step S32, the process of training the recognition template is as follows:
step S321: performing orientation gradient quantization on the first pyramid level of the identification template, and extracting the template's identification features at the first pyramid level;
step S322: performing orientation gradient quantization on the second pyramid level of the identification template, and extracting the template's identification features at the second pyramid level;
step S323: storing the identification features of the template at the current angle extracted at the first and second pyramid levels;
step S324: acquiring the identification template at the next rotation angle, and repeating steps S321 to S323 until the identification features at all angles of the template are acquired.
Preferably, the process of identifying the second picture and determining whether the second picture has the target to be captured is as follows:
step S33: extracting the identification frame of the object in the second picture by adopting an One-Stage algorithm;
step S34: calling the identification features of the 360 templates in the configuration file and matching each against the content of the identification frame to obtain the highest matching score;
step S35: judging whether the highest matching score exceeds a threshold; if so, the target to be grabbed is present in the current picture; if not, correcting the highest matching score and judging again whether the corrected score exceeds the threshold; if so, the target is present in the current picture, otherwise it is not.
Preferably, the specific steps of obtaining the highest matching score in step S34 are as follows:
and respectively matching the 360 identification templates with the identification frames, wherein a specific formula for acquiring the matching score is as follows:
Figure GDA0003793216420000041
wherein
Figure GDA0003793216420000042
Representing the matching score of the identification frame and the ith template, i and n are natural integers,
Figure GDA0003793216420000043
indicates the nth recognition point, y, in the recognition frame i (n) represents an nth recognition feature in an ith recognition template;
and obtaining the matching scores of the 360 recognition templates relative to the recognition frame, and sequentially arranging according to the size of the matching scores.
Preferably, the specific formula for correcting the highest matching score is as follows:
The correction formula is rendered only as an image in the original and is not reproduced here. In it, one symbol denotes the corrected highest matching score, another the original highest matching score, and λ is an adjustment coefficient whose obtaining formula (also an image in the original) uses the following quantities: b is the pixel distance of the width of the identification template under the mechanical arm, c is the pixel distance of the length of the identification frame in the second picture, a is the pixel distance of the length of the identification template under the mechanical arm, and d is the pixel distance of the width of the identification frame in the second picture.
Preferably, when the target to be grabbed is not identified in the second picture taken at the last specified distance point, step S1 is repeated.
One of the above technical solutions has the following advantages or beneficial effects:
1. The object transported on the conveying belt is identified multiple times, which increases the number of identification opportunities and improves the identification of the target to be grabbed.
2. Template drawings at 360 different rotation angles are manufactured, which effectively covers more states of the target to be identified under real conditions; increasing the number of identification templates improves identification accuracy.
3. An adjustment coefficient is adopted to correct the highest matching score to a certain degree, reducing misjudgments caused by the shooting angle.
Drawings
FIG. 1 is a flow chart of a method in one embodiment of the invention.
FIG. 2 is a flow diagram of an identification process in one embodiment of the invention.
Fig. 3 is a schematic diagram of the system in one embodiment of the invention.
Wherein: mechanical arm 1, camera 2, distance identification mark 3.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "axial", "radial", "circumferential", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; a direct connection, an indirect connection through intervening media, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases by those skilled in the art.
As shown in fig. 1 to 3, a robot arm control method based on an identification template is applied to a robot arm (1) for transportation, and includes the following steps:
the distance identification mark (3) is arranged on the belt body of the conveying belt and moves along with the conveying of the belt body, and a plurality of specified distance points which do not move with the belt body are arranged at intervals along the conveying belt;
step S1: periodically shooting an inlet of a conveying belt, acquiring a first picture, judging whether an object exists in the first picture, and if so, acquiring a distance identification mark (3) of the object on the conveying belt at the moment;
step S2: tracking shooting is carried out on the distance identification mark (3), and when the distance identification mark (3) appears at a first specified distance point after tracking shooting, a current frame picture of a current tracking shooting video is obtained and serves as a second picture;
step S3: identifying the second picture, judging whether an object in the second picture is a target to be captured or not, if so, generating a capturing instruction according to an identification result, and continuing tracking shooting on the distance identification mark (3);
step S4: when the distance recognition mark (3) appears at a second specified distance point after tracking shooting, triggering the mechanical arm (1) to execute a grabbing instruction, and grabbing the object to be grabbed.
In this application, the conveying belt is shot by the camera (2). When the belt starts, the camera (2) periodically shoots the belt inlet to obtain the first picture. If an object is present on the belt in the first picture, the belt has begun running and something has entered it; at this point it only remains to identify whether this object is a part to be grabbed before calling the mechanical arm (1) to grab it. The shooting period of the camera (2) can be adjusted to the running speed of the belt: for example, if during production an object is placed on the belt every 10 seconds, the shooting period of the camera (2) is set to 10 seconds. Keeping the period consistent with the production cadence avoids wasting storage and running memory on redundant shots.
Moreover, the shooting angle of the camera (2) can cause misjudgment during identification. Fig. 3 is a top view of the conveying-belt structure in the present application. A plurality of distance identification marks (3) can be arranged along the length of the belt body; each distance identification mark (3) is a region painted in a specific colour, and each mark uses a different colour so that it can be distinguished, which facilitates identifying and tracking the marks. Because the distance identification marks (3) are arranged on the belt body, they move with it, so each mark (3) and the object remain relatively still, which effectively helps the camera (2) track and shoot.
When an object is recognized in the first picture, the distance identification mark (3) alongside the object at that moment is acquired for tracking. The camera (2) then tracks and shoots the distance identification mark (3) and obtains its distance in real time; when the distance reaches a specified first specified distance point, the current frame of the video is extracted as the second picture, the second picture is identified, and whether it contains the target to be grabbed is judged. As the second picture is obtained, the distance between the camera (2) and the object shortens and the shooting angle gradually decreases, so the actual identification of the second picture increasingly resembles the conditions of identification training, which effectively improves identification efficiency. When the target to be grabbed is identified, the mechanical arm (1) grabs it.
When the target to be grabbed is recognized, the grabbing instruction is generated. After that, the camera (2) continues to track the distance identification mark (3) and judges whether it has reached the second specified distance point, which is arranged at the edge of the grabbing range of the mechanical arm (1). When the mark (3) reaches the second specified distance point, the target is about to enter the grabbing range, and the grabbing instruction is transmitted to the mechanical arm (1); because of transmission delay, by the time the instruction reaches the arm the target has already entered the grabbing range. The mechanical arm (1) receives the grabbing instruction and is controlled to grab the target, achieving automatic grabbing of the target part.
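To make the flow concrete, here is a minimal Python sketch that strings steps S1 to S4 together. Every helper name (capture_frame, detect_object, track_until, identify_target, make_grab_command, execute) and every numeric value is a hypothetical stand-in; the patent specifies the control flow, not this implementation:

```python
import time

SHOOT_PERIOD_S = 10            # matches the example production cadence above
FIRST_POINTS_M = [10.0, 8.0]   # first specified distance points (example values)
SECOND_POINT_M = 6.0           # edge of the arm's grabbing range (assumed value)

def control_loop(camera, arm):
    """Hypothetical camera/arm objects; all methods are assumed, not patent APIs."""
    while True:
        frame = camera.capture_frame()              # step S1: shoot the belt inlet
        if not camera.detect_object(frame):         # no object -> wait one period
            time.sleep(SHOOT_PERIOD_S)
            continue
        grab_cmd = None
        for point in FIRST_POINTS_M:                # step S2: track the distance mark
            camera.track_until(point)               # block until the mark reaches it
            second_picture = camera.capture_frame() # current frame of the video
            result = camera.identify_target(second_picture)  # step S3: matching
            if result is not None:
                grab_cmd = arm.make_grab_command(result)
                break                               # skip the remaining points
        if grab_cmd is not None:
            camera.track_until(SECOND_POINT_M)      # step S4: second specified point
            arm.execute(grab_cmd)                   # grab the target
```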
Preferably, a plurality of first specified distance points are provided;
when the target to be grabbed is identified in the second picture acquired at one first specified distance point, second pictures are not acquired at the subsequent first specified distance points.
In the invention, a plurality of first specified distance points are arranged, which increases the number of second pictures and of identification passes; because the second picture acquired at each first specified distance point has a different shooting angle, the identification rate improves and the influence of shooting angle on the identification result is avoided. The specified distance points can be set according to the distance between the camera (2) and the belt inlet: the longer that distance, the longer the spacing between first specified distance points, and vice versa.
Preferably, the step of determining whether the object exists in the first picture is as follows:
extracting an identification frame at the conveying-belt inlet in the first picture by adopting a One-Stage algorithm; if the identification frame exists, an object is present in the first picture; if it does not, no object is present in the first picture.
Before use, the One-Stage algorithm used in the invention is trained to recognize the goods conveyed on the belt, identifying whether an object is present and its position on the belt. After the target to be grabbed is identified in the second picture, the One-Stage algorithm is used again to obtain the target's specific position on the belt, which is added to the grabbing instruction; when the instruction is parsed, the specific position of the target on the belt is recovered, and the mechanical arm (1) is then controlled to grab the target according to that position.
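The patent names only a generic One-Stage algorithm and no library. As an illustrative stand-in (an assumption, not the patented implementation), a YOLO-family detector via the ultralytics package can produce the identification frame; the model file name is hypothetical:

```python
from ultralytics import YOLO

model = YOLO("conveyor_parts.pt")  # hypothetical model trained on the conveyed goods

def extract_identification_frame(picture):
    """Return the highest-confidence box (x1, y1, x2, y2) in the picture, or None."""
    results = model(picture)
    boxes = results[0].boxes
    if len(boxes) == 0:
        return None                          # no identification frame: no object
    best = max(boxes, key=lambda b: float(b.conf[0]))
    return tuple(best.xyxy[0].tolist())      # the identification frame of the object
```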
Preferably, the step of determining that the distance identifier (3) appears at the first predetermined distance point is as follows:
acquiring a frame of the tracking-shot video, acquiring the pixel distance of the distance identification mark (3) in that frame, obtaining the distance between the camera (2) and the mark (3) from the linear relation between pixel distance and real distance, and judging whether this distance falls within a distance threshold point; if so, the distance identification mark (3) has reached the first specified distance point.
In the invention, the distance between the camera (2) and the distance identification mark (3) is used to judge whether the mark has reached a first specified distance point. For example, several first specified distance points may be provided, with the distance threshold point of the first being 10 m and that of the second 8 m. When the mark (3) is 10 m from the camera (2), it falls within the first distance threshold point, and the current frame of the video shot by the camera (2) is taken as the first second picture; when the mark is 8 m from the camera, it falls within the second distance threshold point, and the current frame is taken as the next second picture.
Before the invention is used, the relation between pixel distance and real distance for the camera (2) must be obtained in advance. In one embodiment, a calibration object of known size is used: it is photographed with the camera (2) at 1, 2, 3, 4 and 5 m from it, and the pixel distances of its width x and height y in the image are measured with a picture tool. The actual width and height of the calibration object are 0.285 m and 0.289 m respectively. When the camera (2) is at distance d = 1 m, the pixel distances are x = 200 pixels and y = 300 pixels; at d = 2 m, x = 100 pixels and y = 150 pixels. Since the pixel distance scales linearly with the reciprocal of the distance, at d = 3 m, x = 66 px and y = 100 px; at d = 4 m, x = 50 px and y = 75 px; at d = 5 m, x = 40 px and y = 60 px. At a vertical distance of one metre, the actual length represented by one pixel is 0.285 ÷ 200 m in x and 0.289 ÷ 300 m in y; at two metres, 0.285 ÷ 100 m in x and 0.289 ÷ 150 m in y; at n metres, 0.285 ÷ (200/n) m in x and 0.289 ÷ (300/n) m in y. Because the pixel relation differs for each camera (2), it must be measured by this method for each camera before use.
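The relation above can be checked with a short sketch; the calibration constant comes from the worked numbers (0.285 m spans 200 px at 1 m), while the threshold points and tolerance are assumed values:

```python
from typing import Optional

K_WIDTH_PX_M = 200.0 * 1.0          # pixel width x distance is constant: 200 px·m
THRESHOLD_POINTS_M = [10.0, 8.0]    # distance threshold points from the example
TOLERANCE_M = 0.1                   # assumed tolerance band around each point

def estimate_distance(pixel_width: float) -> float:
    """Distance (m) of the mark from its pixel width, via the calibration constant."""
    return K_WIDTH_PX_M / pixel_width

def reached_point(pixel_width: float) -> Optional[float]:
    """Return the threshold point the mark has just reached, or None."""
    d = estimate_distance(pixel_width)
    for point in THRESHOLD_POINTS_M:
        if abs(d - point) <= TOLERANCE_M:
            return point
    return None

# A mark 20 px wide is estimated at 200/20 = 10 m: the first specified point.
```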
Preferably, before the mechanical arm (1) identifies the picture, the method further comprises the following steps:
step S31: manufacturing an identification template according to the target to be grabbed, performing edge expansion on the identification template and its corresponding mask image, and setting the angle and scale-range attributes of the template to obtain identification templates at 360 rotation angles; extracting features from the 360 identification templates and storing them in a configuration file;
step S32: training the identification template to obtain its identification features and writing them into the configuration file.
The invention improves on existing two-dimensional image recognition. In existing two-dimensional image recognition, a recognition model is made to recognize the target image acquired by the camera; usually only a single training template is set in the model, and image recognition is realized by extracting features from that template. However, parts on an industrial conveying belt that need to be grabbed by the mechanical arm (1) often have similar counterparts and are scattered irregularly on the belt, so they are not presented at an ideal angle when the arm (1) identifies them; after a part rotates, some identification features can coincide in position with features of other parts, reducing the identification accuracy of the mechanical arm (1). The improvement of this application is to manufacture identification templates at 360 rotation angles in the recognition model and obtain the identification features of all 360 templates. During identification, the mechanical arm (1) acquires the target image and compares it with the identification features of the 360 templates to judge whether the current image is the target to be identified. Manufacturing templates at 360 different rotation angles effectively covers the states of the target to be identified under real conditions, and increasing the number of templates improves identification accuracy.
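A minimal OpenCV sketch of generating the 360 rotated templates of step S31; the grayscale inputs, padding size, and fixed scale of 1.0 are assumptions, since the patent leaves them open:

```python
import cv2
import numpy as np

def build_rotated_templates(template: np.ndarray, mask: np.ndarray, pad: int = 32):
    """Edge-expand template and mask, then rotate both through 360 one-degree steps."""
    t = cv2.copyMakeBorder(template, pad, pad, pad, pad, cv2.BORDER_CONSTANT, value=0)
    m = cv2.copyMakeBorder(mask, pad, pad, pad, pad, cv2.BORDER_CONSTANT, value=0)
    h, w = t.shape[:2]
    center = (w / 2.0, h / 2.0)
    templates = []
    for angle in range(360):                              # one template per degree
        rot = cv2.getRotationMatrix2D(center, angle, 1.0) # scale fixed at 1.0 here
        templates.append((cv2.warpAffine(t, rot, (w, h)),
                          cv2.warpAffine(m, rot, (w, h))))
    return templates
```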
Preferably, in step S32, the process of training the recognition template is as follows:
step S321: performing orientation gradient quantization on the first pyramid level of the identification template, and extracting the template's identification features at the first pyramid level;
step S322: performing orientation gradient quantization on the second pyramid level of the identification template, and extracting the template's identification features at the second pyramid level;
step S323: storing the identification features of the template at the current angle extracted at the first and second pyramid levels;
step S324: acquiring the identification template at the next rotation angle, and repeating steps S321 to S323 until the identification features at all angles of the template are acquired.
In the method, the identification features are obtained through a two-level pyramid gradient scheme. During identification, the features from the first pyramid level are used first; once the first-level features match those of the target image, the features obtained from the second pyramid level are used for further identification. This two-pass identification effectively improves accuracy.
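A sketch of the two-level pyramid orientation quantization, in the spirit of LINEMOD-style features; the patent names no library, and the eight orientation bins and 3×3 Sobel kernel are assumptions:

```python
import cv2
import numpy as np

def quantized_orientations(gray: np.ndarray, bins: int = 8) -> np.ndarray:
    """Per-pixel orientation bin index from image gradients, ignoring gradient sign."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    angle = np.arctan2(gy, gx) % np.pi              # fold directions into [0, pi)
    return np.floor(angle / (np.pi / bins)).astype(np.int32)

def pyramid_features(template_gray: np.ndarray):
    """Identification features at the first and second pyramid levels (steps S321/S322)."""
    level1 = quantized_orientations(template_gray)
    level2 = quantized_orientations(cv2.pyrDown(template_gray))
    return level1, level2    # stored per rotation angle in the configuration file
```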
Preferably, the process of identifying the second picture and determining whether the second picture has the target to be captured is as follows:
step S33: extracting the identification frame of the object in the second picture by adopting an One-Stage algorithm;
step S34: calling the identification features of the 360 templates in the configuration file and matching each against the content of the identification frame to obtain the highest matching score;
step S35: judging whether the highest matching score exceeds a threshold; if so, the target to be grabbed is present in the current picture; if not, correcting the highest matching score and judging again whether the corrected score exceeds the threshold; if so, the target is present in the current picture, otherwise it is not.
Because 360 identification templates at different angles are set in this application, no single template will match the content of the identification frame of the target perfectly; only templates at similar angles can identify the target effectively. Therefore, in an embodiment of the application, a template matching score is used to judge whether any of the 360 templates matches the target to be grabbed. The 360 templates are first matched against the identification frame to obtain matching scores, which are then sorted by size to obtain the highest matching score. If that score exceeds the set threshold, the template corresponding to it resembles the target, and the target is judged to be present in the current picture. If the highest matching score is below the threshold, the cause may lie in the shooting angle of the camera (2), which affects the pixel distance of the identification frame in the picture; the application therefore corrects the highest matching score to a certain degree with an adjustment coefficient, reducing misjudgments caused by the shooting angle.
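Since the score and correction formulas survive only as images, the sketch below substitutes a cosine-similarity score and a multiplicative λ correction purely to show the decision flow of steps S34 and S35; it is not the patented formula, and the threshold is an assumed value:

```python
import numpy as np

THRESHOLD = 0.8  # assumed acceptance threshold

def best_match(frame_feat: np.ndarray, template_feats: list):
    """Index and score of the best of the 360 templates (stand-in cosine score)."""
    scores = [float(np.dot(frame_feat, t) /
                    (np.linalg.norm(frame_feat) * np.linalg.norm(t) + 1e-9))
              for t in template_feats]          # one score per identification template
    i = int(np.argmax(scores))
    return i, scores[i]

def is_target(frame_feat: np.ndarray, template_feats: list, lam: float) -> bool:
    """Step S35: accept on the raw score, else retry once with the corrected score."""
    _, score = best_match(frame_feat, template_feats)
    if score > THRESHOLD:
        return True
    return lam * score > THRESHOLD   # corrected highest matching score
```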
Preferably, the specific steps of obtaining the highest matching score in step S34 are as follows:
and respectively matching the 360 identification templates with the identification frames, wherein a specific formula for acquiring the matching score is as follows:
Figure GDA0003793216420000121
wherein
Figure GDA0003793216420000122
Representing the matching score of the identification frame and the ith template, i and n are natural integers,
Figure GDA0003793216420000123
indicates the nth recognition point, y, in the recognition frame i (n) represents an nth recognition feature in an ith recognition template;
and obtaining the matching scores of the 360 recognition templates relative to the recognition frame, and sequentially arranging according to the size of the matching scores.
Preferably, the specific formula for correcting the highest matching score is as follows:
The correction formula is rendered only as an image in the original and is not reproduced here. In it, one symbol denotes the corrected highest matching score, another the original highest matching score, and λ is an adjustment coefficient whose obtaining formula (also an image in the original) uses the following quantities: b is the pixel distance of the width of the identification template under the mechanical arm (1), c is the pixel distance of the length of the identification frame in the second picture, a is the pixel distance of the length of the identification template under the mechanical arm (1), and d is the pixel distance of the width of the identification frame in the second picture.
Preferably, when the target to be grabbed is not identified in the second picture taken at the last specified distance point, step S1 is repeated.
Because the adjustment coefficient is introduced to correct the identification result, identification efficiency improves and the target to be grabbed can be recognized at a distance point other than the last one. Once it has been recognized, continuing to identify the second pictures of the later specified distance points would waste running memory; therefore, after the target to be grabbed is recognized, no matching identification is performed at the subsequent specified distance points.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (6)

1. A mechanical arm control method based on an identification template is applied to a mechanical arm for carrying, and is characterized in that: the method comprises the following steps:
a distance identification mark is arranged on the belt body of the conveying belt and moves along with the conveying of the belt body, and a plurality of specified distance points which do not move with the belt body are arranged at intervals along the conveying belt;
step S1: periodically shooting an inlet of a conveying belt, acquiring a first picture, judging whether the first picture has an object, and if so, acquiring a distance identification mark of the object on the conveying belt at the moment;
step S2: tracking and shooting the distance identification mark, and acquiring a current frame picture of the current tracking and shooting video as a second picture when the distance identification mark appears at a first specified distance point after tracking and shooting;
step S3: identifying the second picture, judging whether an object in the second picture is a target to be captured or not, if so, generating a capturing instruction according to an identification result, and continuing tracking shooting on the distance identification mark;
step S4: when the distance identification mark appears at a second specified distance point after tracking shooting, triggering the mechanical arm to execute a grabbing instruction, and grabbing the object to be grabbed;
before the mechanical arm identifies the picture, the method further comprises the following steps:
step S31: manufacturing an identification template according to an object to be captured, performing edge expansion on the identification template and a mask image corresponding to the identification template, and setting the angle and scale range attribute of the identification template to obtain the identification template with 360 rotation angles; extracting the characteristics of the 360 identification templates, and storing the characteristics in a configuration file;
step S32: training the recognition template to obtain recognition features in the recognition template, and writing the recognition features into a configuration file;
in step S32, the process of training the recognition template is as follows:
step S321: performing orientation gradient quantization on the first pyramid level of the identification template, and extracting the template's identification features at the first pyramid level;
step S322: performing orientation gradient quantization on the second pyramid level of the identification template, and extracting the template's identification features at the second pyramid level;
step S323: storing the identification features of the template at the current angle extracted at the first and second pyramid levels;
step S324: acquiring the identification template at the next rotation angle, and repeating steps S321 to S323 until the identification features at all angles of the template are acquired;
the process of identifying the second picture and judging whether the second picture has the target to be captured is as follows:
step S33: extracting the identification frame of the object in the second picture by adopting an One-Stage algorithm;
step S34: calling the identification features of the 360 templates in the configuration file and matching each against the content of the identification frame to obtain the highest matching score;
step S35: judging whether the highest matching score is larger than a threshold value; if so, the target to be grabbed is present in the current picture; if not, correcting the highest matching score and judging again whether the corrected highest matching score is larger than the threshold value; if so, the target to be grabbed is present in the current picture, otherwise it is not;
the specific steps of obtaining the highest matching score in step S34 are as follows:
and respectively matching the 360 identification templates with the identification frames, wherein a specific formula for acquiring the matching score is as follows:
Figure FDA0003793216410000021
wherein
Figure FDA0003793216410000022
Representing the matching score of the identification frame and the ith template, i and n are natural integers,
Figure FDA0003793216410000023
indicates the nth recognition point, y, in the recognition frame i (n) represents an nth recognition feature in an ith recognition template;
and obtaining the matching scores of the 360 recognition templates relative to the recognition frame, and sequentially arranging according to the size of the matching scores.
2. The method for controlling the mechanical arm based on the recognition template as claimed in claim 1, wherein:
a plurality of first prescribed distance points are provided;
when the object to be captured is identified to exist in the second picture acquired at a certain first specified distance point, the second picture is not acquired at other subsequent first specified distance points.
3. The method for controlling the mechanical arm based on the recognition template as claimed in claim 2, wherein: the step of judging whether the object exists in the first picture is as follows:
extracting an identification frame at the conveying-belt inlet in the first picture by adopting a One-Stage algorithm; if the identification frame exists, an object is present in the first picture; if it does not, no object is present in the first picture.
4. The method for controlling the mechanical arm based on the recognition template as claimed in claim 2, wherein: the judgment step that the distance identification mark appears at the first specified distance point is as follows:
acquiring a frame of the tracking-shot video, acquiring the pixel distance of the distance identification mark in that frame, obtaining the distance between the camera and the mark from the linear relation between pixel distance and real distance, and judging whether this distance falls within a distance threshold point; if so, the distance identification mark has reached the first specified distance point.
5. The method for controlling the mechanical arm based on the recognition template as claimed in claim 1, wherein the specific formula for correcting the highest matching score is as follows:
the correction formula is rendered only as an image in the original and is not reproduced here; in it, one symbol denotes the corrected highest matching score, another the original highest matching score, and λ is an adjustment coefficient whose obtaining formula (also an image in the original) uses the following quantities: b is the pixel distance of the width of the identification template under the mechanical arm, c is the pixel distance of the length of the identification frame in the second picture, a is the pixel distance of the length of the identification template under the mechanical arm, and d is the pixel distance of the width of the identification frame in the second picture.
6. The method for controlling the mechanical arm based on the recognition template as recited in claim 5, wherein:
and when the target to be grabbed is not identified in the second picture shot by the last specified distance point, repeating the step S1.
CN202210434587.1A 2022-04-24 2022-04-24 Mechanical arm control method based on recognition template Active CN114619429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210434587.1A CN114619429B (en) 2022-04-24 2022-04-24 Mechanical arm control method based on recognition template

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210434587.1A CN114619429B (en) 2022-04-24 2022-04-24 Mechanical arm control method based on recognition template

Publications (2)

Publication Number Publication Date
CN114619429A (en) 2022-06-14
CN114619429B (en) 2022-09-30

Family

ID=81905712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210434587.1A Active CN114619429B (en) 2022-04-24 2022-04-24 Mechanical arm control method based on recognition template

Country Status (1)

Country Link
CN (1) CN114619429B (en)

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06333025A (en) * 1993-05-27 1994-12-02 Sanyo Electric Co Ltd Window size deciding method
JP4864363B2 (en) * 2005-07-07 2012-02-01 東芝機械株式会社 Handling device, working device, and program
CN103558858A (en) * 2013-11-13 2014-02-05 魏树桂 Collecting robot implement system based on machine vision
CN106671083B (en) * 2016-12-03 2019-03-19 安徽松科智能装备有限公司 A kind of assembly robot system based on Machine Vision Detection
CN110977939B (en) * 2019-11-26 2021-05-04 重庆凡聚智能科技有限公司 Target workpiece identification and positioning system
CN110948491B (en) * 2019-12-21 2021-05-18 深圳市华成工业控制股份有限公司 Industrial robot grabbing method based on visual following
CN111890361A (en) * 2020-07-10 2020-11-06 章伟 Conveyor belt visual tracking robot grabbing system and using method thereof
CN112036463A (en) * 2020-08-26 2020-12-04 国家电网有限公司 Power equipment defect detection and identification method based on deep learning
CN112686858A (en) * 2020-12-29 2021-04-20 熵智科技(深圳)有限公司 Visual defect detection method, device, medium and equipment for mobile phone charger
CN112917460B (en) * 2021-02-05 2022-04-08 深圳小黑智能科技有限公司 Control method of industrial robot, system and storage medium
CN113435483A (en) * 2021-06-10 2021-09-24 宁波帅特龙集团有限公司 Fixed-point snapshot method and system
CN114055438A (en) * 2022-01-17 2022-02-18 湖南视比特机器人有限公司 Visual guide workpiece follow-up sorting system and method

Also Published As

Publication number Publication date
CN114619429A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
KR100465862B1 (en) Method for measuring quality of bandlike body method for suppressing camber instrument for measuring quality of bandlike body rolling machine and triming device
CN110697373B (en) Conveying belt deviation fault detection method based on image recognition technology
CN110450129B (en) Carrying advancing method applied to carrying robot and carrying robot thereof
CN109922250A (en) A kind of target object grasp shoot method, device and video monitoring equipment
US10625415B2 (en) Robot system
US10596707B2 (en) Article transfer device
CN110570454A (en) Method and device for detecting foreign matter invasion
CN110800282A (en) Holder adjusting method, holder adjusting device, mobile platform and medium
CN114619429B (en) Mechanical arm control method based on recognition template
CN104199425B (en) A kind of reading intelligent agriculture monitoring early-warning system and method
CN117232396B (en) Visual detection system and method for product quality of high-speed production line
CN116934719B (en) Automatic detection system for belt conveyor
CN110068284B (en) Method for monitoring tower crane by using high-speed photogrammetry technology
CN112693810B (en) Method and system for controlling movement of conveyor belt
CN114782367B (en) Control system and method for mechanical arm
CN113146636A (en) Object grabbing method and device and flexible robot
JP3863931B2 (en) Learning image creation method in printed matter inspection apparatus
CN110689537A (en) Method and system for judging whether line-scan camera is used for acquiring images at constant speed
TW201620632A (en) Method for checking a terminal of a metal plate and a rolling method using the same
JP4402764B2 (en) Moving object tracking device
CN114313883B (en) Automatic detection method and system for belt deviation based on image processing technology
CN115294111B (en) Method and device for detecting running state of carrier roller of conveyor
JP2023030752A (en) Generation method of meandering amount estimation model of steel plate, meandering amount estimation method and manufacturing method
CN110210367B (en) Training data acquisition method, electronic device and storage medium
JP2022180326A (en) Deposition substance detection device and deposition substance detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant