CN115272410A - Dynamic target tracking method, device, equipment and medium without calibration vision - Google Patents
- Publication number
- CN115272410A (application number CN202210846573.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- point sequence
- track
- camera module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a dynamic target tracking method, device, equipment and medium without calibration vision. A first image and a second image are acquired through a camera module, and a coordinate conversion matrix between a mechanical arm and the camera module is calculated from the first image and the second image, so the calibration process of the traditional approach is not needed, and efficiency and convenience are improved. A target image of a dynamic target is acquired through the camera module and subjected to image processing to obtain a first track point sequence; a second track point sequence in the coordinate system of the mechanical arm is obtained from the first track point sequence and the coordinate conversion matrix; the dynamic target is tracked in real time according to the second track point sequence, and the method returns to the step of acquiring the target image of the dynamic target through the camera module until a preset condition is reached, so that iterative tracking is carried out in real time, tracking errors are corrected in real time, and accuracy is improved.
Description
Technical Field
The invention relates to the field of machine vision, in particular to a dynamic target tracking method, a device, equipment and a medium without calibration vision.
Background
The robot tracks and grasps a target object through vision, and the target coordinates acquired by the vision sensor need to be accurately converted into the coordinate system in which the robot is located. In the prior art, the traditional robot vision scheme obtains a fixed conversion relation by calibrating the intrinsic and extrinsic parameters of the camera and the parameters of the mechanical arm. The calibration process is computationally complex, and it must be ensured that the relative position between the camera and the robot does not change even slightly after calibration is completed. However, the working environment of the mechanical arm is often complex, requiring repeated rapid movements, pauses and changes of direction, so the positions of the mechanical arm and the camera easily change; for example, the position of the camera may change due to looseness, making the conversion relation obtained by the original calibration inaccurate, and the calibration and tracking precision is therefore difficult to guarantee.
Disclosure of Invention
In view of the above, in order to solve the above technical problems, an object of the present invention is to provide a dynamic target tracking method, apparatus, device and medium without calibration vision, which can improve efficiency and accuracy.
The embodiment of the invention adopts the technical scheme that:
a dynamic target tracking method without calibration vision comprises the following steps:
acquiring a first image and a second image through a camera module, and calculating to obtain a coordinate transformation matrix of the mechanical arm and the camera module according to the first image and the second image; the camera module is arranged on the mechanical arm, the first image is acquired at a first position, and the second image is acquired after the mechanical arm moves from the first position to a second position;
acquiring a target image of a dynamic target through the camera module, and performing image processing on the target image to obtain a first track point sequence;
obtaining a second track point sequence under the coordinate system of the mechanical arm according to the first track point sequence and the coordinate conversion matrix;
and tracking the dynamic target in real time according to the second track point sequence, and returning to the step of acquiring the target image of the dynamic target through the camera module until a preset condition is reached.
Further, the acquiring a first image and a second image through a camera module, and calculating a coordinate transformation matrix of the mechanical arm and the camera module according to the first image and the second image includes:
acquiring a first image through a camera module, controlling the mechanical arm to move to different second positions in different directions, and acquiring sub-images through the camera module; the second image comprises a plurality of sub-images, the first image comprises first image feature points, and the sub-images comprise second image feature points corresponding to the first image feature points;
calculating coordinate deviation according to the first image feature points and the second image feature points, and calculating to obtain a moving speed vector according to the coordinate deviation;
and acquiring the translation speed amount and the rotation speed vector of the mechanical arm moving from the first position to each second position, and calculating to obtain a coordinate conversion matrix according to the translation speed amount, the rotation speed vector and the movement speed vector.
Further, the calculating a coordinate deviation according to the first image feature point and the second image feature point includes:
determining a movement track of the image feature points according to the first image feature points and the second image feature points;
determining an end point and a middle point of the moving track from the moving track, and determining a target point according to the middle point and the end point and a preset rule;
and calculating coordinate deviation according to the endpoint, the midpoint and the coordinate positions of the target point in the first image and the second image.
Further, the calculating a coordinate transformation matrix according to the translation speed amount, the rotation speed vector, and the movement speed vector includes:
generating a matrix according to the translation speed amount and the rotation speed vector;
and obtaining a coordinate conversion matrix according to the ratio of the moving speed vector to the matrix.
Further, the image processing of the target image to obtain a first track point sequence includes:
identifying the target image to obtain a target track contour;
screening according to the target track profile and preset specification parameters to obtain a target profile; the preset specification parameters comprise at least one of area, perimeter and length-width ratio;
performing erosion and dilation processing on the target contour, and performing skeleton extraction processing on the erosion and dilation result to obtain a track with a single pixel width;
and carrying out discrete processing on the track with the single pixel width to obtain a first track point sequence.
Further, the obtaining a second trajectory point sequence in the coordinate system of the mechanical arm according to the first trajectory point sequence and the coordinate transformation matrix includes:
converting the first track point sequence according to the coordinate conversion matrix to obtain a conversion track point sequence;
obtaining a historical track point sequence, and removing an overlapped part in the converted track point sequence according to the historical track point sequence to obtain a second track point sequence; the historical track point sequence is an old conversion track point sequence obtained in the last time period.
Further, the tracking the dynamic target in real time according to the second track point sequence includes:
and inputting the second track point sequence into the mechanical arm, and controlling the mechanical arm to track the dynamic target in a non-blocking mode in real time.
The embodiment of the present invention further provides a dynamic target tracking device without calibration vision, including:
the first module is used for acquiring a first image and a second image through the camera module and calculating a coordinate transformation matrix of the mechanical arm and the camera module according to the first image and the second image; the camera module is arranged on the mechanical arm, the first image is acquired at a first position, and the second image is acquired after the mechanical arm moves from the first position to a second position;
the second module is used for acquiring a target image of the dynamic target through the camera module and carrying out image processing on the target image to obtain a first track point sequence;
a third module, configured to obtain a second track point sequence in the coordinate system of the mechanical arm according to the first track point sequence and the coordinate conversion matrix;
and the fourth module is used for tracking the dynamic target in real time according to the second track point sequence and returning to the step of acquiring the target image of the dynamic target through the camera module until a preset condition is reached.
An embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method.
Embodiments of the present invention also provide a computer-readable storage medium, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the method.
The invention has the beneficial effects that: a first image and a second image are acquired through a camera module, and a coordinate conversion matrix between the mechanical arm and the camera module is calculated from the first image and the second image, so the calibration process of the traditional approach is not needed, and efficiency and convenience are improved. A target image of the dynamic target is acquired through the camera module and subjected to image processing to obtain a first track point sequence; a second track point sequence in the coordinate system of the mechanical arm is obtained from the first track point sequence and the coordinate conversion matrix; the dynamic target is tracked in real time according to the second track point sequence, and the method returns to the step of acquiring the target image of the dynamic target through the camera module until a preset condition is reached, so that iterative tracking is carried out in real time, tracking errors are corrected in real time, and accuracy is improved.
Drawings
FIG. 1 is a schematic flow chart illustrating steps of a dynamic target tracking method without calibration vision according to the present invention.
Detailed Description
In order to make the technical solutions of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different elements and not for describing a particular sequential order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As shown in fig. 1, an embodiment of the present invention provides a dynamic target tracking method without calibration vision, including steps S100 to S400:
s100, acquiring a first image and a second image through the camera module, and calculating to obtain a coordinate transformation matrix of the mechanical arm and the camera module according to the first image and the second image.
In the embodiment of the invention, an annular light source whose intensity can be adjusted is arranged at the end of the mechanical arm, and the camera module can be mounted on the mechanical arm through a base and positioned on the central axis of the light source, so that the camera module can acquire images stably and clearly. The camera module has a coordinate system of its own (also referred to as the base coordinate system), and the mechanical arm has the coordinate system of the mechanical arm.
Optionally, when the mechanical arm is at the first position, the camera module acquires a first image, and after the mechanical arm is controlled to move from the first position to the second position, the camera module acquires a second image.
Specifically, step S100 includes steps S110-S130:
and S110, acquiring a first image through the camera module, controlling the mechanical arm to move to different second positions in different directions, and acquiring sub-images through the camera module.
It should be noted that the directions include the X, Y and Z directions of a coordinate system; other embodiments may use a different number of directions, which is not specifically limited. The mechanical arm moves in the three directions to the corresponding second positions, i.e. three second positions, so that three sub-images can be acquired by the camera module, and the second image comprises the three sub-images. The first image comprises first image feature points, the sub-images comprise second image feature points corresponding to the first image feature points, and both can be obtained by recognition through an image processing algorithm.
And S120, calculating coordinate deviation according to the first image characteristic points and the second image characteristic points, and calculating to obtain a movement velocity vector according to the coordinate deviation.
Optionally, the step S120 of calculating the coordinate deviation according to the first image feature point and the second image feature point includes steps S1201 to S1203:
and S1201, determining the movement track of the image feature point according to the first image feature point and the second image feature point.
Optionally, the selection of the first image feature point and the second image feature point is determined according to the actual situation and is not limited. The movement track of an image feature point is determined from the first image feature point and the second image feature point obtained after that feature point moves.
And S1202, determining an end point and a middle point of the moving track from the moving track, and determining a target point according to the middle point and the end point and a preset rule.
Optionally, the endpoint and the preset rule may be adjusted according to actual conditions and are not particularly limited. For example, the endpoint may be the lower endpoint of the movement track. Since the conversion matrix between the camera and the mechanical arm obtained from a single feature point is a 2×6 non-square matrix, the preset rule is to select the lower endpoint of the movement track and the middle point of the movement track as two base corners of an isosceles triangle, and to generate the vertex angle (apex) of the isosceles triangle from the coordinates of these two points; the coordinate of the apex is taken as the target point.
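As an illustration of the preset rule above, the apex of the isosceles triangle can be constructed from the lower endpoint and the midpoint of the movement track. The following sketch is an assumption-laden illustration in Python: the patent does not specify how high the apex sits above the base, so the `height_ratio` parameter is hypothetical.

```python
import numpy as np

def apex_point(endpoint, midpoint, height_ratio=0.5):
    """Construct the apex of an isosceles triangle whose base corners are
    the lower endpoint and the midpoint of the movement track.

    height_ratio is an assumption (the patent does not specify the apex
    height); it sets the apex height as a fraction of the base length."""
    p1 = np.asarray(endpoint, dtype=float)
    p2 = np.asarray(midpoint, dtype=float)
    base_center = (p1 + p2) / 2.0
    d = p2 - p1
    # unit normal to the base: rotate the base direction by 90 degrees
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    return base_center + height_ratio * np.linalg.norm(d) * n
```

By construction the apex lies on the perpendicular bisector of the base, so its distances to the two base corners are equal, which is what makes the triangle isosceles.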
And S1203, calculating coordinate deviation according to the end points, the middle points and the coordinate positions of the target points in the first image and the second image.
Optionally, the coordinate deviation may be calculated according to the coordinate positions of the endpoint, the midpoint, and the target point in the first image and the second image.
Optionally, in step S120, the moving velocity vector is calculated according to the coordinate deviation, specifically: according to the coordinate deviation and the moving time from the first position to the second position, the velocity vectors of the three points, namely the endpoint, the midpoint and the target point, can be calculated, so that the moving velocity vector V = (Ve, Vm, Vt) is obtained, where Ve is the velocity vector of the endpoint, Vm is the velocity vector of the midpoint, and Vt is the velocity vector of the target point.
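The stacking of the three per-point velocities into the moving velocity vector can be sketched as follows. This is a minimal illustration, not the patented implementation; the concatenation order and the use of a single shared moving time are assumptions.

```python
import numpy as np

def moving_velocity_vector(dev_end, dev_mid, dev_target, move_time):
    """Stack the image-plane coordinate deviations of the endpoint, midpoint
    and target point and divide by the moving time to obtain a 6-dimensional
    moving velocity vector V = (Ve, Vm, Vt).

    The stacking order and the shared move_time are illustrative
    assumptions."""
    deviations = np.concatenate([dev_end, dev_mid, dev_target]).astype(float)
    return deviations / move_time
```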
And S130, acquiring the translation speed amount and the rotation speed vector of the mechanical arm moving from the first position to each second position, and calculating to obtain a coordinate conversion matrix according to the translation speed amount, the rotation speed vector and the movement speed vector.
Optionally, step S130 includes steps S1301-S1302:
and S1301, generating a matrix according to the translation speed quantity and the rotation speed vector.
Specifically, the translation speed amount and the rotation speed vector of the mechanical arm moving from the first position to each second position can be read from the control system of the mechanical arm. The translation speed amount is recorded as (Tx, Ty, Tz) and the rotation speed vector as (Wx, Wy, Wz), where Tx, Ty and Tz are the translational velocity components in the X, Y and Z directions respectively, and Wx, Wy and Wz are the rotational velocity components in the X, Y and Z directions respectively. A matrix M is generated from these six components, for example by stacking them as M = (Tx, Ty, Tz, Wx, Wy, Wz).
And S1302, obtaining a coordinate transformation matrix according to the ratio of the moving speed vector to the matrix.
Optionally, the coordinate conversion matrix A satisfies V = A·M, so the coordinate conversion matrix A can be calculated according to the ratio of the moving velocity vector V to the matrix M; the coordinate conversion matrix A is a Jacobian matrix.
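A least-squares reading of the "ratio" above can be sketched with the Moore-Penrose pseudo-inverse. The use of `numpy.linalg.pinv` and the column-stacking convention are assumptions; the patent only states that A is obtained from the ratio of the moving velocity vector to the matrix.

```python
import numpy as np

def solve_conversion_matrix(V, M):
    """Solve V = A @ M for the coordinate conversion (Jacobian) matrix A
    in the least-squares sense. Each column of V is the 6-dimensional moving
    velocity vector measured for one second position, and each column of M
    is the corresponding stacked (Tx, Ty, Tz, Wx, Wy, Wz) arm velocity.

    Using the pseudo-inverse here is an assumption made for illustration."""
    V = np.asarray(V, dtype=float)
    M = np.asarray(M, dtype=float)
    return V @ np.linalg.pinv(M)
```

With enough independent arm motions the pseudo-inverse recovers A exactly; with noisy measurements it returns the least-squares estimate.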
S200, acquiring a target image of the dynamic target through the camera module, and performing image processing on the target image to obtain a first track point sequence.
Specifically, a target image of the dynamic target in the moving state is acquired by the camera module.
Optionally, the step S200 of performing image processing on the target image to obtain the first track point sequence includes steps S210 to S240:
and S210, identifying the target image to obtain a target track contour.
Specifically, the target image is identified through an image processing algorithm to obtain a target track contour.
S220, screening according to the target track profile and the preset specification parameters to obtain the target profile.
Optionally, the preset specification parameters include at least one of area, perimeter and aspect ratio. Illustratively, the target track contours are screened according to the area, perimeter and aspect ratio, and the screening result is taken as the target contour.
And S230, performing erosion and dilation processing on the target contour, and performing skeleton extraction processing on the erosion and dilation result to obtain a track with a single pixel width.
S240, carrying out discrete processing on the track with the single pixel width to obtain a first track point sequence.
In the embodiment of the invention, the target contour is subjected to erosion and dilation processing, skeleton extraction is performed on the erosion and dilation result through a skeleton extraction algorithm to obtain a track with a single pixel width, and the track with the single pixel width is then subjected to discrete processing to obtain the first track point sequence.
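The erosion/dilation and skeleton-extraction stages would typically rely on an image library (for example OpenCV's `cv2.erode`/`cv2.dilate` followed by a thinning algorithm); the discrete-processing step itself can be sketched in plain NumPy as below. The ordered `(N, 2)` input format and the sampling step are illustrative assumptions.

```python
import numpy as np

def discretize_track(track_pixels, step=5):
    """Discretize a single-pixel-wide track into a track point sequence by
    keeping every `step`-th pixel, always retaining the final pixel so the
    end of the track is preserved.

    track_pixels is assumed to be an (N, 2) array of ordered pixel
    coordinates; the sampling step is an illustrative choice."""
    track_pixels = np.asarray(track_pixels)
    idx = list(range(0, len(track_pixels), step))
    if idx[-1] != len(track_pixels) - 1:
        idx.append(len(track_pixels) - 1)
    return track_pixels[idx]
```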
And S300, obtaining a second track point sequence under the coordinate system of the mechanical arm according to the first track point sequence and the coordinate conversion matrix.
S400, tracking the dynamic target in real time according to the second track point sequence, and returning to the step of acquiring the target image of the dynamic target through the camera module until a preset condition is reached.
Specifically, step S300 includes steps S310-S320:
and S310, converting the first track point sequence according to the coordinate conversion matrix to obtain a conversion track point sequence.
Specifically, a coordinate conversion matrix is multiplied by the first track point sequence, so that conversion is achieved, and a conversion track point sequence under a mechanical arm coordinate system is obtained.
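The conversion in S310 can be sketched as a single matrix multiplication applied to every point. Whether homogeneous coordinates or an offset term are needed depends on the actual form of the conversion matrix, which this illustration leaves as an assumption.

```python
import numpy as np

def convert_track_points(A, points):
    """Multiply the coordinate conversion matrix by each point of the first
    track point sequence to obtain the conversion track point sequence in
    the mechanical arm coordinate system.

    points is an (N, d) array; the dimensionality d matching A is an
    assumption for illustration."""
    points = np.asarray(points, dtype=float)
    return (np.asarray(A, dtype=float) @ points.T).T
```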
And S320, acquiring a historical track point sequence, and removing an overlapped part in the converted track point sequence according to the historical track point sequence to obtain a second track point sequence.
It should be noted that the historical track point sequence is the old conversion track point sequence acquired in the previous time period. For example, each time step S400 returns to the step of acquiring the target image of the dynamic target through the camera module, one iteration is performed, and each iteration obtains a new coordinate conversion matrix and a corresponding conversion track point sequence; each iteration is recorded as one time period, so each time period has a corresponding conversion track point sequence. Assuming the current iteration is the second iteration, the old conversion track point sequence obtained in the previous time period is the conversion track point sequence obtained in the first iteration; if the current iteration is the first iteration, the old conversion track point sequence of the previous time period may be a preset empty sequence.
Optionally, the second track point sequence is obtained by comparing the historical track point sequence with the converted track point sequence to determine an overlapping portion, and then removing the overlapping portion in the converted track point sequence.
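The overlap removal in S320 can be sketched as a tolerance-based point comparison. The point-by-point comparison and the tolerance value are illustrative assumptions; the patent does not specify how the overlapping portion is detected.

```python
def remove_overlap(history, converted, tol=1e-6):
    """Remove from the converted track point sequence the points that
    already appear in the historical track point sequence (within tol),
    leaving the second track point sequence.

    Points are (x, y) tuples; the tolerance-based matching is an
    illustrative choice."""
    remaining = []
    for p in converted:
        duplicate = any(
            abs(p[0] - h[0]) <= tol and abs(p[1] - h[1]) <= tol
            for h in history
        )
        if not duplicate:
            remaining.append(p)
    return remaining
```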
It should be noted that, in S400, the dynamic target is tracked in real time according to the second track point sequence, specifically: the second track point sequence is input into the mechanical arm (for example, into the control system of the mechanical arm), and the mechanical arm is controlled to track the dynamic target in real time in a non-blocking mode. In the non-blocking mode, the control system of the mechanical arm issues a movement command and directly executes the next command without waiting for the state feedback of the previous movement command.
In the embodiment of the invention, step S400 returns to the step of acquiring the target image of the dynamic target through the camera module, that is, step S200 is executed again to acquire a new target image, so that a new coordinate conversion matrix and a new second track point sequence are finally determined, realizing iteration. Through real-time iteration, the coordinate conversion relation error that may be caused by interference factors such as looseness of the camera module or a change of the relative position between the camera module and the mechanical arm during movement is corrected, improving the accuracy of real-time tracking. To improve real-time performance, the visual processing period (i.e. the period of processing the target image to obtain the second track point sequence) is compressed to within ten milliseconds, and the visual processing and the mechanical arm tracking are carried out simultaneously in two threads, so that the mechanical arm can work efficiently, simply and accurately in a complex environment.
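The two-thread arrangement described above (vision processing in one thread, non-blocking arm tracking in another) can be sketched with Python's standard `threading` and `queue` modules. All function and parameter names here are hypothetical; this is a structural illustration only, not the patented control system.

```python
import queue
import threading

def vision_worker(point_queue, get_target_image, process_image, stop_event):
    """Vision thread: repeatedly acquires the target image and turns it into
    a track point sequence; stops when the target leaves the field of view."""
    while not stop_event.is_set():
        image = get_target_image()
        if image is None:            # preset condition reached: go standby
            stop_event.set()
            break
        point_queue.put(process_image(image))

def tracking_worker(point_queue, send_to_arm, stop_event):
    """Tracking thread: forwards track points to the arm in a non-blocking
    fashion -- each command is issued without waiting for state feedback."""
    while not stop_event.is_set() or not point_queue.empty():
        try:
            points = point_queue.get(timeout=0.01)
        except queue.Empty:
            continue
        for point in points:
            send_to_arm(point)       # issue command, do not wait
```

Because the tracking thread drains the queue even after the stop event is set, no computed track points are dropped when the target disappears.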
It should be noted that the preset conditions include, but are not limited to, the target image of the dynamic target being unable to be acquired by the camera module, that is, the dynamic target being located outside the field of view of the camera module; at this time, the mechanical arm and the camera module are controlled to enter a standby state.
The embodiment of the present invention further provides a dynamic target tracking device without calibration vision, including:
the first module is used for acquiring a first image and a second image through the camera module and calculating a coordinate transformation matrix of the mechanical arm and the camera module according to the first image and the second image; the camera module is arranged on the mechanical arm, a first image is obtained at a first position, and a second image is obtained after the mechanical arm moves from the first position to a second position;
the second module is used for acquiring a target image of the dynamic target through the camera module and carrying out image processing on the target image to obtain a first track point sequence;
the third module is used for obtaining a second track point sequence under the coordinate system of the mechanical arm according to the first track point sequence and the coordinate conversion matrix;
and the fourth module is used for tracking the dynamic target in real time according to the second track point sequence and returning to the step of acquiring the target image of the dynamic target through the camera module until a preset condition is reached.
The contents in the above method embodiments are all applicable to the present apparatus embodiment, the functions specifically implemented by the present apparatus embodiment are the same as those in the above method embodiments, and the advantageous effects achieved by the present apparatus embodiment are also the same as those achieved by the above method embodiments.
The embodiment of the present invention further provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the dynamic object tracking method without calibration vision of the foregoing embodiment. The electronic device of the embodiment of the invention includes but is not limited to a mobile phone, a tablet computer, a vehicle-mounted computer and the like.
The contents of the above method embodiments all apply to this device embodiment; the functions implemented by this device embodiment are the same as those of the above method embodiments, and the beneficial effects achieved are likewise the same.
An embodiment of the present invention further provides a computer-readable storage medium in which at least one instruction, at least one program, a code set, or an instruction set is stored, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the dynamic target tracking method without calibration vision of the foregoing embodiments.
Embodiments of the present invention also provide a computer program product or computer program that includes computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the dynamic target tracking method without calibration vision of the foregoing embodiments.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" is used to describe the association relationship of the associated object, indicating that there may be three relationships, for example, "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the essence of the technical solution of the present application, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them; it will be understood by those skilled in the art that various changes, modifications, and equivalent substitutions may be made to the embodiments without departing from the spirit and scope of the present disclosure.
Claims (10)
1. A dynamic target tracking method without calibration vision is characterized by comprising the following steps:
acquiring a first image and a second image through a camera module, and calculating a coordinate conversion matrix of the mechanical arm and the camera module according to the first image and the second image; the camera module is mounted on the mechanical arm, the first image is acquired at a first position, and the second image is acquired after the mechanical arm moves from the first position to a second position;
acquiring a target image of a dynamic target through the camera module, and performing image processing on the target image to obtain a first track point sequence;
obtaining a second track point sequence under the coordinate system of the mechanical arm according to the first track point sequence and the coordinate conversion matrix;
and tracking the dynamic target in real time according to the second track point sequence, and returning to the step of acquiring the target image of the dynamic target through the camera module until a preset condition is reached.
2. The dynamic target tracking method without calibration vision according to claim 1, characterized in that: the acquiring a first image and a second image through a camera module, and calculating a coordinate conversion matrix of the mechanical arm and the camera module according to the first image and the second image comprises:
acquiring a first image through the camera module, controlling the mechanical arm to move in different directions to different second positions, and acquiring sub-images through the camera module; the second image comprises a plurality of sub-images, the first image comprises first image feature points, and each sub-image comprises second image feature points corresponding to the first image feature points;
calculating a coordinate deviation according to the first image feature points and the second image feature points, and calculating a movement speed vector according to the coordinate deviation;
and acquiring the translation speed and the rotation speed vector of the mechanical arm moving from the first position to each second position, and calculating the coordinate conversion matrix according to the translation speed, the rotation speed vector and the movement speed vector.
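The step above resembles the image-Jacobian estimation used in uncalibrated visual servoing: stack the probing moves and solve for the matrix that maps arm velocities to image velocities. The following least-squares sketch is an illustration under assumed array shapes and solver choice, not the calculation disclosed in the patent:

```python
import numpy as np

def estimate_conversion_matrix(image_velocities, arm_velocities):
    """Estimate J such that image_velocity ~= J @ arm_velocity.

    image_velocities: (n, 2) pixel-space movement speed vectors derived
                      from the feature-point coordinate deviations
    arm_velocities:   (n, 6) stacked translation speed (3) and rotation
                      speed vector (3) for each probing move
    Solved in the least-squares sense over the n probing moves.
    """
    U = np.asarray(image_velocities)   # (n, 2)
    V = np.asarray(arm_velocities)     # (n, 6)
    # Solve V @ J.T ~= U for J.T: the "ratio" of image motion to arm motion
    Jt, *_ = np.linalg.lstsq(V, U, rcond=None)
    return Jt.T                        # (2, 6)
```

With at least six linearly independent probing moves the system is fully determined; extra moves are averaged out by the least-squares solve.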
3. The dynamic target tracking method without calibration vision according to claim 2, characterized in that: the calculating a coordinate deviation according to the first image feature points and the second image feature points comprises:
determining a movement track of the image feature points according to the first image feature points and the second image feature points;
determining an endpoint and a midpoint of the movement track, and determining a target point according to the midpoint, the endpoint and a preset rule;
and calculating the coordinate deviation according to the coordinate positions of the endpoint, the midpoint and the target point in the first image and the second image.
4. The dynamic target tracking method without calibration vision according to claim 2, characterized in that: the calculating the coordinate conversion matrix according to the translation speed, the rotation speed vector and the movement speed vector comprises:
generating a matrix according to the translation speed and the rotation speed vector;
and obtaining the coordinate conversion matrix according to the ratio of the movement speed vector to the matrix.
5. The dynamic target tracking method without calibration vision according to any one of claims 1-4, characterized in that: the performing image processing on the target image to obtain a first track point sequence comprises:
identifying the target image to obtain target track contours;
screening the target track contours according to preset specification parameters to obtain a target contour; the preset specification parameters comprise at least one of area, perimeter and aspect ratio;
performing erosion and dilation processing on the target contour, and performing skeleton extraction on the erosion-dilation result to obtain a track of single-pixel width;
and discretizing the track of single-pixel width to obtain the first track point sequence.
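The contour-screening step of claim 5 can be illustrated with a small numpy-only sketch that checks two of the named specification parameters, area (via the shoelace formula) and bounding-box aspect ratio; the thresholds and the omission of the perimeter check are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def screen_contours(contours, min_area=50.0, max_aspect=10.0):
    """Keep contours whose area and aspect ratio fall within the preset
    specification parameters. Each contour is an (n, 2) array of (x, y)
    polygon vertices; thresholds here are illustrative only."""
    kept = []
    for c in contours:
        c = np.asarray(c, dtype=float)
        x, y = c[:, 0], c[:, 1]
        # Shoelace formula for the polygon area
        area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
        w = x.max() - x.min() or 1.0   # guard against zero width/height
        h = y.max() - y.min() or 1.0
        aspect = max(w, h) / min(w, h)
        if area >= min_area and aspect <= max_aspect:
            kept.append(c)
    return kept
```

A perimeter check would follow the same pattern, summing the segment lengths of consecutive vertices.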
6. The dynamic target tracking method without calibration vision according to any one of claims 1-4, characterized in that: the obtaining a second track point sequence in the coordinate system of the mechanical arm according to the first track point sequence and the coordinate conversion matrix comprises:
converting the first track point sequence according to the coordinate conversion matrix to obtain a converted track point sequence;
and acquiring a historical track point sequence, and removing the overlapping part of the converted track point sequence according to the historical track point sequence to obtain the second track point sequence; the historical track point sequence is the converted track point sequence obtained in the previous time period.
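The overlap-removal step of claim 6 can be sketched as a nearest-point test against the historical sequence, so the arm is only ever commanded along newly observed trajectory; the distance tolerance is an assumed parameter:

```python
import numpy as np

def remove_overlap(converted_points, history_points, tol=1e-3):
    """Drop converted track points that already appear (within `tol`)
    in the previous period's sequence; what remains is the second
    track point sequence sent to the mechanical arm."""
    converted = np.asarray(converted_points, dtype=float)
    hist = np.asarray(history_points, dtype=float)
    if hist.size == 0:
        return converted
    out = []
    for p in converted:
        # distance from p to its nearest historical track point
        if np.linalg.norm(hist - p, axis=1).min() > tol:
            out.append(p)
    return np.array(out)
```

For long histories a KD-tree (e.g. `scipy.spatial.cKDTree`) would replace the brute-force distance scan, but the tolerance logic stays the same.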
7. The dynamic target tracking method without calibration vision according to any one of claims 1-4, characterized in that: the tracking the dynamic target in real time according to the second track point sequence comprises:
inputting the second track point sequence to the mechanical arm, and controlling the mechanical arm to track the dynamic target in real time in a non-blocking mode.
8. A dynamic target tracking device without calibration vision, characterized by comprising:
a first module, configured to acquire a first image and a second image through a camera module, and calculate a coordinate conversion matrix of the mechanical arm and the camera module according to the first image and the second image; the camera module is mounted on the mechanical arm, the first image is acquired at a first position, and the second image is acquired after the mechanical arm moves from the first position to a second position;
a second module, configured to acquire a target image of a dynamic target through the camera module, and perform image processing on the target image to obtain a first track point sequence;
a third module, configured to obtain a second track point sequence in the coordinate system of the mechanical arm according to the first track point sequence and the coordinate conversion matrix;
and a fourth module, configured to track the dynamic target in real time according to the second track point sequence, and return to the step of acquiring the target image of the dynamic target through the camera module until a preset condition is reached.
9. An electronic device, characterized in that: the electronic device comprises a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by the processor to implement the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that: the storage medium stores at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by a processor to implement the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210846573.0A CN115272410A (en) | 2022-07-19 | 2022-07-19 | Dynamic target tracking method, device, equipment and medium without calibration vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115272410A true CN115272410A (en) | 2022-11-01 |
Family
ID=83766597
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210846573.0A Pending CN115272410A (en) | 2022-07-19 | 2022-07-19 | Dynamic target tracking method, device, equipment and medium without calibration vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115272410A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116149327A (en) * | 2023-02-08 | 2023-05-23 | 广州番禺职业技术学院 | Real-time tracking prospective path planning system, method and device |
CN116149327B (en) * | 2023-02-08 | 2023-10-20 | 广州番禺职业技术学院 | Real-time tracking prospective path planning system, method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||