CN104647390A - Multi-camera combined active target tracking method for teleoperation of a mechanical arm - Google Patents

Multi-camera combined active target tracking method for teleoperation of a mechanical arm

- Publication numbers: CN104647390A, CN104647390B; Application CN201510072044.XA
- Authority: CN (China)
- Prior art keywords: coordinate, video camera, camera, particle, coordinate system
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Image Processing (AREA)
Abstract
The invention relates to a multi-camera combined active target tracking method for teleoperation of a mechanical arm, belonging to the field of mechanical arm teleoperation. Building on a teleoperation setup, several cameras are installed at different angles and each camera is calibrated; joint active tracking is realized by combining particle filtering with SIFT local feature matching, so that the target to be tracked always stays at the center of the field of view and the mechanical arm is monitored from every angle. Tracking failures caused by factors such as occlusion are thereby avoided, and tracking robustness is improved by adding human-machine interaction. During teleoperation, a new target can be marked interactively so that the target always remains in the operator's observation area, or the target can be updated, allowing the operator to formulate the subsequent control strategy.
Description
Technical field
The present invention relates to a multi-camera combined active target tracking method for teleoperation of a mechanical arm, and belongs to the field of mechanical arm teleoperation.
Background technology
In teleoperation of a mechanical arm, the vision system is a key technology: it provides the operating arm with real-time image information and the spatial pose of the target object, and it also intuitively feeds back the state, environment, and working stage of the arm to the ground operator.

In current teleoperation methods the cameras only supply image information; this information is not used for active tracking of a specific target while the arm works, the cameras remain fixed and cannot adjust their viewing angle, and the case where the arm moves beyond the camera field of view and can no longer be monitored is not considered.

Among existing technical literature, the invention patent "Kinect-based control system and method for a space teleoperation robot" (publication number CN201310193564.7) uses a Kinect to build a three-dimensional environment model and to keep the predicted environment consistent. Its shortcomings are that only one camera films the arm's workspace and no target tracking is performed. Moreover, since the camera is fixed, the arm cannot be guaranteed to stay in the field of view at all times.
Summary of the invention
The object of the invention is to propose a multi-camera combined active target tracking method for teleoperation of a mechanical arm: on top of the teleoperation system, a multi-camera active vision system is designed that collects and transmits images in real time and gives the operator an intuitive visual basis for observation.

The multi-camera combined active target tracking method for mechanical arm teleoperation proposed by the invention comprises the following steps:
(1) Place multiple cameras at the upper left, upper right, and front of the mechanical arm, and calibrate each camera to obtain its intrinsic matrix M1, extrinsic matrix M2, and distortion coefficients, together with the inter-camera pose transformation matrices T12, T23, T34. The concrete steps are as follows:

(1-1) Let the coordinates of a spatial point P in the camera coordinate system be P(xc, yc, zc). Project P along the ray through the optical center onto an image plane, establish an image coordinate system in that plane, and let the projection of P in the image coordinate system be P(x, y); this projection is expressed in the computer image coordinate system as P(u, v). By the pinhole imaging principle, the relation between the camera coordinates (xc, yc, zc) of P and its projection P(x, y) is:

x = f·xc/zc, y = f·yc/zc

where f is the focal length of the camera;
(1-2) Consider an imaging cell whose physical sizes along the x- and y-axis directions of the image coordinate system are dx and dy respectively. The coordinates (u, v) of any pixel in the computer image coordinate system and its coordinates (x, y) in the image coordinate system satisfy:

u = x/dx + u0, v = y/dy + v0

where O(u0, v0) is the principal point, i.e. the point where the camera's principal optical axis meets the image plane, expressed in computer image coordinates;
(1-3) Combining the coordinate relations of steps (1-1) and (1-2), the intrinsic matrix M1 of the camera is obtained as:

M1 = [f/dx 0 u0 0; 0 f/dy v0 0; 0 0 1 0]

where f/dx is the normalized focal length of the camera focal length f along the u-axis of the camera image-plane coordinate system, and f/dy is the normalized focal length of f along the v-axis;
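Steps (1-1) through (1-3) can be sketched numerically. This is a minimal illustration of the pinhole projection and the intrinsic matrix; all numeric parameter values (f, dx, dy, u0, v0 and the test point) are assumptions, not values from the patent.

```python
# Sketch of steps (1-1)-(1-3): project a point from camera coordinates
# to pixel coordinates via the pinhole model and the intrinsic matrix M1.
# Every numeric value here is an illustrative assumption.

def project_pinhole(xc, yc, zc, f):
    """Step (1-1): camera coordinates -> image-plane coordinates (x, y)."""
    return f * xc / zc, f * yc / zc

def image_to_pixel(x, y, dx, dy, u0, v0):
    """Step (1-2): image-plane coordinates -> computer image (pixel) coordinates."""
    return x / dx + u0, y / dy + v0

def intrinsic_matrix(f, dx, dy, u0, v0):
    """Step (1-3): intrinsic matrix M1 with normalized focal lengths f/dx, f/dy."""
    return [[f / dx, 0.0, u0],
            [0.0, f / dy, v0],
            [0.0, 0.0, 1.0]]

# Assumed parameters: f = 8 mm, 10 um square pixels, principal point (320, 240).
f, dx, dy, u0, v0 = 8.0, 0.01, 0.01, 320.0, 240.0
x, y = project_pinhole(0.1, 0.05, 1.0, f)   # a point 1 m in front of the camera
u, v = image_to_pixel(x, y, dx, dy, u0, v0)
print(u, v)  # -> 400.0 280.0
```

Dividing f by the cell sizes dx and dy is exactly what makes the two diagonal entries of M1 the "normalized focal lengths" named in step (1-3).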
(1-4) Let the coordinates of the spatial point P in the world coordinate system be (xw, yw, zw). The coordinates of P in the camera coordinate system and in the world coordinate system are related by:

[xc, yc, zc, 1]^T = [R t; 0^T 1] · [xw, yw, zw, 1]^T

where R is a 3 × 3 unit orthogonal matrix, t is the three-dimensional translation vector between the camera coordinate system and the world coordinate system, and [R t] is defined as the camera extrinsic matrix M2;
(1-5) From the intrinsic matrix M1 and extrinsic matrix M2 obtained in steps (1-3) and (1-4), the projection relation between the coordinates of the spatial point P in the world coordinate system and its projection in the computer image coordinate system is: P = M1·M2;
(1-6) Let the radial distortion parameters of the camera be k1, k2, and k3, satisfying the following system of equations:

xu = x·(1 + k1·r^2 + k2·r^4 + k3·r^6)
yu = y·(1 + k1·r^2 + k2·r^4 + k3·r^6), with r^2 = x^2 + y^2

Solving this system gives the radial distortion parameters k1, k2, and k3, where (x, y) is the actual position of P in the image coordinate system and (xu, yu) is the ideal position of P given by the imaging principle.

Let the tangential distortion parameters of the camera be p1 and p2, satisfying the following system of equations:

xu = x + 2·p1·x·y + p2·(r^2 + 2·x^2)
yu = y + p1·(r^2 + 2·y^2) + 2·p2·x·y

Solving this system gives the tangential distortion parameters p1 and p2;
(1-7) With the radial distortion parameters k1, k2, k3 and tangential distortion parameters p1, p2 obtained in step (1-6), the actual position of P is converted to the ideal position by applying the distortion equations above, and the ideal position is converted back to the actual position by inverting those equations;
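A minimal sketch of steps (1-6) and (1-7), using the standard Brown-Conrady radial/tangential model (the patent's own equations appear only as images, so this reconstruction and all coefficient values are assumptions). The inverse mapping is done by simple fixed-point iteration, one common way to realize the "ideal to actual" conversion of step (1-7).

```python
# Sketch of steps (1-6)-(1-7): radial (k1, k2, k3) and tangential (p1, p2)
# distortion, mapping the actual position (x, y) to the ideal position
# (xu, yu) and back. Coefficient values below are illustrative assumptions.

def ideal_from_actual(x, y, k1, k2, k3, p1, p2):
    """Actual (distorted) image position -> ideal pinhole position."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xu = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yu = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xu, yu

def actual_from_ideal(xu, yu, k1, k2, k3, p1, p2, iters=20):
    """Ideal position -> actual position, via fixed-point iteration."""
    x, y = xu, yu
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        dx_t = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy_t = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xu - dx_t) / radial
        y = (yu - dy_t) / radial
    return x, y

# Round trip with assumed mild distortion: the inverse recovers (0.2, 0.1).
xu, yu = ideal_from_actual(0.2, 0.1, -0.1, 0.01, 0.0, 0.001, 0.001)
xr, yr = actual_from_ideal(xu, yu, -0.1, 0.01, 0.0, 0.001, 0.001)
print(round(xr, 6), round(yr, 6))
```

For mild distortion the iteration converges in a handful of steps, which is why a closed-form inverse is rarely needed in practice.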
(1-8) Repeat steps (1-1) through (1-7) for each of the multiple cameras to complete the calibration of every camera;
(1-9) Let the rotation matrix between the first and second cameras be R12 and the translation matrix between them be Tran12; the pose transformation matrix between the first and second cameras is then T12 = [R12 Tran12];

(1-10) Repeat step (1-9) for every pair of cameras to obtain the pose transformation matrices T12, T23, T34 of the multiple cameras;
(2) Use the cameras calibrated in step (1) to actively track the target to be grasped, as follows:

(2-1) With the calibrated cameras placed at the upper left, upper right, and front of the mechanical arm, let S' be the projection of the target center S on the camera imaging plane Pd. Compute the distance d between S' and the center C of the imaging plane and set a distance threshold th. If d > th, send an adjustment command to the camera so that the lens rotates in the direction that reduces d, until d ≤ th; if d ≤ th, the lens keeps its original position;
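The centering loop of step (2-1) can be sketched as a simple simulated servo: measure the offset of S' from the image center C, and keep commanding a rotation while d exceeds th. The gain, threshold, and pan/tilt abstraction are assumptions; a real camera would receive motor commands instead.

```python
# Sketch of step (2-1): rotate the view toward the projected target center S'
# until its distance d from the imaging-plane center C is within th.
# Gain, threshold, and the pan/tilt model are illustrative assumptions.
import math

def center_target(s_proj, center, th=5.0, gain=0.5, max_steps=100):
    """Simulate the adjustment loop; returns the applied pan/tilt offset."""
    pan, tilt = 0.0, 0.0
    for _ in range(max_steps):
        ex = s_proj[0] - (center[0] + pan)
        ey = s_proj[1] - (center[1] + tilt)
        d = math.hypot(ex, ey)
        if d <= th:            # target is close enough to the field-of-view center
            break
        pan += gain * ex       # rotate in the direction that reduces d
        tilt += gain * ey
    return pan, tilt

pan, tilt = center_target((400.0, 300.0), (320.0, 240.0))
print(pan, tilt)  # offset approaching the initial (80, 60) pixel error
```

The loop terminates as soon as d ≤ th rather than driving the error to zero, matching the threshold test in the step.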
(2-2) Use a particle filter tracking algorithm for joint active tracking of the target to be grasped by the cameras, as follows:

(2-2-1) Let any one of the cameras acquire a video sequence of the target, and manually annotate the target region in the image at time t0;

(2-2-2) Taking the center of the annotated target region as the origin, generate a particle set of M particles, m = 1, ..., M, where each particle represents a region in which the target may be located. Assuming the motion of the particles follows a normal distribution and each particle propagates independently, obtain the particle set at time t and the particle set at time t+1;
(2-2-3) Let the reference histogram of the particle set at time t0 be q*, with L gray levels in total, and let the color histogram of the particle set at time t be qt(x) = qt(n; x), n = 1, 2, ..., L, where x is a particle of the set. After each particle of the time-t set propagates independently, observe the resulting time-(t+1) particle set, obtain the color histogram of each particle's region and the reference histogram, and compute the Bhattacharyya distance D between each particle's color histogram and the reference histogram. Define the particle weight ω and set ω = D; the number of particles N is 200;
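The histogram comparison of step (2-2-3) can be sketched directly. The bin count, pixel values, and regions below are made-up assumptions; the distance is the standard Bhattacharyya form D = sqrt(1 − Σn sqrt(qt(n)·q*(n))).

```python
# Sketch of step (2-2-3): gray-level histograms of a particle region and the
# reference region, compared with the Bhattacharyya distance. All pixel data
# and the bin count L are illustrative assumptions.
import math

def histogram(pixels, L=8):
    """Normalized gray-level histogram with L bins over 8-bit values."""
    h = [0.0] * L
    for p in pixels:
        h[p * L // 256] += 1.0
    total = sum(h)
    return [c / total for c in h]

def bhattacharyya(q, q_ref):
    """Bhattacharyya distance between two normalized histograms."""
    bc = sum(math.sqrt(a * b) for a, b in zip(q, q_ref))
    return math.sqrt(max(0.0, 1.0 - bc))

ref = histogram([10, 10, 200, 200, 100, 100])    # reference region at t0
same = histogram([10, 10, 200, 200, 100, 100])   # particle over the target
other = histogram([255] * 6)                     # particle over background
print(bhattacharyya(ref, same) < bhattacharyya(ref, other))  # -> True
```

A particle whose region still looks like the reference gets distance 0; a region with no overlapping gray levels gets the maximum distance 1, which is what lets the filter rank particles.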
(2-2-4) Perform a posterior probability calculation over the particle weights to obtain the expected presence position of the particles at time t+1, E(x_{t+1}) = Σ ω_{t+1}·x_{t+1} (summed over the particles, with normalized weights), where ω_{t+1} are the particle weights at time t+1;
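A minimal sketch of the propagation and expectation of steps (2-2-2) through (2-2-4). The noise level, the stand-in distance, and the weight form w ∝ exp(−λ·D²) are assumptions: the text sets ω = D, while the conventional particle-filter choice makes the weight decrease with the Bhattacharyya distance, and that conventional form is used here.

```python
# Sketch of steps (2-2-2)-(2-2-4): Gaussian particle propagation, weighting
# by a distance measure, and weighted expectation E(x_{t+1}) as the estimate.
# Target position, noise, and the exp(-lam*D**2) weight form are assumptions.
import math
import random

random.seed(0)

def propagate(particles, sigma=2.0):
    """Independent, normally distributed motion for each particle."""
    return [(x + random.gauss(0.0, sigma), y + random.gauss(0.0, sigma))
            for x, y in particles]

def estimate(particles, distances, lam=1.0):
    """Weighted expectation of particle positions (normalized weights)."""
    w = [math.exp(-lam * d * d) for d in distances]
    s = sum(w)
    ex = sum(wi * x for wi, (x, _) in zip(w, particles)) / s
    ey = sum(wi * y for wi, (_, y) in zip(w, particles)) / s
    return ex, ey

target = (50.0, 40.0)
particles = propagate([target] * 200)   # N = 200 particles, as in the text
# Stand-in for the Bhattacharyya distance: nearer particles match better.
dists = [math.hypot(x - target[0], y - target[1]) / 10.0 for x, y in particles]
ex, ey = estimate(particles, dists)
print(abs(ex - 50.0) < 2.0 and abs(ey - 40.0) < 2.0)  # -> True
```

Averaging all particles by weight, rather than taking only the single best one, is what smooths the estimate between frames.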
(2-2-5) Take the expected value E(x_{t+1}) as the optimal estimate of the target's presence probability at time t+1, and take the center of the region covered by the particle with the highest presence probability as the center of the target region;

(2-2-6) Repeat steps (2-2-2) through (2-2-5) to obtain the optimal presence-probability estimate and the target-region center from time t to the next time t+1;

(2-2-7) Repeat step (2-1) so that the camera lens is aimed at the target-region center obtained in step (2-2-6);

(2-2-8) Extract the local feature points of the target region of step (2-2-7);

(2-2-9) Let the other cameras acquire video sequences of the target, and extract local feature points from each of these sequences;

(2-2-10) Match the local feature points of the target region extracted in step (2-2-8) against all local feature points extracted in step (2-2-9), obtaining the precise target region in each of the other cameras;

(2-2-11) Repeat steps (2-2-2) through (2-2-10), tracking the target region obtained for each camera, to realize joint multi-camera active tracking of the target.
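The cross-camera matching of steps (2-2-8) through (2-2-10) can be sketched with a nearest-neighbor ratio test, the criterion commonly used with SIFT descriptors. The tiny 2-D descriptors below are made-up stand-ins; real SIFT descriptors are 128-dimensional and come from a feature extractor, which is not reimplemented here.

```python
# Sketch of steps (2-2-8)-(2-2-10): match local feature descriptors between
# two cameras, keeping a match only when the nearest neighbor is clearly
# better than the second nearest (ratio test). Descriptors are assumed data.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching from desc_a to desc_b with a ratio test."""
    matches = []
    for i, da in enumerate(desc_a):
        d = sorted((dist(da, db), j) for j, db in enumerate(desc_b))
        if len(d) > 1 and d[0][0] < ratio * d[1][0]:
            matches.append((i, d[0][1]))
    return matches

cam1 = [[0.0, 1.0], [5.0, 5.0], [9.0, 0.0]]
cam2 = [[9.1, 0.1], [0.1, 1.0], [5.0, 4.9]]   # same points, reordered + noise
print(match(cam1, cam2))  # -> [(0, 1), (1, 2), (2, 0)]
```

The ratio test is what suppresses ambiguous correspondences when, as in step (2-2-10), one camera's target features are matched against all features from another viewpoint.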
The multi-camera combined active target tracking method for mechanical arm teleoperation proposed by the invention has the following advantages:

1. The method uses multiple cameras to realize active tracking of a specific target to be grasped.

2. The multiple cameras are placed at different angles and track jointly, so that the tracked target always stays at the center of the field of view, the mechanical arm is monitored from all angles, and tracking failures caused by factors such as occlusion and background clutter are avoided.

3. The method combines a particle filter tracking algorithm with SIFT feature point matching for joint tracking by multiple cameras, improving the robustness of target tracking.
Brief description of the drawings

Fig. 1 is a schematic diagram of the camera imaging principle in the method of the invention.

Fig. 2 is a schematic diagram of the camera control model in the method of the invention.
Detailed description of the invention
The multi-camera combined active target tracking method for mechanical arm teleoperation proposed by the invention comprises the following steps:

(1) Place multiple cameras at the upper left, upper right, and front of the mechanical arm, and calibrate each camera to obtain its intrinsic matrix M1, extrinsic matrix M2, and distortion coefficients, together with the inter-camera pose transformation matrices T12, T23, T34. The concrete steps are as follows:

(1-1) As shown in Fig. 1, let the coordinates of a spatial point P in the camera coordinate system be P(xc, yc, zc). Project P along the ray through the optical center onto an image plane, establish an image coordinate system in that plane, and let the projection of P in the image coordinate system be P(x, y); this projection is expressed in the computer image coordinate system as P(u, v). By the pinhole imaging principle, the relation between the camera coordinates (xc, yc, zc) of P and its projection P(x, y) is:

x = f·xc/zc, y = f·yc/zc

where f is the focal length of the camera;

(1-2) Consider an imaging cell whose physical sizes along the x- and y-axis directions of the image coordinate system are dx and dy respectively. The coordinates (u, v) of any pixel in the computer image coordinate system and its coordinates (x, y) in the image coordinate system satisfy:

u = x/dx + u0, v = y/dy + v0

where O(u0, v0) is the principal point, i.e. the point where the camera's principal optical axis meets the image plane, expressed in computer image coordinates;

(1-3) Combining the coordinate relations of steps (1-1) and (1-2), the intrinsic matrix M1 of the camera is obtained as:

M1 = [f/dx 0 u0 0; 0 f/dy v0 0; 0 0 1 0]

where f/dx is the normalized focal length of the camera focal length f along the u-axis of the camera image-plane coordinate system, and f/dy is the normalized focal length of f along the v-axis;

(1-4) Let the coordinates of the spatial point P in the world coordinate system be (xw, yw, zw). The coordinates of P in the camera coordinate system and in the world coordinate system are related by:

[xc, yc, zc, 1]^T = [R t; 0^T 1] · [xw, yw, zw, 1]^T

where R is a 3 × 3 unit orthogonal matrix, t is the three-dimensional translation vector between the camera coordinate system and the world coordinate system, and [R t] is defined as the camera extrinsic matrix M2;

(1-5) From the intrinsic matrix M1 and extrinsic matrix M2 obtained in steps (1-3) and (1-4), the projection relation between the coordinates of the spatial point P in the world coordinate system and its projection in the computer image coordinate system is: P = M1·M2;

(1-6) Let the radial distortion parameters of the camera be k1, k2, and k3, satisfying the following system of equations:

xu = x·(1 + k1·r^2 + k2·r^4 + k3·r^6)
yu = y·(1 + k1·r^2 + k2·r^4 + k3·r^6), with r^2 = x^2 + y^2

Solving this system gives the radial distortion parameters k1, k2, and k3, where (x, y) is the actual position of P in the image coordinate system and (xu, yu) is the ideal position of P given by the imaging principle.

Let the tangential distortion parameters of the camera be p1 and p2, satisfying the following system of equations:

xu = x + 2·p1·x·y + p2·(r^2 + 2·x^2)
yu = y + p1·(r^2 + 2·y^2) + 2·p2·x·y

Solving this system gives the tangential distortion parameters p1 and p2;

(1-7) With the radial distortion parameters k1, k2, k3 and tangential distortion parameters p1, p2 obtained in step (1-6), the actual position of P is converted to the ideal position by applying the distortion equations above, and the ideal position is converted back to the actual position by inverting those equations;

(1-8) Repeat steps (1-1) through (1-7) for each of the multiple cameras to complete the calibration of every camera;

(1-9) Let the rotation matrix between the first and second cameras be R12 and the translation matrix between them be Tran12; the pose transformation matrix between the first and second cameras is then T12 = [R12 Tran12];

(1-10) Repeat step (1-9) for every pair of cameras to obtain the pose transformation matrices T12, T23, T34 of the multiple cameras;

(2) Use the cameras calibrated in step (1) to actively track the target to be grasped, as follows:

(2-1) With the calibrated cameras placed at the upper left, upper right, and front of the mechanical arm, let S' be the projection of the target center S on the camera imaging plane Pd. As shown in Fig. 2, compute the distance d between S' and the center C of the imaging plane and set a distance threshold th. If d > th, send an adjustment command to the camera so that the lens rotates in the direction that reduces d, until d ≤ th; if d ≤ th, the lens keeps its original position;

(2-2) Use a particle filter tracking algorithm for joint active tracking of the target to be grasped by the cameras, as follows:

(2-2-1) Let any one of the cameras acquire a video sequence of the target, and manually annotate the target region in the image at time t0;

(2-2-2) Taking the center of the annotated target region as the origin, generate a particle set of M particles, m = 1, ..., M, where each particle represents a region in which the target may be located. Assuming the motion of the particles follows a normal distribution and each particle propagates independently, obtain the particle set at time t and the particle set at time t+1;

(2-2-3) Let the reference histogram of the particle set at time t0 be q*, with L gray levels in total, and let the color histogram of the particle set at time t be qt(x) = qt(n; x), n = 1, 2, ..., L, where x is a particle of the set. After each particle of the time-t set propagates independently, observe the resulting time-(t+1) particle set, obtain the color histogram of each particle's region and the reference histogram, and compute the Bhattacharyya distance D between each particle's color histogram and the reference histogram. Define the particle weight ω and set ω = D; the number of particles N is 200;

(2-2-4) Perform a posterior probability calculation over the particle weights to obtain the expected presence position of the particles at time t+1, E(x_{t+1}) = Σ ω_{t+1}·x_{t+1} (summed over the particles, with normalized weights), where ω_{t+1} are the particle weights at time t+1;

(2-2-5) Take the expected value E(x_{t+1}) as the optimal estimate of the target's presence probability at time t+1, and take the center of the region covered by the particle with the highest presence probability as the center of the target region;

(2-2-6) Repeat steps (2-2-2) through (2-2-5) to obtain the optimal presence-probability estimate and the target-region center from time t to the next time t+1;

(2-2-7) Repeat step (2-1) so that the camera lens is aimed at the target-region center obtained in step (2-2-6);

(2-2-8) Extract the local feature points of the target region of step (2-2-7);

(2-2-9) Let the other cameras acquire video sequences of the target, and extract local feature points (i.e. SIFT local feature points) from each of these sequences;

(2-2-10) Match the local feature points of the target region extracted in step (2-2-8) against all local feature points extracted in step (2-2-9), obtaining the precise target region in each of the other cameras;

(2-2-11) Repeat steps (2-2-2) through (2-2-10), tracking the target region obtained for each camera, to realize joint multi-camera active tracking of the target.
Claims (1)
1. A multi-camera combined active target tracking method for teleoperation of a mechanical arm, characterized in that the method comprises the following steps:

(1) Place multiple cameras at the upper left, upper right, and front of the mechanical arm, and calibrate each camera to obtain its intrinsic matrix M1, extrinsic matrix M2, and distortion coefficients, together with the inter-camera pose transformation matrices T12, T23, T34. The concrete steps are as follows:

(1-1) Let the coordinates of a spatial point P in the camera coordinate system be P(xc, yc, zc). Project P along the ray through the optical center onto an image plane, establish an image coordinate system in that plane, and let the projection of P in the image coordinate system be P(x, y); this projection is expressed in the computer image coordinate system as P(u, v). By the pinhole imaging principle, the relation between the camera coordinates (xc, yc, zc) of P and its projection P(x, y) is:

x = f·xc/zc, y = f·yc/zc

where f is the focal length of the camera;

(1-2) Consider an imaging cell whose physical sizes along the x- and y-axis directions of the image coordinate system are dx and dy respectively. The coordinates (u, v) of any pixel in the computer image coordinate system and its coordinates (x, y) in the image coordinate system satisfy:

u = x/dx + u0, v = y/dy + v0

where O(u0, v0) is the principal point, i.e. the point where the camera's principal optical axis meets the image plane, expressed in computer image coordinates;

(1-3) Combining the coordinate relations of steps (1-1) and (1-2), the intrinsic matrix M1 of the camera is obtained as:

M1 = [f/dx 0 u0 0; 0 f/dy v0 0; 0 0 1 0]

where f/dx is the normalized focal length of the camera focal length f along the u-axis of the camera image-plane coordinate system, and f/dy is the normalized focal length of f along the v-axis;

(1-4) Establish a world coordinate system, and let the coordinates of the spatial point P in the world coordinate system be (xw, yw, zw). The coordinates of P in the camera coordinate system and in the world coordinate system are related by:

[xc, yc, zc, 1]^T = [R t; 0^T 1] · [xw, yw, zw, 1]^T

where R is a 3 × 3 unit orthogonal matrix, t is the three-dimensional translation vector between the camera coordinate system and the world coordinate system, and [R t] is defined as the camera extrinsic matrix M2;

(1-5) From the intrinsic matrix M1 and extrinsic matrix M2 obtained in steps (1-3) and (1-4), the projection relation between the coordinates of the spatial point P in the world coordinate system and its projection in the computer image coordinate system is: P = M1·M2;

(1-6) Let the radial distortion parameters of the camera be k1, k2, and k3, satisfying the following system of equations:

xu = x·(1 + k1·r^2 + k2·r^4 + k3·r^6)
yu = y·(1 + k1·r^2 + k2·r^4 + k3·r^6), with r^2 = x^2 + y^2

Solving this system gives the radial distortion parameters k1, k2, and k3, where (x, y) is the actual position of P in the image coordinate system and (xu, yu) is the ideal position of P given by the imaging principle.

Let the tangential distortion parameters of the camera be p1 and p2, satisfying the following system of equations:

xu = x + 2·p1·x·y + p2·(r^2 + 2·x^2)
yu = y + p1·(r^2 + 2·y^2) + 2·p2·x·y

Solving this system gives the tangential distortion parameters p1 and p2;

(1-7) With the radial distortion parameters k1, k2, k3 and tangential distortion parameters p1, p2 obtained in step (1-6), the actual position of P is converted to the ideal position by applying the distortion equations above, and the ideal position is converted back to the actual position by inverting those equations;

(1-8) Repeat steps (1-1) through (1-7) for each of the multiple cameras to complete the calibration of every camera;

(1-9) Let the rotation matrix between the first and second cameras be R12 and the translation matrix between them be Tran12; the pose transformation matrix between the first and second cameras is then T12 = [R12 Tran12];

(1-10) Repeat step (1-9) for every pair of cameras to obtain the pose transformation matrices T12, T23, T34 of the multiple cameras;

(2) Use the cameras calibrated in step (1) to actively track the target to be grasped, as follows:

(2-1) With the calibrated cameras placed at the upper left, upper right, and front of the mechanical arm, let S' be the projection of the target center S on the camera imaging plane Pd. Compute the distance d between S' and the center C of the imaging plane and set a distance threshold th. If d > th, send an adjustment command to the camera so that the lens rotates in the direction that reduces d, until d ≤ th; if d ≤ th, the lens keeps its original position;

(2-2) Use a particle filter tracking algorithm for joint active tracking of the target to be grasped by the cameras, as follows:

(2-2-1) Let any one of the cameras acquire a video sequence of the target, and manually annotate the target region in the image at time t0;

(2-2-2) Taking the center of the annotated target region as the origin, generate a particle set of M particles, m = 1, ..., M, where each particle represents a region in which the target may be located. Assuming the motion of the particles follows a normal distribution and each particle propagates independently, obtain the particle set at time t and the particle set at time t+1;

(2-2-3) Let the reference histogram of the particle set at time t0 be q*, with L gray levels in total, and let the color histogram of the particle set at time t be qt(x) = qt(n; x), n = 1, 2, ..., L, where x is a particle of the set. After each particle of the time-t set propagates independently, observe the resulting time-(t+1) particle set, obtain the color histogram of each particle's region and the reference histogram, and compute the Bhattacharyya distance D between each particle's color histogram and the reference histogram. Define the particle weight ω and set ω = D; the number of particles N is 200;

(2-2-4) Perform a posterior probability calculation over the particle weights to obtain the expected presence position of the particles at time t+1, E(x_{t+1}) = Σ ω_{t+1}·x_{t+1} (summed over the particles, with normalized weights), where ω_{t+1} are the particle weights at time t+1;

(2-2-5) Take the expected value E(x_{t+1}) as the optimal estimate of the target's presence probability at time t+1, and take the center of the region covered by the particle with the highest presence probability as the center of the target region;

(2-2-6) Repeat steps (2-2-2) through (2-2-5) to obtain the optimal presence-probability estimate and the target-region center from time t to the next time t+1;

(2-2-7) Repeat step (2-1) so that the camera lens is aimed at the target-region center obtained in step (2-2-6);

(2-2-8) Extract the local feature points of the target region of step (2-2-7);

(2-2-9) Let the other cameras acquire video sequences of the target, and extract local feature points from each of these sequences;

(2-2-10) Match the local feature points of the target region extracted in step (2-2-8) against all local feature points extracted in step (2-2-9), obtaining the precise target region in each of the other cameras;

(2-2-11) Repeat steps (2-2-2) through (2-2-10), tracking the target region obtained for each camera, to realize joint multi-camera active tracking of the target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510072044.XA CN104647390B (en) | 2015-02-11 | 2015-02-11 | Multi-camera combined active target tracking method for teleoperation of a mechanical arm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510072044.XA CN104647390B (en) | 2015-02-11 | 2015-02-11 | Multi-camera combined active target tracking method for teleoperation of a mechanical arm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104647390A true CN104647390A (en) | 2015-05-27 |
CN104647390B CN104647390B (en) | 2016-02-10 |
Family
ID=53239282
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510072044.XA Active CN104647390B (en) | Multi-camera combined initiative object tracking method for teleoperation of mechanical arm | 2015-02-11 | 2015-02-11 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104647390B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2861699A (en) * | 1950-10-16 | 1958-11-25 | Gen Mills Inc | Method and apparatus for performing operations at a remote point |
US5300869A (en) * | 1992-07-30 | 1994-04-05 | Iowa State University Research Foundation, Inc. | Nonholonomic camera space manipulation |
CN1590040A (en) * | 2003-09-03 | 2005-03-09 | 中国科学院自动化研究所 | Camera self-calibration method based on robot motion |
WO2009059716A1 (en) * | 2007-11-05 | 2009-05-14 | Sebastian Repetzki | Pointing device and method for operating the pointing device |
CN103209809A (en) * | 2010-05-14 | 2013-07-17 | 康耐视公司 | System and method for robust calibration between a machine vision system and a robot |
CN103170973A (en) * | 2013-03-28 | 2013-06-26 | 上海理工大学 | Man-machine cooperation device and method based on Kinect video camera |
Non-Patent Citations (2)
Title |
---|
LIU YAHUI: "Research on Key Technologies of Multi-View Vision Systems for Intelligent Space", China Doctoral Dissertations Full-Text Database, Information Science and Technology, 2011 * |
SUN MEIXIA: "Tracking and Detection System for Serial Robots Based on Stereo Vision", China Master's Theses Full-Text Database, Information Science and Technology, 2012 * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105447869A (en) * | 2015-11-30 | 2016-03-30 | 四川华雁信息产业股份有限公司 | Particle swarm optimization algorithm based camera self-calibration method and apparatus |
CN105447869B (en) * | 2015-11-30 | 2019-02-12 | 四川华雁信息产业股份有限公司 | Camera self-calibration method and device based on particle swarm optimization algorithm |
CN106023139A (en) * | 2016-05-05 | 2016-10-12 | 北京圣威特科技有限公司 | Indoor tracking and positioning method based on multiple cameras and system |
CN106023139B (en) * | 2016-05-05 | 2019-05-17 | 北京圣威特科技有限公司 | A kind of indoor tracking and positioning method and system based on multiple-camera |
CN106934353B (en) * | 2017-02-28 | 2020-08-04 | 北京奥开信息科技有限公司 | Face recognition and active tracking method for endowment robot |
CN106934353A (en) * | 2017-02-28 | 2017-07-07 | 北京奥开信息科技有限公司 | A kind of method of the recognition of face and active tracing for robot of supporting parents |
CN107150343A (en) * | 2017-04-05 | 2017-09-12 | 武汉科技大学 | A kind of system that object is captured based on NAO robots |
CN107150343B (en) * | 2017-04-05 | 2019-07-23 | 武汉科技大学 | A kind of system based on NAO robot crawl object |
CN108074264A (en) * | 2017-11-30 | 2018-05-25 | 深圳市智能机器人研究院 | A kind of classification multi-vision visual localization method, system and device |
CN111481293A (en) * | 2020-04-16 | 2020-08-04 | 首都医科大学 | Multi-viewpoint optical positioning method and system based on optimal viewpoint selection |
CN114074320A (en) * | 2020-08-10 | 2022-02-22 | 库卡机器人(广东)有限公司 | Robot control method and device |
CN114074320B (en) * | 2020-08-10 | 2023-04-18 | 库卡机器人(广东)有限公司 | Robot control method and device |
CN113687627A (en) * | 2021-08-18 | 2021-11-23 | 太仓中科信息技术研究院 | Target tracking method based on camera robot |
CN113687627B (en) * | 2021-08-18 | 2022-08-19 | 太仓中科信息技术研究院 | Target tracking method based on camera robot |
CN117464692A (en) * | 2023-12-27 | 2024-01-30 | 中信重工机械股份有限公司 | Lining plate grabbing mechanical arm control method based on structured light vision system |
CN117464692B (en) * | 2023-12-27 | 2024-03-08 | 中信重工机械股份有限公司 | Lining plate grabbing mechanical arm control method based on structured light vision system |
Also Published As
Publication number | Publication date |
---|---|
CN104647390B (en) | 2016-02-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104647390B (en) | Multi-camera combined initiative object tracking method for teleoperation of mechanical arm | |
CN107659774B (en) | Video imaging system and video processing method based on multi-scale camera array | |
CN108111818A (en) | Moving target active perception method and apparatus based on multiple-camera collaboration | |
EP2899691A1 (en) | Target tracking method and system for intelligent tracking high speed dome camera | |
WO2012151777A1 (en) | Multi-target tracking close-up shooting video monitoring system | |
CN103024350A (en) | Master-slave tracking method for binocular PTZ (Pan-Tilt-Zoom) visual system and system applying same | |
CN105059190B (en) | Vision-based automobile door-opening collision warning device and method |
JP5105481B2 (en) | Lane detection device, lane detection method, and lane detection program | |
CN109345587B (en) | Hybrid vision positioning method based on panoramic vision and monocular vision | |
CN111536981B (en) | Embedded binocular non-cooperative target relative pose measurement method | |
US20210377432A1 (en) | Information processing apparatus, information processing method, program, and interchangeable lens | |
CN114268736B (en) | Tower foundation ball-type camera shooting method with high space coverage | |
WO2020063058A1 (en) | Calibration method for multi-degree-of-freedom movable vision system | |
CN113276106A (en) | Climbing robot space positioning method and space positioning system | |
CN104680528A (en) | Space positioning method of explosive-handling robot based on binocular stereo vision | |
WO2020135187A1 (en) | Unmanned aerial vehicle recognition and positioning system and method based on rgb_d and deep convolutional network | |
WO2017187694A1 (en) | Region of interest image generating device | |
CN112307912A (en) | Method and system for determining personnel track based on camera | |
CN115563732A (en) | Spraying track simulation optimization method and device based on virtual reality | |
KR100948872B1 (en) | Camera image correction method and apparatus | |
CN110991306B (en) | Self-adaptive wide-field high-resolution intelligent sensing method and system | |
KR101977635B1 (en) | Multi-camera based aerial-view 360-degree video stitching and object detection method and device | |
CN111800588A (en) | Optical unmanned aerial vehicle monitoring system based on three-dimensional light field technology | |
CN108694713A (en) | Ring segment recognition and measurement method for satellite-rocket docking ring parts based on stereo vision |
CN114905512A (en) | Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |