CN105955271A - Multi-robot coordinated motion method based on multi-view geometry - Google Patents

Multi-robot coordinated motion method based on multi-view geometry

Info

Publication number
CN105955271A
Authority
CN
China
Prior art keywords
robot
projection
machine
following
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610333164.5A
Other languages
Chinese (zh)
Other versions
CN105955271B (en)
Inventor
Wan Cheng (万程)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201610333164.5A priority Critical patent/CN105955271B/en
Publication of CN105955271A publication Critical patent/CN105955271A/en
Application granted granted Critical
Publication of CN105955271B publication Critical patent/CN105955271B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Abstract

The present invention discloses a multi-robot coordinated motion method based on multi-view geometry. The method comprises the following steps: designating two robots in a multi-robot system as leader robots and the other robots as follower robots; computing the bifocal tensors between a follower robot and the two leader robots to obtain two projection lines of the follower robot's target position on its image, the intersection of the two lines being the projection point of the target position on the follower robot's image; making the follower robot move along the line connecting the optical center of its camera image and the target-position projection point until the projection points of the leader robots coincide with their projection positions before the motion; and repeating the above processing for the remaining follower robots one by one, so that the relative positions of the follower robots and the two leader robots stay consistent with their relative positions before the motion. The present invention applies multi-view geometry to a multi-robot system, introduces the mutual projection of multiple cameras, improves control stability, and promotes the role of multi-view geometry in robot control.

Description

Multi-robot coordinated motion method based on multi-view geometry
Technical field
The present invention relates to the technical field of robot control, and in particular to a multi-robot coordinated motion method based on multi-view geometry.
Background technology
Coordinated motion of multiple robots has a wide range of applications in real life. On the one hand, some tasks are difficult to complete with a single robot and can only be accomplished well through the cooperation of multiple robots; on the other hand, coordination among robots improves the operating efficiency of the robot system. Many applications call for multi-robot coordinated motion; in particular, tasks such as military operations, production and fire fighting require the robots to keep a certain formation while working and moving, so controlling the robots to hold their ranks is especially important. This is robot formation control.
In the research and application of multi-robot coordinated operation, existing control methods lack stability, so the robots can only achieve coordinated motion in relatively simple environments such as indoor spaces or flat ground. This strongly limits the scope and conditions of application of the robots, and makes it difficult to realize multi-robot coordination in more general, complex terrain.
Summary of the invention
Purpose of the invention: in view of the deficiencies of the prior art, the present invention provides a multi-robot coordinated motion method based on multi-view geometry, which improves the stability of the control method so that the robots can be used in more complex, fully dynamic environments.
Technical scheme: the multi-robot coordinated motion method based on multi-view geometry of the present invention, for a multi-robot system, comprises the following steps:
(1) designate two robots in the multi-robot system as leader robots and the remaining robots as follower robots; the two leader robots move first;
(2) select one follower robot and denote it C1; the two leader robots are denoted C2 and C3. The positions of C1, C2, C3 before the motion are denoted C1^t, C2^t, C3^t respectively. C2 and C3 move first, and their positions after the motion are C2^(t+1) and C3^(t+1) respectively;
(3) compute the bifocal tensor F12 between C1^t and C2^(t+1), and use the formula l2 = F12 e21^t to compute the projection line l2 of the target position C1^(t+1) of C1 on the image of C1, where the epipole e21^t is the projection of C1^t on the image of C2^t;
(4) compute the bifocal tensor F13 between C1^t and C3^(t+1), and use the formula l3 = F13 e31^t to compute the projection line l3 of the target position C1^(t+1) on the image of C1, where the epipole e31^t is the projection of C1^t on the image of C3^t; the intersection point x of the projection line l3 and the projection line l2 obtained in step (3) is the projection point of the target position C1^(t+1) on the image of C1 (a computational sketch of steps (3) and (4) is given after this list);
(5) make C1 move along the line connecting the optical center of its camera image and the target-position projection point x, until the projection points e12, e13 of C2 and C3 on the camera image of C1 coincide with the projection positions e12^t, e13^t before the motion; the relative positions of C1, C2, C3 after the motion are then consistent with their relative positions before the motion, completing the formation-keeping control of the coordinated motion of one follower robot and the two leader robots;
(6) repeat steps (2) to (5) for the remaining follower robots one by one, so that the relative positions of each remaining follower robot and the two leader robots stay consistent with their relative positions before the motion, thereby completing the formation-keeping control of the coordinated motion of all robots of the multi-robot system.
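By way of illustration only, the following minimal Python/NumPy sketch shows how steps (3) and (4) can be carried out once the bifocal tensors F12, F13 and the epipoles e21^t, e31^t are available as arrays in homogeneous coordinates; the function name and interface are assumptions made for the example, not part of the claimed method.

```python
import numpy as np

def target_projection_point(F12, F13, e21_t, e31_t):
    """Steps (3)-(4): intersect the two projection lines l2 = F12 e21^t and
    l3 = F13 e31^t to obtain the image of the target position C1^(t+1)."""
    l2 = F12 @ e21_t              # projection line induced by leader C2
    l3 = F13 @ e31_t              # projection line induced by leader C3
    x = np.cross(l2, l3)          # intersection of two lines in homogeneous coordinates
    if abs(x[2]) < 1e-12:         # (nearly) parallel lines: no finite intersection
        raise ValueError("projection lines do not meet in a finite image point")
    return x / x[2]               # normalized pixel coordinates (u, v, 1)
```

The returned pixel x is the point toward which the follower robot is driven in step (5).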
As a further improvement of the above technical scheme, in step (3) and step (4) the algorithm for the bifocal tensor is as follows:
x, x' are the projection points of a spatial point X on the two image planes; the element f_ji of the bifocal tensor F is the coefficient of x'_j x_i in the expansion of the determinant, and can be written as
f_ji = ε_ipq ε_jrs det[ a^p ; a^q ; b^r ; b^s ]  (summation over the repeated indices p, q, r, s),
where a^i and b^i are the row vectors of the projection matrices A and B that project the spatial point X to the points x and x' on the two images respectively; for r, s, t = 1, 2, 3 the tensor ε_rst is defined as follows:
ε_rst = +1 if (r, s, t) is an even permutation of (1, 2, 3), −1 if it is an odd permutation, and 0 otherwise (i.e. if any two indices are equal);
for a given value of i, the tensor ε_ipq is therefore zero unless p and q are different from i and from each other, and similarly for ε_jrs.
From the above, each group of corresponding points provides one bilinear relation. Writing x = (u, v, 1)^T and x' = (u', v', 1)^T in x'^T F x = 0 and expanding gives
(uu', vu', u', uv', vv', v', u, v, 1) · (f11, f12, f13, f21, f22, f23, f31, f32, f33)^T = 0;
in the two-view case, given 8 groups of corresponding points, the following system of linear equations is obtained:
$$
\begin{bmatrix}
u_1 u_1' & v_1 u_1' & u_1' & u_1 v_1' & v_1 v_1' & v_1' & u_1 & v_1 & 1 \\
\vdots & & & & & & & & \vdots \\
u_8 u_8' & v_8 u_8' & u_8' & u_8 v_8' & v_8 v_8' & v_8' & u_8 & v_8 & 1
\end{bmatrix}
\begin{pmatrix} f_{11} \\ f_{12} \\ \vdots \\ f_{33} \end{pmatrix} = \mathbf{0}.
$$
This is a homogeneous system, so f is determined only up to a scale factor; when the rank of the coefficient matrix is 8, the solution is unique up to scale and can be obtained with a linear algorithm.
As a further improvement, in step (3) and step (4) the bifocal tensor may instead be computed from the mutual projections of the cameras, using the epipole relations F e12 = 0 and F^T e21 = 0:
define a vector t formed by the entries of the multifocal tensor: in the two-view case t consists of the 9 entries f_ij; in the three-view case, of the 27 entries of the trifocal tensor; in the four-view case, of the 81 entries of the quadrifocal tensor. If there are N corresponding points, the relation Mt = 0 holds, where M is respectively an N×9, 9N×27 or 81N×81 matrix;
assume that N1 linearly independent relations M1 t = 0 are obtained from the given epipoles and N2 linearly independent relations M2 t = 0 from the corresponding points, and combine them into a single linear system [M1^T, M2^T]^T t = 0. Add zero rows to M1 to create a square matrix M1', extract V from the SVD M1' = U D V^T, obtain V' by deleting the first N1 columns of V, solve t' from minimizing ||M2 V' t'||, and finally obtain the result t from t = V' t'.
As a further improvement, in step (5), when the Euclidean distances between the projections e12, e13 on the image of C1^t and the projection positions e12^t, e13^t before the motion are both less than or equal to 5 pixels, the relative positions of C1, C2, C3 after the motion are considered consistent with their relative positions before the motion.
Beneficial effects: compared with the prior art, the present invention has the following advantages. Multi-view geometry has been applied to the field of automatic motion control for some twenty years, for example in three-dimensional reconstruction and visual sensing, and is widely used in the automatic control of multiple robots, automatic driving of automobiles, camera systems, motion recognition and so on. Traditional multi-view geometry, however, is mostly applied to static environments and is not well suited to dynamic environments. The present invention proposes a new use of multi-view geometry, introducing the knowledge of the mutual projection of multiple cameras and lifting the study of multi-view geometry to a broader field. Its stability is greatly improved compared with conventional multi-view geometry, and the role of multi-view geometry in robot control is promoted, so that robots can be used in more complex, fully dynamic environments and the technical problem of stability is well solved. Robots of higher applicability can be manufactured on the basis of this theory, meeting people's ever-increasing production and living requirements.
Brief description of the drawings
Fig. 1 is a schematic diagram of using the bifocal tensor to realize coordinated motion of the robots.
Detailed description of the invention
The technical solution of the present invention is described in detail below with reference to the accompanying drawing.
Embodiment 1: the multi-robot coordinated motion method based on multi-view geometry of the present invention comprises the following steps:
(1) Take two robots in the multi-robot system as leader robots and the remaining robots as follower robots. First consider the coordinated motion method between one follower robot and the leader robots; the follower robot is denoted C1 and the two leader robots are denoted C2 and C3 respectively.
(2) The positions of the three robots before the motion are C1^t, C2^t, C3^t respectively, and the positions after the motion are denoted C1^(t+1), C2^(t+1), C3^(t+1). When the leader robots have moved to C2^(t+1) and C3^(t+1), the follower robot is still at its initial position C1^t, as shown in Fig. 1; the position C1^(t+1) is determined by the following calculation.
(3) Compute the bifocal tensor F12 between C1^t and C2^(t+1). According to multi-view geometry, the projection line l2 of C1^(t+1) on the image is obtained from the formula l2 = F12 e21^t, where the epipole e21^t is the projection of C1^t on C2^t; e21^t records the formation before the motion.
(4) As in step (3), compute the bifocal tensor F13 between C1 and C3^(t+1), and use the relation l3 = F13 e31^t between the bifocal tensor and the epipole e31^t to compute the projection line l3 of C1^(t+1) on the image of C1^t, where e31^t is the projection of C1^t on C3^t. Combined with the projection line l2 obtained in step (3), the intersection point x of the two lines is the projection of the target position C1^(t+1) on the image of C1^t.
(5) The follower robot C1 moves along the line connecting C1^t and x, until the Euclidean distances between the positions of e12 and e13 on the image of C1^t and the positions e12^t and e13^t are both less than or equal to 5 pixels, so that the relative positions of C1^(t+1), C2^(t+1), C3^(t+1) are consistent with those of C1^t, C2^t, C3^t; this completes the formation-keeping control of the robots' coordinated motion (see the sketch after this step). The other robots can "follow up" in the same way.
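For illustration, step (5) can be sketched as the following loop. The helpers observe_leader_projections and step_towards are hypothetical placeholders for the robot's perception and actuation interfaces; only the 5-pixel stopping criterion is taken from the method itself.

```python
import numpy as np

PIXEL_TOLERANCE = 5.0  # the 5-pixel threshold used in step (5)

def follow_until_formation_restored(x, e12_t, e13_t,
                                    observe_leader_projections, step_towards,
                                    max_steps=1000):
    """Drive C1 along the ray through the target projection point x until the
    leaders' projections return to their pre-motion positions e12^t, e13^t."""
    for _ in range(max_steps):
        e12, e13 = observe_leader_projections()          # current projections of C2, C3
        d12 = np.linalg.norm(np.asarray(e12) - np.asarray(e12_t))
        d13 = np.linalg.norm(np.asarray(e13) - np.asarray(e13_t))
        if d12 <= PIXEL_TOLERANCE and d13 <= PIXEL_TOLERANCE:
            return True                                  # formation restored
        step_towards(x)                                  # small step along the C1^t-to-x line
    return False                                         # no convergence within max_steps
```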
The algorithms for the bifocal tensors F12 and F13 are as follows:
(1) corresponding point method:
x, x' are the projection points of a spatial point X on the two image planes; the element f_ji of the bifocal tensor F is the coefficient of x'_j x_i in the expansion of the determinant, and can be written as
f_ji = ε_ipq ε_jrs det[ a^p ; a^q ; b^r ; b^s ]  (summation over the repeated indices p, q, r, s),
where a^i and b^i are the row vectors of the projection matrices A and B that project the spatial point X to the points x and x' on the two images respectively; for r, s, t = 1, 2, 3 the tensor ε_rst is defined as follows:
ε_rst = +1 if (r, s, t) is an even permutation of (1, 2, 3), −1 if it is an odd permutation, and 0 otherwise (i.e. if any two indices are equal).
ε_ipq and ε_jrs are defined in the same way as ε_rst; therefore, for a given value of i, ε_ipq is zero unless p and q are different from i and from each other, and similarly for ε_jrs. The sum above therefore contains only four nonzero terms, and the determinants appearing in these four terms all contain the same four rows of the matrices A and B, so their values are equal up to sign; the values of ε_ipq ε_jrs make the four terms equal with the same sign.
From the above, each group of corresponding points provides one bilinear relation. Then, writing x = (u, v, 1)^T and x' = (u', v', 1)^T in x'^T F x = 0 and expanding, we obtain
(uu', vu', u', uv', vv', v', u, v, 1) · (f11, f12, f13, f21, f22, f23, f31, f32, f33)^T = 0.
If 8 groups of matched points are given, the following system of linear equations is obtained:
$$
\begin{bmatrix}
u_1 u_1' & v_1 u_1' & u_1' & u_1 v_1' & v_1 v_1' & v_1' & u_1 & v_1 & 1 \\
\vdots & & & & & & & & \vdots \\
u_8 u_8' & v_8 u_8' & u_8' & u_8 v_8' & v_8 v_8' & v_8' & u_8 & v_8 & 1
\end{bmatrix}
\begin{pmatrix} f_{11} \\ f_{12} \\ \vdots \\ f_{33} \end{pmatrix} = \mathbf{0}.
$$
This is a homogeneous system of equations, so f can be determined only up to a scale factor. If a solution exists, the rank of the coefficient matrix on the left is at most 8; if the rank is exactly 8, the solution is unique up to scale and can be found with a linear algorithm.
It can be seen that in the two-view case, the number of corresponding points needed for the linear solution of multi-view geometry is 8.
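For illustration, the linear algorithm just described might be sketched as follows in Python/NumPy. The function name is an assumption made for the example; taking the right singular vector of the smallest singular value is one standard way of extracting the up-to-scale solution of the homogeneous system.

```python
import numpy as np

def bifocal_tensor_from_points(pts1, pts2):
    """Linear estimate of the bifocal tensor F from >= 8 corresponding points.
    pts1, pts2: (N, 2) arrays of pixel coordinates with pts1[k] <-> pts2[k]."""
    assert pts1.shape == pts2.shape and pts1.shape[0] >= 8
    rows = []
    for (u, v), (up, vp) in zip(pts1, pts2):
        # coefficients of (f11, f12, f13, f21, f22, f23, f31, f32, f33)
        rows.append([u * up, v * up, up, u * vp, v * vp, vp, u, v, 1.0])
    M = np.asarray(rows)
    _, _, Vt = np.linalg.svd(M)       # null vector = right singular vector
    F = Vt[-1].reshape(3, 3)          # belonging to the smallest singular value
    return F / np.linalg.norm(F)      # F is only defined up to scale
```

In practice the pixel coordinates are usually normalized before forming the system to improve numerical conditioning; that refinement is not described in the text above.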
(2) mutual sciagraphy
If the projection of the second camera is visible in the first image, an approximation of the epipole can be obtained; it is denoted e12, and e21 can be defined similarly. The two epipoles and the fundamental matrix satisfy the relations
F e12 = 0 and F^T e21 = 0.
Each of these relations provides three linear constraints. The two relations are not independent of each other, however: if they are combined, only five linear constraints are obtained in total.
When there are two cameras, there are two projection situations:
(1) only one camera is projected onto the image of the other camera;
(2) the two cameras project onto each other.
In situation (1) one epipole is obtained, which provides 3 independent linear constraints.
In situation (2) two epipoles are obtained, which provide 5 linearly independent constraints.
Therefore, to compute the bifocal tensor, 8 corresponding points are needed when mutual projection is not considered. When mutual projection is considered, only 5 corresponding points are needed in the first situation and only 3 in the second.
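To make the constraint counting concrete, the following sketch assembles the linear constraint rows contributed by the epipoles on the vector t = (f11, ..., f33) introduced below; the function name and the row-major ordering of t are assumptions made for the example.

```python
import numpy as np

def epipole_constraint_rows(e12, e21=None):
    """Rows of the constraint matrix M1 implied by the epipoles:
    F e12 = 0 gives three rows; if the cameras project onto each other,
    e21^T F = 0 adds three more (only five of the six are independent)."""
    e12 = np.asarray(e12, dtype=float)
    rows = []
    for i in range(3):                     # i-th component of F e12 = 0
        r = np.zeros(9)
        r[3 * i: 3 * i + 3] = e12          # sum_j f_ij * (e12)_j = 0
        rows.append(r)
    if e21 is not None:                    # mutual projection: e21^T F = 0
        e21 = np.asarray(e21, dtype=float)
        for j in range(3):                 # j-th component of e21^T F = 0
            r = np.zeros(9)
            r[j::3] = e21                  # sum_i f_ij * (e21)_i = 0
            rows.append(r)
    return np.vstack(rows)
```

When both epipoles are supplied, the six rows satisfy one linear dependency, so their rank is five, matching the count given above.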
A practical method is now introduced by which the multifocal tensor can be computed linearly in the case of mutual projection.
Define a vector t formed by the entries of the multifocal tensor: in the two-view case t consists of the 9 entries f_ij; in the three-view case, of the 27 entries of the trifocal tensor; in the four-view case, of the 81 entries of the quadrifocal tensor. If there are N corresponding points, the relation Mt = 0 holds, where M is respectively an N×9, 9N×27 or 81N×81 matrix.
Assume that N1 linearly independent relations M1 t = 0 are obtained from the given epipoles and N2 linearly independent relations M2 t = 0 are obtained from the corresponding points, and combine them into a single linear system:
[M1^T, M2^T]^T t = 0.
If this system contains more linear relations than are required, the conventional approach is to find a least-squares solution. The least-squares solution minimizes the algebraic distance ||M t|| subject to ||t|| = 1, but such a solution does not strictly satisfy the constraints given by the epipoles: the epipoles computed from the t obtained by standard least squares do not correspond to the given epipoles. To solve this problem, a least-squares solution is sought that minimizes ||M2 t|| while satisfying M1 t = 0 exactly. Singular value decomposition can be used effectively here. Specifically, zero rows are added to M1 to create a square matrix M1'. V is extracted from the SVD M1' = U D V^T, and V' is obtained by deleting the first N1 columns of V. t' is then solved from minimizing ||M2 V' t'||, and the result t is finally obtained from t = V' t'.
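A minimal sketch of this SVD procedure, under the same assumptions as in the previous sketch (a length-9 vector t, with M1 from the epipoles and M2 from the corresponding points), might look as follows; it is an illustrative implementation, not a definitive one.

```python
import numpy as np

def constrained_tensor_solve(M1, M2):
    """Minimize ||M2 t|| subject to M1 t = 0 and ||t|| = 1 (two-view case)."""
    n = M1.shape[1]                              # 9 for the bifocal tensor
    N1 = np.linalg.matrix_rank(M1)               # independent epipole constraints
    pad = max(n - M1.shape[0], 0)                # pad M1 with zero rows to a square M1'
    M1_sq = np.vstack([M1, np.zeros((pad, n))])
    _, _, Vt = np.linalg.svd(M1_sq)
    V_prime = Vt.T[:, N1:]                       # drop first N1 columns: null space of M1
    _, _, Wt = np.linalg.svd(M2 @ V_prime)       # minimize ||M2 V' t'|| over unit t'
    t_prime = Wt[-1]
    t = V_prime @ t_prime                        # t = V' t' satisfies M1 t = 0 exactly
    return t / np.linalg.norm(t)
```

With N1 = 5 (both cameras projecting onto each other), V' has four columns, so three corresponding points in M2 are generically enough to determine t, which is the 3-point case noted above.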
Although the present invention has been shown and described above with reference to specific preferred embodiments, this should not be construed as limiting the present invention itself. Various changes in form and detail may be made without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (4)

1. A multi-robot coordinated motion method based on multi-view geometry, for a multi-robot system, characterized by comprising the following steps:
(1) designating two robots in the multi-robot system as leader robots and the remaining robots as follower robots, the two leader robots moving first;
(2) selecting one follower robot and denoting it C1, the two leader robots being denoted C2 and C3; the positions of C1, C2, C3 before the motion are denoted C1^t, C2^t, C3^t respectively; C2 and C3 move first, and their positions after the motion are C2^(t+1) and C3^(t+1) respectively;
(3) computing the bifocal tensor F12 between C1^t and C2^(t+1), and using the formula l2 = F12 e21^t to compute the projection line l2 of the target position C1^(t+1) of C1 on the image of C1, where the epipole e21^t is the projection of C1^t on the image of C2^t;
(4) computing the bifocal tensor F13 between C1^t and C3^(t+1), and using the formula l3 = F13 e31^t to compute the projection line l3 of the target position C1^(t+1) of C1 on the image of C1, where the epipole e31^t is the projection of C1^t on the image of C3^t; the intersection point x of the projection line l3 and the projection line l2 obtained in step (3) is the projection point of the target position C1^(t+1) on the image of C1;
(5) making C1 move along the line connecting the optical center of its camera image and the target-position projection point x, until the projection points e12, e13 of C2 and C3 on the camera image of C1 coincide with the projection positions e12^t, e13^t before the motion, so that the relative positions of C1, C2, C3 after the motion are consistent with their relative positions before the motion, completing the formation-keeping control of the coordinated motion of one follower robot and the two leader robots;
(6) repeating steps (2) to (5) for the remaining follower robots one by one, so that the relative positions of each remaining follower robot and the two leader robots remain consistent with their relative positions before the motion, thereby completing the formation-keeping control of the coordinated motion of all robots of the multi-robot system.
2. The multi-robot coordinated motion method based on multi-view geometry according to claim 1, characterized in that in step (3) and step (4) the algorithm for the bifocal tensor is as follows:
x, x' are the projection points of a spatial point X on the two image planes; the element f_ji of the bifocal tensor F is the coefficient of x'_j x_i in the expansion of the determinant, written as
f_ji = ε_ipq ε_jrs det[ a^p ; a^q ; b^r ; b^s ]  (summation over the repeated indices p, q, r, s),
where a^i and b^i are the row vectors of the projection matrices A and B that project the spatial point X to the points x and x' on the two images respectively; for r, s, t = 1, 2, 3 the tensor ε_rst is defined as +1 if (r, s, t) is an even permutation of (1, 2, 3), −1 if it is an odd permutation, and 0 otherwise;
for a given value of i, the tensor ε_ipq is zero unless p and q are different from i and from each other, and similarly for ε_jrs;
from the above, each group of corresponding points provides one bilinear relation; writing x = (u, v, 1)^T and x' = (u', v', 1)^T in x'^T F x = 0 and expanding gives
(uu', vu', u', uv', vv', v', u, v, 1) · (f11, f12, f13, f21, f22, f23, f31, f32, f33)^T = 0;
in the two-view case, given 8 groups of corresponding points, the following system of linear equations is obtained:
$$
\begin{bmatrix}
u_1 u_1' & v_1 u_1' & u_1' & u_1 v_1' & v_1 v_1' & v_1' & u_1 & v_1 & 1 \\
\vdots & & & & & & & & \vdots \\
u_8 u_8' & v_8 u_8' & u_8' & u_8 v_8' & v_8 v_8' & v_8' & u_8 & v_8 & 1
\end{bmatrix}
\begin{pmatrix} f_{11} \\ f_{12} \\ \vdots \\ f_{33} \end{pmatrix} = \mathbf{0};
$$
this is a homogeneous system, in which f is determined only up to a scale factor; when the rank is 8, the unique solution (up to scale) can be obtained with a linear algorithm.
3. The multi-robot coordinated motion method based on multi-view geometry according to claim 1, characterized in that in step (3) and step (4) the algorithm for the bifocal tensor is as follows, using the epipole relations F e12 = 0 and F^T e21 = 0:
define a vector t formed by the entries of the multifocal tensor: in the two-view case t consists of the 9 entries f_ij; in the three-view case, of the 27 entries of the trifocal tensor; in the four-view case, of the 81 entries of the quadrifocal tensor; if there are N corresponding points, the relation Mt = 0 holds, where M is respectively an N×9, 9N×27 or 81N×81 matrix;
assume that N1 linearly independent relations M1 t = 0 are obtained from the given epipoles and N2 linearly independent relations M2 t = 0 from the corresponding points, and combine them into a single linear system [M1^T, M2^T]^T t = 0; add zero rows to M1 to create a square matrix M1'; extract V from the SVD M1' = U D V^T, and obtain V' by deleting the first N1 columns of V; solve t' from minimizing ||M2 V' t'||, and finally obtain the result t from t = V' t'.
4. The multi-robot coordinated motion method based on multi-view geometry according to claim 1, characterized in that in step (5), when the Euclidean distances between the projections e12, e13 on the image of C1^t and the projection positions e12^t, e13^t before the motion are both less than or equal to 5 pixels, the relative positions of C1, C2, C3 after the motion are considered consistent with their relative positions before the motion.
CN201610333164.5A 2016-05-18 2016-05-18 Multi-robot coordinated motion method based on multi-view geometry Active CN105955271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610333164.5A CN105955271B (en) 2016-05-18 2016-05-18 Multi-robot coordinated motion method based on multi-view geometry

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610333164.5A CN105955271B (en) 2016-05-18 2016-05-18 Multi-robot coordinated motion method based on multi-view geometry

Publications (2)

Publication Number Publication Date
CN105955271A true CN105955271A (en) 2016-09-21
CN105955271B CN105955271B (en) 2019-01-11

Family

ID=56912004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610333164.5A Active CN105955271B (en) 2016-05-18 2016-05-18 Multi-robot coordinated motion method based on multi-view geometry

Country Status (1)

Country Link
CN (1) CN105955271B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150356648A1 (en) * 2011-04-12 2015-12-10 Dan Baryakar Online Shopping by Multi Participants Via a Robot
CN102331711A (en) * 2011-08-12 2012-01-25 江苏合成物联网科技有限公司 Formation control method for mobile autonomous robots
CN105096341A (en) * 2015-07-27 2015-11-25 浙江大学 Mobile robot pose estimation method based on trifocal tensor and key frame strategy

Also Published As

Publication number Publication date
CN105955271B (en) 2019-01-11

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Wan Cheng; Xu Peiyuan; Sun Jing
Inventor before: Wan Cheng

GR01 Patent grant
GR01 Patent grant