CN100573388C - Robot control method with real-time automatic color acquisition, and robot - Google Patents

Robot control method with real-time automatic color acquisition, and robot

Info

Publication number
CN100573388C
CN100573388C CNB2008102020042A CN200810202004A
Authority
CN
China
Prior art keywords
color
robot
pixel
mark
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2008102020042A
Other languages
Chinese (zh)
Other versions
CN101398689A (en)
Inventor
熊蓉 (Xiong Rong)
李毅 (Li Yi)
褚健 (Chu Jian)
刘建良 (Liu Jianliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZHONGKONG SCIENCE AND TECHNOLOGY GROUP Co Ltd
Zhejiang University ZJU
Original Assignee
ZHONGKONG SCIENCE AND TECHNOLOGY GROUP Co Ltd
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHONGKONG SCIENCE AND TECHNOLOGY GROUP Co Ltd and Zhejiang University ZJU
Priority to CNB2008102020042A
Publication of CN101398689A
Application granted
Publication of CN100573388C
Expired - Fee Related (current legal status)
Anticipated expiration

Abstract

A robot control method with real-time automatic color acquisition comprises: (1) providing a top cover on top of the robot, on which several kinds of color-marker samples are distributed; (2) sampling the color-marker samples and performing threshold segmentation on the collected information of each color marker; (3) after the robot obtains a color global image, applying fast color segmentation together with a sequential connected-region search algorithm to obtain the information of every color patch in the image; (4) taking the color-patch information obtained in step (3), extracting the patches and performing triangular/trapezoidal template matching against pre-stored templates, finding the closest template, and thereby computing the pixel coordinates and orientation angle of the corresponding target in the current static global image; (5) completing the self-localization of the robot; (6) determining the path plan of the robot so as to realize its autonomous motion.

Description

Robot control method with real-time automatic color acquisition, and robot
Technical field
The present invention relates to the field of automation, and in particular to a control method for autonomous robot motion based on fast visual color acquisition and processing, and to the corresponding robot system.
Background art
An autonomous mobile robot can recognize and measure targets and obstacles in real time, learn and understand the external environment, plan its motion according to a set goal, adjust that plan in real time without human intervention, move to the destination and complete the assigned task. It can therefore take over work that is harmful to humans or that humans cannot perform directly, as well as dangerous and precision tasks such as field operations, deep-sea exploration, military reconnaissance, space development and the handling of nuclear contamination.
At present an autonomous mobile robot system consists of four parts: a locomotion mechanism, sensing mechanisms, a control mechanism and a decision-making mechanism. The locomotion mechanism is assembled from wheels or other moving devices and moves the robot. The sensing mechanisms include ultrasonic, laser, temperature/humidity, pressure and acceleration sensors, odometers, GPS and the like, which measure the robot's own motion state and position, information about the external environment, and the positions of other objects in that environment. The decision-making mechanism produces a path plan from the collected information, and the control mechanism drives the robot to carry out obstacle-avoiding motion.
Machine vision systems occupy an increasingly important place among robot sensors. Machine vision means reproducing human visual capability with a computer, that is, using a computer to recognize the objective three-dimensional world. A machine vision system generally involves the problem of color acquisition for the robot device, and in existing systems color acquisition is performed offline: an experienced engineer manually samples the colors of static color markers. This gives the prior art the following defects:
First, in the environment where the robot is actually used, the vision system is extremely sensitive to changes in illumination caused by human factors and the like, so color information acquired statically offline is not suitable for the real-time environment. In theory one could construct various illumination models and adapt their parameters during operation to reach an ideal result, but in practice the continual introduction of extra parameters accumulates systematic error and makes the vision part of the robot ever more unstable, or even causes it to collapse.
Second, when an autonomous mobile robot determines, in an image captured by a camera or similar collector, the pixel coordinates of another target (for example another autonomous mobile robot), it must decide which color a pixel belongs to. For the commonly used three-dimensional color spaces such as YUV, RGB and HSI, deciding whether a pixel belongs to one color takes 6 comparison operations, and deciding whether it belongs to one of n colors takes 6n comparisons. The amount of computation is very large and easily causes long decision delays.
Summary of the invention
One object of the present invention is to provide a robot control method with real-time automatic color acquisition, so as to solve the technical problem of the prior art that the vision system is extremely sensitive to illumination changes caused by human factors and the like, so that color information acquired statically offline is not suitable for the real-time environment and the robot's color recognition suffers.
Another object of the present invention is to provide a robot with real-time automatic color acquisition.
A robot control method with real-time automatic color acquisition comprises:
(1) providing a top cover on top of the robot, on which several kinds of color-marker samples are distributed;
(2) sampling the color-marker samples and performing threshold segmentation on the collected information of each color marker;
(3) after the robot obtains a color global image, applying fast color segmentation together with a sequential connected-region search algorithm to obtain the information of every color patch in the image;
(4) taking the color-patch information obtained in step (3), extracting the patches and performing triangular/trapezoidal template matching against pre-stored templates, finding the closest template, and thereby computing the pixel coordinates and orientation angle of the corresponding target in the current static global image;
(5) completing the self-localization of the robot;
(6) determining the path plan of the robot so as to realize its autonomous motion.
The present invention further comprises: the robot re-performing threshold segmentation of the colors of the top-cover color-marker samples periodically or on an event trigger.
In step (1) the central circular color marker of the top cover is yellow, the other circular color markers are distributed at certain distances from the center, in clockwise order red, orange, blue and green, and the circular markers are mounted on a black paper or plastic plate.
The fast color segmentation in step (3) is further as follows: first arrays Hcp, Scp and Icp are created, of sizes 360, 256 and 256, all initialized to 0; starting from the lowest bit, one bit is assigned to each color in turn, and when the threshold of a color is set, the bit for that color is set to 1 in the entries of Hcp, Scp and Icp within the corresponding ranges, and likewise for the other colors. To decide whether a pixel belongs to a color, the array elements indexed by the pixel's component values are read and bitwise-ANDed; if the bit for some color in the result is 1, the pixel belongs to that color, and if all bits are 0 the pixel does not belong to any of the defined colors. Thus two bitwise-AND operations on the values of the three color components directly determine the color class of the pixel value.
The sequential connected-region search algorithm in step (3) is further as follows: during a line scan, a run of adjacent pixels of the same color is taken as a Mark, each Mark recording the starting point, the number of pixels and the color class of the run, and a connected region of one color is defined as a Group, which records the color, sequence number, total pixel count, bounding rectangle and computed centroid of the region together with the information of each Mark. During the search, as the line scan proceeds, each newly found Mark is either added to an existing Group or generates a new Group according to its color, and adjacent Groups are merged. When a new Mark is found and its end point is reached, the previous row within its 8-connected neighborhood is checked for pixels of the same color; if there are none, a new connected region is assumed, a new Group is generated to record the Mark, the Mark becomes the initial root node of the region, and the new Group is appended to the end of the record list. If a pixel of the same color is found, the information of the corresponding Group is updated and the label of the Mark is revised. If the Mark is adjacent to several Groups of the same color at once, all of them are merged into the Group whose root node lies leftmost, and the Group information is updated.
The path plan of the robot is computed using a rapidly-exploring random tree algorithm.
Step (5) further comprises self-localization of the camera of the autonomous mobile robot: in a static global vision system the Tsai method is used to solve the 3D projection model; in a dynamic global vision system an improved Zhang Zhengyou algorithm is used, establishing the correspondence from multiple images.
A robot with real-time automatic color acquisition comprises a locomotion mechanism, sensing mechanisms, a control mechanism and a decision-making mechanism, and further comprises an image acquisition and processing mechanism. A top cover is provided on the robot, on which several kinds of color-marker samples are distributed. The image acquisition and processing mechanism comprises a lens, a camera, a video interface and a computer processing unit, and the computer processing unit further comprises:
a color segmentation sub-unit, which performs color segmentation on the acquired, digitized image;
a connected-region search sub-unit, which applies the sequential connected-region search algorithm to the color-segmented image to obtain the information of every color patch in the image;
a pattern recognition sub-unit, which extracts the color patches, performs triangular/trapezoidal template matching against pre-stored templates, finds the closest template, and thereby computes the pixel coordinates and orientation angle of the corresponding target in the current static global image;
a self-localization sub-unit, used to complete the self-localization of the robot;
a color threshold manager, used to sample the top-cover color markers and perform threshold segmentation of the colors;
a path planning sub-unit, used to determine the path plan of the robot.
The lens is a fixed-focus or zoom lens; the video interface is IEEE1394a/b, Camera Link or Gigabit Ethernet, depending on the camera model; the computer system is an x86 PC or industrial computer, or an ARM embedded development platform.
The autonomous robot may be an F180 small-size soccer robot.
Compared with the prior art, the present invention has the following technical effects. When the position of each target in the current image is determined, the sampling top cover has been mounted on the robot in advance, so the color samples taken at the robot's location reflect the current scene: the scene (current illumination and so on) in which the robot captures the global color image is broadly similar to the environment in which the color samples were taken. This improves the accuracy and speed of the color decisions and also improves the precision of the target positions obtained from the image.
During fast color segmentation the present invention needs only two bitwise-AND operations, using the acquired color samples, to determine the color information of every pixel, achieving fast segmentation.
The present invention can also resample periodically or on an event trigger, which further improves the accuracy of the color decisions.
Description of drawings
Fig. 1 is a model of the robot of the present invention;
Fig. 2 is a schematic diagram of the automatic color acquisition module;
Fig. 3 is a flow chart of the robot color acquisition process;
Fig. 4 is a flow chart of the robot control method with real-time automatic color acquisition of the present invention;
Fig. 5 is a schematic diagram of the sequential connected-region search algorithm;
Fig. 6 is a schematic diagram of the color-threshold re-segmentation algorithm.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings.
The core of the present invention is to replace the offline static sampling of the prior art with a robot color acquisition system that samples automatically and has obstacle-avoidance capability; this is realized by the algorithm modules of the robot's internal processor. In addition, obstacles are avoided autonomously through the path planning system in the processor, so that unsupervised obstacle avoidance is achieved.
A robot with real-time automatic color acquisition is shown in Fig. 1, which is a model of the robot. The robot of the present invention may be an F180 small-size soccer robot; the main inventive point is the top cover mounted on it, of which Fig. 2 is an example. As shown in Fig. 2, the top cover consists of five circular color-marker patches of 43-50 mm diameter and a matte-black paper or plastic top plate. The color order is as shown in Fig. 2: the center of the plate is yellow, and the other patches are evenly distributed at 50 mm from the center, in clockwise order red, orange, blue and green. The top cover is bolted to the autonomous mobile robot. The color classes and the number of markers on the top cover are chosen according to the concrete scene. The robot itself has an acquisition device that captures the markers on the top cover; this device (a camera, video camera, etc.) may be the equipment already present in the robot's image acquisition and processing mechanism or may be added separately, but normally the existing equipment is used. The top cover contains all the color information to be acquired. In a dynamic environment only the colors of this top cover, which strictly satisfy the requirements, are used as the adaptive input, while the other objects within the visual range are recognized at the same time.
A robot generally includes a locomotion mechanism, sensing mechanisms, a control mechanism and a decision-making mechanism. It additionally includes an image acquisition and processing mechanism, which usually already comprises a lens, a camera, a video interface and a computer processing unit. The lens may be a fixed-focus or zoom lens, chosen according to the environment; the video interface may be IEEE1394a/b, Camera Link, Gigabit Ethernet and so on, depending on the camera selected; the computer system may be an x86 PC or industrial computer, or an ARM embedded development platform.
Logically, the computer processing unit 1 further comprises (see Fig. 3):
a color segmentation sub-unit 11, which performs color segmentation on the acquired image;
a connected-region search sub-unit 12, which applies the sequential connected-region search algorithm to the color-segmented image to obtain the information of every color patch in the image;
a pattern recognition sub-unit 13, which extracts the color patches, performs triangular/trapezoidal template matching against pre-stored templates, finds the closest template, and thereby computes the pixel coordinates and orientation angle of the corresponding target in the current static global image;
a self-localization sub-unit 14, used to complete the self-localization of the robot;
a color threshold manager 15, used to sample the top-cover color markers and perform threshold segmentation of the colors. Taking a robot soccer match as an example, the robot samples the color markers before the match begins and stores the result in the color threshold manager; the color threshold manager can also be loaded offline from a configuration file that defines which colors are to be acquired. After the program starts, the bit-AND arrays described later are generated and each set of color thresholds is maintained. When the program exits, the threshold arrays are compressed and stored in the configuration file;
a path planning sub-unit 16, used to determine the path plan of the robot.
The sub-units of the computer processing unit 1 are normally implemented in software; they can be integrated into one processor and realized by that processor entity. The computer processing unit 1 can be connected to the actuating mechanism 2 so as to carry out the autonomous motion of the robot.
The present invention proposes a method of unsupervised real-time color acquisition, shown in Fig. 4. The system combines the machine vision and motion control algorithms of a physical autonomous mobile robot, verified in simulation, and can realize unsupervised automatic visual sampling. It comprises the following steps:
S101: a top cover is provided on top of the robot, on which several kinds of color-marker samples are distributed.
Five circular color-marker patches and a matte-black paper or plastic top plate form the top cover. A yellow circular marker is placed at the center of the top cover, and the other markers are evenly distributed at 50 mm from the center, in clockwise order red, orange, blue and green. Threaded holes are provided around the plate so that the top cover can be bolted to the top of an autonomous mobile robot; the autonomous mobile robot may be an F180 small-size soccer robot, but is not limited to this kind of robot.
S102: the color-marker samples are sampled, and threshold segmentation is performed on the collected information of each color marker.
The robot re-performs threshold segmentation of the top-cover color-marker samples periodically or on an event trigger, for example at every break in play or after a preset interval (such as ten minutes). This step mainly uses background-subtraction segmentation in chroma and brightness: as shown in Fig. 6, the region above the criterion dashed line is used to compute the threshold information of each color patch, and the patches are ordered by hue so that orange and red do not interfere with each other. Because the robot samples the marker information on site and then performs threshold segmentation, the thresholds reflect the actual conditions. In existing systems the thresholds are stored in advance under rather ideal conditions (for instance strong illumination), so the colors acquired on site deviate from the stored marker thresholds, deviations easily appear when the color-patch information is later extracted, and the robot's actions are affected. The present invention acquires the thresholds under the current conditions, which reduces such situations and improves the precision of the extracted color-patch information. A sketch of this threshold computation is given below.
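The following is a minimal sketch of how per-channel thresholds could be derived from the on-site samples, assuming the pixels of each marker patch have already been selected (for example the region above the criterion line in Fig. 6) and converted to HSI; the margin parameter and the data layout are illustrative assumptions, not values taken from the patent.

import numpy as np

def compute_thresholds(samples, margin=0.05):
    # samples: dict mapping color name -> (N, 3) array of (H, S, I) pixel values.
    # Returns per-color (min, max) thresholds for each channel, ordered by mean hue
    # so that neighbouring hues such as orange and red stay separated.
    thresholds = {}
    for color, pix in samples.items():
        pix = np.asarray(pix, dtype=float)
        lo = np.quantile(pix, margin, axis=0)        # robust lower bound per channel
        hi = np.quantile(pix, 1.0 - margin, axis=0)  # robust upper bound per channel
        thresholds[color] = (lo, hi)
    # order colors by mean hue (channel 0), as the patent orders patches by hue
    order = sorted(thresholds, key=lambda c: np.mean(np.asarray(samples[c])[:, 0]))
    return {c: thresholds[c] for c in order}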
S103: after the robot obtains a color global image, fast color segmentation and the sequential connected-region search algorithm are applied to obtain the information of every color patch in the image.
The fast color segmentation algorithm is introduced first.
A color threshold is usually represented by a maximum and a minimum value. Deciding whether a pixel value belongs to a color means checking, for each component of the color space, whether the pixel lies between the minimum and maximum of that threshold. For the commonly used three-dimensional color spaces such as YUV, RGB and HSI, deciding that a pixel belongs to one color requires 6 comparison operations, and deciding whether it belongs to one of n colors requires 6n comparison operations; the amount of computation is very large and is a heavy burden on the robot vision system.
For this reason the inventors propose a fast color segmentation method based on the HSI space, which needs only 2 bitwise-AND operations to decide whether a pixel value belongs to one of the sample colors. The concrete method is as follows:
First three arrays are created, Hcp, Scp and Icp, of sizes 360, 256 and 256 respectively, all initialized to 0. Starting from the lowest bit, one bit is assigned to each color in turn; this assignment can be predefined in a configuration file. When the threshold of a color is set, the bit for that color is set to 1 in the entries of Hcp, Scp and Icp that lie within the corresponding ranges, and likewise for the other colors. To classify a pixel, the array elements indexed by the pixel's component values are read and bitwise-ANDed; if the bit for some color in the result is 1, the pixel belongs to that color, and if all bits are 0 the pixel does not belong to any of the defined colors. Thus two bitwise-AND operations on the values of the three color components directly determine the color class of the pixel value.
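The following is a minimal sketch of this lookup-table scheme; the color list and the HSI ranges used to fill the tables are illustrative assumptions, not values from the patent.

import numpy as np

N_H, N_S, N_I = 360, 256, 256
Hcp = np.zeros(N_H, dtype=np.uint32)
Scp = np.zeros(N_S, dtype=np.uint32)
Icp = np.zeros(N_I, dtype=np.uint32)

# one bit per color, starting from the lowest bit
COLORS = ["yellow", "red", "orange", "blue", "green"]

def set_color_threshold(color, h_range, s_range, i_range):
    # set the bit of `color` to 1 over the given inclusive (min, max) ranges
    bit = np.uint32(1 << COLORS.index(color))
    Hcp[h_range[0]:h_range[1] + 1] |= bit
    Scp[s_range[0]:s_range[1] + 1] |= bit
    Icp[i_range[0]:i_range[1] + 1] |= bit

def classify_pixel(h, s, i):
    # two bitwise ANDs over the three components give the color bit mask
    mask = int(Hcp[h] & Scp[s] & Icp[i])
    if mask == 0:
        return None                                   # none of the defined colors
    return COLORS[(mask & -mask).bit_length() - 1]    # lowest set bit wins

# illustrative thresholds only
set_color_threshold("yellow", (40, 70), (80, 255), (60, 255))
set_color_threshold("blue", (200, 250), (80, 255), (40, 255))
print(classify_pixel(55, 120, 200))   # -> "yellow"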
Once the color class of every pixel in the image has been determined in this way, the present invention uses the sequential connected-region search algorithm to determine the color-patch information in the current image.
The sequential connected-region search algorithm as a whole uses a line scan. First, a run of adjacent pixels of the same color is defined as a Mark, which contains the starting point, the number of pixels and the color class of the run. A connected region of one color is then defined as a data structure Group, which records the color, sequence number, total pixel count, bounding rectangle and computed centroid of the region together with the information of each Mark it contains. When judging connectivity, 8-connectivity is used for each pixel.
See Fig. 5, which is a schematic diagram of the sequential connected-region search algorithm.
Each connected region is first regarded as a tree in which each Mark is a node, and the leftmost Mark of the region is taken as the root of the tree. In Fig. 5(a), for example, a Mark of four same-color pixels and a Mark of five same-color pixels produce two Groups, Group1 and Group2.
Then, as the line scan proceeds, whenever a new Mark is found and its end point is reached, the previous row within its 8-connected neighborhood is checked for pixels of the same color. If same-color pixels are found, the information of the corresponding Group is updated and the label of the Mark is revised: in Fig. 5(b) Group1 grows to 8 pixels and Group2 to 9 pixels, and in Fig. 5(c) Group1 grows to 16 pixels while Group2 stays at 9 pixels. If no same-color pixel is found, a new connected region is assumed, a new Group is generated to record the Mark, and the Mark becomes the initial root node of the new region. If the Mark is adjacent to several Groups of the same color at once, all of them are merged into the Group whose root node lies leftmost and the Group information is updated: in Fig. 5(d) Group2 has been merged into Group1, forming a single connected region.
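The following is a minimal sketch of this run-based search over a 2-D array of per-pixel color labels (None meaning no color). The Mark and Group names follow the description above, but the bookkeeping (plain list merging instead of the tree/root-node structure) is a simplification of our own.

from dataclasses import dataclass, field

@dataclass
class Mark:
    row: int
    start: int        # starting column of the run
    length: int       # number of pixels in the run
    color: str

@dataclass(eq=False)
class Group:
    color: str
    marks: list = field(default_factory=list)

    @property
    def pixel_count(self):
        return sum(m.length for m in self.marks)

def find_groups(labels):
    # labels: list of rows, each a list of color names or None
    groups = []
    prev_owner = {}                        # column -> Group in the previous row
    for r, row in enumerate(labels):
        cur_owner = {}
        c = 0
        while c < len(row):
            color = row[c]
            if color is None:
                c += 1
                continue
            start = c
            while c < len(row) and row[c] == color:
                c += 1
            mark = Mark(r, start, c - start, color)
            # 8-connectivity: columns start-1 .. c of the previous row
            touching = []
            for col in range(start - 1, c + 1):
                g = prev_owner.get(col)
                if g is not None and g.color == color and g not in touching:
                    touching.append(g)
            if not touching:               # no same-color pixel above: new Group
                g = Group(color, [mark])
                groups.append(g)
            else:                          # extend the leftmost adjacent Group ...
                g = touching[0]
                g.marks.append(mark)
                for other in touching[1:]:   # ... and merge the others into it
                    g.marks.extend(other.marks)
                    groups.remove(other)
                    for col, owner in list(prev_owner.items()):
                        if owner is other:
                            prev_owner[col] = g
            for col in range(start, c):
                cur_owner[col] = g
        prev_owner = cur_owner
    return groups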
S104: the relevant color-patch information is obtained from step S103; the patches are extracted and matched against the triangular/trapezoidal templates stored in advance in the computer system, the closest template is found, and the pixel coordinates and orientation angle of each target (for example a target robot) in the current static global image are computed.
After the color patches in the current image have been determined, they can be matched against the pre-stored templates (for other target robots, the ball, and so on); when the matching succeeds, the target represented by each patch, whether the ball or another robot, is identified. After sampling some points of the target, its centroid or center can be determined, and the pixel coordinates of that centroid or center in the current image give the position of the target.
The angle information can be obtained as follows: one vector is taken on the template and the corresponding vector is found on the matched color patch; dividing the dot product of the two vectors by the product of their lengths gives the cosine of the angle between them. For example, if a patch corresponds to a certain robot, then the centroid and an edge point (such as the head) of that robot on the template, together with the centroid and the corresponding edge point found on the patch, determine two vectors, and from them the orientation of the robot in the image, that is, which way it is facing.
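The following is a minimal sketch of this angle computation from a template vector and the corresponding vector on the matched patch; the point coordinates in the example are illustrative only, and the signed-angle step (using the cross product together with the cosine) is an addition of ours so the facing direction is unambiguous.

import math

def orientation_angle(template_centroid, template_edge, patch_centroid, patch_edge):
    tx, ty = (template_edge[0] - template_centroid[0],
              template_edge[1] - template_centroid[1])
    px, py = (patch_edge[0] - patch_centroid[0],
              patch_edge[1] - patch_centroid[1])
    norm = math.hypot(tx, ty) * math.hypot(px, py)
    cos_a = (tx * px + ty * py) / norm     # cosine from the normalized dot product
    sin_a = (tx * py - ty * px) / norm     # sine from the cross product (gives the sign)
    return math.degrees(math.atan2(sin_a, cos_a))

# template vector along +x, patch vector along +y -> the target is rotated 90 degrees
print(orientation_angle((0, 0), (1, 0), (100, 100), (100, 101)))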
S105: the position and attitude of the current autonomous mobile robot are obtained from physical sensors, and the self-localization of the autonomous mobile robot is carried out.
So-called self-localization of an autonomous robot generally means adjusting the robot's current pose according to the position information and environment information obtained from its onboard sensors, which amounts to obtaining local information about the robot's surroundings. In this embodiment, self-localization of the autonomous mobile robot is mainly self-localization of the camera, realized as follows: in a static global vision system the external camera parameters can be calibrated actively, and here the Tsai method is used to solve the 3D projection model linearly, with high robustness; in a dynamic global vision system an improved Zhang Zhengyou algorithm can be adopted, establishing the correspondence from multiple images, with high flexibility. The inventors optimize the Tsai method so that its error is less than 10 mm. This improved Tsai method is described below.
Ideal lens imaging should satisfy the pinhole imaging principle, but in a real optical system the machining and assembly errors of the lens cause a certain error between the actual image point of an object point on the image plane and the theoretical image point, i.e. distortion. Distortion can be divided into radial distortion and tangential distortion; what mainly exists in most optical systems is radial distortion, which can be expressed by the following formulas and is also the reason the linear pinhole model becomes nonlinear:
(x - u_c)(1 + k_1(u^2 + v^2)) = u - u_c
(y - v_c)(1 + k_1(u^2 + v^2)) = v - v_c
where (u, v) are the actual digital coordinates of a point, (x, y) are the ideal digital coordinates, and (u_c, v_c) is the center of distortion.
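The following is a minimal sketch of applying the first-order radial distortion model above to recover the ideal coordinates (x, y) from the observed coordinates (u, v); the values of k_1 and of the distortion center in the example are illustrative only.

def undistort(u, v, k1, uc, vc):
    # 1 + k1*(u^2 + v^2), the distortion factor in the formulas above
    factor = 1.0 + k1 * (u * u + v * v)
    x = uc + (u - uc) / factor
    y = vc + (v - vc) / factor
    return x, y

print(undistort(400.0, 300.0, k1=-1.0e-7, uc=320.0, vc=240.0))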
The Tsai method is a two-step method. The first step obtains a first group of parameters from the radial alignment constraint; although this constraint is nonlinear, the first group of parameters can be obtained with a very simple method. On the image plane the points (u_c, v_c), (x, y) and (u, v) are collinear, i.e. the line through (u_c, v_c) and (x, y) is parallel to (has the same slope as) the line through (u_c, v_c) and (u, v), so that
(x - u_c) / (y - v_c) = (u - u_c) / (v - v_c).
Usually the image center is taken as the center of distortion and the principal point, therefore
(x - u_0) / (y - v_0) = (u - u_0) / (v - v_0).
The second step solves the second group of parameters iteratively with nonlinear least squares, minimizing the error of the projection equations; the initial values of the iteration are obtained by solving the equations for two unknowns while ignoring lens distortion.
However, Tsai's distortion correction algorithm optimizes only k1, f and Tz (where k1 is the first-order radial distortion coefficient of the lens, f is the focal length of the lens, and Tz is one component of the translation vector (Tx, Ty, Tz) from the world coordinate system to the camera coordinate system), while all the other intrinsic and extrinsic parameters use the linearly computed approximations, as does the result for Tz. To improve the computational accuracy, in this embodiment the other parameters are first held fixed and k1 is optimized; then part of the parameters are fixed in turn while the remaining parameters are optimized; finally all parameters are optimized simultaneously. Because enough depth information is needed, the angle between the camera's imaging plane and the plane of the scene must reach more than 30°; if the values of f and Tz cannot be computed separately, only the value of f/Tz can be used for control information in the two-dimensional ground plane of the field.
Because the key points are coplanar, the homogeneous linear equations of the approximate solution contain 5 unknowns (setting Ty = 1), so at least 5 key points must be chosen; when more than 5 points are selected, the unknowns can be obtained by a minimum-variance solution using the pseudo-inverse matrix. The key points should be distributed as widely as possible. Using a Fortran nonlinear optimization library reduces the running time while preserving accuracy.
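The following is a minimal numpy sketch of this linear first stage for coplanar key points with Ty = 1, in which each point contributes one equation and the five unknowns are recovered with the pseudo-inverse; the specific row layout follows the standard Tsai radial-alignment formulation and is an assumption of ours, not text quoted from the patent.

import numpy as np

def tsai_linear_stage(img_pts, world_pts):
    # img_pts: (N, 2) distortion-centered image coordinates
    # world_pts: (N, 2) coplanar world coordinates (Z = 0)
    img_pts = np.asarray(img_pts, dtype=float)
    world_pts = np.asarray(world_pts, dtype=float)
    x, y = img_pts[:, 0], img_pts[:, 1]
    X, Y = world_pts[:, 0], world_pts[:, 1]
    # radial alignment: y*(r1*X + r2*Y + Tx) = x*(r4*X + r5*Y + Ty), with Ty = 1
    A = np.column_stack([y * X, y * Y, y, -x * X, -x * Y])
    b = x
    a = np.linalg.pinv(A) @ b      # minimum-variance solution; needs >= 5 points
    return a                       # [r1, r2, Tx, r4, r5], all scaled by 1/Ty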
S106: the path plan of the robot is determined so as to realize its autonomous motion.
The present invention improves the RRT algorithm by using a k-d tree to speed up the nearest-neighbor search, and uses this improved RRT algorithm for the path planning of the autonomous mobile robot.
Traditional motion planning algorithms such as the A* algorithm, C-space path search, roadmap algorithms and artificial potential field methods are all unsuitable under nonholonomic constraints. LaValle and Kuffner of UIUC proposed the RRT algorithm, a single-query random search method which does not build a complete map of the C space but samples the C space according to a certain strategy, and is applicable to nonholonomic systems. In this embodiment the inventors improve the RRT algorithm: a k-d tree is used to accelerate the nearest-neighbor search, which effectively avoids the exponential explosion brought by increasing degrees of freedom, and a new parameter is added to adjust the trade-off between planning efficiency and path optimality.
RRT path planning uses a search tree to search from the initial state to the goal state. Taking the RRT pseudocode below as the expansion procedure, in each iteration a sample point is first generated, then one RRT tree is selected alternately and the ExtendState function is applied to obtain a new node, and then the other RRT tree is extended toward this new node with the Connect operation, terminating when the two trees meet.
There are three main functions here: state expansion ExtendState(), actual distance RealDistance(), and random state RandomState(). First, the ExtendState function computes the next state toward the goal state after a given distance or time; if a collision with an obstacle in the environment occurs on the way to the new state, the default EmptyState is returned. Usually any heuristic suitable for controlling the robot can be used here; the heuristic need not be very complicated and need not perform obstacle avoidance, collision detection alone is enough. Since it does not rely purely on random search, the better the heuristic, the fewer nodes should be planned on average. Second, the RealDistance function computes an estimate of time or distance, used to estimate how many times Extend must be repeated to reach the destination. Finally, the RandomState function returns a state uniformly distributed over the state space of the environment.
function RRTMotionPlan(Environment, initialState, goalState)
    variable nearestState, extendedState, targetState
    variable RRTTree
    nearestState ← initialState
    RRTTree ← initialState
    while Distance(nearestState, goalState) > threshold do
        targetState ← Target(goalState)
        nearestState ← Nearest(RRTTree, targetState)
        extendedState ← Extend(Environment, nearestState, targetState)
        if extendedState != EmptyState
            then AddNode(RRTTree, extendedState)
    return RRTTree

function Target(goalState)
    variable p
    p ← UniformRandom(0, 1)
    if p ∈ [0, GoalProb]
        then return goalState
    else if p ∈ [GoalProb, 1]
        then return RandomState()

function Nearest(RRTTree, targetState)
    variable nearestState
    nearestState ← EmptyState
    for each State s ∈ RRTTree do
        if Distance(s, targetState) < Distance(nearestState, targetState)
            then nearestState ← s
    return nearestState
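The pseudocode above scans all tree nodes linearly in Nearest(); the following is a minimal sketch of the k-d-tree-accelerated variant described earlier, using scipy.spatial.cKDTree (our choice of library, not one named in the patent). The tree is rebuilt periodically rather than on every insertion, a common compromise, and the sampling and collision-checked Extend step are left as caller-supplied functions.

import numpy as np
from scipy.spatial import cKDTree

def rrt_plan(start, goal, sample_fn, extend_fn, goal_prob=0.1,
             threshold=0.05, max_iters=5000, rebuild_every=64):
    goal = np.asarray(goal, dtype=float)
    nodes = [np.asarray(start, dtype=float)]
    parents = [-1]
    tree = cKDTree(np.array(nodes))
    stale = 0                                   # nodes added since the last rebuild
    for _ in range(max_iters):
        target = goal if np.random.rand() < goal_prob else np.asarray(sample_fn())
        # nearest neighbour: query the k-d tree, then linearly check the few
        # nodes added since the last rebuild
        _, best = tree.query(target)
        for j in range(len(nodes) - stale, len(nodes)):
            if np.linalg.norm(nodes[j] - target) < np.linalg.norm(nodes[best] - target):
                best = j
        new = extend_fn(nodes[best], target)    # returns None on collision
        if new is None:
            continue
        nodes.append(np.asarray(new, dtype=float))
        parents.append(best)
        stale += 1
        if stale >= rebuild_every:
            tree = cKDTree(np.array(nodes))
            stale = 0
        if np.linalg.norm(nodes[-1] - goal) < threshold:
            path, i = [], len(nodes) - 1        # walk back up the parent links
            while i != -1:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None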
Subsequently the robot sends the planned path through the communication interface to the execution system; the execution system resolves it into wheel speeds and controls the motor drive modules, thus completing the autonomous motion of the robot.
What is disclosed above is only several specific embodiments of the present invention, but the present invention is not limited thereto; any variation that those skilled in the art can conceive shall fall within the protection scope of the present invention.

Claims (10)

1. A robot control method with real-time automatic color acquisition, characterized in that it comprises:
(1) providing a top cover on top of the robot, on which several kinds of color-marker samples are distributed;
(2) sampling the color-marker samples and performing threshold segmentation on the collected information of each color marker;
(3) after the robot obtains a color global image, applying fast color segmentation together with a sequential connected-region search algorithm to obtain the information of every color patch in the image;
(4) taking the color-patch information obtained in step (3), extracting the patches and performing triangular/trapezoidal template matching against pre-stored templates, finding the closest template, and thereby computing the pixel coordinates and orientation angle of the corresponding target in the current static global image;
(5) completing the self-localization of the robot;
(6) determining the path plan of the robot so as to realize its autonomous motion.
2. The robot control method with real-time automatic color acquisition according to claim 1, characterized in that it further comprises:
the robot re-performing threshold segmentation of the colors of the top-cover color-marker samples periodically or on an event trigger.
3. The robot control method with real-time automatic color acquisition according to claim 1 or 2, characterized in that in step (1) the central circular color marker of the top cover is yellow, the other circular color markers are distributed at certain distances from the center, in clockwise order red, orange, blue and green, and the circular markers are mounted on a black paper or plastic plate.
4. The robot control method with real-time automatic color acquisition according to claim 1 or 2, characterized in that the fast color segmentation in step (3) further comprises:
first creating arrays Hcp, Scp and Icp, of sizes 360, 256 and 256, all initialized to 0; starting from the lowest bit, assigning one bit to each color in turn; when the threshold of a color is set, setting the corresponding bit to 1 in the entries of Hcp, Scp and Icp within the corresponding ranges, and likewise for the other colors;
when deciding whether a pixel belongs to a color, reading the array elements indexed by the pixel's component values and performing bitwise-AND operations; if the bit for some color in the result is 1, the pixel belongs to that color, and if all bits are 0 the pixel does not belong to any of the defined colors; thus two bitwise-AND operations on the values of the three color components directly determine the color class of the pixel value.
5. The robot control method with real-time automatic color acquisition according to claim 1 or 2, characterized in that the sequential connected-region search algorithm in step (3) further comprises:
during a line scan, taking a run of adjacent pixels of the same color as a Mark, each Mark recording the starting point, the number of pixels and the color class of the run, and defining a connected region of one color as a Group, which records the color, sequence number, total pixel count, bounding rectangle and computed centroid of the region together with the information of each Mark;
during the search, as the line scan proceeds, adding each newly found Mark to an existing Group or generating a new Group according to its color, and merging adjacent Groups; when a new Mark is found and its end point is reached, checking the previous row within its 8-connected neighborhood for pixels of the same color; if there are none, assuming a new connected region, generating a new Group to record the Mark, taking the Mark as the initial root node of the region, and appending the new Group to the end of the record list; if a pixel of the same color is found, updating the information of the corresponding Group and revising the label of the Mark; and if the Mark is adjacent to several Groups of the same color at once, merging all of them into the Group whose root node lies leftmost and updating the Group information.
6. The robot control method with real-time automatic color acquisition according to claim 1 or 2, characterized in that the path plan of the robot is computed using a rapidly-exploring random tree algorithm.
7. The robot control method with real-time automatic color acquisition according to claim 1 or 2, characterized in that step (5) further comprises self-localization of the camera of the autonomous mobile robot: in a static global vision system the Tsai method is used to solve the 3D projection model, and in a dynamic global vision system an improved Zhang Zhengyou algorithm is used, establishing the correspondence from multiple images.
8. A robot with real-time automatic color acquisition, comprising a locomotion mechanism, sensing mechanisms, a control mechanism and a decision-making mechanism, and further comprising an image acquisition and processing mechanism, characterized in that a top cover is provided on the robot, on which several kinds of color-marker samples are distributed, the image acquisition and processing mechanism comprises a lens, a camera, a video interface and a computer processing unit, and the computer processing unit further comprises:
a color segmentation sub-unit, which performs color segmentation on the acquired image;
a connected-region search sub-unit, which applies the sequential connected-region search algorithm to the color-segmented image to obtain the information of every color patch in the image;
a pattern recognition sub-unit, which extracts the color patches, performs triangular/trapezoidal template matching against pre-stored templates, finds the closest template, and thereby computes the pixel coordinates and orientation angle of the corresponding target in the current static global image;
a self-localization sub-unit, used to complete the self-localization of the robot;
a color threshold manager, used to sample the top-cover color markers and perform threshold segmentation of the colors;
a path planning sub-unit, used to determine the path plan of the robot.
9. The robot with real-time automatic color acquisition according to claim 8, characterized in that the central circular color marker of the top cover is yellow, the other circular color markers are distributed at certain distances from the center, in clockwise order red, orange, blue and green, and the circular markers are mounted on a black paper or plastic plate.
10. The robot with real-time automatic color acquisition according to claim 8, characterized in that the lens is a fixed-focus or zoom lens; the video interface is IEEE1394a/b, Camera Link or Gigabit Ethernet, depending on the camera model; the computer system is an x86 PC or industrial computer, or an ARM embedded development platform; and the autonomous robot is an F180 small-size soccer robot.
CNB2008102020042A 2008-10-30 2008-10-30 Robot control method with real-time automatic color acquisition, and robot Expired - Fee Related CN100573388C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2008102020042A CN100573388C (en) 2008-10-30 2008-10-30 Robot control method with real-time automatic color acquisition, and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2008102020042A CN100573388C (en) 2008-10-30 2008-10-30 Robot control method with real-time automatic color acquisition, and robot

Publications (2)

Publication Number Publication Date
CN101398689A CN101398689A (en) 2009-04-01
CN100573388C 2009-12-23

Family

ID=40517286

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2008102020042A Expired - Fee Related CN100573388C (en) Robot control method with real-time automatic color acquisition, and robot

Country Status (1)

Country Link
CN (1) CN100573388C (en)

Also Published As

Publication number Publication date
CN101398689A (en) 2009-04-01

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091223

Termination date: 20121030