CN100573388C - Robot control method with real-time automatic color acquisition, and corresponding robot
- Publication number: CN100573388C; application number: CN200810202004A (CNB2008102020042A)
The present invention relates to the field of automation, and in particular to a control method for an autonomous mobile robot based on fast visual color acquisition and processing, and to a corresponding robot system.
An autonomous mobile robot can recognize and measure targets and obstacles in real time, learn and understand its external environment, plan its motion according to set goals, adjust that plan in real time without human intervention, move to a destination, and complete assigned tasks. It can therefore take on work that is harmful, dangerous, or impossible for humans to perform directly, or that demands high precision, such as field operations, deep-sea exploration, military reconnaissance, space development, and nuclear-contamination cleanup.
An autonomous mobile robot system currently consists of four parts: a motion mechanism, a sensor mechanism, a control mechanism, and a decision-making mechanism. The motion mechanism is assembled from wheels and similar running gear and provides locomotion. The sensor mechanism includes ultrasonic, laser, temperature/humidity, pressure, and acceleration sensors, odometers, GPS, and so on, which respectively measure the robot's own motion state and position, information about the external environment, and the positions of other objects in that environment. The decision-making mechanism produces a path plan from the collected information, and the control mechanism drives the robot to execute obstacle-avoiding motion.
Machine vision systems occupy an increasingly important place among robot sensors. Machine vision means reproducing human visual capability with a computer, i.e., using a computer to recognize the objective three-dimensional world. A machine vision system generally involves the problem of color acquisition for a robotic device, and in existing practice color acquisition is done offline: an experienced engineer manually samples the colors of static color markers. This gives rise to the following defects in the prior art:
First, in the environments where real robots operate, the vision system is extremely sensitive to changes in illumination caused by human factors and the like, so color information collected offline from static markers is unsuitable for the real-time environment. In theory one could construct various illumination models and adapt their parameters during operation to achieve the desired effect; in practice, however, the continual introduction of additional parameters accumulates systematic error, making the robot vision subsystem increasingly unstable, or even causing it to fail.
Second, when an autonomous mobile robot determines the pixel coordinates of other targets (such as another autonomous mobile robot) in an image captured by a camera or similar device, it must decide which color each pixel belongs to. For the commonly used three-dimensional color spaces such as YUV, RGB, and HSI, deciding whether a pixel belongs to one color takes 6 comparison operations, so deciding which of n colors it belongs to takes 6n comparisons. The computational load is large and easily delays the decision.
Summary of the invention
One object of the present invention is to provide a robot control method with real-time automatic color acquisition, to solve the technical problem in the prior art that the vision system is extremely sensitive to illumination changes caused by human factors and the like, so that color information collected offline from static markers is unsuitable for the real-time environment and impairs the robot's color recognition.
Another object of the present invention is to provide a robot with real-time automatic color acquisition.
A robot control method with real-time automatic color acquisition comprises:
(1) mounting a cover on top of the robot, with several kinds of color-marker samples distributed on the cover;
(2) sampling the color-marker samples and performing threshold segmentation on each collected color marker;
(3) after the robot obtains a color global image, applying fast color segmentation and a sequential connected-region search algorithm to extract the color-block information in the image;
(4) taking the color-block information from step (3), extracting the color blocks and matching them against pre-stored triangular/trapezoidal templates, finding the closest template, and thereby computing the pixel coordinates and orientation angle of the target corresponding to that template in the current static global image;
(5) completing the robot's self-localization;
(6) determining the robot's path plan, so that the robot can move autonomously.
The present invention further comprises: the robot re-performs threshold segmentation of the cover's color-marker samples periodically or on a triggering event.
In step (1), the circular color marker at the center of the cover is yellow; the other circular markers are distributed at fixed distances from the center, in clockwise order red, orange, blue, and green, and the circular markers are mounted on a black paper or plastic plate.
In step (3), the fast color segmentation further comprises: first establishing arrays Hcp, Scp, and Icp of sizes 360, 256, and 256 respectively, initialized to 0; assigning each color one bit, starting from the lowest bit; when setting a color's threshold, setting that bit to 1 at the corresponding positions within the corresponding ranges of Hcp, Scp, and Icp, and likewise for the other colors. To decide whether a pixel belongs to some color, the array elements indexed by the pixel's component values are read and combined with bitwise AND; if some bit of the result is 1, the pixel belongs to the color that bit represents, and if all bits are 0, the pixel belongs to none of the set colors. Thus two bitwise AND operations on the three color components directly determine which color range the pixel's value falls in.
In step (3), the sequential connected-region search algorithm further comprises: during a line scan, a run of adjacent pixels of the same color forms a Mark, which records the starting point, pixel count, and color type of that run; a connected region of one color is a Group, which records the region's color, sequence number, total pixel count, bounding rectangle, computed centroid, and the information of each of its Marks. As the line scan proceeds, each newly found Mark is either added to an existing Group or used to create a new Group, according to its color, and adjacent Groups are merged. When a new Mark's end point is found, the previous row within its 8-connected neighborhood is checked for pixels of the same color. If there are none, the Mark is a new connected region: a new Group is created to record it, the Mark becomes the initial root node of the region, and the new Group is appended to the end of the record list. If same-color pixels are found, the relevant Group's information is updated and the Mark is relabeled. If the Mark is adjacent to several Groups of the same color, all of them are merged into the Group whose root node is leftmost, and that Group's information is updated.
The robot's path plan is computed with a rapidly-exploring random tree (RRT) algorithm.
Step (5) further comprises: self-localizing the autonomous mobile robot's camera; in a static global vision system, solving the 3D projection model with the Tsai method; in a dynamic global vision system, using an improved Zhang Zhengyou (Zhang's) algorithm, which establishes the correspondences from multiple images.
A robot with real-time automatic color acquisition comprises a motion mechanism, a sensor mechanism, a control mechanism, and a decision-making mechanism, and further comprises an image acquisition and processing mechanism. A cover is mounted on the robot, with several kinds of color-marker samples distributed on it. The image acquisition and processing mechanism comprises a lens, a camera, a video interface, and a computer processing unit, and the computer processing unit further comprises:
a color segmentation subunit, which performs color segmentation on the collected, digitized image;
a connected-region search subunit, which applies the sequential connected-region search algorithm to the color-segmented image to obtain the color-block information in the image;
a pattern recognition subunit, which extracts the color blocks, matches them against pre-stored triangular/trapezoidal templates, finds the closest template, and thereby computes the pixel coordinates and orientation angle of the corresponding target in the current static global image;
a self-localization subunit, which completes the robot's self-localization;
a color threshold manager, which collects the cover's color-marker samples and performs threshold segmentation of the colors;
a path planning subunit, which determines the robot's path plan.
The lens is a fixed-focus or zoom lens; the video interface is IEEE 1394a/b, Camera Link, or Gigabit Ethernet, depending on the camera model; the computer system is an x86 PC or industrial computer, or an ARM embedded development platform.
The autonomous robot may be an F180 mini soccer robot.
Compared with the prior art, the present invention has the following technical effects. When determining the positions of targets in the current image, the invention places a sampling cover on the robot in advance, so the color samples taken at the robot's location reflect current conditions: the scene (current lighting, etc.) of the global color image the robot captures is broadly similar to the environment in which the color samples were taken. This improves both the accuracy and the speed of color decisions, and thereby the precision of the target positions in the current image.
During fast color segmentation, the invention uses the collected color samples to determine each pixel's color with just two bitwise AND operations, achieving fast segmentation.
The invention can re-sample periodically or on a triggering event, further improving the accuracy of its color decisions.
Description of drawings
Fig. 1 is a robot model of the present invention;
Fig. 2 is a schematic diagram of the automatic color acquisition module;
Fig. 3 is a flowchart of the robot's color acquisition process;
Fig. 4 is a flowchart of the robot control method with real-time automatic color acquisition of the present invention;
Fig. 5 is a schematic diagram of the sequential connected-region search algorithm;
Fig. 6 is a schematic diagram of the color threshold re-segmentation algorithm.
The present invention is described in detail below with reference to the accompanying drawings.
The core of the present invention is a robot color acquisition system that samples colors automatically, with obstacle avoidance, replacing the offline static sampling of the prior art; this is realized by the algorithm modules of the robot's internal processor. In addition, the path planning system in the processor avoids obstacles autonomously, achieving unsupervised obstacle avoidance.
A robot with real-time automatic color acquisition: see Fig. 1, a robot model. The robot of the present invention may be an F180 mini soccer robot; the main inventive point is the cover mounted on it. See Fig. 2, an example cover: it consists of five circular color-marker patches of diameter 43-50 mm on a matte-black paper or plastic top plate. The color order is as in Fig. 2: the center of the plate is yellow, and the other color blocks are evenly distributed 50 mm from the center, in clockwise order red, orange, blue, and green. The cover is bolted to the autonomous mobile robot. The types and number of color markers on the cover are set according to the specific scene. The robot itself has an acquisition device (such as a camera or video camera) that samples the markers on the cover; it can be a device from the robot's existing image acquisition and processing mechanism or an added one, but normally the existing equipment is used. The cover contains all the color information we need to collect. In a dynamic environment, only the colors of this cover serve as the adaptive input, while the objects within visual range are recognized at the same time.
The robot generally includes a motion mechanism, a sensor mechanism, a control mechanism, and a decision-making mechanism, plus an image acquisition and processing mechanism comprising the usual lens, camera, video interface, and computer processing unit. The lens can be a fixed-focus or zoom lens, depending on the environment; the video interface can be IEEE 1394a/b, Camera Link, Gigabit Ethernet, etc., depending on the chosen camera; the computer system can be an x86 PC or industrial computer, or an ARM embedded development platform.
Logically, the computer processing unit 1 further comprises (see Fig. 3):
Color segmentation subunit 11: performs color segmentation on the collected image;
Connected-region search subunit 12: applies the sequential connected-region search algorithm to the color-segmented image to obtain the color-block information in the image;
Pattern recognition subunit 13: extracts the color blocks, matches them against pre-stored triangular/trapezoidal templates, finds the closest template, and thereby computes the pixel coordinates and orientation angle of the corresponding target in the current static global image;
Self-localization subunit 14: completes the robot's self-localization;
Color threshold manager 15: collects the cover's color-marker samples and performs threshold segmentation of the colors. Taking a robot soccer match as an example, the robot samples the color markers before the match begins and stores the results in the color threshold manager; the manager can also load a configuration file offline to determine which colors to acquire. After the program starts, the arrays used for the bitwise AND operations are generated and each color threshold set is maintained; when the program exits, the threshold arrays are compressed and stored back into the configuration file.
Path planning subunit 16: determines the robot's path plan.
The subunits of the computer processing unit 1 are normally implemented in software; they can be integrated in one processor and realized by that processor as a physical entity. The computer processing unit 1 can be connected to an actuator 2 to carry out the robot's autonomous movement.
The present invention proposes a method of unsupervised real-time color acquisition; see Fig. 4. The system combines machine vision with motion control algorithms verified on physical autonomous mobile robots and in simulation, and realizes unsupervised automatic visual sampling. It comprises the following steps:
S101: mount a cover on top of the robot, with several kinds of color-marker samples distributed on the cover.
Five circular color-marker patches on a matte-black paper or plastic top plate form the cover. A yellow circular marker is placed at the center of the cover, and the other markers are evenly distributed 50 mm from the center, in clockwise order red, orange, blue, and green. Threaded holes around the plate allow the cover to be bolted to the top of an autonomous mobile robot; the robot can be, but is not limited to, an F180 mini soccer robot.
S102: sample the color markers and perform threshold segmentation on each collected color marker.
The robot re-performs threshold segmentation of the cover's color-marker samples periodically or on a triggering event, for instance at each intermission or after a preset interval (e.g., ten minutes). This step mainly uses a background-subtraction segmentation result in chroma and luminance; as shown in Fig. 6, the region above the criterion (dashed) line is used to compute the threshold information for each color block. The color blocks are ordered by chroma to avoid confusing orange with red. Because the robot samples the markers on site and then performs threshold segmentation, field conditions are taken into account. In the prior art the thresholds are stored in advance under fairly ideal conditions (such as strong lighting), so the stored marker thresholds differ from the colors actually captured on site; this easily causes deviations when subsequently extracting color-block information, which in turn affects the robot's actions. The present invention instead derives the threshold segmentation from current conditions, reducing such deviations and improving the precision of the extracted color-block information.
S103: after the robot obtains a color global image, apply fast color segmentation and the sequential connected-region search algorithm to extract the color-block information in the image.
First, the fast color segmentation algorithm:
A color threshold is generally represented by a maximum and a minimum. Deciding whether a pixel value belongs to a color means checking, on each component of the color space, whether the pixel lies between the threshold's minimum and maximum. For the commonly used three-dimensional color spaces such as YUV, RGB, and HSI, deciding whether a pixel belongs to one color takes 6 comparison operations, and deciding which of n colors it belongs to takes 6n comparisons, a heavy computational burden for the robot's vision system.
The inventor therefore proposes a fast color segmentation method based on the HSI space that decides whether a pixel value belongs to one of the sample colors with only 2 bitwise AND operations. The method is as follows:
First establish three arrays, Hcp, Scp, and Icp, of sizes 360, 256, and 256 respectively, initialized to 0. Each color is assigned one bit, starting from the lowest bit; this assignment can be predefined in a configuration file. When setting a color's threshold, that bit is set to 1 at the corresponding positions within the corresponding ranges of Hcp, Scp, and Icp, and likewise for the other colors. To classify a pixel, the array elements indexed by the pixel's component values are read and combined with bitwise AND. If some bit of the result is 1, the pixel belongs to the color that bit represents; if all bits are 0, the pixel belongs to none of the set colors. Thus two bitwise AND operations on the three color components directly determine which color range the pixel's value falls in.
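As a concrete illustration, the lookup described above can be sketched as follows. This is a minimal sketch: the array names Hcp/Scp/Icp follow the text, while the color list and the threshold ranges are invented for illustration and are not calibration values from the patent.

```python
# Bitmask color lookup over HSI component arrays, as described above.
NUM_H, NUM_S, NUM_I = 360, 256, 256
Hcp = [0] * NUM_H
Scp = [0] * NUM_S
Icp = [0] * NUM_I

# Each calibrated color gets one bit, starting from the lowest bit.
COLORS = ["yellow", "red", "orange", "blue", "green"]

def set_color_threshold(bit, h_range, s_range, i_range):
    """Mark the HSI ranges belonging to one color with that color's bit."""
    mask = 1 << bit
    for h in range(*h_range):
        Hcp[h % NUM_H] |= mask     # hue wraps around 360
    for s in range(*s_range):
        Scp[s] |= mask
    for i in range(*i_range):
        Icp[i] |= mask

def classify(h, s, i):
    """Two bitwise ANDs decide which (if any) calibrated color a pixel has."""
    mask = Hcp[h] & Scp[s] & Icp[i]
    if mask == 0:
        return None                 # pixel matches no calibrated color
    low = mask & -mask              # lowest set bit wins, per the bit order
    return COLORS[low.bit_length() - 1]

# Illustrative thresholds (not from the patent): yellow = bit 0, red = bit 1.
set_color_threshold(0, (40, 70), (80, 256), (60, 200))    # "yellow"
set_color_threshold(1, (350, 370), (80, 256), (60, 200))  # "red", wraps past 360
```

Whatever the number of calibrated colors, classifying a pixel costs two AND operations and three array reads, instead of 6n comparisons.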
Having determined the color range, i.e., the color type, of each pixel in the image, the invention next determines the color-block information in the current image with the sequential connected-region search algorithm.
The sequential connected-region search algorithm as a whole uses a line scan. First, we define a run of adjacent pixels of the same color as a Mark, which contains the starting point, pixel count, and color type of that run. A connected region of one color is then defined as a data structure Group, which records the region's color, sequence number, total pixel count, bounding rectangle, computed centroid, and the information of each of its Marks. Connectivity is judged with the 8-connected relation at each pixel.
See Fig. 5, a schematic diagram of the sequential connected-region search algorithm.
Each connected region is first viewed as a tree, with each Mark a node and the leftmost Mark of the region as the tree's root. In Fig. 5(a), for example, a Mark of four same-color pixels and a Mark of five same-color pixels produce two Groups, Group1 and Group2.
Then, as the line scan proceeds, whenever a new Mark's end point is found, the previous row within its 8-connected neighborhood is checked for pixels of the same color. If same-color pixels are found, the relevant Group's information is updated and the Mark is relabeled: in Fig. 5(b), Group1 grows to 8 pixels and Group2 to 9; in Fig. 5(c), Group1 grows to 16 pixels while Group2 stays at 9. If no same-color pixels are found, the Mark is a new connected region, so a new Group is created to record it, with the Mark as the region's initial root node. If the Mark is adjacent to several Groups of the same color, all of them are merged into the Group whose root node is leftmost and that Group's information is updated: in Fig. 5(d), Group2 has been merged into Group1, forming a single connected region.
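The run-based search can be sketched as follows. This is a minimal sketch, not the patent's implementation: the Mark/Group vocabulary follows the text, the "merge into the leftmost root" policy is simplified to a union-find merge, and only color and pixel count are tracked per Group.

```python
def find_runs(row):
    """Split one image row into runs (Marks) of equal nonzero color."""
    runs, start = [], 0
    for x in range(1, len(row) + 1):
        if x == len(row) or row[x] != row[start]:
            if row[start] != 0:                          # 0 = background
                runs.append((start, x - 1, row[start]))  # (x0, x1, color)
            start = x
    return runs

def connected_regions(image):
    """Line-scan connected-region search; returns (color, pixel_count) per Group."""
    parent, size, color_of = {}, {}, {}

    def find(a):                     # union-find root with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    prev_runs, next_id = [], 0
    for row in image:
        cur = []
        for (x0, x1, c) in find_runs(row):
            gid = None
            for (px0, px1, pc, pid) in prev_runs:
                # 8-connected overlap with a same-color run in the row above
                if pc == c and px0 <= x1 + 1 and px1 >= x0 - 1:
                    r = find(pid)
                    if gid is None:
                        gid = r
                    elif r != gid:                       # merge adjacent Groups
                        parent[r] = gid
                        size[gid] += size[r]
            if gid is None:                              # new Group, new root Mark
                gid = next_id
                next_id += 1
                parent[gid] = gid
                size[gid] = 0
                color_of[gid] = c
            size[gid] += x1 - x0 + 1
            cur.append((x0, x1, c, gid))
        prev_runs = cur
    roots = {find(g) for g in parent}
    return [(color_of[r], size[r]) for r in roots]
```

Because runs, not single pixels, are compared between rows, the work per row is proportional to the number of color transitions rather than the image width.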
S104: take the color-block information from step S103, extract the color blocks and match them against the triangular/trapezoidal templates pre-stored in the computer system, find the closest template, and thereby compute the pixel coordinates and orientation angle of each target (such as a target robot) in the current static global image.
Once the color blocks in the current image are determined, they can be matched against the pre-stored templates (for other target robots, the ball, etc.); a successful match identifies what each color block represents, be it the ball or another robot. After sampling some points of the target, its centroid or center can be determined, and the pixel coordinates of that centroid or center in the current image give the target's position.
The orientation angle can be obtained as follows: find a vector on the template and the corresponding vector on the matched color block; their normalized dot product gives the cosine of the angle between them. For example, if a color block is a certain robot, the template provides the robot's centroid and an edge point (such as its head); finding the centroid and the corresponding edge point on the color block determines the two vectors, and with them the robot's orientation, i.e., which way it is facing.
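The angle computation can be illustrated numerically. This is a hedged sketch: the patent gives no formulas, and the normalized dot product used here is the standard way to read "the cosine of the angle between the two vectors"; the point coordinates are illustrative.

```python
import math

def heading_angle(template_vec, blob_vec):
    """Angle in degrees between a template vector (e.g. centroid -> head marker)
    and the corresponding vector measured on the matched color block."""
    (ax, ay), (bx, by) = template_vec, blob_vec
    dot = ax * bx + ay * by
    cos_t = dot / (math.hypot(ax, ay) * math.hypot(bx, by))
    cos_t = max(-1.0, min(1.0, cos_t))   # guard against rounding before acos
    return math.degrees(math.acos(cos_t))
```

A full planner would also use the sign of the cross product to distinguish clockwise from counterclockwise rotation, which acos alone cannot.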
S105: obtain the current autonomous mobile robot's position and attitude from the physical sensors and perform the robot's self-localization.
Self-localization of an autonomous robot generally means adjusting the robot's current pose from the position and environment information obtained by its on-board sensors, which amounts to obtaining local information about the robot's surroundings. In this embodiment, self-localization is mainly camera self-localization, realized as follows. In a static global vision system, the camera's external parameters can be calibrated actively; the Tsai method is used to solve the 3D projection model linearly, with high robustness. In a dynamic global vision system, an improved Zhang Zhengyou (Zhang's) algorithm can be used, establishing the correspondences from multiple images, with high flexibility. The inventor optimizes the Tsai method so that its error is below 10 mm. The improved Tsai method is described below:
Ideal lens imaging satisfies the pinhole model, but because of machining and assembly errors in a real optical system, the actual image point of an object point on the image plane deviates from the theoretical image point; this deviation is distortion. Distortion divides into radial and tangential distortion; in most optical systems radial distortion dominates, and it can be expressed by the following formulas. This is also why the linear pinhole model becomes nonlinear.
(x - uc)(1 + k1(u^2 + v^2)) = u - uc
(y - vc)(1 + k1(u^2 + v^2)) = v - vc
where (u, v) is the digitized coordinate of a point, (x, y) its ideal digitized coordinate, and (uc, vc) the center of distortion.
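A numeric illustration of this first-order radial model follows. This is a sketch under the model above: since the model expresses the ideal coordinate in terms of the distorted one, recovering (x, y) is a single division per component; k1 and the coordinates below are invented values, not calibration results from the patent.

```python
def undistort(u, v, uc, vc, k1):
    """Recover the ideal coordinate (x, y) from the distorted (u, v),
    per (x - uc)(1 + k1*(u^2 + v^2)) = u - uc and its y counterpart."""
    r2 = u * u + v * v              # the model uses the distorted radius
    scale = 1.0 + k1 * r2
    x = uc + (u - uc) / scale
    y = vc + (v - vc) / scale
    return x, y
```

With k1 = 0 the mapping is the identity, as the pinhole model requires.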
The Tsai method is a two-step method. The first step derives a first group of parameters from the radial alignment constraint; although the constraint is nonlinear, a very simple method yields this first group. On the image plane, for instance, the points (uc, vc), (x, y), and (u, v) are collinear; equivalently, the line through (xc, yc) and (x, y) is parallel to (has the same slope as) the line through (xc, yc) and (u, v).
The image center is usually taken as both the center of distortion and the principal point.
The second step iteratively solves for the second group of parameters with nonlinear least squares, minimizing the error of the projection equation; the initial values of the iteration come from solving the equations for two unknowns with lens distortion ignored.
The Tsai distortion correction, however, only optimizes k1, f, and Tz (where k1 is the first-order radial distortion coefficient of the lens, f the lens focal length, and Tz one component of the translation vector (Tx, Ty, Tz) from the world frame to the camera frame); the other intrinsic and extrinsic parameters all use the values from the linear approximation. To improve accuracy, in this embodiment the other parameters are first fixed while k1 is optimized; then, fixing part of the parameters in turn, the remaining parameters are optimized one by one; finally, all parameters are optimized simultaneously. Because sufficient depth information is needed, the angle between the camera's imaging plane and the ground plane must exceed 30°; otherwise f and Tz cannot be computed individually, and only the value of f/Tz can be used for two-dimensional control information in the ground plane.
Because the key points are coplanar, the homogeneous linear system in the approximate computation has 5 unknowns (setting Ty = 1), so at least 5 key points must be chosen; with more than 5 points, the unknowns can be obtained by minimizing the variance with the pseudo-inverse. The key points should be distributed as widely as possible. A Fortran nonlinear optimization library reduces running time while preserving accuracy.
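The pseudo-inverse least-squares step can be illustrated on a toy system. This is only a sketch: the real calibration system has 5 unknowns built from the key-point coordinates, while here a 2-unknown overdetermined system solved via the normal equations (equivalent to the pseudo-inverse for full column rank) shows the same idea.

```python
def lstsq_2(A, b):
    """Least-squares solution of A x = b for 2 unknowns via the normal
    equations (A^T A) x = A^T b, solved in closed form."""
    s00 = sum(a[0] * a[0] for a in A)
    s01 = sum(a[0] * a[1] for a in A)
    s11 = sum(a[1] * a[1] for a in A)
    t0 = sum(a[0] * bi for a, bi in zip(A, b))
    t1 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s00 * s11 - s01 * s01          # invertible when columns independent
    return ((s11 * t0 - s01 * t1) / det,
            (s00 * t1 - s01 * t0) / det)
```

With more equations than unknowns, this minimizes the squared residual; the calibration case is the same computation over the 5-unknown system assembled from the key points.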
S106: determine the robot's path plan, so that the robot can move autonomously.
The present invention improves the RRT algorithm by using a k-d tree to speed up the nearest-neighbor search, and uses this improved RRT algorithm for the path planning of the autonomous mobile robot.
Traditional motion planning algorithms such as the A* algorithm, C-space path search, roadmap algorithms, and the artificial potential field method are all unsuited to nonholonomic constraints. La Valle and Kuffner of UIUC proposed the RRT (rapidly-exploring random tree) algorithm, a single-query random search method that does not map the entire C-space but samples it under a certain strategy, and is applicable to nonholonomic systems. In this embodiment the inventor improves the RRT algorithm by using a k-d tree to speed up the nearest-neighbor search, effectively avoiding the combinatorial explosion brought by increasing degrees of freedom, and adds a new parameter to adjust the trade-off between planning efficiency and path optimality.
RRT path planning uses a search tree to search from the initial state to the goal state. In the bidirectional-expansion RRT pseudocode, each iteration first generates a sample point, then alternately selects one RRT tree and runs the ExtendState function on it to obtain a new node, then performs a Connect expansion of the other RRT tree toward that new node, until the two trees meet and the search terminates.
There are three main functions: state expansion ExtendState(), actual distance RealDistance(), and random state RandomState(). First, ExtendState computes the next state toward the goal after a given distance or time; if reaching the new state would collide with an obstacle in the environment, it returns the default EmptyState. A heuristic is usually appropriate here for controlling the robot; it need not be complicated and need not avoid obstacles, only perform collision detection. Since the search need not rely on pure random sampling, the better the heuristic, the fewer nodes are planned on average. Second, RealDistance estimates time or distance, to estimate how many more Extend repetitions are needed to reach the destination. Finally, RandomState returns a state uniformly distributed over the environment's state space.
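The loop can be sketched as follows. This is a minimal 2-D, obstacle-free sketch in the spirit of ExtendState/RealDistance/RandomState, not the patent's planner: a linear nearest-neighbor scan stands in for the k-d tree, and a real planner would add the collision checks described in the text.

```python
import math
import random

STEP = 0.5  # fixed expansion distance for ExtendState

def random_state(bounds):
    """RandomState(): a state uniformly distributed over the state space."""
    (x0, x1), (y0, y1) = bounds
    return (random.uniform(x0, x1), random.uniform(y0, y1))

def real_distance(a, b):
    """RealDistance(): Euclidean distance between two states."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def extend_state(tree, target):
    """ExtendState(): grow the tree one STEP from its nearest node toward target."""
    near = min(tree, key=lambda n: real_distance(n, target))  # linear NN scan
    d = real_distance(near, target)
    if d <= STEP:
        new = target
    else:
        new = (near[0] + STEP * (target[0] - near[0]) / d,
               near[1] + STEP * (target[1] - near[1]) / d)
    tree.append(new)
    return new

def connect(tree, target, max_steps=200):
    """Connect expansion: extend repeatedly toward target until it is reached."""
    for _ in range(max_steps):
        if real_distance(extend_state(tree, target), target) < 1e-9:
            return True
    return False

def rrt_connect(start, goal, bounds, iters=1000):
    """Alternately expand the two trees until a Connect joins them."""
    ta, tb = [start], [goal]
    for _ in range(iters):
        new = extend_state(ta, random_state(bounds))  # expand toward a sample
        if connect(tb, new):                          # try to join the trees
            return True
        ta, tb = tb, ta                               # swap the two trees
    return False
```

Replacing the `min(...)` scan with a k-d tree query is exactly the improvement the text describes: it keeps each nearest-neighbor lookup cheap as the trees grow.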
Subsequently, the robot feeds the planned path through a communication interface to the execution system; after the execution system performs speed resolution, it controls the motor drive module, thereby completing the robot's autonomous movement.
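The patent does not detail the speed-resolution step. Assuming a differential-drive base (an assumption; the drive geometry is not specified), resolving a commanded body velocity into individual wheel speeds might look like this:

```python
def resolve_wheel_speeds(v, omega, wheel_base):
    """Split a body velocity command (v: forward speed in m/s,
    omega: yaw rate in rad/s) into left/right wheel linear speeds
    for a differential drive. Illustrative only -- the patent does
    not specify the robot's drive geometry."""
    v_left = v - omega * wheel_base / 2.0
    v_right = v + omega * wheel_base / 2.0
    return v_left, v_right
```

Driving straight gives equal wheel speeds, while a pure rotation in place gives equal and opposite ones; the motor drive module would then convert these linear speeds into motor commands.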
The above discloses only several specific embodiments of the present invention, but the present invention is not limited thereto; any variations that those skilled in the art can conceive shall fall within the protection scope of the present invention.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CNB2008102020042A CN100573388C (en) | 2008-10-30 | 2008-10-30 | The robot control method of real-time color auto acquisition and robot |
| Publication Number | Publication Date |
|---|---|
| CN101398689A (en) | 2009-04-01 |
| CN100573388C (en) | 2009-12-23 |
Country Status (1)

| Country | Publication |
|---|---|
| CN | CN100573388C (en) |
2008-10-30: Application CNB2008102020042A filed in CN; patent CN100573388C now inactive (IP right cessation).
Legal Events

| Code | Event |
|---|---|
| SE01 | Entry into force of request for substantive examination |
| C10 | Entry into substantive examination |
| C14 | Grant of patent or utility model |
| CF01 | Termination of patent right due to non-payment of annual fee (granted publication date: 2009-12-23; termination date: 2012-10-30) |
| C17 | Cessation of patent right |