CN105554472B - Environment-covering video monitoring system and method of positioning robots therewith - Google Patents
- Publication number
- CN105554472B CN201610065186.8A CN201610065186A
- Authority
- CN
- China
- Prior art keywords
- robot
- video monitoring
- data
- image
- environment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/021—Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
-
- H04W4/04—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
- H04W64/006—Locating users or terminals or network equipment for network management purposes, e.g. mobility management with additional information processing, e.g. for direction or speed determination
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Multimedia (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Manipulator (AREA)
Abstract
The invention discloses an environment-covering video monitoring system and a method of positioning robots with it. The video monitoring system comprises several video monitoring devices; each device covers a local environment, and the local environments covered by adjacent devices overlap. The method of positioning robots with the video monitoring system comprises the following steps: periodically shoot a colour digital image of the monitored area; convert the RGB24 image into an HSV image; determine the threshold set, in HSV mode, of the colour of the marker at the centre of each robot's top; convert the colour image into a black-and-white image; obtain the white-pixel aggregation regions of all robots within the monitored range; determine the robot positions and send them to the robots. The beneficial effects of the invention are: the video monitoring system can be organically combined with the robots and simplifies the integrated positioning equipment each robot must carry; the positioning method has no accumulated error and satisfies the requirement of real-time positioning.
Description
Technical field
The present invention relates to a video monitoring system and a method of positioning robots, and in particular to an environment-covering video monitoring system and a method of positioning robots using that system. The invention belongs to the field of robot technology.
Background art
Explanation of terms:
1. Mobile robots and their environment
In general, a mobile-robot system consists of three parts: a mechanical part, a sensing part and a control part; or, viewed more finely, of six subsystems: the mechanical system, the drive system, the perception system, the robot-environment interaction system, the human-machine interaction system and the control system. The mechanical system is an assembly of mechanical links connected by joints, forming an open-loop kinematic chain; the drive system is the set of devices that make the mechanical parts move; the perception system is composed of the robot's internal and external sensor modules and obtains useful information about the internal and external environment states; the robot-environment interaction system realises the connection and coordination between the robot and equipment in the external environment; the human-machine interaction system is the set of devices through which a person contacts the robot and participates in its control, including instruction devices and information display devices; the task of the control system is to command the robot's actuators, according to the robot's job program and the signals fed back from the sensors, to complete the specified motions and functions.
The environment is the set of spatial positions the mobile robot can reach.
2. Robot positioning and real-time positioning
Robot positioning is the process of determining a mobile robot's spatial position in its environment.
Real-time positioning refers to the ability of the positioning method used by the robot, together with its hardware and software, to determine the mobile robot's spatial position in its environment promptly and correctly. Meeting the real-time positioning requirement is one of the preconditions for the robot control system to control the robot's motion correctly and in time.
3. The pose of a robot
Pose is short for position and posture. The pose of a robot comprises the robot's position and its posture; the posture of a mobile robot refers to its direction of motion in the environment.
4. The RGB (Red, Green, Blue) colour mode
The RGB colour mode is an industry colour standard. The colour parameters in this model are red (R), green (G) and blue (B); by varying the three colour channels and superimposing them on one another, all manner of colours can be obtained. RGB24 represents one pixel of a colour image with 24 binary digits, each of the R, G and B components being an 8-bit value in the range 0-255; it is the most common sampling format for digital colour images.
5. The HSV (Hue, Saturation, Value) colour model
HSV is a colour space created according to the intuitive properties of colour, also called the hexagonal-cone model. The colour parameters in this model are hue (H), saturation (S) and value, i.e. brightness (V).
In order to move autonomously in its environment, a mobile robot must solve its own navigation and positioning problems; that is, before moving, it must answer three questions: "Where am I?", "Where am I going?" and "How do I get there?"
Answering "Where am I?" means determining the position of the mobile robot in its environment. In other words, indoor mobile-robot positioning is the process by which a mobile robot determines its position in an indoor environment.
Indoor mobile-robot positioning has the following characteristics:
1. In indoor environments, satellite navigation signals (GPS, Beidou, etc.) are poorly received, so indoor mobile robots cannot be positioned by satellite navigation.
2. Because of multipath effects, wireless-signal positioning methods are not suitable for indoor mobile robots.
3. Indoor environments are narrow compared with outdoor environments, so the positioning accuracy required of indoor mobile robots is higher (generally centimetre level), and real-time positioning is required.
4. Indoor environments have complex electromagnetic fields, so the indoor application of inertial navigation devices containing magnetic elements is restricted.
According to whether an environment model is present, the localization methods of mobile robots fall into three classes: positioning based on an environment model, positioning without an environment model, and simultaneous map building and positioning. Positioning based on an environment model can in turn be divided into three kinds: local positioning (also called relative positioning), global positioning (also called absolute positioning), and integrated positioning (a combination of local and global positioning).
Local positioning is positioning that a mobile robot can realise with its on-board sensors alone. At present there are two dead-reckoning approaches, based on odometers and on inertial navigation devices; the local positioning method applied to indoor mobile robots is odometer-based dead reckoning.
The advantages of local positioning are: 1. the robot's pose is calculated by the robot itself, without perception of the external environment; 2. the positioning interval is short; 3. the positioning data have good continuity.
The disadvantages of local positioning are: 1. the robot's initial pose must be known; 2. the positioning error accumulates with time (inertial navigation) or with distance (odometer), so it is unsuitable for accurate positioning over long periods (inertial navigation) or long distances (odometer).
Global positioning is positioning in which the mobile robot uses on-board sensors (ultrasonic, laser radar, visual sensors, etc.) to perceive external characteristic information. At present, the global positioning methods applied to indoor robots include landmark methods and map-matching methods.
Landmark-based positioning relies on beacons of known characteristics in the environment; sensors installed on the mobile robot observe the beacons to obtain the robot's absolute position information.
In map-matching positioning, the global environment map is known in advance and stored in the mobile robot. During positioning, the sensors carried by the robot detect the surroundings and build a local environment map, and the robot's global position is determined by comparing it with the global map.
The advantages of global positioning are: 1. the mobile robot's initial pose need not be known; 2. the positioning data are accurate; 3. the positioning error does not accumulate with time or distance.
The disadvantages of global positioning are: 1. perception of the external environment is needed; 2. the positioning interval is long; 3. the positioning data are discontinuous, with large jumps; 4. indoor environments are complex, and the positioning sensors are easily occluded.
Integrated positioning combines local and global positioning so that each compensates for the other's weaknesses, and it is currently the most common positioning approach. Under this approach, the local positioning data serve as the output of the integrated positioning, while the global positioning data are used to eliminate the local positioning error that accumulates with time (inertial navigation) or with distance (odometer). Integrated positioning based on an environment model is the most common indoor mobile-robot positioning method.
In integrated positioning based on an environment model, the environment model can be set up as a two-dimensional indoor ground-plane global coordinate system. The pose of a robot moving in this environment can then be represented by the triple (x, y, θ), where (x, y) is the robot's position in the global coordinate system and θ is its heading in the global coordinate system.
In this integrated positioning scheme, the local positioning uses odometer-based dead reckoning. For global positioning, since visual sensors provide the richest perception information of all sensors, robot integrated positioning based on an odometer and a visual sensor is the most representative indoor-robot positioning method.
Odometer-based local positioning:
Without loss of generality, the indoor mobile robot uses a wheeled, two-wheel differential drive, and the left and right driving wheels are each fitted with an odometer recording that wheel's running distance. As shown in Figure 1, let the midpoint of the axle line connecting the two driving wheels be M; the pose of the robot at any moment can then be represented by the pose M(x, y, θ) of the point M, where (x, y) is the position of M in the xoy coordinate system and θ is the positive angle between the heading of M and the x-axis. At any moment t_n, n = 0, 1, 2, ..., the pose of the robot can be written M_n(x_n, y_n, θ_n), where the pose M_0(x_0, y_0, θ_0) at t_0 is known.
Figure 2 is a schematic diagram of the robot pose derivation. Referring to Figure 2, the environment global coordinate system is xoy, the axle distance between the robot's two driving wheels is 2a, and M_k denotes the axle-centre position of the two driving wheels at moment t_k. At any moment t_n, n = 0, 1, 2, ..., the readings of the left- and right-wheel odometers are m_L(n) and m_R(n) respectively. The time interval from any moment t_{n-1} to t_n (n = 1, 2, ...) is set to a constant T, taken sufficiently small.
The running distance of the left-wheel odometer is:
Δm_L(n) = m_L(n) − m_L(n−1)   (1)
The running distance of the right-wheel odometer is:
Δm_R(n) = m_R(n) − m_R(n−1)   (2)
Case 1: Δm_L(n) ≠ Δm_R(n); given (x_{n−1}, y_{n−1}, θ_{n−1}), find (x_n, y_n, θ_n), n = 1, 2, ...
In this case the left and right wheels move different distances during the period T from t_{n−1} to t_n. When T is sufficiently small, it may reasonably be assumed that the robot's path is an arc of a circle; see Figure 2. The motion trajectory M_{n−1}M_n of the robot from t_{n−1} to t_n is an arc with centre o′, radius o′M_{n−1} = o′M_n = R_n and central angle β. Taking o′ as origin and o′M_{n−1} as the x′ axis, establish the local coordinate system x′o′y′. The angle between the o′x′ axis and the ox axis of the global coordinate system is β_{n−1}, and its relationship with θ_{n−1} is:
β_{n−1} = θ_{n−1} − 90°   (3)
Referring to Figure 2:
Δm_L(n) = (R_n − a)β   (4)
Δm_R(n) = (R_n + a)β   (5)
Subtracting formula (4) from formula (5) and simplifying gives:
β = [Δm_R(n) − Δm_L(n)] / (2a)   (6)
Adding formulas (4) and (5), substituting formula (6) and simplifying gives:
R_n = a × [Δm_L(n) + Δm_R(n)] / [Δm_R(n) − Δm_L(n)]   (7)
where Δm_L(n) ≠ Δm_R(n).
Referring to Figure 2, the position (x′_n, y′_n) of the point M_n in the local coordinate system is given by:
x′_n = R_n cos β,  y′_n = R_n sin β   (8)
The origin (x′_0, y′_0) of the local coordinate system x′o′y′ is given in the coordinates of the global coordinate system xoy by:
x′_0 = x_{n−1} − R_n cos β_{n−1},  y′_0 = y_{n−1} − R_n sin β_{n−1}   (9)
So the coordinates of M_n(x_n, y_n) in the global coordinate system xoy are given by:
x_n = x′_0 + x′_n cos β_{n−1} − y′_n sin β_{n−1},  y_n = y′_0 + x′_n sin β_{n−1} + y′_n cos β_{n−1}   (10)
Substituting formulas (8) and (9) into formula (10):
x_n = x_{n−1} + R_n[cos(β_{n−1} + β) − cos β_{n−1}],  y_n = y_{n−1} + R_n[sin(β_{n−1} + β) − sin β_{n−1}]   (11)
Substituting formula (3) into formula (11), and since sin β_{n−1} = −cos θ_{n−1} and cos β_{n−1} = sin θ_{n−1}:
x_n = x_{n−1} + R_n[sin θ_{n−1} cos β + cos θ_{n−1} sin β − sin θ_{n−1}],
y_n = y_{n−1} + R_n[sin θ_{n−1} sin β − cos θ_{n−1} cos β + cos θ_{n−1}]   (12)
where n = 1, 2, ....
Since the time interval constant T is assumed sufficiently small, β is very small, so sin β ≈ β and cos β ≈ 1; then, by formulas (6) and (7), formula (12) yields:
x_n = x_{n−1} + ½[Δm_L(n) + Δm_R(n)] cos θ_{n−1},
y_n = y_{n−1} + ½[Δm_L(n) + Δm_R(n)] sin θ_{n−1}   (13)
When calculating the robot position with formula (13), (x_{n−1}, y_{n−1}, θ_{n−1}) is known, and Δm_L(n) and Δm_R(n) are obtained from formulas (1) and (2) respectively.
Combining Figure 2 and formula (6):
β_n = β_{n−1} + β   (14)
And by the conclusion of formula (3):
θ_n = θ_{n−1} + [Δm_R(n) − Δm_L(n)] / (2a)   (15)
where n = 1, 2, ..., and θ_0 is known.
(x_n, y_n, θ_n) can thus be obtained from formulas (13) and (15).
Figure 2 shows the case in which the robot turns left; turning right yields the same conclusion, which is not repeated here.
Case 2: Δm_L(n) = Δm_R(n); given (x_{n−1}, y_{n−1}, θ_{n−1}), find (x_n, y_n, θ_n), n = 1, 2, ...
In this case the left and right wheels move the same distance during the period T from t_{n−1} to t_n, so the robot's path is a straight-line segment, like the trajectory M_nM_{n+1} from t_n to t_{n+1} in Figure 2. Since sin(180° − θ_n) = sin θ_n and cos(180° − θ_n) = −cos θ_n, we have:
x_n = x_{n−1} + Δm_L(n) cos θ_{n−1},  y_n = y_{n−1} + Δm_L(n) sin θ_{n−1},  θ_n = θ_{n−1}   (16)
In summary, given (x_{n−1}, y_{n−1}, θ_{n−1}), the formulas for (x_n, y_n, θ_n), n = 1, 2, ..., are:
x_n = x_{n−1} + ½[Δm_L(n) + Δm_R(n)] cos θ_{n−1},
y_n = y_{n−1} + ½[Δm_L(n) + Δm_R(n)] sin θ_{n−1},
θ_n = θ_{n−1} + [Δm_R(n) − Δm_L(n)] / (2a)   (17)
where (x_0, y_0, θ_0) is known.
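As a minimal sketch, the unified dead-reckoning update summarised above can be written as a short routine (the function and variable names are illustrative, not from the patent; angles are in radians rather than degrees):

```python
import math

def dead_reckon(x_prev, y_prev, theta_prev, dm_l, dm_r, a):
    """One dead-reckoning step for a differential-drive robot.

    (x_prev, y_prev): previous position in the global frame
    theta_prev: previous heading, in radians
    dm_l, dm_r: left/right odometer increments over the interval T
    a: half the axle length (wheel separation = 2a)
    """
    d = 0.5 * (dm_l + dm_r)           # distance travelled by the axle midpoint
    beta = (dm_r - dm_l) / (2.0 * a)  # heading change over the interval
    x = x_prev + d * math.cos(theta_prev)
    y = y_prev + d * math.sin(theta_prev)
    theta = theta_prev + beta
    return x, y, theta
```

Note that the straight-line case Δm_L(n) = Δm_R(n) needs no special handling here: the heading increment simply becomes zero.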
Vision-based global positioning:
Vision positioning means that the robot obtains images of the surrounding scenery with a camera, identifies certain obvious natural and artificial features contained in the images by image-processing methods, and obtains the robot's position from the known locations of these features.
In vision-based positioning there are two ways of mounting the camera: on the robot, or in the environment.
A camera mounted on the robot sees a constantly changing scene, is easily occluded by obstacles, requires a large amount of image-processing computation, and has limited positioning ability. A camera mounted in the environment sees little scene change and is not easily occluded by obstacles.
The vision-based global positioning scheme is explained below, taking as an example a single video monitoring device installed in an environment containing only one mobile robot.
A video monitoring device mounted on an indoor wall positions the robot moving within its monitoring range by means of the video image. In order to meet the requirement of positioning the mobile robot in the monitoring range in real time, assume the following:
(1) a red ball clearly distinguishable from the surroundings is mounted on top of the robot and serves as the robot's marker;
(2) the height of the ball centre above the ground plane is denoted h, and h is identical and unchanging for all robots;
(3) all robots move on the same ground plane;
(4) the mounting height and pitch angle of the camera are known;
(5) the camera has been calibrated, i.e. the images are geometrically corrected.
Perspective projection is the most common camera projection model and can be simplified with the pinhole imaging model, as shown in Figure 3. ABCD is the trapezoidal area of the ground plane captured by the camera, F is the camera focal point, FO is the camera optical axis, O is the intersection of the optical axis with the ground plane (and also the intersection of the diagonals of trapezoid ABCD), O_G is the vertical projection of F onto the ground plane, and FO_G = H is the distance from F to the ground plane. o, a, b, c, d are the image-plane points of O, A, B, C, D respectively. The focal length is Fo = f.
By convention, the camera image-plane coordinate system is the u-v coordinate system, with origin at the upper-left corner, the u axis positive to the right and the v axis positive downward. In order to position the robot on the ground plane from its image in the image plane, the u-v coordinate system must be translated into the image coordinate system xoy, whose origin o is at the centre of the image plane, with the x axis in the same direction as u and the y axis in the same direction as v, as shown in Figure 3. The sizes of one pixel in the u and v directions are Δu and Δv respectively, and the coordinates of o in the u-v system are (u_0, v_0); therefore the coordinates (x, y) of a pixel (u, v) in the xoy coordinate system are determined by:
x = (u − u_0) × Δu,  y = (v − v_0) × Δv   (18)
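The u-v to xoy translation described above can be sketched as follows (illustrative names; u0, v0, Δu and Δv are the known camera parameters):

```python
def pixel_to_image_plane(u, v, u0, v0, du, dv):
    """Convert a u-v pixel coordinate to the centred xoy image-plane
    coordinate system: shift the origin from the top-left corner to the
    image centre (u0, v0), then scale pixel indices by the pixel sizes
    du and dv to obtain physical image-plane lengths."""
    return (u - u0) * du, (v - v0) * dv
```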
For a robot within the camera's monitoring range, the process of positioning it from the video image is as follows:
(1) Shoot a colour digital image
Shoot a scene image of the video monitoring range, generating an RGB24-format image in the u-v coordinate system:
RGB24 = {R(i, j), G(i, j), B(i, j) | 0 ≤ i ≤ m−1, 0 ≤ j ≤ n−1}   (19)
where m is the number of pixels in the u direction, n is the number of pixels in the v direction, and 0 ≤ R(i, j) ≤ 255, 0 ≤ G(i, j) ≤ 255, 0 ≤ B(i, j) ≤ 255.
(2) Image segmentation
In order to segment the red part of the colour image, the RGB24-format image is first converted into an HSV-mode image; the colour image represented by formula (20) is then converted into a black-and-white binary image by means of the red threshold set in HSV mode, red being converted to white pixels and everything else to black pixels.
HSV = {H(i, j), S(i, j), V(i, j) | 0 ≤ i ≤ m−1, 0 ≤ j ≤ n−1}   (20)
where 0 ≤ H(i, j) < 360, 0 ≤ S(i, j) ≤ 1, 0 ≤ V(i, j) ≤ 100.
Let M = max[R(i, j), G(i, j), B(i, j)] and N = min[R(i, j), G(i, j), B(i, j)]. Then H(i, j) is determined by:
H(i, j) = 0, if M = N   (21a)
H(i, j) = 60 × [G(i, j) − B(i, j)] / (M − N), if M ≠ N and M = R(i, j)   (21b)
H(i, j) = 60 × [B(i, j) − R(i, j)] / (M − N) + 120, if M ≠ N and M = G(i, j)   (21c)
H(i, j) = 60 × [R(i, j) − G(i, j)] / (M − N) + 240, if M ≠ N and M = B(i, j)   (21d)
If H(i, j) < 0, then H(i, j) = H(i, j) + 360   (21e)
S(i, j) is determined by:
S(i, j) = 0, if M = 0   (22a)
S(i, j) = 1 − N/M, if M ≠ 0   (22b)
V(i, j) is determined by:
V(i, j) = 100 × M/255   (23)
Next, we determine the red threshold set in HSV mode:
Red = {0 ≤ H(i, j) ≤ 11 or 341 ≤ H(i, j) ≤ 360; S(i, j) ≥ 0.15; V(i, j) ≥ 18}   (24)
According to the following formulas, the HSV-mode image represented by formula (20) is converted into a black-and-white binary image:
BW = {BW(i, j) | 0 ≤ i ≤ m−1, 0 ≤ j ≤ n−1}   (25)
where
BW(i, j) = 1, if {H(i, j), S(i, j), V(i, j)} ∈ Red   (26a)
BW(i, j) = 0, otherwise   (26b)
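As a sketch, the per-pixel conversion and the red-threshold binarization of formulas (21)-(26) can be written as follows (illustrative Python, following the standard RGB-to-HSV conversion with H in [0, 360), S in [0, 1] and V in [0, 100]; the image is taken here as a list of rows of (R, G, B) tuples):

```python
def rgb_to_hsv(r, g, b):
    """Per-pixel RGB24 -> HSV conversion in the ranges used by the
    threshold set (24): H in [0, 360), S in [0, 1], V in [0, 100]."""
    M, N = max(r, g, b), min(r, g, b)
    if M == N:
        h = 0.0
    elif M == r:
        h = 60.0 * (g - b) / (M - N)
    elif M == g:
        h = 60.0 * (b - r) / (M - N) + 120.0
    else:
        h = 60.0 * (r - g) / (M - N) + 240.0
    if h < 0:
        h += 360.0
    s = 0.0 if M == 0 else 1.0 - N / M
    v = 100.0 * M / 255.0
    return h, s, v

def is_red(h, s, v):
    """Membership test for the red threshold set (24)."""
    return (h <= 11 or h >= 341) and s >= 0.15 and v >= 18

def binarize(rgb_image):
    """Convert an RGB24 image (list of rows of (r, g, b) tuples) into the
    black-and-white image BW: 1 for marker-red pixels, 0 otherwise."""
    return [[1 if is_red(*rgb_to_hsv(r, g, b)) else 0
             for (r, g, b) in row] for row in rgb_image]
```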
(3) Find the sphere-centre coordinate p(x, y) of the red ball (marker) on top of the robot.
First, compute the row-wise and column-wise white-pixel histograms of the black-and-white binary image represented by formula (25); next, find the local maxima of the row and column histograms; then determine the white-pixel aggregation regions from the row and column local maxima; then determine the robot's region according to the prior knowledge of the spherical marker on top of the robot; finally, find the sphere-centre coordinate of the red ball (marker) by formula (27):
ū = (1/K) Σ u_k,  v̄ = (1/K) Σ v_k   (27)
where the sums run over the K white pixels of the aggregation region, so that ū is the arithmetic mean of the u-axis projections of all white pixels of the white-pixel aggregation region, and v̄ is the arithmetic mean of their v-axis projections.
The prior knowledge of the robot's red top ball (marker) includes its shape in the ground projection (the longest distance between any two of the aggregated white pixels), its size (converted into a range of pixel counts), and so on.
(4) Robot positioning
Robot positioning is divided into two steps: the first step finds the corresponding ground-plane position coordinates P(P_X, P_Y) from the pixel coordinate p(x, y); the second step finds the ground-plane projection (X, Y) of the robot's red top ball from the height h of its sphere centre.
A point p(x, y) of the image plane corresponds to a point P(P_X, P_Y) of the ground plane, as shown in Figure 4, where the projection of p on the ox axis is p_x and on the oy axis is p_y, with op_x = x and op_y = y; the projection of P on the O_G X_G axis is P_X and on the O_G Y_G axis is P_Y, with O_G P_X = X_P and O_G P_Y = Y_P; and Fo = f.
In Figure 4, the right triangle F p_y p is similar to the right triangle F P_Y P, so:
p_y p / P_Y P = F p_y / F P_Y   (28)
With P_Y P = O_G P_X = X_P, p_y p = o p_x = x, F p_y = √(f² + y²) and F P_Y = √(H² + Y_P²), substituting into formula (28) and rearranging gives:
X_P = x × √(H² + Y_P²) / √(f² + y²)   (29)
In Figure 4, from the right triangle F o p_y:
β = arctan(op_y / Fo) = arctan(y / f)   (30)
In the right triangle F O_G P_Y:
Y_P = O_G P_Y = H × tan(γ + β) = H × tan[γ + arctan(y / f)]   (31)
Substituting formula (31) into formula (29) and simplifying, and combining with formula (31), the formulas for solving P(P_X, P_Y) from p(x, y) are obtained:
P_Y = H × tan[γ + arctan(y / f)],
P_X = x × √(H² + P_Y²) / √(f² + y²)   (32)
Again from Figure 4, the robot's red top-ball marker is located at the spatial point R. The points p and P are the projections of R in the image plane and the ground plane respectively; therefore the actual position of the robot is the vertical projection Q(X, Y) of R onto the ground plane. From Figure 4 and formula (32), the formulas for solving Q(X, Y) are:
X = P_X × (H − h) / H,  Y = P_Y × (H − h) / H   (33)
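A sketch of this two-step projection, combining formulas (30)-(33), might look as follows (illustrative only; angles are in radians, and γ is taken, consistently with formula (31), as the angle that places the optical-axis ground intersection at Y = H·tan γ):

```python
import math

def image_to_ground(x, y, f, H, gamma, h):
    """Map an image-plane point p(x, y) of the marker to the robot's
    ground position Q(X, Y).

    f: focal length; H: camera height above the ground plane;
    gamma: camera pitch angle (radians), as used in formula (31);
    h: height of the marker sphere centre above the ground.
    """
    beta = math.atan2(y, f)                       # formula (30)
    Y_P = H * math.tan(gamma + beta)              # formula (31)
    X_P = x * math.sqrt(H * H + Y_P * Y_P) / math.sqrt(f * f + y * y)
    # The marker sits at height h, so its line-of-sight ground intersection P
    # is pulled back toward the camera foot by the factor (H - h)/H:
    k = (H - h) / H
    return X_P * k, Y_P * k
```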
Integrated positioning method:
In integrated robot-positioning equipment based on an odometer and a visual sensor, the output data of the odometer-based local positioning device serve as the positioning output of the integrated equipment. As the robot's travel distance increases, the error of this output grows through error accumulation; when the travel distance reaches a certain value, the output data of the vision-based global positioning device are used to correct the robot's positioning output. Repeating this cycle realises the positioning function of the integrated positioning equipment.
It can be seen that existing positioning methods mainly have the following problems:
(1) No exchange between the robot positioning process and the environment: usually the global positioning accomplished by means of environmental features is completed by the robot independently, and the environment provides no effective information to help the robot complete the positioning work. At the other extreme, in certain schemes the video monitoring equipment completes the global positioning of one or several robots entirely on its own and then sends each robot its position data; this is unsuitable, or hard to apply, in multi-robot applications, and the vision-based positioning method involves so much data processing that it can hardly meet the real-time positioning requirement.
(2) The robot must carry complete integrated positioning equipment: in existing integrated positioning schemes, the robot needs to carry the full integrated positioning equipment, which is complicated, heavy and power-hungry.
(3) The robot positioning data have accumulated error: the robot provides position data by the local positioning method, and the global positioning data are used to eliminate the local positioning error that accumulates with time and distance; in the interval between two error eliminations, the positioning error still accumulates and grows with time and distance.
Summary of the invention
To remedy the deficiencies of the prior art, the object of the present invention is to provide an environment-covering video monitoring system and a method of positioning robots using it. The video monitoring system is not only simple in composition but can be organically combined with the robots and simplifies the integrated positioning equipment each robot must carry; the method of positioning robots using the video monitoring system has no accumulated error and can satisfy the real-time positioning requirement.
In order to achieve the above objects, the present invention adopts the following technical scheme:
An environment-covering video monitoring system, characterised by comprising several video monitoring devices, each video monitoring device covering a local environment which constitutes a mobile-robot local coordinate system; the local environments covered by adjacent video monitoring devices overlap, and the entire covered environment constitutes the mobile-robot global coordinate system.
The aforementioned environment-covering video monitoring system, characterised in that each video monitoring device consists of a camera, an image acquisition and processing computer, and a wireless data transmitter.
The aforementioned environment-covering video monitoring system, characterised in that the camera is mounted on an indoor wall, with known mounting height and pitch angle.
The aforementioned environment-covering video monitoring system, characterised in that the scene images obtained by the camera are geometrically corrected.
A method of positioning robots using the aforementioned environment-covering video monitoring system, characterised by comprising the following steps:
S1: the video monitoring device periodically shoots a colour digital image of the monitored area, the colour digital image being represented in RGB24 mode;
S2: the video monitoring device converts the RGB24 image into an HSV image;
S3: the threshold set, in HSV mode, of the colour of the robots' red top-ball marker is determined;
S4: the colour image is converted into a black-and-white image;
S5: the white-pixel aggregation regions of all robots in the monitoring range are obtained;
S6: the position set of all robots in the monitoring range is determined and sent to each robot within this range.
The aforementioned method of positioning robots, characterised in that, in step S1, the shooting interval of the video monitoring device is the time it takes to complete steps S1 to S4.
The aforementioned method of positioning robots, characterised in that, in step S5, the method of obtaining the white-pixel aggregation regions of all robots in the monitoring range is:
(1) count the white pixels projected onto the u axis,
W(i) = Σ_{j=0}^{n−1} BW(i, j), i = 0, 1, ..., m−1,
and the white pixels projected onto the v axis,
H(j) = Σ_{i=0}^{m−1} BW(i, j), j = 0, 1, ..., n−1,
where m is the number of pixels in the u direction of the image plane, n is the number of pixels in the v direction, i is the abscissa of a pixel in the image plane, and j is its ordinate;
(2) find all local maxima of W and all local maxima of H. Suppose W has m_0 local maxima and H has n_0 local maxima; the black-and-white image then contains m_0 × n_0 candidate white-pixel aggregation regions, and the coordinates of the geometric centre points of these regions constitute the set:
R_0 = {R(i_k, j_l) | i_k ∈ W_max, j_l ∈ H_max, k = 1, 2, ..., m_0, l = 1, 2, ..., n_0};
(3) calculate the size and shape of each white-pixel aggregation region and, according to the prior knowledge of the robots' red top-ball marker, delete from R_0 the geometric centre points of the regions whose size and shape do not match the prior knowledge, obtaining the set R of the geometric centres of the white-pixel aggregation regions of all robots in the monitoring range:
R = {R(i_k, j_k) | R(i_k, j_k) ∈ R_0, k = 1, 2, ..., k_0},
where k_0 is the number of robots in the monitored area.
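Steps (1)-(3) can be sketched roughly as follows (illustrative Python; the binary image is a list of rows of values in {0, 1}, and the `window` parameter, standing in for the marker's prior size, is an assumption, since the patent prunes candidate regions by the marker's size and shape rather than by a fixed window):

```python
def find_marker_centres(bw, window=1):
    """Locate white-pixel aggregation regions in a binary image bw
    (n rows by m columns, bw[j][i] in {0, 1}) via the u- and v-axis
    projection histograms, then return the white-pixel centroid
    (u_bar, v_bar) of each surviving candidate region."""
    n, m = len(bw), len(bw[0])
    W = [sum(bw[j][i] for j in range(n)) for i in range(m)]  # u-axis projection
    H = [sum(bw[j][i] for i in range(m)) for j in range(n)]  # v-axis projection

    def local_maxima(hist):
        # one index per plateau: rising (or flat) on the left, falling on the right
        return [k for k in range(len(hist))
                if hist[k] > 0
                and (k == 0 or hist[k] >= hist[k - 1])
                and (k == len(hist) - 1 or hist[k] > hist[k + 1])]

    centres = []
    for i0 in local_maxima(W):
        for j0 in local_maxima(H):
            # gather white pixels in a small window around the candidate centre
            pixels = [(i, j)
                      for j in range(max(0, j0 - window), min(n, j0 + window + 1))
                      for i in range(max(0, i0 - window), min(m, i0 + window + 1))
                      if bw[j][i]]
            if pixels:  # a genuine aggregation region
                u_bar = sum(p[0] for p in pixels) / len(pixels)
                v_bar = sum(p[1] for p in pixels) / len(pixels)
                centres.append((u_bar, v_bar))
    return centres
```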
The aforementioned method of positioning robots, characterised in that, in step S6, the method of determining the positions of the robots is:
(1) take the geometric centre of each robot's white-pixel aggregation region, obtaining its pixel coordinate (ū, v̄) in the image-plane u-v coordinate system, where ū is the arithmetic mean of the u-axis projections of all white pixels of the region and v̄ is the arithmetic mean of their v-axis projections;
(2) calculate the coordinates (x, y) in the image-plane xoy coordinate system according to:
x = (ū − u_0) × Δu,  y = (v̄ − v_0) × Δv,
where u_0, v_0, Δu and Δv are known camera parameters;
(3) obtain the coordinate set of all robots in the image-plane xoy coordinate system:
R_xy = {(x_k, y_k) | k = 1, 2, ..., k_0};
(4) from (x_k, y_k), obtain the coordinates (X_k, Y_k) of each robot in the monitoring device's local spatial coordinate system X_G O_G Y_G:
Y_k = H × tan[γ + arctan(y_k / f)] × (H − h) / H,
X_k = x_k × √(H² + (H × tan[γ + arctan(y_k / f)])²) / √(f² + y_k²) × (H − h) / H,
where h is the height of the robot marker's sphere centre above the ground, H is the camera mounting height, γ is the pitch angle of the camera installation, and f is the camera focal length;
(5) obtain the coordinate set of all robots in the camera's local spatial coordinate system X_G O_G Y_G:
R_XY = {(X_k, Y_k) | k = 1, 2, ..., k_0};
(6) convert R_XY into world coordinates under the global coordinate system of the environment covered by the video monitoring system.
The aforementioned method of positioning a robot, characterized in that, in step S6, the video monitoring equipment sends the position of a robot to the robot as follows:
(1) the image acquisition and processing computer assembles the world-coordinate set into a data frame;
(2) the wireless data transmitter sends the data frame.
The aforementioned method of positioning a robot, characterized in that, in step S6, the data frame sent by the wireless data transmitter is composed as follows:
frame sync, monitoring device number, data count k0, data 1, ..., data m, checksum,
where the frame sync is the flag by which the wireless data receiver identifies the start of a frame; the monitoring device number indicates which device sent the data; the data count k0 gives the length information of the data frame; m is the number of valid data items and depends on the data format and k0; the checksum is the main basis on which the wireless data receiver checks the correctness of the received data.
The beneficial effects of the invention are:
One, the video monitoring system covering the environment:
1. The video monitoring system and the robots are organically combined: each robot in the environment covered by the video monitoring system uses an odometer-based local positioning method to predict its current position; each video monitoring equipment of the system sends, through its wireless transmitter, the global position information of all robots within the equipment's monitoring range; each robot receives this information through its own wireless data receiver, takes it as its candidate global positions, and determines its own position from the predicted position according to the most-possible criterion;
2. The integrated positioning equipment carried by a robot is simplified: since the global positioning data comes from the video monitoring system covering the environment, the robot itself only needs to carry local positioning equipment, which is simple, light-weight and low-power;
3. Wide applicability: suitable for multi-robot applications.
Two, the method for above-mentioned video monitoring system positioning robot is utilized:
1, robot localization data are derived from Global localization data, no accumulated error;
2, simultaneously in the Global localization algorithm of view-based access control model, due to not needing which clearly each location data definitely belongs to
One robot enormously simplifies the multirobot Global localization algorithm of view-based access control model, meets real-time positioning requirements.
Detailed description of the invention
Fig. 1 is a schematic diagram defining the pose of a wheeled differential-steering robot;
Fig. 2 is a schematic diagram of the robot pose derivation;
Fig. 3 is the perspective projection model;
Fig. 4 is the imaging model;
Fig. 5 is a schematic diagram of the composition of the video monitoring system covering the environment;
Fig. 6 is a structural schematic diagram of the robot of the invention.
Meaning of the reference numerals in the figures: 1 - antenna of the first wireless data receiver, 2 - right arm, 3 - housing, 4 - right wheel, 5 - left wheel, 6 - left arm, 7 - antenna of the second wireless data receiver, 8 - red bead marker.
Specific embodiment
The present invention is described in detail below with reference to the drawings and specific embodiments.
Part one: the video monitoring system covering the environment
The video monitoring system covering the environment includes several video monitoring equipments; each video monitoring equipment covers a local environment, which constitutes a mobile-robot local coordinate system; the local environments covered by adjacent video monitoring equipments partially overlap, and the entire covered environment constitutes the mobile-robot global coordinate system.
A video monitoring equipment is composed of a camera, an image acquisition and processing computer, and a wireless data transmitter. The camera is mounted on an indoor wall with known mounting height and pitch angle, and the scene images it acquires have undergone geometric correction.
Fig. 5 shows a specific embodiment of the video monitoring system covering the environment of the invention. The video monitoring system is composed of 6 video monitoring equipments, denoted video monitoring equipment CA, CB, CC, CD, CE and CF respectively; the function of each is identical. Each video monitoring equipment covers a part of the environment (i.e. a local environment), which constitutes a mobile-robot local coordinate system; correspondingly, the local environments covered by the 6 video monitoring equipments are denoted local environment A, B, C, D, E and F respectively. The local environments covered by adjacent video monitoring equipments partially overlap (the shaded parts in the figure), and the 6 local environments together cover the global environment (collectively forming the entire covered environment), which constitutes the mobile-robot global coordinate system.
The video monitoring system covering the environment serves as the shared global positioning equipment of all robots within the monitoring range.
Part two: the indoor mobile robot based on environmental information
An existing robot includes the following six subsystems:
(1) the mechanical system of the robot;
(2) the drive system (including left and right wheel drives);
(3) the perception system (including left and right wheel odometers and global positioning equipment);
(4) the robot-environment interaction system;
(5) the human-machine interaction system;
(6) the control system.
We slightly modify the structure of the existing robot to form the indoor mobile robot based on environmental information of the invention; specifically, referring to Fig. 6:
1. A wireless data receiver is installed on the robot, enabling the robot to receive the global positioning data signal issued by the video monitoring system covering the environment;
2. The global positioning equipment in the robot's perception system is removed, reducing the weight of the robot and its energy demand (including the motion energy saved by the reduced weight and the power consumed by the original global positioning equipment);
3. A red bead contrasting strongly with the surrounding environment is mounted on top of the robot as its marker, so that tracking and positioning of the mobile robot by the monitoring system covering the environment become tracking and positioning of the red bead marker, which greatly simplifies the vision-based global positioning algorithm.
The structure and function of each robot are identical. All robots move on the same ground plane of the indoor environment, and a bead of identical color (red in the present embodiment) is mounted on top of each robot as its marker; the height h of the center of each bead above the ground plane is identical and does not change, and the robots keep a safe distance from each other (so that robots never overlap in the image).
Each mobile robot in the indoor environment and each video monitoring equipment of the video monitoring system covering the indoor environment work in synchronization; the combination of robot and environment realizes real-time positioning of the robots.
For the robot according to the invention, the only positioning equipment it carries itself is local positioning equipment (the video monitoring system covering the environment serves as the shared global positioning equipment of all robots within the monitoring range); this greatly reduces the number and types of devices carried, making the robot lighter and its power consumption lower.
Part three: the method of positioning an indoor mobile robot by the video monitoring system covering the environment
The positioning method of the invention belongs to the combined positioning methods based on an environment model, but differs considerably from traditional combined positioning.
The method by which the video monitoring system covering the environment of the invention positions an indoor mobile robot is described in detail below.
Step 1: shooting a color digital image
The video monitoring equipment (camera) periodically shoots a color digital image of the monitoring area, the color digital image being RGB24={R(i,j), G(i,j), B(i,j) | 0≤i≤m-1, 0≤j≤n-1}, where m is the number of pixels along the u axis of the image plane, n is the number of pixels along the v axis, 0≤R(i,j)≤255, 0≤G(i,j)≤255, 0≤B(i,j)≤255.
The shooting interval should meet the real-time positioning requirements of the robots; in the present embodiment, we set the shooting interval to the time taken to complete step 1 through step 4.
Step 2: converting the RGB24 image into an HSV image
The video monitoring equipment (image acquisition and processing computer) converts the RGB24 image into an HSV image; the detailed process is as follows:
HSV={H(i,j), S(i,j), V(i,j) | 0≤i≤m-1, 0≤j≤n-1}
where 0≤H(i,j)≤360, 0≤S(i,j)≤1, and 0%≤V(i,j)≤100%.
Let M=max[R(i,j), G(i,j), B(i,j)] and N=min[R(i,j), G(i,j), B(i,j)]; then
H(i,j) is determined by the following formula:
H(i,j)=0, if M=N;
H(i,j)=60×[G(i,j)−B(i,j)]/(M−N), if M≠N and M=R(i,j);
H(i,j)=60×[B(i,j)−R(i,j)]/(M−N)+120, if M≠N and M=G(i,j);
H(i,j)=60×[R(i,j)−G(i,j)]/(M−N)+240, if M≠N and M=B(i,j);
if H(i,j)<0, then H(i,j)=H(i,j)+360.
S(i,j) is determined by the following formula:
S(i,j)=0, if M=0;
S(i,j)=1−N/M, if M≠0.
V(i,j) is determined by the following formula:
V(i,j)=100×M/255.
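The step-2 conversion can be sketched per pixel as follows. Note the division of the hue terms by (M−N), which the standard RGB-to-HSV conversion requires for H to remain within [0, 360] (the extraction of the patent's formulas appears to have dropped it):

```python
def rgb_to_hsv(r, g, b):
    """Step-2 conversion for one RGB24 pixel (each channel 0-255):
    returns H in [0, 360], S in [0, 1], V in [0, 100]."""
    M = max(r, g, b)
    N = min(r, g, b)
    if M == N:
        h = 0.0
    elif M == r:
        h = 60.0 * (g - b) / (M - N)
    elif M == g:
        h = 60.0 * (b - r) / (M - N) + 120.0
    else:                       # M == b
        h = 60.0 * (r - g) / (M - N) + 240.0
    if h < 0:
        h += 360.0              # wrap negative hue back into range
    s = 0.0 if M == 0 else 1.0 - N / M
    v = 100.0 * M / 255.0
    return h, s, v
```

A pure red pixel (255, 0, 0) maps to H=0, S=1, V=100, which is exactly the region the step-3 threshold set is designed to capture.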
Step 3: determining the red threshold set under the HSV mode
Red={0≤H(i,j)≤11 or 341≤H(i,j)≤360; S(i,j)≥0.15; V(i,j)≥18}
Step 4: converting the color image into a black-and-white image
BW={BW(i,j) | 0≤i≤m-1, 0≤j≤n-1}, where:
BW(i,j)=1, if {H(i,j), S(i,j), V(i,j)}∈Red;
BW(i,j)=0, otherwise.
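Steps 3 and 4 together amount to a per-pixel threshold test. A minimal sketch (the function name is illustrative):

```python
def red_mask(hsv_image):
    """Steps 3-4 sketch: binarize an HSV image with the red threshold set
    Red = {0<=H<=11 or 341<=H<=360; S>=0.15; V>=18}.
    hsv_image is a list of rows of (H, S, V) tuples; returns 0/1 rows."""
    bw = []
    for row in hsv_image:
        bw.append([1 if ((0 <= h <= 11 or 341 <= h <= 360)
                         and s >= 0.15 and v >= 18) else 0
                   for (h, s, v) in row])
    return bw
```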
Step 5: obtaining the white-pixel aggregation regions of all robots within the monitoring range
Transverse white-pixel statistics:
count the number of white pixels in each row,
Longitudinal white-pixel statistics:
count the number of white pixels in each column,
where m is the number of pixels along the u axis of the image plane, n is the number of pixels along the v axis, i is the abscissa of a pixel in the image plane, and j is its ordinate.
Find all local maxima of W and all local maxima of H respectively; suppose W has m0 local maxima and H has n0 local maxima, i.e.
In this way, the black-and-white image contains m0×n0 white-pixel aggregation regions, and the coordinates of the geometric center points of these regions constitute the set:
R0={R(ik,jl) | ik∈Wmax, jl∈Hmax, k=1,2,...,m0, l=1,2,...,n0},
Calculate the size (pixel count) and shape (distance between the two farthest pixels) of each white-pixel aggregation region and, according to prior knowledge of the robot's red top bead marker, delete from R0 the coordinates of the geometric center points of the regions whose size and shape do not match that prior knowledge, obtaining the set R of the geometric centers of the white-pixel aggregation regions of all robots within the monitoring range:
R={R(ik,jk) | R(ik,jk)∈R0, k=1,2,...,k0},
where k0 is the number of robots in the monitoring area.
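Step 5 can be sketched as follows, assuming W counts white pixels per column (u direction) and H per row (v direction); the prior-knowledge size/shape filter of the last paragraph is omitted for brevity, so the function returns the candidate set R0:

```python
def robot_centers(bw):
    """Step-5 sketch: locate white-pixel aggregation regions from the
    row/column projections of a binary image (bw[j][i]: row j, column i)."""
    n_rows, n_cols = len(bw), len(bw[0])
    # W[i]: white pixels in column i (u direction); H[j]: white pixels in row j
    W = [sum(bw[j][i] for j in range(n_rows)) for i in range(n_cols)]
    H = [sum(bw[j][i] for i in range(n_cols)) for j in range(n_rows)]

    def local_maxima(p):
        # interior indices where the projection peaks; plateaus yield
        # several candidates, which the prior-knowledge filter would prune
        return [k for k in range(1, len(p) - 1)
                if p[k] > 0 and p[k] >= p[k - 1] and p[k] >= p[k + 1]]

    # candidate centers R0: every (i, j) pairing of projection maxima
    return [(i, j) for i in local_maxima(W) for j in local_maxima(H)]
```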
Step 6: determining the position set of all robots within the monitoring range and sending it to each robot within that range
First, compute the geometric center of each robot's white-pixel aggregation region, obtaining its pixel coordinate (ū, v̄) in the image-plane u-v coordinate system, where ū is the arithmetic mean of the u-axis projections of all white pixels in the region and v̄ is the arithmetic mean of their v-axis projections.
Then calculate the coordinate (x, y) in the image-plane xoy coordinate system according to the following formula:
where u0, v0, Δu and Δv are known camera parameters.
The coordinate set of all robots in the image-plane xoy coordinate system is obtained:
Rxy={(xk,yk) | k=1,2,...,k0}
Next, as shown in Fig. 4, from (xk,yk) the robot's coordinate (Xk,Yk) in the monitoring device's local spatial coordinate system XGOGYG is obtained:
where h is the height of the robot's marker above the ground plane, H is the camera mounting height, γ is the pitch angle of the camera, and f is the focal length of the camera.
The coordinate set of all robots in the camera's local spatial coordinate system XGOGYG is obtained:
RXY={(Xk,Yk) | k=1,2,...,k0}
Then, RXY is converted into world coordinates under the global coordinate system of the environment covered by the video monitoring system:
Finally, the world-coordinate set is assembled into a data frame, and the data frame is sent by the wireless data transmitter of the monitoring device.
The composition of the data frame is as follows:
frame sync, monitoring device number, data count k0, data 1, ..., data m, checksum,
where the frame sync is the flag by which the wireless data receiver identifies the start of a frame; the monitoring device number indicates which device sent the data; the data count k0 gives the length information of the data frame; m is the number of valid data items and depends on the data format and k0; the checksum is the main basis on which the wireless data receiver checks the correctness of the received data.
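The text fixes only the field order of the frame, not its encoding. A minimal sketch of assembling and checksumming such a frame, where the 16-bit sync word, byte-wide fields, little-endian float32 coordinates and mod-256 checksum are illustrative assumptions:

```python
import struct

def pack_frame(device_no, positions, sync=0xAA55):
    """Sketch of the step-6 data frame:
    [frame sync | device number | data count k0 | data 1 ... data m | checksum].
    positions is the world-coordinate list [(X1, Y1), ...]."""
    body = struct.pack('<HBB', sync, device_no, len(positions))
    for X, Y in positions:                     # data 1 ... data m
        body += struct.pack('<ff', X, Y)
    checksum = sum(body) % 256                 # receiver re-sums to verify
    return body + struct.pack('<B', checksum)
```

The receiver would scan for the sync word, read k0 to know how many coordinate pairs follow, and recompute the checksum to check the correctness of the received data.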
Step 7: the robot predicts its pose at the current time
The robot periodically predicts the pose (xn,yn,θn), n=1,2,..., of the robot at the current time from the pose data (xn-1,yn-1,θn-1) of the previous time; the prediction formula is as follows:
where (x0,y0,θ0) is known.
The timing interval of the robot is identical to the shooting interval of the video monitoring equipment, and the two are synchronized, i.e. as soon as the robot receives the data signal (data frame) sent by the video monitoring equipment, it predicts the pose of the robot at the current time.
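The prediction formula itself appears only as an image in the source. Since the perception system provides left and right wheel odometers, a standard differential-drive dead-reckoning update is one plausible reading, sketched here as an assumption (the wheel-base parameter is hypothetical):

```python
import math

def predict_pose(x_prev, y_prev, theta_prev, d_left, d_right, wheel_base):
    """Step-7 sketch: dead-reckon the current pose from the previous pose
    and the left/right wheel odometer increments since the last frame."""
    d = (d_left + d_right) / 2.0               # distance moved by the midpoint
    d_theta = (d_right - d_left) / wheel_base  # heading change
    # advance along the mean heading over the interval
    x_n = x_prev + d * math.cos(theta_prev + d_theta / 2.0)
    y_n = y_prev + d * math.sin(theta_prev + d_theta / 2.0)
    return x_n, y_n, theta_prev + d_theta
```

Driving both wheels 1 m forward from the origin with heading 0 predicts the pose (1, 0, 0), as expected for straight-line motion.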
Step 8: the robot decodes to obtain the set of possible positions at the current time
The robot receives the data signal (data frame) sent by the video monitoring system at the current time and decodes it to obtain the set of possible positions of the robot at the current time:
Loc={(Xi,Yi) | i=1,2,...,k0}
where k0≥1 is the number of possible positions of the robot.
In this step, a robot receives the data signal sent by at least one video monitoring equipment. If a robot receives the data signals sent by two or more video monitoring equipments, the robot is in an overlapping monitoring region.
If the data signal sent by some video monitoring equipment and received by a robot contains only one candidate robot position, this position information can be used as the initial position information of that robot.
Step 9: the robot selects its current position
According to the most-possible principle, the robot selects one position from its candidate position set as the current position of the robot.
Here, "most possible" means minimum distance, i.e. the robot finds from the candidate position set the point (Xk,Yk) closest to (xn,yn) and takes it as the position of the robot at the current time tn, that is:
(xn,yn)=(Xk,Yk), Dk=min{Di | i=1,2,...,k0},
where Di=√[(Xi−xn)²+(Yi−yn)²], i=1,2,...,k0, 1≤k≤k0.
Since the robots keep a safe distance from each other and the time interval is short enough, the situation where two candidates have the same shortest distance cannot occur.
It can be seen that the positioning method of the invention organically combines the robot and the video monitoring system: the local positioning equipment carried by the robot provides the predicted position, the video monitoring system covering the environment provides the candidate position set, and the robot determines its current position from the candidate position set using the most-possible criterion.
Since the global positioning data set provides only the position information of the mobile robots within the monitoring range, without indicating which robot each position belongs to, the method meets the real-time positioning requirements of the robots.
Moreover, since the positioning data comes from the global positioning data, the positioning data has no accumulated error.
It should be noted that the above embodiments do not limit the invention in any form; all technical solutions obtained by means of equivalent replacement or equivalent transformation fall within the scope of protection of the invention.
Claims (6)
1. A method of positioning a robot using a video monitoring system covering an environment, the video monitoring system comprising several monitoring equipments, each monitoring equipment covering a local environment which constitutes a mobile-robot local coordinate system, the local environments covered by adjacent video monitoring equipments partially overlapping, and the entire covered environment constituting a mobile-robot global coordinate system, characterized by comprising the following steps:
S1: the video monitoring equipment periodically shoots a color digital image of the monitoring area, the color digital image being represented in RGB24 mode;
S2: the video monitoring equipment converts the RGB24 image into an HSV image;
S3: the threshold set, under the HSV mode, of the color of the robot's red top bead marker is determined;
S4: the color image is converted into a black-and-white image;
S5: the white-pixel aggregation regions of all robots within the monitoring range are obtained;
S6: the position set of all robots within the monitoring range is determined and sent to each robot within that range.
2. The method of positioning a robot using a video monitoring system covering an environment according to claim 1, characterized in that, in step S1, the shooting interval of the video monitoring equipment is the time taken to complete step S1 through step S4.
3. The method of positioning a robot using a video monitoring system covering an environment according to claim 1, characterized in that, in step S5, the white-pixel aggregation regions of all robots within the monitoring range are obtained as follows:
(1) count the number of white pixels in each row, and count the number of white pixels in each column, where m is the number of pixels along the u axis of the image plane, n is the number of pixels along the v axis, i is the abscissa of a pixel in the image plane, and j is its ordinate;
(2) find all local maxima of W and all local maxima of H respectively; supposing W has m0 local maxima and H has n0 local maxima, the black-and-white image contains m0×n0 white-pixel aggregation regions, and the coordinates of the geometric center points of these regions constitute the set: R0={R(ik,jl) | ik∈Wmax, jl∈Hmax, k=1,2,...,m0, l=1,2,...,n0};
(3) calculate the size and shape of each white-pixel aggregation region; according to prior knowledge of the robot's red top bead marker, delete from R0 the coordinates of the geometric center points of the regions whose size and shape do not match that prior knowledge, obtaining the set R of the geometric centers of the white-pixel aggregation regions of all robots within the monitoring range:
R={R(ik,jk) | R(ik,jk)∈R0, k=1,2,...,k0},
where k0 is the number of robots in the monitoring area.
4. The method of positioning a robot using a video monitoring system covering an environment according to claim 1, characterized in that, in step S6, the position of a robot is determined as follows:
(1) compute the geometric center of each robot's white-pixel aggregation region, obtaining its pixel coordinate (ū, v̄) in the image-plane u-v coordinate system, where ū is the arithmetic mean of the u-axis projections of all white pixels in the region and v̄ is the arithmetic mean of their v-axis projections;
(2) calculate the coordinate (x, y) in the image-plane xoy coordinate system according to the following formula:
where u0, v0, Δu and Δv are known camera parameters;
(3) obtain the coordinate set of all robots in the image-plane xoy coordinate system:
Rxy={(xk,yk) | k=1,2,...,k0};
(4) from (xk,yk), obtain the robot's coordinate (Xk,Yk) in the monitoring device's local spatial coordinate system XGOGYG:
where h is the height of the robot's marker above the ground plane, H is the camera mounting height, γ is the pitch angle of the camera, and f is the focal length of the camera;
(5) obtain the coordinate set of all robots in the camera's local spatial coordinate system XGOGYG:
RXY={(Xk,Yk) | k=1,2,...,k0};
(6) convert RXY into world coordinates under the global coordinate system of the environment covered by the video monitoring system.
5. The method of positioning a robot using a video monitoring system covering an environment according to claim 4, characterized in that, in step S6, the video monitoring equipment sends the position of a robot to the robot as follows:
(1) the image acquisition and processing computer assembles the world-coordinate set into a data frame;
(2) the wireless data transmitter sends the data frame.
6. The method of positioning a robot using a video monitoring system covering an environment according to claim 5, characterized in that, in step S6, the data frame sent by the wireless data transmitter is composed as follows:
frame sync, monitoring device number, data count k0, data 1, ..., data m, checksum,
where the frame sync is the flag by which the wireless data receiver identifies the start of a frame; the monitoring device number indicates which device sent the data; the data count k0 gives the length information of the data frame; m is the number of valid data items and depends on the data format and k0; the checksum is the main basis on which the wireless data receiver checks the correctness of the received data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610065186.8A CN105554472B (en) | 2016-01-29 | 2016-01-29 | The method of the video monitoring system and its positioning robot of overlay environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105554472A CN105554472A (en) | 2016-05-04 |
CN105554472B true CN105554472B (en) | 2019-02-22 |
Family
ID=55833383
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610065186.8A Active CN105554472B (en) | 2016-01-29 | 2016-01-29 | The method of the video monitoring system and its positioning robot of overlay environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105554472B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107972027B (en) * | 2016-10-25 | 2020-11-27 | 深圳光启合众科技有限公司 | Robot positioning method and device and robot |
KR101963683B1 (en) * | 2018-03-28 | 2019-03-29 | 주식회사 로보메이션 | Instruction Implementing Method for Edge Driving Line Tracing Robot |
CN108710107A (en) * | 2018-05-18 | 2018-10-26 | 百年金海科技有限公司 | Robot Passive Location based on infrared laser and positioning video linked system |
CN109974704B (en) * | 2019-03-01 | 2021-01-08 | 深圳市智能机器人研究院 | Robot capable of calibrating global positioning and local positioning and control method thereof |
CN110362090A (en) * | 2019-08-05 | 2019-10-22 | 北京深醒科技有限公司 | A kind of crusing robot control system |
CN112697127B (en) * | 2020-11-26 | 2024-06-11 | 佛山科学技术学院 | Indoor positioning system and method |
CN113580197B (en) * | 2021-07-30 | 2022-12-13 | 珠海一微半导体股份有限公司 | Mobile robot jamming detection method, system and chip |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102914303A (en) * | 2012-10-11 | 2013-02-06 | 江苏科技大学 | Navigation information acquisition method and intelligent space system with multiple mobile robots |
CN103558856A (en) * | 2013-11-21 | 2014-02-05 | 东南大学 | Service mobile robot navigation method in dynamic environment |
CN104535047A (en) * | 2014-09-19 | 2015-04-22 | 燕山大学 | Multi-agent target tracking global positioning system and method based on video stitching |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100506533B1 (en) * | 2003-01-11 | 2005-08-05 | 삼성전자주식회사 | Mobile robot and autonomic traveling system and method thereof |
US7634336B2 (en) * | 2005-12-08 | 2009-12-15 | Electronics And Telecommunications Research Institute | Localization system and method of mobile robot based on camera and landmarks |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105554472B (en) | The method of the video monitoring system and its positioning robot of overlay environment | |
CN105716611A (en) | Environmental information-based indoor mobile robot and positioning method thereof | |
CN106054929B (en) | A kind of unmanned plane based on light stream lands bootstrap technique automatically | |
JP7263630B2 (en) | Performing 3D reconstruction with unmanned aerial vehicles | |
EP3347789B1 (en) | Systems and methods for detecting and tracking movable objects | |
CN101669144B (en) | Landmark for position determination of mobile robot and apparatus and method using it | |
CN109655825A (en) | Data processing method, device and the multiple sensor integrated method of Multi-sensor Fusion | |
CN108571971A (en) | A kind of AGV vision positioning systems and method | |
CN110174093A (en) | Localization method, device, equipment and computer readable storage medium | |
CN113870343B (en) | Relative pose calibration method, device, computer equipment and storage medium | |
CN109753076A (en) | A kind of unmanned plane vision tracing implementing method | |
Lima et al. | Omni-directional catadioptric vision for soccer robots | |
CN106017458B (en) | Mobile robot combined navigation method and device | |
CN110262507A (en) | A kind of camera array robot localization method and device based on 5G communication | |
CN113085896B (en) | Auxiliary automatic driving system and method for modern rail cleaning vehicle | |
CN110119698A (en) | For determining the method, apparatus, equipment and storage medium of Obj State | |
JPH03201110A (en) | Position azimuth detecting device for autonomous traveling vehicle | |
CN110033411A (en) | The efficient joining method of highway construction scene panoramic picture based on unmanned plane | |
CN102183250A (en) | Automatic navigation and positioning device and method for field road of agricultural machinery | |
CN109976339A (en) | A kind of vehicle-mounted Distribution itineration check collecting method and cruising inspection system | |
CN109213156A (en) | A kind of global guidance system and method for AGV trolley | |
CN114923477A (en) | Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology | |
US20200118329A1 (en) | Object responsive robotic navigation and imaging control system | |
CN113869422B (en) | Multi-camera target matching method, system, electronic device and readable storage medium | |
CN103076014B (en) | A kind of Work machine self-navigation three mark location device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||