CN109668551A - Robot localization method, apparatus and computer readable storage medium - Google Patents
- Publication number: CN109668551A (application number CN201710966574.8A)
- Authority
- CN
- China
- Prior art keywords
- robot
- beacon
- position coordinates
- sampling instant
- guidance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
Abstract
The invention discloses a robot localization method, apparatus, and computer-readable storage medium in the field of robotics. The method comprises: obtaining a target image captured at the current sampling instant by a camera mounted on a robot; and, when no fixed beacon is present in the target image but a guidance beacon is present, determining the robot's current position coordinates at the current sampling instant based on the guidance beacon and the position coordinates of the robot determined at the previous sampling instant. In other words, in the embodiments of the present invention, the robot need not be localized using a fixed beacon and a guidance beacon in combination: as long as either one is present in the target image, the robot can be localized. Thus, even if fixed beacons and guidance beacons are combined arbitrarily when arranged on the preset route, the robot can still be localized from whichever beacon it observes, the positioning method becomes more flexible, and the environmental adaptability of the robot is improved.
Description
Technical field
The present invention relates to the field of robotics, and in particular to a robot localization method, apparatus, and computer-readable storage medium.
Background technique
Currently, robots are widely used in fields such as warehouse logistics, mobile operations, and autonomous driving. In general, a working robot needs to move along a preset route. To prevent the robot from deviating from this route while it moves, the robot must be localized in real time.
In the related art, a strip is laid along the robot's preset route, and two-dimensional codes are placed on the strip such that the distance between every two adjacent two-dimensional codes is equal. In addition, the robot carries a camera and can communicate with a smart device. As the robot moves, the camera captures images in real time and sends them to the smart device. On receiving an image, the smart device localizes the robot by combining the two-dimensional code and the strip in the image and sends the resulting position information to the robot; the robot then adjusts its pose according to the received position information and the preset route, so that it continues to move along the preset route.
As can be seen from the above, the smart device can localize the robot only by combining the two-dimensional code and the strip in the image, and to guarantee that the camera captures both a two-dimensional code and the strip in a single image, the spacing between every two adjacent two-dimensional codes laid on the preset route must satisfy certain rules. The localization method in the related art is therefore inflexible in its application and adapts poorly to the environment.
Summary of the invention
To solve the problem that the robot localization method in the related art is inflexible in its application and adapts poorly to the environment, embodiments of the present invention provide a robot localization method, apparatus, and computer-readable storage medium. The technical solutions are as follows:
In a first aspect, a robot localization method is provided. The method comprises:
obtaining a target image captured at the current sampling instant by a camera mounted on a robot;
when no fixed beacon is present in the target image but a guidance beacon is present, determining the current position coordinates of the robot at the current sampling instant based on the guidance beacon and the position coordinates of the robot determined at the previous sampling instant;
wherein the fixed beacons and the guidance beacons are beacons arranged on the robot's preset route for localizing the robot, and each fixed beacon carries encoded information that encodes the coordinates of the position where that fixed beacon is located.
Optionally, after obtaining the target image captured at the current sampling instant by the camera mounted on the robot, the method further includes:
obtaining, from the target image, the image region located within the image recognition window of the current sampling instant, the image recognition window of the current sampling instant being determined from the position coordinates of the robot determined at the previous sampling instant;
when the image region contains a figure with a specified contour, determining that a fixed beacon is present in the target image;
when the image region contains a straight line but no figure with the specified contour, judging whether the width of the straight line is a preset width, and judging whether the color of the straight line is a preset color;
when the width of the straight line is the preset width and/or the color of the straight line is the preset color, determining that no fixed beacon is present in the target image and that a guidance beacon is present.
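The decision order described above can be sketched in code. This is a minimal illustration and not part of the claimed subject matter; the feature names, the preset width, and the preset color are all assumptions, and the upstream contour/line detection is represented only by a summary record:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical summary of what a shape detector found inside the
# image-recognition window; field names are illustrative.
@dataclass
class WindowFeatures:
    has_specified_contour: bool      # e.g. the circular or square contour of a fixed beacon
    line_width_px: Optional[float]   # width of a detected straight line, if any
    line_color: Optional[str]        # dominant color of that line, if any

PRESET_WIDTH_PX = 12.0   # assumed preset width
PRESET_COLOR = "yellow"  # assumed preset color

def classify_beacon(f: WindowFeatures) -> str:
    """Return 'fixed', 'guidance', or 'none' following the decision order above."""
    # A figure with the specified contour always indicates a fixed beacon.
    if f.has_specified_contour:
        return "fixed"
    # Otherwise, a straight line whose width and/or color matches the
    # preset values is taken as a guidance beacon.
    if f.line_width_px is not None:
        if f.line_width_px == PRESET_WIDTH_PX or f.line_color == PRESET_COLOR:
            return "guidance"
    return "none"
```

Note that the contour check takes priority: a fixed beacon is reported even if a line is also visible, matching the "no figure with the specified contour" precondition of the line test.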
Optionally, the position coordinates of the robot determined at the previous sampling instant were obtained from the fixed beacon in the image captured at the previous sampling instant;
alternatively, the position coordinates of the robot determined at the previous sampling instant were obtained from the guidance beacon in the image captured at the previous sampling instant.
Optionally, determining the current position coordinates of the robot at the current sampling instant based on the guidance beacon and the position coordinates of the robot determined at the previous sampling instant comprises:
calculating a first position coordinate of the robot based on the measured value collected at the current sampling instant by an encoder arranged on the robot and the position coordinates of the robot determined at the previous sampling instant;
determining a lateral deviation and an angular deviation of the robot based on the guidance beacon;
determining the measurement error of the encoder based on the first position coordinate, the lateral deviation, and the angular deviation;
correcting the first position coordinate by the measurement error of the encoder to obtain the current position coordinates of the robot at the current sampling instant.
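The correction step can be sketched as follows. The patent does not specify how the measurement error is turned into a pose correction, so the proportional model, the sign conventions, and the `gain` parameter below are assumptions for illustration only:

```python
import math

def correct_pose(first_pose, lateral_dev, angular_dev, gain=1.0):
    """Correct an encoder-predicted pose using the guidance-beacon deviations.

    first_pose:  (x, y, theta) from encoder dead reckoning (the "first
                 position coordinate"), theta in radians.
    lateral_dev: signed perpendicular offset of the robot from the
                 guidance line, in meters.
    angular_dev: signed angle between the robot heading and the line,
                 in radians.
    The deviations are treated as the observed error of the prediction
    and a fraction `gain` of each is removed."""
    x, y, theta = first_pose
    # remove the heading error first
    theta_c = theta - gain * angular_dev
    # then shift the position perpendicular to the corrected heading
    x_c = x + gain * lateral_dev * math.sin(theta_c)
    y_c = y - gain * lateral_dev * math.cos(theta_c)
    return (x_c, y_c, theta_c)
```

With `gain=1.0` the corrected pose lies exactly on the guidance line; a smaller gain blends the encoder prediction with the beacon observation.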
Optionally, calculating the first position coordinate of the robot based on the measured value collected at the current sampling instant by the encoder arranged on the robot and the position coordinates of the robot determined at the previous sampling instant comprises:
calculating, based on the measured values collected by the encoder at the current sampling instant and at the previous sampling instant, a first wheel roll amount and a second wheel roll amount of the robot between the previous sampling instant and the current sampling instant;
calculating the first position coordinate of the robot based on the first wheel roll amount, the second wheel roll amount, and the position coordinates of the robot determined at the previous sampling instant.
Optionally, determining the lateral deviation and the angular deviation of the robot based on the guidance beacon comprises:
obtaining a first endpoint coordinate and a second endpoint coordinate of the guidance beacon in the target image;
determining, based on the first and second endpoint coordinates, the perpendicular distance from the center point of the target image to the guidance beacon, and determining the lateral deviation of the robot based on that distance;
determining, based on the first and second endpoint coordinates, the angle between the longitudinal center line of the target image and the guidance beacon, and determining the angular deviation of the robot based on that angle.
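The geometry of this step can be sketched directly from the two endpoints. The conversion from image pixels to metric deviations (camera scale and mounting calibration) is omitted here and would be needed in practice; the sign conventions are assumptions:

```python
import math

def deviations_from_line(p1, p2, image_size):
    """Image-space lateral and angular deviation from the guidance line.

    p1, p2:     (x, y) endpoint coordinates of the guidance line in the image.
    image_size: (width, height) of the target image in pixels.
    Returns (lateral, angular): the signed perpendicular distance in pixels
    from the image center to the line, and the angle in radians between
    the line and the image's longitudinal (vertical) center line."""
    w, h = image_size
    cx, cy = w / 2.0, h / 2.0                  # center point of the target image
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    # signed perpendicular distance via the 2D cross product
    lateral = ((x1 - cx) * dy - (y1 - cy) * dx) / math.hypot(dx, dy)
    # angle relative to the vertical axis: zero when the line runs straight ahead
    angular = math.atan2(dx, dy)
    return lateral, angular
```

A line running exactly through the image center and parallel to the longitudinal center line yields `(0.0, 0.0)`, i.e. the robot sits centered on and aligned with the guidance beacon.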
Optionally, after determining the current position coordinates of the robot at the current sampling instant, the method further includes:
predicting the position coordinates of the robot at the next sampling instant based on the current position coordinates of the robot and preset position coordinates, the preset position coordinates being the end-point coordinates at which the pre-set robot task terminates;
determining the image recognition window of the next sampling instant based on the predicted position coordinates of the next sampling instant.
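A minimal sketch of this prediction step follows. The patent only states that the window is derived from the predicted position; the constant per-sample step length, the square window, and all parameter names are assumptions:

```python
def predict_next_window(current_xy, goal_xy, step, window_size):
    """Predict the next-sampling-instant position by stepping toward the
    task end point, then center the image recognition window on it.

    current_xy:  (x, y) current position coordinates of the robot.
    goal_xy:     (x, y) preset end-point coordinates of the task.
    step:        assumed distance traveled per sampling period.
    window_size: assumed side length of the square recognition window.
    All values share one planar coordinate frame."""
    cx, cy = current_xy
    gx, gy = goal_xy
    dx, dy = gx - cx, gy - cy
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= step:                      # end point reached within one step
        nx, ny = gx, gy
    else:                                 # move `step` along the direction of the end point
        nx, ny = cx + step * dx / dist, cy + step * dy / dist
    half = window_size / 2.0
    window = (nx - half, ny - half, nx + half, ny + half)  # (xmin, ymin, xmax, ymax)
    return (nx, ny), window
```

Restricting the beacon search to this window is what keeps the per-frame scan cost low while the beacon remains likely to fall inside it.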
In a second aspect, a robot localization apparatus is provided. The apparatus includes:
a first obtaining module, configured to obtain a target image captured at the current sampling instant by a camera mounted on a robot;
a first determining module, configured to, when no fixed beacon is present in the target image but a guidance beacon is present, determine the current position coordinates of the robot at the current sampling instant based on the guidance beacon and the position coordinates of the robot determined at the previous sampling instant;
wherein the fixed beacons and the guidance beacons are beacons arranged on the robot's preset route for localizing the robot, and each fixed beacon carries encoded information that encodes the coordinates of the position where that fixed beacon is located.
Optionally, the apparatus further includes:
a second obtaining module, configured to obtain from the target image the image region located within the image recognition window of the current sampling instant, the image recognition window of the current sampling instant being determined from the position coordinates of the robot determined at the previous sampling instant;
a second determining module, configured to determine that a fixed beacon is present in the target image when the image region contains a figure with a specified contour;
a judging module, configured to, when the image region contains a straight line but no figure with the specified contour, judge whether the width of the straight line is a preset width and whether the color of the straight line is a preset color;
a third determining module, configured to determine that no fixed beacon is present in the target image and that a guidance beacon is present when the width of the straight line is the preset width and/or the color of the straight line is the preset color.
Optionally, the position coordinates of the robot determined at the previous sampling instant were obtained from the fixed beacon in the image captured at the previous sampling instant;
alternatively, the position coordinates of the robot determined at the previous sampling instant were obtained from the guidance beacon in the image captured at the previous sampling instant.
Optionally, the first determining module includes:
a calculating submodule, configured to calculate the first position coordinate of the robot based on the measured value collected at the current sampling instant by an encoder arranged on the robot and the position coordinates of the robot determined at the previous sampling instant;
a first determining submodule, configured to determine the lateral deviation and the angular deviation of the robot based on the guidance beacon;
a second determining submodule, configured to determine the measurement error of the encoder based on the first position coordinate, the lateral deviation, and the angular deviation;
a correcting submodule, configured to correct the first position coordinate by the measurement error of the encoder to obtain the current position coordinates of the robot at the current sampling instant.
Optionally, the calculating submodule is specifically configured to:
calculate, based on the measured values collected by the encoder at the current sampling instant and at the previous sampling instant, the first wheel roll amount and the second wheel roll amount of the robot between the previous sampling instant and the current sampling instant;
calculate the first position coordinate of the robot based on the first wheel roll amount, the second wheel roll amount, and the position coordinates of the robot determined at the previous sampling instant.
Optionally, the first determining submodule is specifically configured to:
obtain the first endpoint coordinate and the second endpoint coordinate of the guidance beacon in the target image;
determine, based on the first and second endpoint coordinates, the perpendicular distance from the center point of the target image to the guidance beacon, and determine the lateral deviation of the robot based on that distance;
determine, based on the first and second endpoint coordinates, the angle between the longitudinal center line of the target image and the guidance beacon, and determine the angular deviation of the robot based on that angle.
Optionally, the apparatus further includes:
a prediction module, configured to predict the position coordinates of the robot at the next sampling instant based on the current position coordinates of the robot and preset position coordinates, the preset position coordinates being the end-point coordinates at which the pre-set robot task terminates;
a fourth determining module, configured to determine the image recognition window of the next sampling instant based on the predicted position coordinates of the next sampling instant.
In a third aspect, a robot localization apparatus is provided. The apparatus includes:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform any one of the methods described in the first aspect.
In a fourth aspect, a computer-readable storage medium is provided. The storage medium stores a computer program which, when executed by a processor, implements any one of the methods described in the first aspect.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects: the target image captured at the current sampling instant by the camera mounted on the robot is obtained, and if no fixed beacon is present in the target image but a guidance beacon is present, the current position coordinates of the robot at the current sampling instant can be determined based on the guidance beacon. In other words, in the embodiments of the present invention, the robot need not be localized using a fixed beacon and a guidance beacon in combination: as long as either one is present in the target image, the robot can be localized. Thus, even if fixed beacons and guidance beacons are combined arbitrarily when arranged on the preset route, the robot can still be localized from whichever beacon it observes, the positioning method becomes more flexible, and the environmental adaptability of the robot is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1A is an application scenario diagram of a robot localization method provided by an embodiment of the present invention;
Figure 1B is a system architecture diagram of a robot localization method provided by an embodiment of the present invention;
Figure 2A is a flowchart of a robot localization method provided by an embodiment of the present invention;
Figure 2B is a flowchart of another robot localization method provided by an embodiment of the present invention;
Figure 3A is a structural schematic diagram of a robot localization apparatus provided by an embodiment of the present invention;
Figure 3B is a structural schematic diagram of a robot localization apparatus provided by an embodiment of the present invention;
Figure 3C is a structural schematic diagram of a first determining module provided by an embodiment of the present invention;
Figure 3D is a structural schematic diagram of a robot localization apparatus provided by an embodiment of the present invention;
Figure 4 is a structural schematic diagram of a robot localization apparatus provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Before the embodiments of the present invention are explained in detail, their application scenarios are first introduced. Currently, robots are widely used in fields such as warehouse logistics, autonomous driving, and mobile operations. For example, in warehouse logistics, a robot can move along a preset route to transport or pick goods; in autonomous driving, a robot can be mounted on a vehicle to detect whether the vehicle deviates from a preset route. Whether the robot is used in warehouse logistics or in autonomous driving, it must move along a preset route while it works. In general, to prevent the robot from deviating from the preset route while moving, beacons can be arranged on the preset route so that the robot can be localized in real time. For example, for a cargo-carrying robot in warehouse logistics, a continuous strip is usually laid from the start point to the end point of the robot's preset route, and fixed beacons such as two-dimensional codes or one-dimensional codes are arranged at intervals on the strip, so that the robot is localized in real time and can move accurately along the preset route. The robot localization method provided by the embodiments of the present invention can be applied to warehouse logistics transport as described above, or to any other scenario in which movement follows a preset route, to achieve real-time localization of the robot.
Specifically, Figure 1A is an application scenario diagram of a robot localization method provided by an embodiment of the present invention. As shown in Figure 1A, the scenario includes multiple fixed beacons 001 and multiple guidance beacons 002, laid in advance on the ground along the preset route that the robot will travel. The start point of the preset route is O1 and its end point is O2. The fixed beacons 001 and the guidance beacons 002 can be arranged at intervals; the distance between every two adjacent fixed beacons 001 may or may not be equal, and the guidance beacons 002 are laid on the center line of the preset route. The multiple guidance beacons 002 can be straight strips having the same width and/or the same color. The multiple fixed beacons 001 are beacons carrying encoded information and can share the same contour, where the encoded information of each fixed beacon includes the coordinates of the position where that fixed beacon is located. For example, the multiple fixed beacons 001 can all be two-dimensional codes with a circular contour, or one-dimensional codes with a square contour, and so on.
After the application scenario of the embodiments of the present invention has been introduced, the system architecture involved in the embodiments is introduced next. Figure 1B is a system architecture diagram of a robot localization method provided by an embodiment of the present invention. As shown in Figure 1B, the system architecture includes a robot 101, a camera 102, and a smart device 103, where the robot 101 can communicate with the camera 102 and the smart device 103, and the camera 102 and the smart device 103 can also communicate with each other.
In the embodiments of the present invention, the camera 102 can be mounted on the bottom of the robot 101 and moves together with the robot 101. While the robot moves, the camera 102 can capture ground images in real time and send the captured images to the robot 101 or to the smart device 103. If the camera 102 sends a captured image to the robot 101, the robot 101 can forward the image to the smart device 103. When the smart device 103 receives the image captured by the camera at the current instant, it can process the image and thereby localize the robot. Afterwards, the smart device 103 can send the position information to the robot 101, so that the robot 101 adjusts its position and attitude according to the position information. In other words, the robot localization method provided by the embodiments of the present invention can be performed by the smart device 103.
It should be noted that the smart device 103 can be an industrial computer, an industrial personal computer, or another smart device with image recognition and positioning computation capabilities.
In another exemplary system architecture provided by the present invention, only the robot 101 and the camera 102 are included, without the smart device 103. In this case, after capturing an image, the camera 102 can send it directly to the robot 101; the robot 101 determines the position information from the image and controls its movement according to that information. In other words, the robot localization method provided by the embodiments of the present invention can also be performed by the robot 101.
It should be noted that the positional relationship between the robot 101 and the camera 102 in Figure 1B does not represent their actual positional relationship in practice; Figure 1B merely illustrates the composition of the system architecture and the connections between its components.
Having introduced the application scenario and system architecture of the embodiments of the present invention, the robot localization method provided by the embodiments is now explained in detail.
Figure 2A is a flowchart of a robot localization method provided by an embodiment of the present invention. The method can be applied in the application scenario shown in Figure 1A, and, as follows from the two system architectures described above, the method can be performed by a smart device or by a robot. Referring to Figure 2A, the method includes the following steps:
Step 201a: obtain the target image captured at the current sampling instant by the camera mounted on the robot.
In the embodiments of the present invention, the camera can be mounted at the center of the bottom of the robot and can capture images at a certain sampling period. By processing the target image in the subsequent steps, the smart device or the robot localizes the robot.
Step 202a: when no fixed beacon is present in the target image but a guidance beacon is present, determine the current position coordinates of the robot at the current sampling instant based on the guidance beacon and the position coordinates of the robot determined at the previous sampling instant.
Here, the fixed beacons and the guidance beacons are beacons arranged on the robot's preset route for localizing the robot, and each fixed beacon carries encoded information that encodes the coordinates of the position where that fixed beacon is located.
It should be noted that a fixed beacon can be a geometric shape such as a circle or a square, and a guidance beacon can be a straight line with a certain width. When the fixed beacons and guidance beacons are arranged, the two can be combined arbitrarily: on the robot's preset route, the interval between every two adjacent fixed beacons may be the same or may differ.
In addition, the position coordinates of the robot determined at the previous sampling instant may have been obtained from the fixed beacon in the image captured at the previous sampling instant, or from the guidance beacon in that image.
In the embodiments of the present invention, the robot or the smart device obtains the target image captured by the camera mounted on the robot at the current sampling instant. If no fixed beacon is present in the target image but a guidance beacon is present, the current position coordinates of the robot at the current sampling instant can be determined based on the guidance beacon. In other words, in the embodiments of the present invention, the robot need not be localized using a fixed beacon and a guidance beacon in combination: as long as either one is present in the target image, the robot can be localized. Thus, even if fixed beacons and guidance beacons are combined arbitrarily when arranged on the preset route, the robot can still be localized from whichever beacon it observes, the positioning method becomes more flexible, and the environmental adaptability of the robot is improved.
Figure 2B is a flowchart of a robot localization method provided by an embodiment of the present invention. The method can be applied in the application scenario shown in Figure 1A, and, as follows from the two system architectures described above, the method can be performed by a smart device or by a robot; in this embodiment of the present invention it is explained with the robot as the executing entity. Referring to Figure 2B, the method includes the following steps:
Step 201b: obtain the target image captured at the current sampling instant by the camera mounted on the robot.
In the embodiments of the present invention, the camera can be mounted at the center of the bottom of the robot and captures images in real time as the robot moves. Under normal circumstances, the camera can capture 50 frames per second, and whenever the camera captures a frame, it can send that frame to the robot.
Step 202b: judge whether a fixed beacon or a guidance beacon is present in the target image.
After the robot receives the target image captured by the camera at the current sampling instant, it can scan the whole target image to detect whether a fixed beacon or a guidance beacon is present. Of course, since scanning the whole target image is relatively time-consuming, the robot can instead set an image recognition window and scan only the partial image region of the target image located within that window, to detect whether a fixed beacon or a guidance beacon is present in that partial region. The image recognition window is determined as follows: after the position coordinates of the robot were determined at the previous sampling instant, the position coordinates of the robot at the current sampling instant were predicted, and the window was determined from the predicted position. Because the image recognition window of the current sampling instant is determined from the robot position predicted at the previous sampling instant, if the robot has continued to move along the preset route since the previous sampling instant, then even if its path deviates slightly, the probability that the fixed beacon or guidance beacon appears within the image recognition window of the current target image remains the highest. Scanning and recognizing only the partial image region defined by the window therefore preserves recognition accuracy while shortening recognition time and improving image recognition efficiency.
When the robot scans the partial region of the target image through the image recognition window, it first extracts from the target image the image region located within the image recognition window of the current sampling instant. The robot then detects whether that image region contains a figure with the specified contour; if it does, the robot determines that a fixed beacon is present in the target image. If the image region does not contain a figure with the specified contour but does contain a straight line, the robot judges whether the width of the line is the preset width and whether the color of the line is the preset color. When the width of the line is the preset width and/or its color is the preset color, the robot determines that no fixed beacon is present in the target image but a guidance beacon is.
Since the fixed beacon encodes the coordinate of its own position, and the camera is located at the center of the bottom of the robot, the captured target image is exactly the ground image of the robot's current position. In other words, as long as a fixed beacon is detected, the robot is currently at the position of that fixed beacon, and the robot can directly determine its own current position coordinates from the coordinate of the fixed beacon's position. Therefore, after obtaining the image region corresponding to the image recognition window, the robot first scans that region for a fixed beacon. Since the contour of the fixed beacon can be circular, square, or another shape, the robot can process the image region during recognition and detect whether the processed region contains the specified contour: when the preset contour of the fixed beacon is a circle, the specified contour is a circle; when the preset contour of the fixed beacon is a square, the specified contour is correspondingly a square.
When the robot detects that the image region contains the specified contour, it determines that a fixed beacon is present in the target image; otherwise, it determines that no fixed beacon is present. If no fixed beacon is present in the target image, the robot cannot rely on a fixed beacon for positioning, and it must instead search the target image for a guidance beacon.
When the robot searches for a guidance beacon in the target image, it can detect whether a straight line is present in the image region corresponding to the image recognition window. After a line is detected, that line may not be the pre-laid guidance beacon. Therefore, to exclude interference and ensure that the detected line really is the guidance beacon, the robot determines the width of the line in the target image. Since width in the image is expressed in pixels, the robot converts it into a real-world width using the relationship, determined when the camera was calibrated, between image pixels and actual physical distance. The robot then judges whether the converted width equals the preset width, where the preset width is the real-world width of the guidance beacon. If it is not the preset width, the line is not the pre-laid guidance beacon; if it is the preset width, the line is the pre-laid guidance beacon, and the robot can determine that a guidance beacon is present in the target image.
Optionally, besides judging whether the line is the guidance beacon by checking whether its width is the preset width, the robot can also judge whether the line is the guidance beacon by checking whether its color is the preset color, where the preset color is the color of the pre-laid guidance beacon. Of course, to further improve accuracy, the robot can combine the two checks: it judges both whether the width of the line is the preset width and whether the color of the line is the preset color, and only when the width is the preset width and the color is the preset color does it determine that the line is the guidance beacon.
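The decision logic above (specified contour means a fixed beacon; otherwise a line whose width and color both match the preset values means a guidance beacon) can be sketched as follows. This is only an illustrative reading, not the patent's implementation: the detection primitives (contour matching, line extraction, width conversion) are assumed to run elsewhere and their results are passed in, and the stricter combined width-and-color variant is used. The `width_tol` tolerance is an added assumption.

```python
def classify_beacon(has_specified_contour, line_width, line_color,
                    preset_width, preset_color, width_tol=0.5):
    """Classify one image region as 'fixed', 'guidance', or None.

    line_width is the detected line's real-world width (None if no line
    was found); preset_width/preset_color describe the pre-laid guidance
    beacon.
    """
    if has_specified_contour:       # circular/square outline -> fixed beacon
        return "fixed"
    if line_width is None:          # no straight line detected at all
        return None
    # A detected line counts as the guidance beacon only when BOTH its
    # real-world width and its color match the preset values.
    if abs(line_width - preset_width) <= width_tol and line_color == preset_color:
        return "guidance"
    return None                     # interference line, not a beacon
```

A line of roughly the right width but the wrong color is rejected, which is exactly the interference-exclusion purpose of the combined check.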
It should be noted that when the robot has just started working, there is no image recognition window obtained from a determination at a previous sampling instant. Therefore, the robot can perform a whole scan of the first received frame to detect whether a fixed beacon or a guidance beacon is present. Of course, for the image acquired at every sampling instant, the robot can equally perform a whole scan of the image to detect whether it contains a fixed beacon or a guidance beacon.
In addition, in the embodiment of the present invention, the robot can start working from the starting point of the default route, or from any other fixed beacon on the default route. When the robot starts working, it can perform initialization registration at the fixed beacon where it is currently located: it collects, through the camera, an image containing the fixed beacon and reads the position coordinate encoded in the fixed beacon, so as to perform initial positioning of the robot.
If the above method determines that a fixed beacon is present in the target image, the robot performs positioning through step 203b; if it determines that no fixed beacon is present in the target image but a guidance beacon is, the robot performs positioning through step 204b.
Step 203b: When a fixed beacon is present in the target image, determine the current position coordinates of the robot at the current sampling instant based on the fixed beacon. The fixed beacon is a pre-arranged beacon carrying encoded information, and the encoded information includes an encoding of the coordinate of the fixed beacon's position.

When it is determined that a fixed beacon is present in the target image, the robot reads the encoded information in the fixed beacon and decodes it to obtain the coordinate of the fixed beacon's position. As described above, since the camera is mounted at the center of the bottom of the robot, the target image taken at the current sampling instant is exactly the ground image of the robot's current position. Therefore, once the coordinate of the fixed beacon's position is decoded, that coordinate can be determined as the current position coordinates of the robot.
Optionally, the encoded information of the fixed beacon may include not only the coordinate of the fixed beacon's position but also other specific position or beacon information, or letter-designated map information. For example, the encoded information may include the name of the road section where the fixed beacon is located; alternatively, letter designations may be preset for regions to which the fixed beacon's position belongs, such as extreme terrain, room numbers, or floors. For instance, if the terrain at the fixed beacon's position is extreme terrain, the encoded information may also include a letter designation indicating the terrain at that location. It may likewise include a letter designation of the region to which the fixed beacon's position belongs; for example, when the fixed beacon is located on the second floor and the letter designation corresponding to the second floor is B, the encoded information may include that letter designation. In addition, the encoding in the fixed beacon can be a two-dimensional code, a one-dimensional code, or the like, which is not limited in the embodiment of the present invention.

It should be noted that the above are merely examples of information that the encoded information may include, enumerated according to practical applications; more information can be flexibly added to the encoded information as needed, increasing the flexibility and adaptability of the application.
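As a minimal sketch of the decoding step, the payload read from the beacon's code could be parsed as below. The patent only states that the code contains the beacon's coordinate plus optional extras (section name, floor letter, terrain designation); this concrete `key=value;...` field layout and the field names `x`, `y` are illustrative assumptions, not the patent's format.

```python
def decode_beacon_payload(payload):
    """Parse a hypothetical 'key=value;...' beacon payload.

    Returns the beacon's position (x, y) plus a dict of any extra
    fields (floor letter, section name, ...). The field layout here is
    an assumption for illustration only.
    """
    fields = dict(item.split("=", 1) for item in payload.split(";") if item)
    x, y = float(fields["x"]), float(fields["y"])       # beacon coordinate
    extras = {k: v for k, v in fields.items() if k not in ("x", "y")}
    return (x, y), extras
```

Under the ground-facing camera assumption, the returned `(x, y)` can be taken directly as the robot's current position coordinates.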
Step 204b: When no fixed beacon is present in the target image but a guidance beacon is, determine the current position coordinates of the robot based on the position coordinates of the robot determined at the previous sampling instant and the guidance beacon.

When it is determined that no fixed beacon is present in the target image but a guidance beacon is, the robot determines its current position coordinates according to the position coordinates of the robot determined at the previous sampling instant and the guidance beacon, through the following sub-steps.
2041: Calculate the first position coordinate of the robot based on the measured value collected at the current sampling instant by the encoder arranged on the robot and the position coordinates of the robot determined at the previous sampling instant.

Specifically, based on the measured value collected by the encoder at the current sampling instant and the measured value collected at the previous sampling instant, the robot calculates the first wheel roll amount and the second wheel roll amount of the robot between the previous sampling instant and the current sampling instant; then, based on the first wheel roll amount, the second wheel roll amount, and the position coordinates of the robot determined at the previous sampling instant, it calculates the first position coordinate of the robot.
In detail, when the guidance beacon is detected, the robot can obtain the measured value collected by the encoder at the current sampling instant and the measured value collected at the previous sampling instant, where the measured value collected at the current sampling instant is the moving distance of the robot up to the current sampling instant, and the measured value collected at the previous sampling instant is the moving distance of the robot up to the previous sampling instant. Moreover, since the moving distances of the two wheels of the robot may differ, each moving distance includes a first moving distance and a second moving distance, where the first moving distance can be the moving distance of the left wheel of the robot and the second moving distance can be the moving distance of the right wheel. By subtracting the moving distance of the previous sampling instant from the moving distance of the current sampling instant, the roll amounts of the two wheels from the previous sampling instant to the current sampling instant, namely the first wheel roll amount and the second wheel roll amount, are obtained.
After the first wheel roll amount and the second wheel roll amount are calculated, the robot can calculate the location variation and angle variation of the robot from the first wheel roll amount and the second wheel roll amount through the following formula (1), and then, according to the location variation, the angle variation, and the position coordinates of the robot determined at the previous sampling instant, calculate the first position coordinate of the robot through formulas (2) and (3):

θ_m = norm(θ_{m-1} + d_θ)  (3)

where d_s is the location variation, d_θ is the angle variation, d_1 is the first wheel roll amount, d_2 is the second wheel roll amount, and B is the spacing between the two wheels of the robot; (X_m, Y_m, θ_m) is the first position coordinate, and (X_{m-1}, Y_{m-1}, θ_{m-1}) is the position coordinates of the robot determined at the previous sampling instant.
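Formulas (1) and (2) are not reproduced in the text above, so the sketch below fills them in with the standard differential-drive kinematics that is consistent with the variables defined for formula (3); treat the assumed forms of (1) and (2) as a plausible reconstruction rather than the patent's exact equations.

```python
import math

def norm_angle(a):
    """Wrap an angle to (-pi, pi], matching the norm(.) in formula (3)."""
    return math.atan2(math.sin(a), math.cos(a))

def odometry_update(x, y, theta, d1, d2, B):
    """Dead-reckoned first position coordinate from the two wheel roll
    amounts d1, d2 over one sampling interval, wheel spacing B."""
    ds = (d1 + d2) / 2.0        # location variation d_s   -- assumed form of (1)
    dtheta = (d2 - d1) / B      # angle variation d_theta  -- assumed form of (1)
    # midpoint-heading position update                     -- assumed form of (2)
    x_m = x + ds * math.cos(theta + dtheta / 2.0)
    y_m = y + ds * math.sin(theta + dtheta / 2.0)
    theta_m = norm_angle(theta + dtheta)                   # formula (3)
    return x_m, y_m, theta_m
```

For equal wheel roll amounts the robot moves straight ahead with an unchanged heading; for opposite roll amounts it turns in place, which matches the intended roles of d_s and d_θ.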
2042: Determine the lateral deviation and angular deviation of the robot based on the guidance beacon.

In the embodiment of the present invention, since the camera is located at the center of the robot, for the target image collected by the camera, the central point of the target image corresponds to the current position coordinates of the robot, and the direction of the longitudinal centre line of the target image is the current moving direction of the robot. If the robot has currently deviated from the default route, the guidance beacon in the captured target image will be inclined relative to the longitudinal centre line of the target image. At this point, the robot can determine the vertical distance from the central point of the target image to the guidance beacon and the angle of the guidance beacon relative to the longitudinal centre line, determine the lateral deviation based on the vertical distance, and determine the angular deviation of the robot based on the angle.
When the robot determines the vertical distance and determines the lateral deviation based on it, the robot can obtain the first endpoint coordinate, the second endpoint coordinate, and the midpoint coordinate of the guidance beacon, where the first endpoint coordinate is the position coordinate of one endpoint of the guidance beacon in the target image, the second endpoint coordinate is the position coordinate of the other endpoint of the guidance beacon in the target image, and the midpoint coordinate is the position coordinate of the midpoint of the guidance beacon in the target image. It should be noted that when the width of the guidance beacon in the target image cannot be ignored, the first endpoint coordinate, second endpoint coordinate, and midpoint coordinate can be determined from the central axis of the guidance beacon. After obtaining the first endpoint coordinate, second endpoint coordinate, and midpoint coordinate, the robot can determine the vertical distance based on them. Since this vertical distance is actually a distance within the target image, the robot also needs to convert it into an actual vertical distance according to the relationship, determined at camera calibration, between image pixels and actual physical distance; the vertical distance obtained after conversion is the lateral deviation of the robot.
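The in-image distance and its conversion to a real-world lateral deviation can be sketched as below. The patent's formulas (4)-(7) additionally apply a coordinate conversion matrix R and a de-distortion coefficient k, which are not reproduced in the text; this sketch collapses the whole calibration into a single metres-per-pixel scale, so it is an assumption-laden simplification, not the patented computation.

```python
import math

def perpendicular_px(cx, cy, x1, y1, x2, y2):
    """Perpendicular distance in pixels from the image centre (cx, cy)
    to the line through the two detected endpoints of the guidance
    beacon (its central axis when the line has non-negligible width)."""
    num = abs((y2 - y1) * cx - (x2 - x1) * cy + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def lateral_deviation(cx, cy, p1, p2, metres_per_pixel):
    """Convert the in-image perpendicular distance into a real-world
    lateral deviation using one calibration scale (an assumption that
    stands in for the patent's R matrix and de-distortion factor k)."""
    d_px = perpendicular_px(cx, cy, p1[0], p1[1], p2[0], p2[1])
    return d_px * metres_per_pixel
```

With a calibrated scale of, say, 2 mm per pixel, a 10-pixel offset of the guidance line from the image centre corresponds to a 2 cm lateral deviation.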
Specifically, after the robot obtains the first endpoint coordinate, the second endpoint coordinate, and the midpoint coordinate, it can calculate the components of the vertical distance on the x-axis and y-axis of the image coordinate system through the following formulas (4) and (5), respectively, and convert the vertical distance into the actual lateral deviation through the following formula (6):

where IF_x is the component on the x-axis of the vertical distance from the central point of the target image to the guidance beacon, IF_y is the component on the y-axis of that vertical distance, (L_{x1}, L_{y1}) is the first endpoint coordinate, (IC_x, IC_y) is the midpoint coordinate of the guidance beacon, dL_x is the component of the lateral deviation on the x-axis of the world coordinate system, dL_y is the component of the lateral deviation on the y-axis of the world coordinate system, R is a preset coordinate conversion matrix, k is a preset de-distortion coefficient, and K is a value determined from the first endpoint coordinate and the second endpoint coordinate; specifically, K can be calculated through the following formula (7), where (L_{x2}, L_{y2}) is the second endpoint coordinate.
When the robot determines the angle of the guidance beacon relative to the longitudinal centre line and determines the angular deviation based on that angle, it can obtain the first endpoint coordinate and the second endpoint coordinate of the guidance beacon together with the stored angular deviation determined at the previous sampling instant, and then, based on these, calculate the angular deviation of the robot at the current sampling instant through the following formula (8):

where θ_L is the angular deviation and θ_{L-1} is the angular deviation determined at the previous sampling instant.

Optionally, in order to reduce computational complexity, the robot can also directly determine the angle of the guidance beacon relative to the longitudinal centre line as the angular deviation of the robot.
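The reduced-complexity option just mentioned can be sketched directly. The recursive formula (8), which blends in the previous sampling instant's deviation, is not reproduced in the text, so this sketch implements only the simpler alternative; the image-axis convention (longitudinal centre line along +y) is an assumption.

```python
import math

def angular_deviation(x1, y1, x2, y2):
    """Angle (radians) of the detected guidance line, given by its two
    endpoint coordinates in the image, relative to the image's
    longitudinal centre line (assumed to point along +y).

    Zero means the robot is heading along the guidance beacon; a
    positive/negative value indicates a heading deviation to one side.
    """
    return math.atan2(x2 - x1, y2 - y1)
```

A vertical line in the image yields zero angular deviation, while a line at 45 degrees to the centre line yields pi/4, as expected.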
2043: Determine the measurement error of the encoder based on the first position coordinate, the lateral deviation, and the angular deviation, and correct the first position coordinate through the measurement error of the encoder, to obtain the current position coordinates of the robot at the current sampling instant.

After the lateral deviation and angular deviation have been determined, the robot can compare the lateral deviation and angular deviation calculated from the image with the first position coordinate of the robot obtained from the encoder measurement, so as to obtain the measurement error of the encoder, and then use that measurement error to correct the first position coordinate calculated from the encoder measurement.

Specifically, after determining the measurement error, the robot can correct the location variation d_s and the angle variation d_θ in the aforementioned step 2041 and, according to the corrected location variation and angle variation, recalculate the position coordinate through formulas (2) and (3). The recalculated position coordinate is the corrected first position coordinate, that is, the current position coordinates of the robot at the current sampling instant.
In the embodiment of the present invention, correcting the first position coordinate through the measurement error removes the error, caused by the system, mechanical parameters, and the like, that may arise when the first position coordinate is calculated from the encoder measurement; determining the corrected first position coordinate as the current position coordinates thus improves the accuracy of the current position coordinates. In addition, if the robot slips during motion, the robot can also correct, according to the lateral deviation and the angular deviation, the measurement error of the inertial navigation sensor arranged on the robot, and determine the current position coordinates of the robot according to the corrected measurement error combined with the obtained moving distance. In this way, even if the robot is subject to environmental disturbance during movement, such as slipping on a slope or jolting from an impact, it can still be corrected under the guidance of the guidance beacon, which improves the environmental adaptability and robustness of the robot.
Step 205b: Predict the position coordinates of the robot at the next sampling instant based on the current position coordinates of the robot and a preset position coordinate, and determine the image recognition window of the next sampling instant based on the predicted position coordinates. The preset position coordinate is the pre-set end-point coordinate at which the robot's task ends.

After the robot determines its current position coordinates, it can adjust and control its moving direction according to the preset position coordinate, so that the robot continues to move along the default route towards the preset position coordinate. After the moving direction has been adjusted, the robot can predict the position coordinates of the robot at the next sampling instant from data such as the moving direction and the current moving speed.

After predicting the position coordinates of the next sampling instant, the robot can set the image recognition window of the next sampling instant according to those coordinates. Specifically, the robot can determine the lateral difference between the lateral coordinate in the position coordinates of the next sampling instant and the lateral coordinate in the current position coordinates, and the longitudinal difference between the longitudinal coordinate in the position coordinates of the next sampling instant and the longitudinal coordinate in the current position coordinates. The robot can then convert the lateral difference and the longitudinal difference into a horizontal pixel difference and a vertical pixel difference in the image, respectively, and move the image recognition window of the current instant in the opposite direction according to the horizontal pixel difference and the vertical pixel difference, to obtain the image recognition window of the next sampling instant. Since the image recognition window of the next sampling instant is determined after the robot determines its current position coordinates and predicts the position coordinates of the next sampling instant according to the corrected moving direction, scanning with this window to determine the region to be recognized in the image acquired at the next sampling instant greatly increases the probability of recognizing the fixed beacon or guidance beacon within a given time, improving image recognition efficiency.
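The window-shifting step above can be sketched as follows. The pixel-scale conversion and the sign convention for "opposite direction" (ground features move backwards through a downward-facing camera as the robot moves forward) depend on the image-axis conventions and are assumptions here, not specified by the text.

```python
def next_window(window, dx_m, dy_m, pixels_per_metre):
    """Shift the current image recognition window opposite to the
    robot's predicted displacement.

    window: (left, top, width, height) in pixels.
    dx_m, dy_m: predicted lateral/longitudinal displacement of the
    robot, in metres, between the current and next sampling instants.
    """
    left, top, w, h = window
    # convert the metric differences into pixel differences
    dx_px = round(dx_m * pixels_per_metre)
    dy_px = round(dy_m * pixels_per_metre)
    # move the window in the opposite direction of the robot's motion
    return (left - dx_px, top - dy_px, w, h)
```

For example, a predicted displacement of 0.1 m laterally and 0.2 m longitudinally at 100 pixels per metre shifts a window at (100, 100) to (90, 80), leaving its size unchanged.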
In the embodiment of the present invention, the target image collected at the current sampling instant by the camera carried on the robot is obtained. If a fixed beacon is present in the target image, the current position coordinates of the robot can be determined based on the fixed beacon; if no fixed beacon is present in the target image but a guidance beacon is, the current position coordinates of the robot can be determined based on the guidance beacon. That is to say, in the embodiment of the present invention, it is not necessary to combine the fixed beacon and the guidance beacon to position the robot: as long as one of the two is present in the target image, the positioning of the robot can be achieved. In this way, even if the fixed beacons and guidance beacons are arranged in arbitrary combination along the default route, the robot can still perform positioning according to the arbitrarily combined beacons; the positioning method becomes more flexible, improving the environmental adaptability of the robot.
In addition, when no fixed beacon is present in the target image, the robot can determine the lateral deviation and angular deviation according to the guidance beacon, and then correct the measurement error of the encoder and/or the inertial navigation sensor. This improves positioning accuracy while also solving the problem of position distortion caused by slipping, enhancing the environmental adaptability of the robot.
It should also be noted that, since in the embodiment of the present invention the robot can set the image recognition window of the current sampling instant according to the position coordinates of the current sampling instant predicted at the previous sampling instant, when that window is used to recognize the partial image region of the target image, the probability of recognizing a fixed beacon or a guidance beacon within a given recognition time is considerably increased, improving recognition efficiency.
Having introduced the robot localization method provided in the embodiment of the present invention, the device provided by the embodiment of the present invention is introduced next.
Fig. 3A is a structural schematic diagram of a robot positioning device 300 provided in an embodiment of the present invention. As shown in Fig. 3A, the device 300 includes a first obtaining module 301 and a first determining module 302:

the first obtaining module 301 is configured to obtain the target image collected at the current sampling instant by the camera carried on the robot;

the first determining module 302 is configured to, when no fixed beacon is present in the target image but a guidance beacon is, determine the current position coordinates of the robot at the current sampling instant based on the position coordinates of the robot determined at the previous sampling instant and the guidance beacon;

wherein the fixed beacons and guidance beacons are beacons arranged on the default route corresponding to the robot and used for positioning the robot, and the fixed beacon includes encoded information, the encoded information including an encoding of the coordinate of the fixed beacon's position.
Optionally, referring to Fig. 3B, the device 300 further includes:

a second obtaining module 303, configured to obtain, from the target image, the image region located within the image recognition window of the current sampling instant, the image recognition window of the current sampling instant being determined based on the position coordinates of the robot determined at the previous sampling instant;

a second determining module 304, configured to determine that a fixed beacon is present in the target image when the image region includes a figure with the specified contour;

a judgment module 305, configured to judge, when the image region does not include a figure with the specified contour but includes a straight line, whether the width of the line is the preset width and whether the color of the line is the preset color; and

a third determining module 306, configured to determine, when the width of the line is the preset width and/or the color of the line is the preset color, that no fixed beacon is present in the target image but a guidance beacon is.
Optionally, the position coordinates of the robot determined at the previous sampling instant are determined according to a fixed beacon in the image collected at the previous sampling instant; alternatively, the position coordinates of the robot determined at the previous sampling instant are determined according to a guidance beacon in the image collected at the previous sampling instant.
Optionally, referring to Fig. 3C, the first determining module 302 includes:

a calculation submodule 3031, configured to calculate the first position coordinate of the robot based on the measured value collected at the current sampling instant by the encoder arranged on the robot and the position coordinates of the robot determined at the previous sampling instant;

a first determining submodule 3032, configured to determine the lateral deviation and angular deviation of the robot based on the guidance beacon;

a second determining submodule 3033, configured to determine the measurement error of the encoder based on the first position coordinate, the lateral deviation, and the angular deviation; and

a correction submodule 3034, configured to correct the first position coordinate through the measurement error of the encoder, to obtain the current position coordinates of the robot at the current sampling instant.
Optionally, the calculation submodule 3031 is specifically configured to:

calculate, based on the measured value collected by the encoder arranged on the robot at the current sampling instant and the measured value collected at the previous sampling instant, the first wheel roll amount and the second wheel roll amount of the robot between the previous sampling instant and the current sampling instant; and

calculate the first position coordinate of the robot based on the first wheel roll amount, the second wheel roll amount, and the position coordinates of the robot determined at the previous sampling instant.
Optionally, the first determining submodule is specifically configured to:

obtain the first endpoint coordinate and the second endpoint coordinate of the guidance beacon in the target image;

determine, based on the first endpoint coordinate and the second endpoint coordinate, the vertical distance from the central point of the target image to the guidance beacon, and determine the lateral deviation of the robot based on the vertical distance; and

determine, based on the first endpoint coordinate and the second endpoint coordinate, the angle between the longitudinal centre line of the target image and the guidance beacon, and determine the angular deviation of the robot based on the angle.
Optionally, referring to Fig. 3D, the device 300 further includes:

a prediction module 307, configured to predict the position coordinates of the robot at the next sampling instant based on the current position coordinates of the robot and a preset position coordinate, the preset position coordinate being the pre-set end-point coordinate at which the robot's task ends; and

a fourth determining module 308, configured to determine the image recognition window of the next sampling instant based on the predicted position coordinates of the next sampling instant.
In conclusion in embodiments of the present invention, obtaining the camera carried in robot and being acquired in current sample time
The target image arrived can determine robot based on the fixed beacon if there are fixed beacons in the target image
If fixed beacon is not present in the target image, but there is guidance beacon in current position coordinates, then, then it can be with base
The current position coordinates of robot are determined in the guidance beacon.It that is to say, in embodiments of the present invention, it is not necessary in combination with fixation
Beacon positions robot with guidance beacon, as long as there are one in the two in target image, it can realization pair
The positioning of robot, in this way, even if being optionally combined the two when fixed beacon and guidance beacon are arranged in default route, machine
The beacon that people can also be optionally combined according to this positions, and positioning method becomes more flexible, improves the environment of robot
Adaptability.
Fig. 4 is a block diagram of a device 400 for robot localization shown in an embodiment of the present invention. For example, the device 400 can be a smart machine such as a computer or an industrial computer, or a robot.

Referring to Fig. 4, the device 400 may include one or more of the following components: a processing component 402, a memory 404, a power supply component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 typically controls the overall operation of the device 400, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 402 may include one or more processors 420 to execute instructions, to perform all or part of the steps of the methods described above. In addition, the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and other components. For example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support the operation of the device 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phone book data, messages, pictures, video, and the like. The memory 404 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power supply component 406 provides power for the various components of the device 400. The power supply component 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 400.
The multimedia component 408 includes a screen providing an output interface between the device 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the device 400 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a microphone (MIC) configured to receive an external audio signal when the device 400 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 404 or transmitted via the communication component 416. In some embodiments, the audio component 410 further includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the device 400. For example, the sensor component 414 may detect an open/closed state of the device 400 and the relative positioning of components (e.g., the display and keypad of the device 400); the sensor component 414 may also detect a change in position of the device 400 or a component of the device 400, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and a change in temperature of the device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices. The device 400 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the methods provided by the embodiments shown in Fig. 2A and Fig. 2B above.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 404 including instructions, which are executable by the processor 420 of the device 400 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium is provided, wherein when instructions in the storage medium are executed by a processor of a smart device or a robot, the smart device or robot is enabled to perform a robot localization method, the method comprising:
obtaining a target image collected at a current sampling instant by a camera carried on a robot;
when no fixed beacon exists in the target image and a guidance beacon exists, determining current position coordinates of the robot at the current sampling instant based on position coordinates of the robot determined at a previous sampling instant and the guidance beacon;
wherein the fixed beacon and the guidance beacon are beacons arranged on a preset route corresponding to the robot and used for locating the position of the robot, and the fixed beacon includes encoded information, the encoded information including an encoding of the coordinates of the position where the fixed beacon is located.
Optionally, after obtaining the target image collected at the current sampling instant by the camera carried on the robot, the method further includes:
obtaining, from the target image, an image region located within an image recognition window of the current sampling instant, the image recognition window of the current sampling instant being determined based on the position coordinates of the robot determined at the previous sampling instant;
when the image region includes a figure with a specified profile, determining that the fixed beacon exists in the target image;
when the image region does not include a figure with the specified profile but includes a straight line, judging whether the width of the straight line is a preset width, and judging whether the color of the straight line is a preset color;
when the width of the straight line is the preset width and/or the color of the straight line is the preset color, determining that no fixed beacon exists in the target image and that the guidance beacon exists.
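The classification steps above can be sketched as a small decision function. This is only an illustrative sketch: the shape-detection step, the preset width, and the preset color are hypothetical stand-ins, since the patent does not specify how the profile or line is detected.

```python
def classify_beacon(region):
    """Classify the contents of the image-recognition window.

    `region` is assumed to be a dict summarizing what a detector found, e.g.
    {"has_profile_figure": False, "line": {"width_px": 18, "color": "yellow"}}.
    Returns "fixed", "guidance", or None.
    """
    PRESET_WIDTH_PX = 18      # hypothetical preset line width
    PRESET_COLOR = "yellow"   # hypothetical preset line color

    if region.get("has_profile_figure"):
        return "fixed"        # a figure with the specified profile marks a fixed beacon
    line = region.get("line")
    if line is not None:
        # A width and/or color matching the presets marks a guidance beacon.
        if line["width_px"] == PRESET_WIDTH_PX or line["color"] == PRESET_COLOR:
            return "guidance"
    return None               # neither beacon is present in the window
```

In practice the `has_profile_figure` and `line` fields would come from contour and line detection on the image region; here they are assumed inputs.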
Optionally, the position coordinates of the robot determined at the previous sampling instant are obtained based on a fixed beacon in an image collected at the previous sampling instant;
alternatively, the position coordinates of the robot determined at the previous sampling instant are obtained based on a guidance beacon in an image collected at the previous sampling instant.
Optionally, determining the current position coordinates of the robot at the current sampling instant based on the position coordinates of the robot determined at the previous sampling instant and the guidance beacon includes:
calculating first position coordinates of the robot based on a measured value collected at the current sampling instant by an encoder arranged on the robot and the position coordinates of the robot determined at the previous sampling instant;
determining a lateral deviation and an angular deviation of the robot based on the guidance beacon;
determining a measurement error of the encoder based on the first position coordinates, the lateral deviation, and the angular deviation;
correcting the first position coordinates by the measurement error of the encoder to obtain the current position coordinates of the robot at the current sampling instant.
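The correction step can be sketched as follows. This is a simplified interpretation: the patent does not specify the measurement-error model, so this sketch treats the lateral deviation (rotated into world coordinates) and the angular deviation directly as the encoder's accumulated error, with a hypothetical `gain` weight.

```python
import math

def correct_first_position(first_pose, lateral_dev, angular_dev, gain=1.0):
    """Correct the odometry-based first position coordinates using the
    deviations measured from the guidance beacon.

    first_pose: (x, y, theta) predicted from the encoder measurements.
    lateral_dev: signed offset of the robot from the guidance line.
    angular_dev: signed heading error (rad) relative to the guidance line.
    gain: hypothetical weight standing in for the unspecified error model.
    """
    x, y, theta = first_pose
    # Measurement error: the lateral offset expressed in world coordinates,
    # perpendicular to the robot's heading, plus the heading error itself.
    err_x = -lateral_dev * math.sin(theta) * gain
    err_y = lateral_dev * math.cos(theta) * gain
    err_theta = angular_dev * gain
    # Subtract the error to pull the pose back onto the guidance line.
    return (x - err_x, y - err_y, theta - err_theta)
```

The sign conventions (left-positive lateral deviation, counterclockwise-positive angles) are assumptions; a real implementation would fix them by the camera mounting and map frame.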
Optionally, calculating the first position coordinates of the robot based on the measured value collected at the current sampling instant by the encoder arranged on the robot and the position coordinates of the robot determined at the previous sampling instant includes:
calculating a first wheel roll amount and a second wheel roll amount of the robot between the previous sampling instant and the current sampling instant, based on the measured value collected at the current sampling instant and a measured value collected at the previous sampling instant by the encoder arranged on the robot;
calculating the first position coordinates of the robot based on the first wheel roll amount, the second wheel roll amount, and the position coordinates of the robot determined at the previous sampling instant.
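A common way to turn two wheel roll amounts into a pose update is differential-drive dead reckoning, sketched below. The patent does not state the kinematic model explicitly, so the differential-drive assumption and the `wheel_base` parameter are hypothetical.

```python
import math

def first_position(prev_pose, roll_left, roll_right, wheel_base=0.5):
    """Dead-reckoned first position coordinates from the two wheel roll
    amounts accumulated between the previous and current sampling instants.

    prev_pose: (x, y, theta) at the previous sampling instant.
    roll_left, roll_right: wheel roll amounts derived from encoder counts (m).
    wheel_base: hypothetical distance between the two wheels (m).
    """
    x, y, theta = prev_pose
    d = (roll_left + roll_right) / 2.0               # distance travelled by the center
    dtheta = (roll_right - roll_left) / wheel_base   # heading change over the interval
    # Advance along the average heading over the interval (midpoint approximation).
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)
```

The roll amounts themselves would be the encoder count difference between the two sampling instants multiplied by the distance per count.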
Optionally, determining the lateral deviation and angular deviation of the robot based on the guidance beacon includes:
obtaining first endpoint coordinates and second endpoint coordinates of the guidance beacon in the target image;
determining, based on the first endpoint coordinates and the second endpoint coordinates, the perpendicular distance from the center point of the target image to the guidance beacon, and determining the lateral deviation of the robot based on the perpendicular distance;
determining, based on the first endpoint coordinates and the second endpoint coordinates, the angle between the longitudinal center line of the target image and the guidance beacon, and determining the angular deviation of the robot based on the angle.
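The geometry above can be sketched with the standard point-to-line distance formula. The pixel-to-metre conversion is omitted, and the image dimensions are hypothetical parameters; the result is expressed in image units.

```python
import math

def deviations(p1, p2, image_w=640, image_h=480):
    """Lateral and angular deviation from the two endpoints of the guidance
    beacon detected in the target image.

    p1, p2: (u, v) endpoint coordinates of the guidance line in pixels.
    Returns (lateral_px, angle_rad).
    """
    cx, cy = image_w / 2.0, image_h / 2.0
    (x1, y1), (x2, y2) = p1, p2
    # Perpendicular distance from the image center to the line through p1, p2.
    num = abs((y2 - y1) * cx - (x2 - x1) * cy + x2 * y1 - y2 * x1)
    den = math.hypot(x2 - x1, y2 - y1)
    lateral = num / den
    # Angle between the image's longitudinal (vertical) center line and the beacon.
    angle = math.atan2(abs(x2 - x1), abs(y2 - y1))
    return (lateral, angle)
```

With a downward-facing camera, the pixel deviation would be scaled by the known ground resolution to obtain the robot's metric lateral deviation, and the sign would be recovered from which side of the center line the beacon lies on.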
Optionally, after determining the current position coordinates of the robot at the current sampling instant, the method further includes:
predicting the position coordinates of the robot at the next sampling instant based on the current position coordinates of the robot and preset position coordinates, the preset position coordinates being the endpoint coordinates at the end of a preset robot task;
determining the image recognition window of the next sampling instant based on the predicted position coordinates of the next sampling instant.
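The window-prediction step can be sketched as follows, assuming the robot advances toward the preset task endpoint at a constant speed; the speed, sampling interval, and window size are hypothetical parameters not given in the source.

```python
import math

def predict_window(current, goal, speed=0.2, dt=0.1, window=64.0):
    """Predict the robot's position at the next sampling instant and the
    image-recognition window centered on it.

    current: (x, y) current position coordinates.
    goal: (x, y) preset position coordinates (task-endpoint coordinates).
    Returns ((nx, ny), (x_min, y_min, x_max, y_max)).
    """
    dx, dy = goal[0] - current[0], goal[1] - current[1]
    dist = math.hypot(dx, dy)
    step = min(speed * dt, dist)   # do not overshoot the preset endpoint
    if dist > 0:
        nx = current[0] + dx / dist * step
        ny = current[1] + dy / dist * step
    else:
        nx, ny = current
    half = window / 2.0
    # Center the next recognition window on the predicted position.
    return ((nx, ny), (nx - half, ny - half, nx + half, ny + half))
```

Restricting beacon detection to this predicted window is what lets the next sampling cycle search only a small region of the target image rather than the whole frame.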
It should be noted that, when the robot positioning device provided by the above embodiment performs robot localization, the division into the functional modules described above is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the robot positioning device provided by the above embodiment and the robot localization method embodiments belong to the same inventive concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
Claims (16)
1. A robot localization method, characterized in that the method comprises:
obtaining a target image collected at a current sampling instant by a camera carried on a robot;
when no fixed beacon exists in the target image and a guidance beacon exists, determining current position coordinates of the robot at the current sampling instant based on position coordinates of the robot determined at a previous sampling instant and the guidance beacon;
wherein the fixed beacon and the guidance beacon are beacons arranged on a preset route corresponding to the robot and used for locating the position of the robot, and the fixed beacon includes encoded information, the encoded information including an encoding of the coordinates of the position where the fixed beacon is located.
2. The method according to claim 1, characterized in that, after obtaining the target image collected at the current sampling instant by the camera carried on the robot, the method further comprises:
obtaining, from the target image, an image region located within an image recognition window of the current sampling instant, the image recognition window of the current sampling instant being determined based on the position coordinates of the robot determined at the previous sampling instant;
when the image region includes a figure with a specified profile, determining that the fixed beacon exists in the target image;
when the image region does not include a figure with the specified profile but includes a straight line, judging whether the width of the straight line is a preset width, and judging whether the color of the straight line is a preset color;
when the width of the straight line is the preset width and/or the color of the straight line is the preset color, determining that no fixed beacon exists in the target image and that the guidance beacon exists.
3. The method according to claim 1, characterized in that the position coordinates of the robot determined at the previous sampling instant are obtained based on a fixed beacon in an image collected at the previous sampling instant;
alternatively, the position coordinates of the robot determined at the previous sampling instant are obtained based on a guidance beacon in an image collected at the previous sampling instant.
4. The method according to any one of claims 1 to 3, characterized in that determining the current position coordinates of the robot at the current sampling instant based on the position coordinates of the robot determined at the previous sampling instant and the guidance beacon comprises:
calculating first position coordinates of the robot based on a measured value collected at the current sampling instant by an encoder arranged on the robot and the position coordinates of the robot determined at the previous sampling instant;
determining a lateral deviation and an angular deviation of the robot based on the guidance beacon;
determining a measurement error of the encoder based on the first position coordinates, the lateral deviation, and the angular deviation;
correcting the first position coordinates by the measurement error of the encoder to obtain the current position coordinates of the robot at the current sampling instant.
5. The method according to claim 4, characterized in that calculating the first position coordinates of the robot based on the measured value collected at the current sampling instant by the encoder arranged on the robot and the position coordinates of the robot determined at the previous sampling instant comprises:
calculating a first wheel roll amount and a second wheel roll amount of the robot between the previous sampling instant and the current sampling instant, based on the measured value collected at the current sampling instant and a measured value collected at the previous sampling instant by the encoder arranged on the robot;
calculating the first position coordinates of the robot based on the first wheel roll amount, the second wheel roll amount, and the position coordinates of the robot determined at the previous sampling instant.
6. The method according to claim 4, characterized in that determining the lateral deviation and angular deviation of the robot based on the guidance beacon comprises:
obtaining first endpoint coordinates and second endpoint coordinates of the guidance beacon in the target image;
determining, based on the first endpoint coordinates and the second endpoint coordinates, the perpendicular distance from the center point of the target image to the guidance beacon, and determining the lateral deviation of the robot based on the perpendicular distance;
determining, based on the first endpoint coordinates and the second endpoint coordinates, the angle between the longitudinal center line of the target image and the guidance beacon, and determining the angular deviation of the robot based on the angle.
7. The method according to claim 2, characterized in that, after determining the current position coordinates of the robot at the current sampling instant, the method further comprises:
predicting the position coordinates of the robot at the next sampling instant based on the current position coordinates of the robot and preset position coordinates, the preset position coordinates being the endpoint coordinates at the end of a preset robot task;
determining the image recognition window of the next sampling instant based on the predicted position coordinates of the next sampling instant.
8. A robot positioning device, characterized in that the device comprises:
a first obtaining module, configured to obtain a target image collected at a current sampling instant by a camera carried on a robot;
a first determining module, configured to, when no fixed beacon exists in the target image and a guidance beacon exists, determine current position coordinates of the robot at the current sampling instant based on position coordinates of the robot determined at a previous sampling instant and the guidance beacon;
wherein the fixed beacon and the guidance beacon are beacons arranged on a preset route corresponding to the robot and used for locating the position of the robot, and the fixed beacon includes encoded information, the encoded information including an encoding of the coordinates of the position where the fixed beacon is located.
9. The device according to claim 8, characterized in that the device further comprises:
a second obtaining module, configured to obtain, from the target image, an image region located within an image recognition window of the current sampling instant, the image recognition window of the current sampling instant being determined based on the position coordinates of the robot determined at the previous sampling instant;
a second determining module, configured to, when the image region includes a figure with a specified profile, determine that the fixed beacon exists in the target image;
a judgment module, configured to, when the image region does not include a figure with the specified profile but includes a straight line, judge whether the width of the straight line is a preset width, and judge whether the color of the straight line is a preset color;
a third determining module, configured to, when the width of the straight line is the preset width and/or the color of the straight line is the preset color, determine that no fixed beacon exists in the target image and that the guidance beacon exists.
10. The device according to claim 8, characterized in that the position coordinates of the robot determined at the previous sampling instant are obtained based on a fixed beacon in an image collected at the previous sampling instant;
alternatively, the position coordinates of the robot determined at the previous sampling instant are obtained based on a guidance beacon in an image collected at the previous sampling instant.
11. The device according to any one of claims 8 to 10, characterized in that the first determining module comprises:
a calculation submodule, configured to calculate first position coordinates of the robot based on a measured value collected at the current sampling instant by an encoder arranged on the robot and the position coordinates of the robot determined at the previous sampling instant;
a first determining submodule, configured to determine a lateral deviation and an angular deviation of the robot based on the guidance beacon;
a second determining submodule, configured to determine a measurement error of the encoder based on the first position coordinates, the lateral deviation, and the angular deviation;
a correction submodule, configured to correct the first position coordinates by the measurement error of the encoder to obtain the current position coordinates of the robot at the current sampling instant.
12. The device according to claim 11, characterized in that the calculation submodule is specifically configured to:
calculate a first wheel roll amount and a second wheel roll amount of the robot between the previous sampling instant and the current sampling instant, based on the measured value collected at the current sampling instant and a measured value collected at the previous sampling instant by the encoder arranged on the robot;
calculate the first position coordinates of the robot based on the first wheel roll amount, the second wheel roll amount, and the position coordinates of the robot determined at the previous sampling instant.
13. The device according to claim 11, characterized in that the first determining submodule is specifically configured to:
obtain first endpoint coordinates and second endpoint coordinates of the guidance beacon in the target image;
determine, based on the first endpoint coordinates and the second endpoint coordinates, the perpendicular distance from the center point of the target image to the guidance beacon, and determine the lateral deviation of the robot based on the perpendicular distance;
determine, based on the first endpoint coordinates and the second endpoint coordinates, the angle between the longitudinal center line of the target image and the guidance beacon, and determine the angular deviation of the robot based on the angle.
14. The device according to claim 9, characterized in that the device further comprises:
a prediction module, configured to predict the position coordinates of the robot at the next sampling instant based on the current position coordinates of the robot and preset position coordinates, the preset position coordinates being the endpoint coordinates at the end of the robot task;
a fourth determining module, configured to determine the image recognition window of the next sampling instant based on the predicted position coordinates of the next sampling instant.
15. A robot positioning device, characterized in that the device comprises:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any one of claims 1 to 7.
16. A computer-readable storage medium, characterized in that a computer program is stored in the storage medium, and the method according to any one of claims 1 to 7 is implemented when the computer program is executed by a processor.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710966574.8A CN109668551B (en) | 2017-10-17 | 2017-10-17 | Robot positioning method, device and computer readable storage medium |
PCT/CN2018/110667 WO2019076320A1 (en) | 2017-10-17 | 2018-10-17 | Robot positioning method and apparatus, and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710966574.8A CN109668551B (en) | 2017-10-17 | 2017-10-17 | Robot positioning method, device and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109668551A true CN109668551A (en) | 2019-04-23 |
CN109668551B CN109668551B (en) | 2021-03-26 |
Family
ID=66140420
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710966574.8A Active CN109668551B (en) | 2017-10-17 | 2017-10-17 | Robot positioning method, device and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109668551B (en) |
WO (1) | WO2019076320A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111551176A (en) * | 2020-04-09 | 2020-08-18 | 成都双创时代科技有限公司 | Robot indoor positioning method based on double-color bar and two-dimensional code |
CN111583338A (en) * | 2020-04-26 | 2020-08-25 | 北京三快在线科技有限公司 | Positioning method and device for unmanned equipment, medium and unmanned equipment |
CN112697127A (en) * | 2020-11-26 | 2021-04-23 | 佛山科学技术学院 | Indoor positioning system and method |
CN113237464A (en) * | 2021-05-07 | 2021-08-10 | 郑州比克智能科技有限公司 | Positioning system, positioning method, positioner, and storage medium |
WO2022105420A1 (en) * | 2020-11-23 | 2022-05-27 | 江苏省新通智能交通科技发展有限公司 | Lock gate misalignment detection method and detection system |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110186459B (en) * | 2019-05-27 | 2021-06-29 | 深圳市海柔创新科技有限公司 | Navigation method, mobile carrier and navigation system |
CN113034604A (en) * | 2019-12-25 | 2021-06-25 | 南京极智嘉机器人有限公司 | Calibration system and method and self-guided robot |
CN113450414A (en) * | 2020-03-24 | 2021-09-28 | 阿里巴巴集团控股有限公司 | Camera calibration method, device, system and storage medium |
CN112580380B (en) * | 2020-12-11 | 2024-04-19 | 北京极智嘉科技股份有限公司 | Positioning method and device based on graphic code, electronic equipment and storage medium |
CN112834764B (en) * | 2020-12-28 | 2024-05-31 | 深圳市人工智能与机器人研究院 | Sampling control method and device for mechanical arm and sampling system |
CN112712502B (en) * | 2020-12-29 | 2024-04-19 | 凌云光技术股份有限公司 | Method for correlating front and back synchronization detection results |
CN112819884B (en) * | 2021-01-08 | 2024-07-12 | 苏州华兴源创科技股份有限公司 | Coordinate correction method and device, electronic equipment and computer readable medium |
CN113175932A (en) * | 2021-04-27 | 2021-07-27 | 上海景吾智能科技有限公司 | Robot navigation automation test method, system, medium and equipment |
CN114187349B (en) * | 2021-11-03 | 2022-11-08 | 深圳市正运动技术有限公司 | Product processing method and device, terminal device and storage medium |
CN114136306B (en) * | 2021-12-01 | 2024-05-07 | 浙江大学湖州研究院 | Expandable device and method based on relative positioning of UWB and camera |
CN116002323B (en) * | 2022-12-29 | 2024-05-14 | 北京斯普脉生物技术有限公司 | Intelligent biological laboratory carrying method and system based on mechanical arm |
CN117824666B (en) * | 2024-03-06 | 2024-05-10 | 成都睿芯行科技有限公司 | Two-dimensional code pair for fusion positioning, two-dimensional code calibration method and fusion positioning method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100152944A1 (en) * | 2008-12-11 | 2010-06-17 | Kabushiki Kaisha Yaskawa Denki | Robot system |
CN103064417A (en) * | 2012-12-21 | 2013-04-24 | 上海交通大学 | Global localization guiding system and method based on multiple sensors |
CN103294059A (en) * | 2013-05-21 | 2013-09-11 | 无锡普智联科高新技术有限公司 | Hybrid navigation belt based mobile robot positioning system and method thereof |
CN104298240A (en) * | 2014-10-22 | 2015-01-21 | 湖南格兰博智能科技有限责任公司 | Guiding robot and control method thereof |
CN105651286A (en) * | 2016-02-26 | 2016-06-08 | 中国科学院宁波材料技术与工程研究所 | Visual navigation method and system of mobile robot as well as warehouse system |
CN106708051A (en) * | 2017-01-10 | 2017-05-24 | 上海极络智能科技有限公司 | Two-dimensional code-based navigation system and method, navigation marker and navigation controller |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6256560B1 (en) * | 1999-02-25 | 2001-07-03 | Samsung Electronics Co., Ltd. | Method for correcting position of automated-guided vehicle and apparatus therefor |
JP6601208B2 (en) * | 2015-12-21 | 2019-11-06 | 株式会社デンソー | Automated guided vehicle |
CN206075136U (en) * | 2016-08-29 | 2017-04-05 | 深圳市劲拓自动化设备股份有限公司 | Vision navigation control system based on fuzzy algorithmic approach |
CN106950972B (en) * | 2017-05-15 | 2020-09-08 | 上海音锋机器人股份有限公司 | Automatic Guided Vehicle (AGV) and route correction method thereof |
- 2017-10-17: CN CN201710966574.8A patent CN109668551B (active)
- 2018-10-17: WO PCT/CN2018/110667 patent WO2019076320A1 (application filing)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100152944A1 (en) * | 2008-12-11 | 2010-06-17 | Kabushiki Kaisha Yaskawa Denki | Robot system |
CN103064417A (en) * | 2012-12-21 | 2013-04-24 | 上海交通大学 | Global localization guiding system and method based on multiple sensors |
CN103294059A (en) * | 2013-05-21 | 2013-09-11 | 无锡普智联科高新技术有限公司 | Hybrid navigation belt based mobile robot positioning system and method thereof |
CN104298240A (en) * | 2014-10-22 | 2015-01-21 | 湖南格兰博智能科技有限责任公司 | Guiding robot and control method thereof |
CN105651286A (en) * | 2016-02-26 | 2016-06-08 | 中国科学院宁波材料技术与工程研究所 | Visual navigation method and system of mobile robot as well as warehouse system |
CN106708051A (en) * | 2017-01-10 | 2017-05-24 | 上海极络智能科技有限公司 | Two-dimensional code-based navigation system and method, navigation marker and navigation controller |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111551176A (en) * | 2020-04-09 | 2020-08-18 | 成都双创时代科技有限公司 | Robot indoor positioning method based on double-color bar and two-dimensional code |
CN111583338A (en) * | 2020-04-26 | 2020-08-25 | 北京三快在线科技有限公司 | Positioning method and device for unmanned equipment, medium and unmanned equipment |
CN111583338B (en) * | 2020-04-26 | 2023-04-07 | 北京三快在线科技有限公司 | Positioning method and device for unmanned equipment, medium and unmanned equipment |
WO2022105420A1 (en) * | 2020-11-23 | 2022-05-27 | 江苏省新通智能交通科技发展有限公司 | Lock gate misalignment detection method and detection system |
CN112697127A (en) * | 2020-11-26 | 2021-04-23 | 佛山科学技术学院 | Indoor positioning system and method |
CN112697127B (en) * | 2020-11-26 | 2024-06-11 | 佛山科学技术学院 | Indoor positioning system and method |
CN113237464A (en) * | 2021-05-07 | 2021-08-10 | 郑州比克智能科技有限公司 | Positioning system, positioning method, positioner, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2019076320A1 (en) | 2019-04-25 |
CN109668551B (en) | 2021-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109668551A (en) | Robot localization method, apparatus and computer readable storage medium | |
CN108596116B (en) | Distance measuring method, intelligent control method and device, electronic equipment and storage medium | |
EP3163254B1 (en) | Method and device for intelligently guiding user to ride elevator/escalator | |
EP3038345B1 (en) | Auto-focusing method and auto-focusing device | |
US10514708B2 (en) | Method, apparatus and system for controlling unmanned aerial vehicle | |
AU2014306813A1 (en) | Visual-based inertial navigation | |
CN103680193A (en) | Parking guidance method and related device | |
KR20160059376A (en) | Electronic appartus and method for controlling the same | |
CN109326136A (en) | Parking navigation method, equipment and computer readable storage medium | |
US10623625B2 (en) | Focusing control device, imaging device, focusing control method, and nontransitory computer readable medium | |
EP3945043A1 (en) | Warehousing line, warehousing management method and device | |
CN107211088A (en) | Camera apparatus, camera system, control method and program | |
CN107194968A (en) | Recognition and tracking method, device, intelligent terminal and the readable storage medium storing program for executing of image | |
CN111669208A (en) | Antenna selection method, first electronic device and storage medium | |
CN103997567A (en) | Method and device for acquiring graphic code information | |
TW201947190A (en) | Optical label network-based navigation method and corresponding computing device | |
CN109308072A (en) | The Transmission Connection method and AGV of automated guided vehicle AGV | |
KR101493353B1 (en) | Parking Area Guidance System and Method | |
CN115407355B (en) | Library position map verification method and device and terminal equipment | |
CN109657198A (en) | Robot calibration method, device and computer readable storage medium | |
CN111832338A (en) | Object detection method and device, electronic equipment and storage medium | |
CN115861741A (en) | Target calibration method and device, electronic equipment, storage medium and vehicle | |
CN116051636A (en) | Pose calculation method, device and equipment | |
KR101358064B1 (en) | Method for remote controlling using user image and system of the same | |
CN113627276A (en) | Method and device for detecting parking space |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province Patentee after: Hangzhou Hikvision Robot Co.,Ltd. Address before: 310051 5th floor, building 1, building 2, no.700 Dongliu Road, Binjiang District, Hangzhou City, Zhejiang Province Patentee before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd. |
CP03 | Change of name, title or address |