Summary of the Invention
In view of the foregoing, it is necessary to provide a central controller, and a mobile navigation system and method using the central controller, whereby the central controller can assist in controlling robot navigation by means of cameras in a navigation space.
A mobile navigation system includes a central controller, a robot, and cameras, the central controller being communicatively connected to the robot and the cameras respectively. In the system:
The central controller is configured to receive input starting point and ending point information after identifying the robot;
The central controller is further configured to generate a virtual path on a virtual map of a navigation space according to the starting point and ending point information, wherein the virtual path includes multiple location points;
The robot is configured to receive a movement instruction sent by the central controller and to walk along the virtual path according to the movement instruction;
The central controller is further configured to determine, by means of the cameras, whether the robot has reached a next location point on the virtual path; and
When the robot reaches the next location point on the virtual path, the central controller determines whether the next location point is the ending point, and ends the navigation when the next location point is the ending point.
Preferably, the system further includes: the central controller determines whether the robot needs to turn at the next location point, and when the robot needs to turn at the next location point, the central controller sends a turning instruction to the robot.
Preferably, the central controller includes a memory, in which are pre-stored the position coordinates of each location point on the virtual path, a location image of the robot at each location point, and the position coordinates of the robot in each location image, wherein the position coordinates of each location point on the virtual path refer to coordinates in a first coordinate system established based on the navigation space region, and the position coordinates in each location image refer to coordinates in a second coordinate system established based on that location image.
Preferably, the memory also pre-stores a correspondence between the first coordinate system and the second coordinate system. According to this correspondence, the central controller converts the position coordinates of the robot in the location image into coordinates of the robot in the first coordinate system, thereby determining the location point of the robot on the virtual path.
Preferably, the central controller calculates the time required for the robot to move to the next location point according to the distance between the starting point and the next location point on the virtual path and the movement speed of the robot, and determines according to that time whether the robot is about to reach the next location point.
Preferably, when the robot is about to reach the next location point, the central controller controls the camera corresponding to the next location point to capture a location image of the robot and to send the location image to the central controller.
Preferably, when the central controller determines that the position coordinates of the robot in the captured location image match the pre-stored position coordinates of the robot in the location image corresponding to the next location point, it determines that the robot has reached the next location point.
A mobile navigation method is applied in a central controller that is communicatively connected to a robot and cameras respectively. The method includes:
After the central controller identifies the robot, receiving input starting point and ending point information;
The central controller generates a virtual path on a virtual map of a navigation space according to the starting point and ending point information;
The robot receives a movement instruction sent by the central controller and walks along the virtual path according to the movement instruction;
The central controller determines whether the robot has reached a next location point on the virtual path;
When the robot reaches the next location point on the virtual path, the central controller determines whether the robot needs to turn at the next location point;
When the robot needs to turn at the next location point, the robot receives a turning instruction sent by the central controller, turns according to the turning instruction, and continues to walk along the virtual path until it reaches the ending point.
Preferably, in the method, the step of "the central controller determines whether the robot has reached a next location point on the virtual path" includes:
The central controller calculates the time required for the robot to move to the next location point according to the distance between the starting point and the next location point on the virtual path and the movement speed of the robot, and determines according to that time whether the robot is about to reach the next location point;
When the robot is about to reach the next location point, the central controller controls the camera corresponding to the next location point to capture a location image of the robot and to send the location image to the central controller; and
When the position coordinates of the robot in the captured location image match the pre-stored position coordinates of the robot in the location image corresponding to the next location point, the central controller determines that the robot has reached the next location point.
A central controller for controlling mobile navigation of a robot includes:
an input/output interface for receiving input starting point and ending point information;
a network element for providing communication connections between the central controller and the robot and cameras;
a memory for storing data; and
a processor for generating a virtual path on a virtual map of a navigation space according to the starting point and ending point information, wherein the virtual path includes multiple location points.
The processor is further configured to send a movement instruction to the robot to control the robot to walk along the virtual path according to the movement instruction.
The processor is further configured to determine, by means of the cameras, whether the robot has reached a next location point on the virtual path; and, when the robot reaches the next location point on the virtual path, to determine whether the next location point is the ending point and to end the navigation when the next location point is the ending point.
Preferably, the processor is further configured to determine whether the robot needs to turn at the next location point, and to send a turning instruction to the robot when the robot needs to turn at the next location point.
Preferably, the memory pre-stores the position coordinates of each location point on the virtual path, a location image of the robot at each location point, and the position coordinates of the robot in each location image, wherein the position coordinates of each location point on the virtual path refer to coordinates in a first coordinate system established based on the navigation space region, and the position coordinates in each location image refer to coordinates in a second coordinate system established based on that location image.
Preferably, the memory also pre-stores a correspondence between the first coordinate system and the second coordinate system. According to this correspondence, the central controller converts the position coordinates of the robot in the location image into coordinates of the robot in the first coordinate system, thereby determining the location point of the robot on the virtual path.
Preferably, the processor calculates the time required for the robot to move to the next location point according to the distance between the starting point and the next location point on the virtual path and the movement speed of the robot, and determines according to that time whether the robot is about to reach the next location point.
Preferably, when the robot is about to reach the next location point, the processor controls the camera corresponding to the next location point to capture a location image of the robot and to send the location image to the central controller.
Preferably, when the processor determines that the position coordinates of the robot in the captured location image match the pre-stored position coordinates of the robot in the location image corresponding to the next location point, it determines that the robot has reached the next location point.
Compared with the prior art, the central controller of the present application, and the system and method using the central controller for mobile navigation, can use the multiple cameras already present in the navigation space to assist the central controller in controlling robot navigation, thereby solving the cost problem of prior-art robot navigation, which requires auxiliary devices such as navigation sensors, magnetic floor strips, and laser sensors.
Detailed Description of Embodiments
Referring to FIG. 1, a schematic diagram of a mobile navigation system 100 in an embodiment of the present invention is shown. The mobile navigation system 100 includes, but is not limited to, a central controller 10, a robot 20, and multiple cameras 30. In the present embodiment, the central controller 10 may be communicatively connected to the multiple cameras 30 by wired or wireless means, and the central controller 10 may also be communicatively connected to the robot 20 wirelessly. The central controller 10 can assist in controlling the navigation of the robot 20 by means of the multiple cameras 30 in the navigation space. When the central controller 10 receives a user-input instruction to control the robot 20 to navigate in the navigation space, the central controller 10 generates a virtual path according to the user-input ending point and the position of the robot 20, and sends a movement instruction to the robot 20 to control the robot 20 to walk along the virtual path. In the present embodiment, the navigation space may be an indoor space (such as a workshop) or an outdoor space.
Referring to FIG. 2, a schematic diagram of the central controller 10 of the mobile navigation system 100 in an embodiment of the present invention is shown. In the present embodiment, the central controller 10 includes, but is not limited to, an input/output interface 110, a network element 111, a memory 112, and a processor 113, which are electrically connected to one another.
In the present embodiment, a user can interact with the central controller 10 through the input/output interface 110. The input/output interface 110 may use a contactless input mode, such as gesture input or voice control, or an external remote control unit that sends control commands to the processor 113 by wired or wireless communication. The input/output interface 110 may also be a capacitive touch screen, a resistive touch screen, or another optical touch screen, or a mechanical key input unit such as a keyboard, a joystick, or a scroll-wheel key.
In the present embodiment, the network element 111 provides the central controller 10 with network communication functions through wired or wireless transmission, so that the central controller 10 can be connected for network communication with the robot 20 and the multiple cameras 30.
The wired network may be any type of conventional wired communication, such as the Internet or a local area network. The wireless network may be any type of conventional wireless communication, such as radio, Wireless Fidelity (WIFI), cellular, satellite, or broadcast. The wireless communication technology may include, but is not limited to, Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (W-CDMA), CDMA2000, IMT Single Carrier, Enhanced Data Rates for GSM Evolution (EDGE), Long-Term Evolution (LTE), LTE-Advanced, Time-Division LTE (TD-LTE), the fifth generation mobile communication technology (5G), High Performance Radio Local Area Network (HiperLAN), High Performance Radio Wide Area Network (HiperWAN), Local Multipoint Distribution Service (LMDS), Worldwide Interoperability for Microwave Access (WiMAX), ZigBee, Bluetooth, Flash Orthogonal Frequency-Division Multiplexing (Flash-OFDM), High Capacity Spatial Division Multiple Access (HC-SDMA), Universal Mobile Telecommunications System (UMTS), UMTS Time-Division Duplexing (UMTS-TDD), Evolved High Speed Packet Access (HSPA+), Time Division Synchronous Code Division Multiple Access (TD-SCDMA), Evolution-Data Optimized (EV-DO), Digital Enhanced Cordless Telecommunications (DECT), and others.
In the present embodiment, the memory 112 stores the software programs installed in the central controller 10 and data. The memory 112 may be an internal storage unit of the central controller 10, such as a hard disk or internal memory of the central controller 10. In other embodiments, the memory 112 may also be an external storage device of the central controller 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the central controller 10.
In the present embodiment, the memory 112 also stores a virtual map (such as an electronic map) of the navigation space, and the virtual map includes multiple virtual paths. A virtual path is composed of location points and the connection relations between location points. The virtual path defines the multiple location points of the robot 20 on the entire path and the position coordinates corresponding to each location point, as well as the order in which the robot 20 passes through the multiple location points. The multiple location points include turning points and dwell points. In the present embodiment, a dwell point is defined as a point on the virtual path at which the robot 20 needs to stop for a preset time, for example the starting point and the ending point. A turning point is defined as a point on the virtual path at which the robot 20 may change its direction of travel.
In one embodiment, the memory 112 also pre-stores images of the robot 20 at each location point (for convenience of description, hereinafter referred to as "location images") and the position coordinates of the robot 20 in each location image.
The virtual path is the shortest path generated by the central controller 10 on the virtual map according to the starting point where the robot 20 is located and the ending point input by the user. The central controller 10 plans the shortest path using a path-finding algorithm. The virtual path includes multiple dwell points and multiple turning points.
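The path planning described above can be sketched with a standard shortest-path search. The specification does not name a specific path-finding algorithm; Dijkstra's algorithm over the graph of location points is one common choice, and the graph below (dwell points S1 and S2, turning points T1 to T4, with made-up distances) is a hypothetical example, not data from the specification.

```python
import heapq

def shortest_path(graph, start, goal):
    # Dijkstra's algorithm over a graph of location points.
    # graph: {point: [(neighbor, distance), ...]}
    # Returns the list of location points from start to goal, or None.
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, point, path = heapq.heappop(queue)
        if point == goal:
            return path
        if point in visited:
            continue
        visited.add(point)
        for neighbor, dist in graph.get(point, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return None

# Hypothetical location-point graph with illustrative distances
graph = {
    "S1": [("T1", 4)],
    "T1": [("S1", 4), ("T2", 6)],
    "T2": [("T1", 6), ("T3", 3)],
    "T3": [("T2", 3), ("T4", 5)],
    "T4": [("T3", 5), ("S2", 2)],
    "S2": [("T4", 2)],
}
print(shortest_path(graph, "S1", "S2"))  # ['S1', 'T1', 'T2', 'T3', 'T4', 'S2']
```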
In the present embodiment, the position coordinates of each location point on the virtual path refer to coordinates in a first coordinate system (XOY) established based on the entire navigation space region, while the position coordinates of the robot 20 in each location image refer to coordinates in a second coordinate system (X'O'Y') established based on that location image. Coordinates in the second coordinate system (X'O'Y') correspond to pixels of the location image. In the present embodiment, the memory 112 also pre-stores the position coordinates corresponding to the scanning range of each camera 30, where these position coordinates refer to coordinates in the first coordinate system.
In the present embodiment, the memory 112 also stores the installation positions of the multiple cameras 30 in the navigation space, and information on the location points of the virtual path that each camera 30 can scan from its position.
In the present embodiment, the processor 113 may be a central processing unit (CPU), or another microprocessor or data processing chip capable of performing control functions. The processor 113 is used to execute software program code, process data, and the like.
In the present embodiment, the multiple cameras 30 are mounted at high positions in the navigation space. The navigation space may be an indoor space or an outdoor space. When the navigation space is an indoor space, the multiple cameras 30 may be mounted on the indoor ceiling; when the navigation space is an outdoor space, the multiple cameras 30 may be mounted on outdoor pillars. It can be understood that the range the multiple cameras 30 can scan needs to cover the navigation space, so that the robot 20 in the navigation space can be scanned.
In the present embodiment, the central controller 10 may be a computer, a smartphone, a tablet computer, a personal digital assistant, a notebook computer, or the like.
Referring to FIG. 3, a schematic diagram of the robot in the mobile navigation system in an embodiment of the present invention is shown. In the present embodiment, there may be one or more robots 20. The robot 20 includes, but is not limited to, a battery 210, a walking unit 211, a wireless unit 212, and a controller 213, which are electrically connected to one another.
In the present embodiment, the battery 210 supplies power to the walking unit 211, the wireless unit 212, and the controller 213. The walking unit 211 walks according to the movement instruction received by the robot 20, and may be wheeled, tracked, or legged. The wireless unit 212 provides a network connection between the robot 20 and the central controller 10. The controller 213 controls the walking unit 211 to walk along the virtual path, and can also control the walking speed and direction of the robot 20.
In the present embodiment, the robot 20 may also include a charging unit (not shown in the figure) for supplying electricity to the battery 210.
In the present embodiment, the central controller 10 identifies the robot 20 from images of the robot 20 captured by the cameras 30. Specifically, the robot 20 has a unique shape; after a camera 30 obtains an image of the robot 20, it can send the image to the central controller 10, and the central controller 10 uses image recognition technology to compare the image of the robot 20 pre-stored in the memory 112 with the image captured by the camera 30 in order to identify the robot 20. When the similarity between the pre-stored image of the robot 20 and the captured image of the robot 20 is greater than or equal to a preset value, the central controller 10 can identify the robot 20 and can send control instructions to control the robot 20; when the similarity is less than the preset value, the central controller 10 cannot identify the robot 20.
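The threshold decision above can be sketched as follows. The specification does not name the image recognition technology used; the per-pixel comparison, the flat lists of 0-255 grayscale values, and the pixel tolerance of 10 are illustrative assumptions standing in for a real image-matching method.

```python
def image_similarity(img_a, img_b):
    # Fraction of near-equal pixels between two equal-size grayscale
    # images, each a flat list of 0-255 values (an illustrative
    # stand-in for real image recognition technology).
    matches = sum(1 for a, b in zip(img_a, img_b) if abs(a - b) <= 10)
    return matches / len(img_a)

def identify_robot(stored_img, captured_img, preset_value=0.9):
    # The robot is identified when similarity >= the preset value.
    return image_similarity(stored_img, captured_img) >= preset_value

print(identify_robot([100] * 16, [100] * 16))  # True
print(identify_robot([0] * 16, [255] * 16))    # False
```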
In another embodiment, the central controller 10 identifies the robot 20 by using the cameras 30 to scan a two-dimensional code sprayed or pasted on the surface of the robot 20. Specifically, a camera 30 can scan the two-dimensional code on the top surface of the robot 20 to obtain the coding information of the robot 20, and send the coding information to the central controller 10. The central controller 10 identifies the robot 20 by comparing the coding information of the robot 20 pre-stored in the memory 112 with the coding information obtained by scanning the two-dimensional code. When the pre-stored coding information of the robot 20 is consistent with the obtained coding information, the central controller 10 can identify the robot 20 and can send control instructions to control the robot 20; when they are inconsistent, the central controller 10 cannot identify the robot 20.
It can be understood that markers of different colors or shapes may also be sprayed or pasted on the surface of the robot 20, so that the central controller 10 can identify the robot 20 by scanning the markers with the cameras 30.
It should be noted that, while identifying the robot 20 by scanning it with the cameras 30, the central controller 10 also positions the robot 20 according to the images scanned by the cameras 30, as described in detail below.
In the present embodiment, after the central controller 10 identifies the robot 20, it receives the starting point and ending point information input by the user from the input/output interface 110. The central controller 10 generates a virtual path on the virtual map of the navigation space according to the input starting point and ending point information. The robot 20 receives the movement instruction sent by the central controller 10 and walks along the virtual path according to the movement instruction. The central controller 10 determines whether the robot 20 has reached the next location point on the virtual path, and after confirming that the robot 20 has reached the next location point, determines according to the virtual path whether the robot 20 needs to turn at that location point. When it determines that the robot 20 needs to turn at the next location point, it sends turning information to the robot 20. After receiving the turning information, the robot 20 turns according to the turning information and continues to walk to the next location point on the virtual path, until the robot 20 reaches the ending point.
In the present embodiment, the scheme by which the central controller 10 determines whether the robot 20 has reached the next location point on the virtual path specifically includes:
With the image captured by the camera 30 that can scan the next location point placed upright, the central controller 10 establishes the second coordinate system (X'O'Y') with the lower-left corner of the image as the origin O', the horizontal direction as the X' axis, and the vertical direction as the Y' axis. The central controller 10 determines the position coordinates (X', Y') of the robot 20 in the second coordinate system (X'O'Y'); the position coordinates (X', Y') correspond to a pixel in the image. According to the correspondence between the first coordinate system (XOY) and the second coordinate system (X'O'Y'), the central controller 10 converts the position coordinates (X', Y') of the robot 20 in the image into coordinates (X, Y) of the robot 20 in the first coordinate system (XOY), and can thereby confirm whether the robot 20 has reached the next location point.
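One minimal form of the stored correspondence between the two coordinate systems is a per-camera scale and offset; the sketch below assumes that form. A real installation might instead store a calibrated homography for each camera 30, and the scale and origin values shown are hypothetical.

```python
def image_to_world(px, py, scale_x, scale_y, origin_x, origin_y):
    # Convert position coordinates (X', Y') in the second coordinate
    # system X'O'Y' (pixels) into coordinates (X, Y) in the first
    # coordinate system XOY, assuming the stored correspondence is a
    # per-camera scale plus the offset of the camera view's origin in XOY.
    return origin_x + px * scale_x, origin_y + py * scale_y

# Hypothetical camera: 0.01 m per pixel, view origin at (2.0, 3.0) in XOY
x, y = image_to_world(100, 50, 0.01, 0.01, 2.0, 3.0)
print(x, y)
```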
Specifically, the central controller 10 obtains the image currently captured by the camera 30. When the position coordinates (X'1, Y'1) of the robot 20 in the currently captured image match the position coordinates (X'2, Y'2) of the robot 20 in the location image corresponding to the next location point, the central controller 10 can determine that the robot 20 has reached the next location point. The location image corresponding to the next location point, and the position coordinates (X'2, Y'2) of the robot 20 in that location image, are stored in advance in the memory 112.
In one embodiment, the position coordinates (X'1, Y'1) are determined to match the position coordinates (X'2, Y'2) when X'1 lies in the interval [X'2-M, X'2+M] and Y'1 lies in the interval [Y'2-N, Y'2+N]. The values of M and N can be preset; for example, M and N may each be equal to 2, 3, or another value.
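The interval test above amounts to the following check, where M and N are the preset tolerances from the specification (the default of 2 follows the example values given).

```python
def coordinates_match(x1, y1, x2, y2, m=2, n=2):
    # (X'1, Y'1) matches (X'2, Y'2) when X'1 lies in [X'2-M, X'2+M]
    # and Y'1 lies in [Y'2-N, Y'2+N].
    return (x2 - m) <= x1 <= (x2 + m) and (y2 - n) <= y1 <= (y2 + n)

print(coordinates_match(101, 49, 100, 50))  # True  (within tolerance)
print(coordinates_match(105, 50, 100, 50))  # False (X'1 outside [98, 102])
```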
Referring to FIG. 4, a schematic diagram of the mobile navigation system 100 navigating in a navigation space in an embodiment of the present invention is shown. In the present embodiment, the navigation space is an indoor workshop, and the indoor workshop includes six production lines, namely production line 1 to production line 6. Multiple cameras 30 are mounted at high positions (such as the ceiling) in the navigation space. For example, in FIG. 4, one camera 30 is installed at each of the four corners of the indoor workshop ceiling. The range A1 that the camera 30 mounted at the upper-left corner of the indoor workshop can scan covers production line 1 and production line 3; the range that the camera 30 mounted at the upper-right corner can scan covers production line 2; the range that the camera 30 mounted at the lower-left corner can scan covers production line 5; and the range A2 that the camera 30 mounted at the lower-right corner can scan covers production line 4 and production line 6.
In the present embodiment, the robot 20 needs to run from the starting point (the dwell point at the upper-left corner of the navigation space) to the ending point (the dwell point at the lower-right corner of the navigation space). After the central controller 10 identifies the robot 20, it receives the starting point and ending point information input by the user from the input/output interface 110. The central controller 10 generates a virtual path on the virtual map of the navigation space according to the starting point and ending point information. As shown in FIG. 4, the virtual path includes two dwell points and four turning points: the two dwell points are a first dwell point S1 and a second dwell point S2, and the four turning points are a first turning point T1, a second turning point T2, a third turning point T3, and a fourth turning point T4.
The robot 20 receives the movement instruction sent by the central controller 10 and walks along the virtual path according to the movement instruction. The movement instruction includes movement speed information. The central controller 10 determines whether the robot 20 is about to reach the first turning point T1 according to the distance between the first dwell point S1 and the first turning point T1 and the speed at which the robot 20 moves. In the present embodiment, from the distance between the first dwell point S1 and the first turning point T1 and the movement speed of the robot 20, the central controller 10 can calculate the time required for the robot 20 to move to the first turning point T1, and can thereby determine according to that time whether the robot 20 is about to reach the first turning point T1. For example, the central controller 10 starts a timer after sending the movement instruction to the robot 20; when the difference between the timed duration and the time calculated by the central controller 10 is less than a preset value, it indicates that the robot 20 is about to reach the first turning point T1.
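The time-based arrival estimate in this example can be sketched as follows; the coordinates, speed, and preset threshold below are illustrative values, not figures from the specification.

```python
import math

def travel_time(p_from, p_to, speed):
    # Time for the robot to cover the straight-line distance between two
    # location points (coordinates in the first coordinate system XOY)
    # at its commanded movement speed.
    return math.dist(p_from, p_to) / speed

def about_to_arrive(elapsed, expected, preset_value=1.0):
    # The robot is about to reach the point when the difference between
    # the timer started at the movement instruction and the calculated
    # travel time is less than the preset value.
    return abs(expected - elapsed) < preset_value

# Hypothetical: S1 at (0, 0), T1 at (3, 4), speed 1 unit/s -> 5 s
t = travel_time((0, 0), (3, 4), 1.0)
print(t)                        # 5.0
print(about_to_arrive(4.5, t))  # True
```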
When the robot 20 is about to reach the first turning point T1, the central controller 10 determines the camera 30 corresponding to the first turning point T1 (mounted at the upper-left corner of the indoor workshop) according to the pre-stored installation information of the cameras 30 and the position information of the first turning point T1, and controls the determined camera 30 to capture an image of the robot 20 and send the image to the central controller 10. The central controller 10 compares the captured image with the location image of the robot 20 at the first turning point T1 pre-stored in the memory 112 to confirm whether the robot 20 has reached the first turning point T1. After the robot 20 has reached the first turning point T1, the central controller 10 sends turning information for a left turn to the robot 20. After receiving the turning information, the robot 20 starts to turn left according to the turning information and continues to walk toward the second turning point T2.
The central controller 10 determines whether the robot 20 is about to reach the second turning point T2 according to the distance between the first turning point T1 and the second turning point T2 and the speed at which the robot 20 moves. Since the second turning point T2 is within the scanning range of the camera 30 mounted at the lower-right corner, when the robot 20 is about to reach the second turning point T2, the camera 30 corresponding to the second turning point T2 (mounted at the lower-right corner of the navigation space) captures an image of the robot 20 and sends the captured image to the central controller 10. The central controller 10 compares the captured image with the location image of the robot 20 at the second turning point T2 pre-stored in the memory 112 to confirm whether the robot 20 has reached the second turning point T2. After confirming that the robot 20 has reached the second turning point T2, the central controller 10 sends turning information for a right turn to the robot 20. After receiving the turning information, the robot 20 starts to turn right according to the turning information and continues to walk toward the third turning point T3. This cycle repeats until the robot 20 reaches the second dwell point S2 (the ending point).
Referring to FIG. 5, a flowchart of a mobile navigation method in an embodiment of the present invention is shown. According to different requirements, the order of the steps in the flowchart can be changed, and certain steps can be omitted or combined.
Step S51: after the central controller 10 identifies the robot 20, it receives the starting point and ending point information input by the user.
In the present embodiment, the central controller 10 identifies the robot 20 from images of the robot 20 captured by the cameras 30. Specifically, the robot 20 has a unique shape; after a camera 30 obtains an image of the robot 20, it can send the image to the central controller 10, and the central controller 10 identifies the robot 20 by comparing the image of the robot 20 pre-stored in the memory 112 with the image captured by the camera 30.
The central controller 10 can also identify the robot 20 by using the cameras 30 to scan a two-dimensional code sprayed or pasted on the surface of the robot 20.
It can be understood that markers of different colors or shapes may also be sprayed or pasted on the surface of the robot 20, so that the central controller 10 can identify the robot 20 by scanning the markers with the cameras 30.
It should be noted that, while identifying the robot 20 by scanning it with the cameras 30, the central controller 10 also positions the robot 20 according to the images scanned by the cameras 30.
Step S52: the central controller 10 generates a virtual route according to the starting point and terminating point information. The virtual route is composed of location points and the connection relations between the location points. The location points include dwell points and turning points. In the present embodiment, a dwell point is defined as a point on the virtual route at which the robot 20 needs to stop for a preset time, for example the starting point and the terminating point. A turning point is defined as a point on the virtual route at which the robot 20 may change its direction of travel.
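The route structure just described (location points plus their connection order) can be sketched as follows. The representation and the coordinates are illustrative assumptions; the patent does not prescribe a data structure, and the geometry of the S1, T1..T4, S2 route in FIG. 4 is invented here for the example.

```python
# Illustrative sketch of the virtual-route data structure: location points in
# the first coordinate system (XOY) connected in route order. Coordinates and
# dwell times are assumed values, not taken from the patent.

from dataclasses import dataclass

@dataclass
class LocationPoint:
    name: str
    x: float                 # coordinate in the first coordinate system (XOY)
    y: float
    kind: str                # "dwell" or "turn"
    dwell_time: float = 0.0  # preset stop time for dwell points, in seconds

# Hypothetical layout: dwell point S1 -> turning points T1..T4 -> dwell point S2.
route = [
    LocationPoint("S1", 0, 0, "dwell", dwell_time=5.0),  # starting point
    LocationPoint("T1", 0, 4, "turn"),
    LocationPoint("T2", 3, 4, "turn"),
    LocationPoint("T3", 3, 2, "turn"),
    LocationPoint("T4", 6, 2, "turn"),
    LocationPoint("S2", 6, 0, "dwell", dwell_time=5.0),  # terminating point
]

def next_point(route, current_index):
    """Next location point along the route, or None at the terminating point."""
    if current_index + 1 < len(route):
        return route[current_index + 1]
    return None
```

The connection relation here is simply list order; a route with branches would need an explicit adjacency structure instead.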
Step S53: the robot 20 receives a movement instruction sent by the central controller 10 and walks along the virtual route according to the movement instruction. The movement instruction includes walking speed information. In the present embodiment, the controller 213 of the robot 20 controls the walking unit 211 to walk along the virtual route according to the movement instruction.
Step S54: the central controller 10 determines whether the robot 20 has reached the next location point on the virtual route. When the robot 20 reaches the next location point on the virtual route, the flow proceeds to step S55; when the robot 20 has not reached the next location point on the virtual route, the flow returns to step S53.
In the present embodiment, the central controller 10 calculates the time required for the robot 20 to move to the next location point according to the distance between the starting point and the next location point on the virtual route and the movement speed of the robot 20, and then determines from this time whether the robot 20 is about to reach the next location point. For example, the central controller 10 starts timing after sending the movement instruction to the robot 20; when the difference between the timed duration and the time calculated by the central controller 10 is less than a preset value, the robot 20 is about to reach the next location point.
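The arrival-time estimate above can be sketched as follows. The function names and the preset threshold value are illustrative assumptions; the patent states only that the required time is derived from distance and speed, and that arrival is anticipated when the timed duration comes within a preset value of it.

```python
# Sketch of the arrival-time estimate: travel time = distance / speed, with
# arrival anticipated once the elapsed time is within a preset value of the
# estimate. Names and the threshold are illustrative assumptions.

import math

def travel_time(start, target, speed):
    """Time needed to cover the straight-line distance at the given speed."""
    distance = math.hypot(target[0] - start[0], target[1] - start[1])
    return distance / speed

def about_to_arrive(elapsed, required, preset=0.5):
    """True when the timed duration is within `preset` of the required time."""
    return abs(required - elapsed) < preset

required = travel_time((0, 0), (3, 4), speed=1.0)  # 5.0 seconds
print(about_to_arrive(4.8, required))  # True: within the preset value
print(about_to_arrive(2.0, required))  # False: still far from the point
```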
When the robot 20 is about to reach the next location point, the central controller 10 controls the camera 30 corresponding to the next location point to capture a location image of the robot 20 and to send the location image to the central controller 10.
The central controller 10 establishes a second coordinate system (X'O'Y') with the lower left corner of the image captured by the camera 30 at the next location point as the origin O', the lateral direction as the X' axis, and the longitudinal direction as the Y' axis. The central controller 10 determines the position coordinates (X', Y') of the robot 20 in the second coordinate system (X'O'Y'); the position coordinates (X', Y') correspond to a pixel in the image. According to the correspondence between the first coordinate system (XOY) and the second coordinate system (X'O'Y'), the central controller 10 converts the position coordinates (X', Y') of the robot 20 in the image into the coordinates (X, Y) of the robot 20 in the first coordinate system (XOY), so that the central controller 10 can confirm whether the robot 20 has arrived at the next location point.
Specifically, the central controller 10 obtains the image currently captured by the camera 30. When the position coordinates (X'1, Y'1) of the robot 20 in the currently captured image match the position coordinates (X'2, Y'2) of the robot 20 in the location image corresponding to the next location point, the central controller 10 determines that the robot 20 has moved to the next location point. The location image corresponding to the next location point and the position coordinates (X'2, Y'2) of the robot 20 in that location image are stored in advance in the memory 112.
Step S55: the central controller 10 determines whether the robot 20 needs to turn at the next location point. When the robot 20 needs to turn at the next location point, the flow proceeds to step S56; when the robot 20 does not need to turn at the next location point, the flow returns to step S53.
In the present embodiment, the central controller 10 can determine from the virtual route whether the robot 20 needs to turn at the next location point. For example, as can be seen from FIG. 4, the robot 20 needs to turn at the first turning point T1, the second turning point T2, and the fourth turning point T4, but does not need to turn at the third turning point T3.
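The steering decision above can be sketched as follows: treating the virtual route as a polyline of (x, y) location points, a turn is needed at a point exactly when the incoming and outgoing travel directions differ. The coordinates used in the example are illustrative, since the patent does not give the geometry of FIG. 4.

```python
# Sketch of deciding whether a steering instruction is needed at a location
# point: compare the travel direction into the point with the direction out
# of it. Example coordinates are illustrative assumptions.

def needs_turn(prev_pt, point, next_pt):
    """True when the direction of travel changes at `point`."""
    in_dir = (point[0] - prev_pt[0], point[1] - prev_pt[1])
    out_dir = (next_pt[0] - point[0], next_pt[1] - point[1])
    # The 2D cross product is zero only when the directions are collinear.
    cross = in_dir[0] * out_dir[1] - in_dir[1] * out_dir[0]
    return cross != 0

# Straight-through point (like T3): no steering instruction needed.
print(needs_turn((0, 0), (0, 2), (0, 4)))  # False
# Right-angle corner (like T1 or T2): a steering instruction is sent.
print(needs_turn((0, 0), (0, 4), (3, 4)))  # True
```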
Step S56: the robot 20 receives the steering instruction sent by the central controller 10, turns according to the steering instruction, and then continues walking along the virtual route.
Step S57: the central controller 10 determines whether the robot 20 has reached the terminating point. When the robot 20 reaches the terminating point, the flow ends; when the robot 20 has not reached the terminating point, the flow returns to step S53.
The above embodiments are merely illustrative of the technical solutions of the present invention and are not restrictive. Although the present invention has been described in detail with reference to the above preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent replacements may be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.