CN111060110A - Robot navigation method, robot navigation device and robot

Robot navigation method, robot navigation device and robot

Info

Publication number
CN111060110A
CN111060110A (application number CN202010014890.7A)
Authority
CN
China
Prior art keywords
robot
point
navigation
check-in
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010014890.7A
Other languages
Chinese (zh)
Inventor
刘翔
庞建新
熊友军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202010014890.7A
Publication of CN111060110A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface

Abstract

The present application is applicable to the technical field of robots, and provides a robot navigation method, a robot navigation device and a robot. The method includes: when a navigation instruction is detected, acquiring a target position corresponding to the navigation instruction and determining the current position of the robot as a starting position; generating a first navigation route from the starting position to the target position according to a preset map, and determining the arrival order of the check-in points on the first navigation route, where the preset map includes a plurality of mark points and a check-in position corresponding to each mark point, and the check-in points are the mark points located on the first navigation route; and determining a second navigation route between every two adjacent check-in points according to the arrival order and the check-in positions corresponding to the check-in points, and controlling the robot to travel to the check-in points in sequence according to the arrival order and the second navigation routes. This method can effectively improve the navigation accuracy of the robot.

Description

Robot navigation method, robot navigation device and robot
Technical Field
The present application relates to the field of robotics, and in particular to a robot navigation method, a robot navigation device, and a robot.
Background
With the continuous development of robotics, the functions of service robots have gradually diversified. The mobile robot is a common type of service robot: it relies on its own onboard sensors to move without collision from a starting position to a target position in a particular environment. Navigation is therefore the key problem in achieving autonomous movement.
Common navigation methods in the prior art include satellite navigation, inertial navigation, visual navigation, and sensor-data navigation. Satellite navigation is costly and has a narrow range of application; inertial navigation has poor positioning accuracy, and deviations easily arise and accumulate during motion; visual navigation is easily affected by illumination and other environmental factors and requires a large amount of computation; and sensor-data navigation is susceptible to interference from obstacles or moving objects in the environment. As a result, existing robot navigation methods have low navigation accuracy and poor navigation performance.
Disclosure of Invention
Embodiments of the present application provide a robot navigation method, a robot navigation device, and a robot, which can solve the problem of low navigation accuracy in existing robot navigation methods.
In a first aspect, an embodiment of the present application provides a robot navigation method, including:
when a navigation instruction is detected, acquiring a target position corresponding to the navigation instruction, and determining the current position of the robot as a starting position;
generating a first navigation route from the starting position to the target position according to a preset map, and determining the arrival order of the check-in points on the first navigation route, wherein the preset map comprises a plurality of mark points and a check-in position corresponding to each mark point, and the check-in points are the mark points located on the first navigation route; and
determining a second navigation route between every two adjacent check-in points according to the arrival order and the check-in positions corresponding to the check-in points, and controlling the robot to travel to the check-in points in sequence according to the arrival order and the second navigation routes.
In a possible implementation of the first aspect, the controlling the robot to travel to the check-in points in sequence according to the arrival order and the second navigation routes includes:
performing position correction on the robot each time the robot reaches a check-in point; and
after the position correction, if the current check-in point is not the last check-in point, controlling the robot to travel to the next check-in point according to the second navigation route between the current check-in point and the next check-in point.
In a possible implementation of the first aspect, the performing position correction on the robot each time the robot reaches a check-in point includes:
each time the robot reaches a check-in point, judging whether the current position of the robot is consistent with the check-in position corresponding to the current check-in point;
if the current position of the robot is inconsistent with the check-in position corresponding to the current check-in point, determining a third navigation route from the current position of the robot to the check-in position corresponding to the current check-in point, and controlling the robot to move to that check-in position according to the third navigation route; and
completing the position correction after the robot reaches the check-in position corresponding to the current check-in point.
In a possible implementation of the first aspect, the performing position correction on the robot each time the robot reaches a check-in point includes:
each time the robot reaches a check-in point, judging whether the current position of the robot is consistent with the check-in position corresponding to the current check-in point; and
if the current position of the robot is consistent with the check-in position corresponding to the current check-in point, completing the position correction.
In a possible implementation of the first aspect, the judging whether the current position of the robot is consistent with the check-in position corresponding to the current check-in point includes:
acquiring at least one environment image of the environment the robot is currently in;
performing image recognition on the at least one environment image by using a trained image recognition model to obtain the mark point corresponding to the at least one environment image;
if the mark point corresponding to the at least one environment image is the current check-in point, judging that the current position of the robot is consistent with the check-in position corresponding to the current check-in point; and
if the mark point corresponding to the at least one environment image is not the current check-in point, judging that the current position of the robot is inconsistent with the check-in position corresponding to the current check-in point.
In a possible implementation of the first aspect, before the performing image recognition on the at least one environment image by using the trained image recognition model, the method further includes:
acquiring a training set, wherein the training set includes the environment images corresponding to each mark point in the preset map; and
training a preset image recognition model by using the training set to obtain the trained image recognition model.
In a possible implementation of the first aspect, the image recognition model includes:
a convolutional layer, a feature region pooling layer, a first fully-connected layer, two second fully-connected layers, a first loss layer, and a second loss layer;
wherein the input of the convolutional layer is the input of the image recognition model; the output of the convolutional layer is connected to the input of the feature region pooling layer; the output of the feature region pooling layer is connected to the input of the first fully-connected layer; the output of the first fully-connected layer is connected to the inputs of the two second fully-connected layers respectively; the output of the first of the two second fully-connected layers is connected to the input of the first loss layer; the output of the second of the two second fully-connected layers is connected to the input of the second loss layer; the output of the first loss layer is the first output of the image recognition model, and the output of the second loss layer is the second output of the image recognition model.
In a second aspect, an embodiment of the present application provides a robot navigation device, including:
an acquisition unit, configured to acquire, when a navigation instruction is detected, a target position corresponding to the navigation instruction, and determine the current position of the robot as a starting position;
a planning unit, configured to generate a first navigation route from the starting position to the target position according to a preset map, and determine the arrival order of the check-in points on the first navigation route, wherein the preset map comprises a plurality of mark points and a check-in position corresponding to each mark point, and the check-in points are the mark points located on the first navigation route; and
a navigation unit, configured to determine a second navigation route between every two adjacent check-in points according to the arrival order and the check-in positions corresponding to the check-in points, and control the robot to travel to the check-in points in sequence according to the arrival order and the second navigation routes.
In a third aspect, an embodiment of the present application provides a robot including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the robot navigation method according to any one of the implementations of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the robot navigation method according to any one of the implementations of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the robot navigation method according to any one of the implementations of the first aspect.
It can be understood that, for the beneficial effects of the second to fifth aspects, reference may be made to the related description of the first aspect; details are not repeated here.
Compared with the prior art, the embodiments of the present application have the following advantages:
according to the embodiment of the application, when a navigation instruction is monitored, a target position corresponding to the navigation instruction is obtained, and the current position of the robot is determined to be an initial position; generating a first navigation route from the starting position to the target position according to a preset map, and determining the arrival sequence of each card punching point on the first navigation route, wherein the preset map comprises a plurality of mark points and card punching positions corresponding to each mark point, and the card punching points are mark points on the first navigation route; then, according to the arrival sequence and the corresponding card punching positions of the card punching points, second navigation routes between every two adjacent card punching points are respectively determined, and by means of the method, the first navigation routes are divided into a plurality of shorter second navigation routes, so that the accumulation of deviation in the navigation process is avoided; and finally, controlling the robot to sequentially go to each card punching point according to the arrival sequence and the second navigation route, namely realizing segmented navigation control on the robot, so that the robot can guarantee higher navigation precision within each segment of shorter distance, and further guarantee the long-distance navigation precision from the initial position to the target position. By the method, the navigation precision of the robot is effectively improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required by the embodiments or the prior-art description are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a robot navigation system provided in an embodiment of the present application;
FIG. 2 is a schematic flowchart of a robot navigation method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a preset map provided in an embodiment of the present application;
FIG. 4 is a schematic flowchart of a training method for an image recognition model provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an image recognition model provided in an embodiment of the present application;
FIG. 6 is a block diagram of a robot navigation device provided in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a robot provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining", or "in response to detecting". Furthermore, in the description of the present application and the appended claims, the terms "first", "second", "third", and the like are used only to distinguish the descriptions and are not to be understood as indicating or implying relative importance.
Reference throughout this specification to "one embodiment", "some embodiments", or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, the phrases "in one embodiment", "in some embodiments", "in other embodiments", and the like appearing in various places throughout this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise.
An application scenario of the robot navigation method provided by the embodiments of the present application is introduced first. Referring to FIG. 1, a schematic diagram of a robot navigation system according to an embodiment of the present application is shown. As shown in FIG. 1, the robot navigation system may include a robot 101 and a robot navigation device 102. The robot and the robot navigation device may be communicatively connected in a wired or wireless manner. When they are connected by wire, the robot navigation device and the robot may be installed together, for example with the robot navigation device built into the robot; when they are connected wirelessly, the robot navigation device may be an independent terminal device, or may be built into a terminal device (such as a mobile phone or a computer) as a module. During navigation, the robot navigation device controls the movement of the robot by using the robot navigation method provided by the embodiments of the present application.
FIG. 2 shows a flowchart of a robot navigation method provided by an embodiment of the present application, which may include, by way of example and not limitation, the following steps:
S201, when a navigation instruction is detected, acquiring a target position corresponding to the navigation instruction, and determining the current position of the robot as a starting position.
In practical applications, the robot navigation device may be communicatively connected with another control terminal; a user sends a navigation instruction to the robot navigation device through the control terminal, and the robot navigation device performs navigation control on the robot when it detects the navigation instruction. Alternatively, the user may input a navigation instruction through an input device on the robot; the input device is communicatively connected with the robot navigation device, which performs navigation control on the robot when it detects the input navigation instruction.
The current position of the robot may be determined by a positioning device on the robot. Alternatively, an environment image of the robot's current environment may be captured by a camera on the robot and recognized by the trained image recognition model described in the embodiments of the present application, thereby obtaining the current position of the robot.
S202, generating a first navigation route from the starting position to the target position according to a preset map, and determining the arrival order of the check-in points on the first navigation route.
The preset map includes a plurality of mark points and a check-in position corresponding to each mark point, and the check-in points are the mark points located on the first navigation route.
A mark point may refer to a place, a building, or the like, and is not limited to a single coordinate point. Accordingly, the check-in position corresponding to a mark point may be an area that contains the mark point.
Referring to FIG. 3, a schematic diagram of a preset map provided in an embodiment of the present application is shown. FIG. 3 takes a shopping mall as an example: the mark points are stores (11 in total, numbered 01 to 11 in FIG. 3), and the check-in position corresponding to each mark point may be the area occupied by a store, or may be designated as the area in front of or around a store.
Since a check-in position may be an area, the starting position and the target position may each correspond to a check-in point. For example, assume the robot is currently at the door of store 01 and is heading to store 04. The position at the door of store 01 belongs to the check-in position corresponding to mark point 01, so the starting position of the robot corresponds to check-in point 01 and the target position corresponds to check-in point 04; the first navigation route planned according to the preset map runs 30 m from east to west. The arrival order of the check-in points along the first navigation route is then 01 -> 02 -> 03 -> 04.
In practical applications, a map can be built in advance and then annotated manually to obtain the preset map. Taking FIG. 3 as an example, a map of the shopping mall is first built, and the names of the stores (i.e., the mark points) are then marked on it; correspondingly, the position of each store in the mall can be obtained from the mall map, and the check-in positions can be determined from the store positions.
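For illustration only, such a preset map can be represented as a collection of mark points, each carrying its check-in position as an area. The following is a minimal, hypothetical Python sketch; the names MarkPoint and checkin_area and all coordinates are our own assumptions, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MarkPoint:
    """A mark point on the preset map, e.g. a store in the mall of FIG. 3."""
    point_id: str      # e.g. "01" .. "11"
    name: str          # e.g. the store name
    # Check-in position as an axis-aligned area (x_min, y_min, x_max, y_max),
    # since a check-in position may be an area rather than a single coordinate.
    checkin_area: tuple

    def contains(self, x: float, y: float) -> bool:
        """True if (x, y) lies inside this mark point's check-in position."""
        x_min, y_min, x_max, y_max = self.checkin_area
        return x_min <= x <= x_max and y_min <= y <= y_max

# The preset map is then simply the collection of annotated mark points:
PRESET_MAP = {
    "01": MarkPoint("01", "Store 01", (0.0, 0.0, 4.0, 3.0)),
    "02": MarkPoint("02", "Store 02", (10.0, 0.0, 14.0, 3.0)),
    "03": MarkPoint("03", "Store 03", (20.0, 0.0, 24.0, 3.0)),
    "04": MarkPoint("04", "Store 04", (30.0, 0.0, 34.0, 3.0)),
    # ... mark points 05-11 omitted
}
```

Because a check-in position is an area rather than a point, any position inside the area (such as the door of store 01) maps to the corresponding check-in point.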
S203, determining a second navigation route between every two adjacent check-in points according to the arrival order and the check-in positions corresponding to the check-in points, and controlling the robot to travel to the check-in points in sequence according to the arrival order and the second navigation routes.
Continuing the example in step S202, the arrival order of the check-in points on the first navigation route is 01 -> 02 -> 03 -> 04, so a second navigation route from 01 to 02, one from 02 to 03, and one from 03 to 04 need to be planned respectively. In other words, the second navigation route between point A and point B is a navigation route with point A as the starting point and point B as the target point.
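A minimal sketch of this segmentation, assuming the arrival order from step S202 is available as an ordered list of check-in point IDs and that a hypothetical path planner plan_route(start_area, goal_area) exists (neither name comes from the embodiment):

```python
def split_into_second_routes(arrival_order, preset_map, plan_route):
    """Split the first navigation route into second navigation routes,
    one between every two adjacent check-in points.

    arrival_order: check-in point IDs in arrival order, e.g. ["01", "02", "03", "04"]
    preset_map:    mapping from point ID to MarkPoint (see the sketch above)
    plan_route:    hypothetical planner mapping (start_area, goal_area) to a path
    """
    second_routes = []
    for a, b in zip(arrival_order, arrival_order[1:]):
        # The second navigation route between A and B takes A as the
        # starting point and B as the target point.
        route = plan_route(preset_map[a].checkin_area, preset_map[b].checkin_area)
        second_routes.append((a, b, route))
    return second_routes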
In one embodiment, the controlling the robot to travel to the check-in points in sequence according to the arrival order and the second navigation routes in step S203 may include the following steps:
S11, performing position correction on the robot each time the robot reaches a check-in point.
Dividing the first navigation route into several second navigation routes amounts to controlling the robot segment by segment. For example, the control process may be as follows: the robot navigation device sends the robot a forward instruction containing a second navigation route, and the robot advances along that route; when the robot reaches the end point of the second navigation route, it returns an arrival response to the robot navigation device, indicating that it has reached a check-in point. Navigation deviations may occur while the robot travels, so the check-in point the robot actually reaches may not be the one it was supposed to reach; position correction is therefore needed.
S12, after the position correction, if the current check-in point is not the last check-in point, controlling the robot to travel to the next check-in point according to the second navigation route between the current check-in point and the next check-in point.
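Taken together, S11 and S12 form a loop over the second navigation routes with a correction at every check-in point. A sketch under our own assumptions: follow_route and correct_position stand in for the robot's motion interface and the correction procedure of S111-S114 below; neither name comes from the embodiment.

```python
def navigate_segmented(second_routes, follow_route, correct_position):
    """Drive the robot along each second navigation route in arrival order
    (S12), correcting its position at every check-in point it reaches (S11)."""
    for start_id, end_id, route in second_routes:
        # Forward instruction: the robot follows the second navigation route
        # and returns an arrival response at its end point.
        follow_route(route)
        # S11: position correction at the check-in point just reached.
        correct_position(end_id)
    # After the last check-in point, the robot is at the target position.
```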
In one embodiment, the performing position correction on the robot each time the robot reaches a check-in point in S11 may include:
S111, each time the robot reaches a check-in point, judging whether the current position of the robot is consistent with the check-in position corresponding to the current check-in point.
Optionally, the judging in S111 whether the current position of the robot is consistent with the check-in position corresponding to the current check-in point may include:
I. acquiring at least one environment image of the environment the robot is currently in.
The environment image may be an image of the building corresponding to the current check-in point. In the example shown in FIG. 3, the environment image may be an image of a store front, a billboard, or the like.
In practical applications, the robot is equipped with a camera, through which environment images can be captured. To ensure recognition accuracy, the camera can be controlled to shoot in different directions and at different angles to obtain multiple environment images.
II. performing image recognition on the at least one environment image by using the trained image recognition model to obtain the mark point corresponding to the at least one environment image.
III. if the mark point corresponding to the at least one environment image is the current check-in point, judging that the current position of the robot is consistent with the check-in position corresponding to the current check-in point.
IV. if the mark point corresponding to the at least one environment image is not the current check-in point, judging that the current position of the robot is inconsistent with the check-in position corresponding to the current check-in point.
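Sub-steps I-IV can be summarized in a few lines. This is a sketch under stated assumptions: capture_images and recognize are hypothetical wrappers around the robot's camera and the trained image recognition model, and the majority vote over several images is our own aggregation choice, not something the embodiment prescribes.

```python
from collections import Counter

def is_at_checkin_point(current_point_id, capture_images, recognize):
    """Judge whether the robot's current position is consistent with the
    check-in position of the current check-in point (sub-steps I-IV)."""
    images = capture_images()                        # I.  one or more environment images
    recognized = [recognize(img) for img in images]  # II. mark point per image
    # Aggregate the per-image results into one mark point; majority voting is
    # our assumption, the embodiment only requires a mark point corresponding
    # to the at least one environment image.
    mark_point, _ = Counter(recognized).most_common(1)[0]
    return mark_point == current_point_id            # III./IV.
```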
S112, if the current position of the robot is inconsistent with the check-in position corresponding to the current check-in point, determining a third navigation route from the current position of the robot to the check-in position corresponding to the current check-in point, and controlling the robot to move to that check-in position according to the third navigation route.
For example, assuming the robot was supposed to reach check-in point A but actually reached check-in point B, a third navigation route from B to A is determined with B as the starting position and A as the target position.
S113, completing the position correction after the robot reaches the check-in position corresponding to the current check-in point.
S114, if the current position of the robot is consistent with the check-in position corresponding to the current check-in point, completing the position correction.
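Steps S111-S114 then combine into the position-correction procedure used at every check-in point. Another hedged sketch; localize, plan_route, and follow_route are hypothetical robot interfaces, and is_at_checkin_point is the function sketched above:

```python
def correct_position(expected_point, preset_map, is_at_checkin_point,
                     localize, plan_route, follow_route):
    """Position correction of steps S111-S114 at one check-in point."""
    # S111: judge whether the robot is at the expected check-in position.
    if is_at_checkin_point(expected_point):
        return  # S114: positions are consistent, correction is finished.
    # S112: plan a third navigation route from the actual current position
    # (e.g. check-in point B) to the expected check-in position (point A)
    # and move the robot along it.
    third_route = plan_route(localize(),
                             preset_map[expected_point].checkin_area)
    follow_route(third_route)
    # S113: correction is finished once the robot reaches the check-in
    # position corresponding to the current check-in point.
```

In the loop sketch after step S12, the extra dependencies would be bound in advance, e.g. with functools.partial, so the loop can simply call correct_position(end_id).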
To summarize this embodiment: the first navigation route is divided into several shorter second navigation routes according to the check-in points and their arrival order, and the robot is navigated segment by segment with a position correction at each check-in point. Deviations therefore cannot accumulate over the whole route, high navigation accuracy is maintained within each short segment, and the long-distance navigation accuracy from the starting position to the target position is guaranteed; the navigation accuracy of the robot is thus effectively improved.
Referring to FIG. 4, a flowchart of a training method for the image recognition model provided in an embodiment of the present application is shown. As shown in FIG. 4, before the image recognition in step S111 is performed with the trained image recognition model, the model may be trained as follows:
S401, acquiring a training set, where the training set includes the environment images corresponding to each mark point in the preset map.
To ensure the recognition accuracy of the image recognition model, multiple environment images may be collected for each mark point; for example, each mark point may be photographed from different shooting angles, and images of other buildings around the mark point may be captured as well.
Each environment image in the training set is labeled in advance, that is, annotated with the mark point it corresponds to.
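How such a labeled training set might be assembled is sketched below, assuming one directory of environment images per mark-point ID; this directory convention and the function name build_training_set are our own, not the embodiment's.

```python
import os

def build_training_set(root_dir):
    """Collect (image_path, mark_point_id) pairs: every environment image is
    labeled in advance with the mark point it corresponds to."""
    samples = []
    for mark_point_id in sorted(os.listdir(root_dir)):        # e.g. "01".."11"
        point_dir = os.path.join(root_dir, mark_point_id)
        if not os.path.isdir(point_dir):
            continue
        for fname in sorted(os.listdir(point_dir)):           # several angles,
            samples.append((os.path.join(point_dir, fname),   # nearby buildings,
                            mark_point_id))                   # billboards, ...
    return samples
```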
S402, training the preset image recognition model by using the training set to obtain the trained image recognition model.
Optionally, referring to FIG. 5, a schematic structural diagram of an image recognition model provided in an embodiment of the present application is shown. As shown in FIG. 5, the image recognition model may include:
a convolutional layer, a feature region pooling layer, a first fully-connected layer, two second fully-connected layers, a first loss layer, and a second loss layer.
The input of the convolutional layer is the input of the image recognition model; the output of the convolutional layer is connected to the input of the feature region pooling layer; the output of the feature region pooling layer is connected to the input of the first fully-connected layer; the output of the first fully-connected layer is connected to the inputs of the two second fully-connected layers respectively; the output of the first of the two second fully-connected layers is connected to the input of the first loss layer; the output of the second of the two second fully-connected layers is connected to the input of the second loss layer; the output of the first loss layer is the first output of the image recognition model, and the output of the second loss layer is the second output of the image recognition model.
The convolutional layer performs convolution processing on the input image; the feature region pooling layer pools the feature regions of the convolved image; the first fully-connected layer applies a first fully-connected transformation to the pooled features and then feeds the result to the two second fully-connected layers respectively. The first of the two second fully-connected layers applies a second fully-connected transformation, and the first loss layer performs recognition on its output and produces the recognition result; meanwhile, the second of the two second fully-connected layers applies a third fully-connected transformation, and the second loss layer performs regression fine-tuning on its output.
Alternatively, the image recognition model may be a Fast R-CNN network.
The network above thus contains two parallel branches: a recognition branch and a regression fine-tuning branch. During training, recognition and regression fine-tuning can be processed in parallel; during recognition, the result can be obtained directly from the recognition branch.
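A hedged PyTorch sketch of the structure in FIG. 5 follows: a convolutional backbone, an RoI (feature region) pooling layer, a shared first fully-connected layer, and two parallel second fully-connected layers feeding a classification (recognition) head and a bounding-box regression (fine-tuning) head, in the style of Fast R-CNN. The backbone, layer sizes, and class-count handling are our own assumptions; the embodiment fixes only the topology.

```python
import torch
import torch.nn as nn
import torchvision.ops as ops

class CheckinRecognitionModel(nn.Module):
    """Two-headed model following FIG. 5 (Fast R-CNN style)."""

    def __init__(self, num_mark_points: int):
        super().__init__()
        # Convolutional layer(s): a small backbone chosen for illustration.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.roi_size = 7  # feature-region pooling output size (assumption)
        self.fc1 = nn.Linear(128 * self.roi_size * self.roi_size, 1024)  # first FC layer
        self.fc2_cls = nn.Linear(1024, 512)  # first of the two second FC layers
        self.fc2_reg = nn.Linear(1024, 512)  # second of the two second FC layers
        self.cls_head = nn.Linear(512, num_mark_points + 1)  # recognition (+ background)
        self.reg_head = nn.Linear(512, 4)    # regression fine-tuning (box deltas)

    def forward(self, images, rois):
        """images: (N, 3, H, W); rois: list of (K_i, 4) candidate boxes."""
        feats = self.conv(images)
        # Feature region pooling over the candidate regions.
        pooled = ops.roi_pool(feats, rois, output_size=self.roi_size)
        x = torch.relu(self.fc1(pooled.flatten(1)))
        cls_scores = self.cls_head(torch.relu(self.fc2_cls(x)))  # first output
        box_deltas = self.reg_head(torch.relu(self.fc2_reg(x)))  # second output
        return cls_scores, box_deltas
```

During training, a cross-entropy loss on cls_scores (playing the role of the first loss layer) and a smooth-L1 loss on box_deltas (the second loss layer) would be computed in parallel; at recognition time only cls_scores is needed to identify the mark point.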
In the embodiments of the present application, a training set containing the environment images corresponding to each mark point in the preset map is acquired, and the preset image recognition model is trained with it to obtain the pre-trained image recognition model. When the robot reaches a check-in point, the trained model can quickly recognize the environment around the robot, so the robot can be located quickly, which effectively improves navigation efficiency. In addition, the trained model has high recognition accuracy and can accurately recognize the robot's surroundings, which guarantees positioning accuracy and, in turn, navigation accuracy.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
FIG. 6 shows a block diagram of a robot navigation device provided in an embodiment of the present application, corresponding to the robot navigation method of the above embodiments; for convenience of description, only the parts related to the embodiment of the present application are shown.
Referring to FIG. 6, the device includes:
the acquiring unit 61 is configured to acquire a target position corresponding to a navigation instruction when the navigation instruction is monitored, and determine that the current position of the robot is an initial position.
The planning unit 62 is configured to generate a first navigation route from the starting position to the target position according to a preset map, and determine an arrival sequence of each punch-off point on the first navigation route, where the preset map includes a plurality of mark points and a punch-off position corresponding to each mark point, and the punch-off point is a mark point located on the first navigation route.
And the navigation unit 63 is configured to determine a second navigation route between every two adjacent punching points according to the arrival sequence and the punching positions corresponding to the punching points, and control the robot to sequentially move to the punching points according to the arrival sequence and the second navigation route.
Optionally, the navigation unit 63 includes:
and the correction subunit is used for correcting the position of the robot when the robot reaches a card hitting point.
And the control subunit is used for controlling the robot to move forward to the next trip point according to a second navigation route between the current trip point and the next trip point if the current trip point is not the last trip point after the position is corrected.
Optionally, the correction subunit includes:
a judging module, configured to judge, each time the robot reaches a check-in point, whether the current position of the robot is consistent with the check-in position corresponding to the current check-in point;
a determining module, configured to, if the current position of the robot is inconsistent with the check-in position corresponding to the current check-in point, determine a third navigation route from the current position of the robot to the check-in position corresponding to the current check-in point, and control the robot to move to that check-in position according to the third navigation route; and
an ending module, configured to complete the position correction after the robot reaches the check-in position corresponding to the current check-in point.
Optionally, in the correction subunit:
the ending module is further configured to complete the position correction if the current position of the robot is consistent with the check-in position corresponding to the current check-in point.
Optionally, the judging module includes:
an acquiring submodule, configured to acquire at least one environment image of the environment the robot is currently in;
a recognition submodule, configured to perform image recognition on the at least one environment image by using the trained image recognition model to obtain the mark point corresponding to the at least one environment image;
a first judging submodule, configured to judge that the current position of the robot is consistent with the check-in position corresponding to the current check-in point if the mark point corresponding to the at least one environment image is the current check-in point; and
a second judging submodule, configured to judge that the current position of the robot is inconsistent with the check-in position corresponding to the current check-in point if the mark point corresponding to the at least one environment image is not the current check-in point.
Optionally, the robot navigation device further includes:
a data acquisition unit, configured to acquire a training set, where the training set includes the environment images corresponding to each mark point in the preset map; and
a training unit, configured to train a preset image recognition model by using the training set to obtain the trained image recognition model.
Optionally, the image recognition model includes:
the device comprises a convolution layer, a characteristic region pooling layer, a first full-connection layer, two second full-connection layers, a first loss layer and a second loss layer.
The input of convolution layer does the input of image recognition model, the output of convolution layer is connected the input of the regional pooling layer of characteristic, the output of the regional pooling layer of characteristic is connected the input of first full articulamentum, the input of the first full articulamentum of second and the input of the second full articulamentum of second are connected respectively to the output of first full articulamentum, the output of the first full articulamentum of second is connected the input of first loss layer, the output of the second full articulamentum of second is connected the input of second loss layer, the output of first loss layer does the first output of image recognition model, the output of second loss layer does the second output of image recognition model.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
The robot navigation device shown in FIG. 6 may be a software unit, a hardware unit, or a unit combining software and hardware built into an existing terminal device; it may also be integrated into a terminal device as an independent add-on, or exist as an independent terminal device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
FIG. 7 is a schematic structural diagram of a robot according to an embodiment of the present application. As shown in FIG. 7, the robot 7 of this embodiment includes: at least one processor 70 (only one is shown in FIG. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70; the processor 70 implements the steps in any of the above robot navigation method embodiments when executing the computer program 72.
The robot may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that FIG. 7 is merely an example of the robot 7 and does not constitute a limitation on it; the robot may include more or fewer components than those shown, or combine some components, or use different components, such as input and output devices, network access devices, and the like.
The processor 70 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 71 may, in some embodiments, be an internal storage unit of the robot 7, such as a hard disk or memory of the robot 7. In other embodiments, the memory 71 may also be an external storage device of the robot 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the robot 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the robot 7. The memory 71 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the above method embodiments.
An embodiment of the present application further provides a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and implements the steps of the above method embodiments when executed by a processor. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the robot, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A robot navigation method, comprising:
when a navigation instruction is detected, acquiring a target position corresponding to the navigation instruction, and determining the current position of the robot as a starting position;
generating a first navigation route from the starting position to the target position according to a preset map, and determining the arrival order of the check-in points on the first navigation route, wherein the preset map comprises a plurality of mark points and a check-in position corresponding to each mark point, and the check-in points are the mark points located on the first navigation route; and
determining a second navigation route between every two adjacent check-in points according to the arrival order and the check-in positions corresponding to the check-in points, and controlling the robot to travel to the check-in points in sequence according to the arrival order and the second navigation routes.
2. The robot navigation method of claim 1, wherein the controlling the robot to travel to the check-in points in sequence according to the arrival order and the second navigation routes comprises:
performing position correction on the robot each time the robot reaches a check-in point; and
after the position correction, if the current check-in point is not the last check-in point, controlling the robot to travel to the next check-in point according to the second navigation route between the current check-in point and the next check-in point.
3. The robot navigation method of claim 2, wherein the performing position correction on the robot each time the robot reaches a check-in point comprises:
each time the robot reaches a check-in point, judging whether the current position of the robot is consistent with the check-in position corresponding to the current check-in point;
if the current position of the robot is inconsistent with the check-in position corresponding to the current check-in point, determining a third navigation route from the current position of the robot to the check-in position corresponding to the current check-in point, and controlling the robot to move to that check-in position according to the third navigation route; and
completing the position correction after the robot reaches the check-in position corresponding to the current check-in point.
4. The robot navigation method of claim 2, wherein the performing position correction on the robot each time the robot reaches a check-in point comprises:
each time the robot reaches a check-in point, judging whether the current position of the robot is consistent with the check-in position corresponding to the current check-in point; and
if the current position of the robot is consistent with the check-in position corresponding to the current check-in point, completing the position correction.
5. The robot navigation method according to claim 3 or 4, wherein the judging whether the current position of the robot is consistent with the check-in position corresponding to the current check-in point comprises:
acquiring at least one environment image of the environment the robot is currently in;
performing image recognition on the at least one environment image by using a trained image recognition model to obtain the mark point corresponding to the at least one environment image;
if the mark point corresponding to the at least one environment image is the current check-in point, judging that the current position of the robot is consistent with the check-in position corresponding to the current check-in point; and
if the mark point corresponding to the at least one environment image is not the current check-in point, judging that the current position of the robot is inconsistent with the check-in position corresponding to the current check-in point.
6. The robot navigation method of claim 5, wherein before the performing image recognition on the at least one environment image by using the trained image recognition model, the method further comprises:
acquiring a training set, wherein the training set comprises the environment images corresponding to each mark point in the preset map; and
training a preset image recognition model by using the training set to obtain the trained image recognition model.
7. The robot navigation method of claim 6, wherein the image recognition model comprises:
a convolutional layer, a feature region pooling layer, a first fully-connected layer, two second fully-connected layers, a first loss layer, and a second loss layer;
wherein the input of the convolutional layer is the input of the image recognition model; the output of the convolutional layer is connected to the input of the feature region pooling layer; the output of the feature region pooling layer is connected to the input of the first fully-connected layer; the output of the first fully-connected layer is connected to the inputs of the two second fully-connected layers respectively; the output of the first of the two second fully-connected layers is connected to the input of the first loss layer; the output of the second of the two second fully-connected layers is connected to the input of the second loss layer; the output of the first loss layer is the first output of the image recognition model, and the output of the second loss layer is the second output of the image recognition model.
8. A robot navigation device, comprising:
an acquisition unit, configured to acquire, when a navigation instruction is detected, a target position corresponding to the navigation instruction, and determine the current position of the robot as a starting position;
a planning unit, configured to generate a first navigation route from the starting position to the target position according to a preset map, and determine the arrival order of the check-in points on the first navigation route, wherein the preset map comprises a plurality of mark points and a check-in position corresponding to each mark point, and the check-in points are the mark points located on the first navigation route; and
a navigation unit, configured to determine a second navigation route between every two adjacent check-in points according to the arrival order and the check-in positions corresponding to the check-in points, and control the robot to travel to the check-in points in sequence according to the arrival order and the second navigation routes.
9. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010014890.7A 2020-01-07 2020-01-07 Robot navigation method, robot navigation device and robot Pending CN111060110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010014890.7A CN111060110A (en) 2020-01-07 2020-01-07 Robot navigation method, robot navigation device and robot

Publications (1)

Publication Number Publication Date
CN111060110A true CN111060110A (en) 2020-04-24

Family

ID=70306580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014890.7A Pending CN111060110A (en) 2020-01-07 2020-01-07 Robot navigation method, robot navigation device and robot

Country Status (1)

Country Link
CN (1) CN111060110A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001021375A (en) * 1999-07-09 2001-01-26 Nissan Motor Co Ltd Navigation system
CN102955478A (en) * 2012-10-24 2013-03-06 深圳一电科技有限公司 Unmanned aerial vehicle flying control method and unmanned aerial vehicle flying control system
CN103335652A (en) * 2013-06-24 2013-10-02 陕西科技大学 Dining room path navigation system and method of robot
CN109357683A (en) * 2018-10-26 2019-02-19 杭州睿琪软件有限公司 A kind of air navigation aid based on point of interest, device, electronic equipment and storage medium
CN110471409A (en) * 2019-07-11 2019-11-19 深圳市优必选科技股份有限公司 Robot method for inspecting, device, computer readable storage medium and robot
CN110554697A (en) * 2019-08-15 2019-12-10 北京致行慕远科技有限公司 Travel method, travel-enabled device, and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111811522A (en) * 2020-09-01 2020-10-23 中国人民解放军国防科技大学 Unmanned vehicle autonomous navigation method and device, computer equipment and storage medium
CN116412830A (en) * 2023-06-06 2023-07-11 深圳市磅旗科技智能发展有限公司 IHDR-based logistics robot self-adaptive navigation method and system
CN116412830B (en) * 2023-06-06 2023-08-11 深圳市磅旗科技智能发展有限公司 IHDR-based logistics robot self-adaptive navigation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200424)