CN111113404A - Method for robot to obtain position service and robot - Google Patents

Method for robot to obtain position service and robot

Info

Publication number
CN111113404A
Authority
CN
China
Prior art keywords
current frame
optical flow
frame
map
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811294619.2A
Other languages
Chinese (zh)
Other versions
CN111113404B (en)
Inventor
宋亚斐
郑艺强
李名杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuzhou Online E Commerce Beijing Co ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Alibaba Group Holding Ltd
Priority to CN201811294619.2A
Publication of CN111113404A
Application granted
Publication of CN111113404B
Legal status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 Avoiding collision or forbidden zones
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/008 Manipulators for service tasks

Abstract

The present application discloses a method for a robot to obtain a location service, and a robot. The method comprises the following steps: the robot obtains an occupancy map and a visibility map of obstacle positions for the current frame; the robot performs feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame; based on the feature map of the current frame, the robot uses a gated recurrent unit (GRU) neural network to capture the dependency between past frames before the current frame and the current frame; and from that dependency, the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame. The method solves the prior-art problem that optical flow estimation for two-dimensional lidar can estimate only the forward optical flow.

Description

Method for robot to obtain position service and robot
Technical Field
The present application relates to the field of robotics, and in particular to a method for a robot to obtain a location service, and to a robot.
Background
With the rapid development of intelligent technology, robots are increasingly used in commercial activities, such as logistics delivery. Two-dimensional lidar is a common means for a robot to measure the distance to obstacles.
Optical flow describes the motion of each pixel between two consecutive two-dimensional images a and b: for the current image a, the forward optical flow f records the position, in image b, of each point of image a; the backward optical flow b records the position, in the current image a, of each point of the next image b. Predicting robot motion trajectories from two-dimensional lidar optical flow can support robot positioning, navigation, and obstacle avoidance.
For optical flow estimation on two-dimensional lidar data, the prior art uses a recurrent network to estimate the forward optical flow. That approach can estimate only the forward optical flow, not the backward optical flow.
Disclosure of Invention
The present application provides a method and an apparatus for a robot to obtain a location service, aiming to solve the prior-art problem that optical flow estimation for two-dimensional lidar can estimate only the forward optical flow.
The present application provides a method for a robot to obtain a location service, comprising the following steps:
the robot obtains an occupancy map and a visibility map of obstacle positions for the current frame;
the robot performs feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
based on the feature map of the current frame, the robot captures the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame;
the robot obtains a location service based on at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
Optionally, the robot obtaining an occupancy map and a visibility map of obstacle positions for the current frame includes:
the robot scanning obstacles to obtain scan data;
the robot generating, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame.
Optionally, the robot performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame includes:
performing feature extraction on the occupancy map and the visibility map using the sliding convolution of a feature detector in a first convolutional layer, to obtain the feature map of the current frame.
Optionally, the robot capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network, based on the feature map of the current frame, includes:
inputting the feature map of the current frame into the gated recurrent unit neural network;
using the memory property of the gated recurrent unit neural network, performing feature mining between the feature maps of past frames and the feature map of the current frame, and capturing the dependency between past frames and the current frame.
Optionally, the method for the robot to obtain a location service further includes:
obtaining estimated feature information of the next frame using the forward optical flow;
obtaining a first difference between the true feature information of the next frame and the estimated feature information of the next frame;
obtaining optimized parameters of the gated recurrent unit neural network by minimizing the first difference.
Optionally, the method for the robot to obtain a location service further includes:
obtaining estimated feature information of the current frame using the backward optical flow;
obtaining a second difference between the true feature information of the current frame and the estimated feature information of the current frame;
obtaining optimized parameters of the gated recurrent unit neural network by minimizing the second difference.
Optionally, the robot obtaining the forward optical flow and the backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame includes:
the robot obtaining optical flow information between the current frame and the next frame from the dependency between past frames before the current frame and the current frame;
the robot performing feature extraction on the optical flow information using a second convolutional layer, to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
Optionally, the robot obtaining a location service based on the forward optical flow and the backward optical flow between the current frame and the next frame includes:
the robot obtaining position information of obstacles from the forward optical flow and the backward optical flow between the current frame and the next frame;
generating travel route information for the robot from the position information of the obstacles.
The present application provides a robot comprising a robot body and a two-dimensional lidar mounted on the robot body. The two-dimensional lidar is used to obtain scan data for the current frame, and the robot body is used to perform the following operations:
obtaining, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
based on the feature map of the current frame, capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame;
obtaining a location service based on at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
The present application provides a method for obtaining optical flow, comprising the following steps:
obtaining an occupancy map and a visibility map of obstacle positions for the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
based on the feature map of the current frame, capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
Optionally, obtaining an occupancy map and a visibility map of obstacle positions for the current frame includes:
scanning obstacles to obtain scan data;
generating, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame.
Optionally, performing feature extraction on the occupancy map and the visibility map to obtain the feature map of the current frame includes:
performing feature extraction on the occupancy map and the visibility map using the sliding convolution of a feature detector in a first convolutional layer, to obtain the feature map of the current frame.
Optionally, capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network, based on the feature map of the current frame, includes:
inputting the feature map of the current frame into the gated recurrent unit neural network;
using the memory property of the gated recurrent unit neural network, performing feature mining between the feature maps of past frames and the feature map of the current frame, and capturing the dependency between past frames and the current frame.
Optionally, the method for obtaining optical flow further includes:
obtaining estimated feature information of the next frame using the forward optical flow;
obtaining a first difference between the true feature information of the next frame and the estimated feature information of the next frame;
obtaining optimized parameters of the gated recurrent unit neural network by minimizing the first difference.
Optionally, the method for obtaining optical flow further includes:
obtaining estimated feature information of the current frame using the backward optical flow;
obtaining a second difference between the true feature information of the current frame and the estimated feature information of the current frame;
obtaining optimized parameters of the gated recurrent unit neural network by minimizing the second difference.
Optionally, obtaining the forward optical flow and the backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame includes:
obtaining optical flow information between the current frame and the next frame from the dependency between past frames before the current frame and the current frame;
performing feature extraction on the optical flow information using a second convolutional layer, to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
The present application provides an apparatus for obtaining optical flow, comprising:
a binary map obtaining unit, configured to obtain an occupancy map and a visibility map of obstacle positions for the current frame;
a current frame feature extraction unit, configured to perform feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
a dependency capturing unit, configured to capture, based on the feature map of the current frame, the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
an optical flow obtaining unit, configured to obtain at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
Optionally, the binary map obtaining unit is specifically configured to:
scan obstacles to obtain scan data;
generate, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame.
Optionally, the current frame feature extraction unit is specifically configured to:
perform feature extraction on the occupancy map and the visibility map using the sliding convolution of a feature detector in a first convolutional layer, to obtain the feature map of the current frame.
Optionally, the dependency capturing unit is specifically configured to:
input the feature map of the current frame into the gated recurrent unit neural network;
capture the dependency between past frames and the current frame using the property that the output of the gated recurrent unit neural network at the current time depends on the input at the current time and on the hidden-layer state at the previous time.
Optionally, the apparatus for obtaining optical flow further includes a first training unit, configured to:
obtain estimated feature information of the next frame using the forward optical flow;
obtain a first difference between the true feature information of the next frame and the estimated feature information of the next frame;
obtain optimized parameters of the gated recurrent unit neural network by minimizing the first difference.
Optionally, the apparatus for obtaining optical flow further includes a second training unit, configured to:
obtain estimated feature information of the current frame using the backward optical flow;
obtain a second difference between the true feature information of the current frame and the estimated feature information of the current frame;
obtain optimized parameters of the gated recurrent unit neural network by minimizing the second difference.
Optionally, the optical flow obtaining unit is specifically configured to:
obtain optical flow information between the current frame and the next frame from the dependency between past frames before the current frame and the current frame;
perform feature extraction on the optical flow information using a second convolutional layer, to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
The present application provides a robot comprising a robot body and a two-dimensional lidar mounted on the robot body. The two-dimensional lidar is used to obtain scan data for the current frame, and the robot body is used to perform the following operations:
obtaining, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
based on the feature map of the current frame, capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
The present application provides an electronic device, comprising:
a processor;
a memory for storing a program which, when read and executed by the processor, performs the following operations:
obtaining an occupancy map and a visibility map of obstacle positions for the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
based on the feature map of the current frame, capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
The present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs the following steps:
obtaining an occupancy map and a visibility map of obstacle positions for the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
based on the feature map of the current frame, capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
The present application provides a method for a robot to transport goods, comprising the following steps:
the robot obtains goods to be transported and determines destination information for the goods;
the robot obtains a feature map of the current frame representing feature information of obstacle positions;
based on the feature map, the robot captures the dependency between past frames and the current frame using a gated recurrent unit neural network;
the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame;
the robot generates a travel route from its current position to the destination indicated by the destination information, based on the destination information of the goods and on at least one of the forward optical flow and the backward optical flow between the current frame and the next frame;
the robot transports the goods along the travel route.
With the method for a robot to obtain a location service provided by the present application, the forward and backward optical flows between the current frame and the next frame are estimated by a gated recurrent unit neural network from the data of the current frame and past frames, solving the prior-art problem that optical flow estimation for two-dimensional lidar can estimate only the forward optical flow.
Drawings
Fig. 1 is a flowchart of a first embodiment of a method for a robot to obtain location services provided by the present application.
Fig. 2 is an internal structural view of a GRU according to a first embodiment of the present application.
Fig. 3 is a schematic diagram of an application example related to the first embodiment of the present application.
Fig. 4 is a flowchart of a third embodiment of a method for obtaining optical flow provided by the present application.
Fig. 5 is a structural diagram of a fourth embodiment of an optical flow obtaining apparatus according to the present application.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from its spirit. The application is therefore not limited to the specific implementations disclosed below.
The first embodiment of the present application provides a method for a robot to obtain a location service. Please refer to fig. 1, which is a flowchart of the first embodiment. The method is described in detail below with reference to fig. 1 and comprises the following steps:
Step S101: the robot obtains an occupancy map and a visibility map of obstacle positions for the current frame.
In this step, the robot obtains an occupancy map and a visibility map of obstacle positions for the current frame.
The robot obtaining an occupancy map and a visibility map of obstacle positions for the current frame includes the following.
The robot scans obstacles to obtain scan data. In this embodiment, the robot may scan obstacles with a two-dimensional lidar. A two-dimensional lidar, also called a single-line lidar, works by emitting a laser beam and receiving the signal reflected from an obstacle, thereby measuring the distance between the obstacle and the lidar. It emits only one beam at a time and completes a scan by sweeping the emission angle; since the scan angle varies only within one plane, the lidar measures the distances to surrounding objects in that plane.
The robot generates, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame. The two-dimensional lidar measures the distance from the lidar to obstacles at many angles, yielding a one-dimensional vector of ranges. Based on the lidar's working principle, this measurement is converted into two binary maps: in the occupancy map, points occupied by obstacles are set to 1 and all other points are set to 0; in the visibility map, points that can be observed by the lidar are set to 1, and points occluded by obstacles are set to 0.
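For illustration, a minimal Python sketch of this conversion is given below. The grid size, cell resolution, sensor-centered layout, and maximum range are assumptions chosen for the example; the patent does not fix these details.

```python
import numpy as np

def scan_to_maps(ranges, angles, grid_size=64, max_range=10.0):
    """Convert a 2D lidar scan (one range per beam angle) into binary
    occupancy and visibility maps centered on the sensor.

    ranges: (N,) distances to the nearest obstacle per beam
    angles: (N,) beam angles in radians
    """
    occupancy = np.zeros((grid_size, grid_size), dtype=np.uint8)
    visibility = np.zeros((grid_size, grid_size), dtype=np.uint8)
    cell = (2.0 * max_range) / grid_size          # meters per grid cell
    center = grid_size // 2

    for r, a in zip(ranges, angles):
        # Sample points along the beam from the sensor up to the hit point:
        # everything before the hit is visible to the lidar.
        for d in np.arange(0.0, min(r, max_range), cell / 2):
            i = center + int((d * np.sin(a)) / cell)
            j = center + int((d * np.cos(a)) / cell)
            if 0 <= i < grid_size and 0 <= j < grid_size:
                visibility[i, j] = 1              # observed by the lidar
        if r < max_range:                         # beam ended on an obstacle
            i = center + int((r * np.sin(a)) / cell)
            j = center + int((r * np.cos(a)) / cell)
            if 0 <= i < grid_size and 0 <= j < grid_size:
                occupancy[i, j] = 1               # occupied by an obstacle
    return occupancy, visibility
```

Stacking the two maps then gives the two-channel binary input consumed by the convolutional layer of step S102.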
Step S102: the robot performs feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame.
In this step, the robot extracts features from the occupancy map and the visibility map to obtain the feature map of the current frame. This includes:
performing feature extraction on the occupancy map and the visibility map using the sliding convolution of a feature detector in a first convolutional layer, to obtain the feature map of the current frame.
The feature detector may be a 3 × 3 matrix; sliding this matrix over the occupancy map and the visibility map and convolving at each position yields the feature map of the current frame.
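A minimal sketch of such a first convolutional layer is shown below, assuming the two binary maps are stacked as a two-channel input. The 16 output channels and the ReLU activation are assumptions; the patent specifies only the 3 × 3 feature detector.

```python
import torch
import torch.nn as nn

# A bank of 3x3 feature detectors slid (translated) across the stacked
# occupancy and visibility maps.
first_conv = nn.Sequential(
    nn.Conv2d(in_channels=2, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
)

occupancy = torch.zeros(1, 1, 64, 64)   # batch of one occupancy map
visibility = torch.zeros(1, 1, 64, 64)  # batch of one visibility map
frame = torch.cat([occupancy, visibility], dim=1)  # (1, 2, 64, 64)
feature_map = first_conv(frame)                    # (1, 16, 64, 64)
```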
Step S103: based on the feature map of the current frame, the robot captures the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network.
In this step, the robot captures the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network, based on the feature map of the current frame. This includes:
inputting the feature map of the current frame into the gated recurrent unit neural network;
using the memory property of the gated recurrent unit neural network, performing feature mining between the feature maps of past frames and the feature map of the current frame, and capturing the dependency between past frames and the current frame.
The gated recurrent unit neural network may be constructed by stacking three gated recurrent units (GRUs). The GRU network is a kind of recurrent neural network (RNN). Compared with conventional neural networks, RNNs are better suited to tasks whose input is a time series, because an RNN retains the influence of earlier inputs on the model and lets them participate in the next computation step. In theory an RNN can exploit time series of arbitrary length, but in practice the gradient vanishes quickly when the gap between two related inputs is too large, which makes long-range dependencies hard to learn. As a variant of the RNN, the GRU network has a special gate structure that effectively addresses learning over long time sequences.
Please refer to fig. 2, which shows the internal structure of a GRU. h_(t-1) denotes the state at the previous time step relative to the current time t. x_t and h_t denote the input and the output of the current GRU cell, respectively. r_t and z_t denote the two key structures in the GRU, the reset gate and the update gate. Each gate is itself a small neural network; to keep a gate's output between 0 and 1, the sigmoid function is used as its activation function.
GRU cells can be stacked to form a multi-layer GRU neural network; in this embodiment, a 3-layer GRU network may be constructed.
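The following sketch implements a single GRU cell along the lines of fig. 2, with the reset gate r_t and the update gate z_t squashed by sigmoid so their outputs stay between 0 and 1. The tanh candidate state and the exact weight layout follow the standard GRU formulation and are assumptions; the patent fixes only the gate structure.

```python
import torch
import torch.nn as nn

class GRUCell(nn.Module):
    """Minimal GRU cell mirroring fig. 2: reset gate r_t, update gate z_t,
    sigmoid activations so each gate outputs values in (0, 1)."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.reset = nn.Linear(input_size + hidden_size, hidden_size)
        self.update = nn.Linear(input_size + hidden_size, hidden_size)
        self.candidate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x_t, h_prev):
        xh = torch.cat([x_t, h_prev], dim=-1)
        r_t = torch.sigmoid(self.reset(xh))    # reset gate
        z_t = torch.sigmoid(self.update(xh))   # update gate
        # Candidate state: the reset gate decides how much past state to use.
        h_tilde = torch.tanh(self.candidate(torch.cat([x_t, r_t * h_prev], dim=-1)))
        # The update gate blends the previous state with the candidate state.
        h_t = (1.0 - z_t) * h_prev + z_t * h_tilde
        return h_t
```

In practice, torch.nn.GRU(input_size, hidden_size, num_layers=3) provides the same 3-layer stacked structure directly.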
Based on the memory property of the GRU network, feature mining is performed between the feature maps of past frames and the feature map of the current frame, and the dependency between past frames and the current frame is captured. In this embodiment, the feature mining includes mining of obstacle position features.
Step S104: the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
In this step, the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
Optical flow is the instantaneous velocity, on the observed imaging plane, of the pixels of an object moving in space. Optical flow methods use the temporal change of pixels in an image sequence and the correlation between adjacent frames to find correspondences between the previous frame and the current frame, and thereby compute the motion of objects between adjacent frames.
Optical flow describes the motion of each pixel between two consecutive two-dimensional images a and b: for the current image a, the forward optical flow f records the position, in image b, of each point of image a; the backward optical flow b records the position, in the current image a, of each point of the next image b.
The robot obtaining the forward optical flow and the backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame includes the following.
The robot obtains optical flow information between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
In step S103, the robot has obtained the dependency between past frames before the current frame and the current frame using the gated recurrent unit neural network. Within the GRU network, feature information of the forward and backward optical flows between the current frame and the next frame is predicted from this dependency.
The robot then performs feature extraction on this optical flow information using a second convolutional layer, obtaining at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
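A sketch of this second convolutional layer (the flow head) might look as follows. Mapping the GRU features to four output channels, two for the (dx, dy) of the forward flow and two for the backward flow, is an assumption; the patent states only that a second convolutional layer extracts the two flows from the optical flow information.

```python
import torch
import torch.nn as nn

# Flow head: 16 input channels match the earlier sketch (an assumption).
flow_head = nn.Conv2d(in_channels=16, out_channels=4, kernel_size=3, padding=1)

gru_features = torch.zeros(1, 16, 64, 64)   # output of the stacked GRU
flows = flow_head(gru_features)             # (1, 4, 64, 64)
forward_flow, backward_flow = flows[:, :2], flows[:, 2:]
```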
Step S105: the robot obtains a location service based on at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
In this step, the robot obtains a location service based on at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
From the forward and backward optical flows, the robot's motion trajectory can be predicted, supporting robot positioning, navigation, and obstacle avoidance.
The method for obtaining a location service further includes:
obtaining estimated feature information of the next frame using the forward optical flow;
obtaining a first difference between the true feature information of the next frame and the estimated feature information of the next frame;
obtaining optimized parameters of the gated recurrent unit neural network by minimizing the first difference.
The method for obtaining a location service further includes:
obtaining estimated feature information of the current frame using the backward optical flow;
obtaining a second difference between the true feature information of the current frame and the estimated feature information of the current frame;
obtaining optimized parameters of the gated recurrent unit neural network by minimizing the second difference.
To train the proposed neural network effectively, this embodiment provides a self-supervised strategy that avoids manual data labeling. Specifically, for the forward optical flow estimated by the network, the current frame is warped to produce an estimate of the next frame; the difference from the true next frame is computed, and the network parameters are updated to minimize that difference. For the backward optical flow, the next frame is warped to produce an estimate of the current frame; the difference from the true current frame is computed, and the network parameters are updated to minimize that difference.
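A sketch of this self-supervised objective is given below. Bilinear warping via grid_sample and an L1 difference are assumptions; the patent specifies only that a frame is deformed by the estimated flow and the difference from the true frame is minimized.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Deform a (B, C, H, W) frame by a (B, 2, H, W) pixel-displacement
    flow using bilinear sampling (an assumed warping operator)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    coords = base + flow                                      # sample positions
    # Normalize coordinates to [-1, 1], as required by grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1)                      # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def self_supervised_loss(current, next_true, forward_flow, backward_flow):
    # Forward flow: deform the current frame into an estimate of the next
    # frame; the first difference is taken against the true next frame.
    first_diff = (warp(current, forward_flow) - next_true).abs().mean()
    # Backward flow: deform the next frame into an estimate of the current
    # frame; the second difference is taken against the true current frame.
    second_diff = (warp(next_true, backward_flow) - current).abs().mean()
    return first_diff + second_diff  # minimized to update network parameters
```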
Fig. 3 is a schematic diagram of an application example of the method for a robot to obtain a location service provided by this embodiment. In fig. 3, three gated recurrent unit layers are stacked to form a gated recurrent neural network.
In fig. 3, lidar scan data is first obtained and converted into an occupancy map and a visibility map. These two maps are input into a convolutional layer for feature extraction, yielding the feature map of the current frame corresponding to the lidar scan data. The feature map of the current frame is then input into the gated recurrent unit neural network formed by stacking three GRU layers. In this network, using the memory property of the GRU, feature mining is performed between the feature maps of past frames and the feature map of the current frame, and the dependency between past frames and the current frame is captured. From this dependency, feature information of the forward and backward optical flows between the current frame and the next frame is predicted. Finally, this feature information is passed through a convolutional layer to obtain the forward optical flow and the backward optical flow.
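The fig. 3 pipeline (first convolutional layer, three stacked GRU layers over time, second convolutional layer emitting both flows) can be tied together as a hypothetical end-to-end sketch. Running the stacked GRU independently per pixel is a simplifying assumption made here, as are all layer sizes; the patent fixes only the overall conv, GRU, conv structure.

```python
import torch
import torch.nn as nn

class LidarFlowNet(nn.Module):
    """Hypothetical end-to-end sketch of the fig. 3 pipeline."""

    def __init__(self, feat=16):
        super().__init__()
        self.encoder = nn.Conv2d(2, feat, kernel_size=3, padding=1)
        self.gru = nn.GRU(feat, feat, num_layers=3, batch_first=True)
        self.flow_head = nn.Conv2d(feat, 4, kernel_size=3, padding=1)

    def forward(self, frames):
        # frames: (B, T, 2, H, W), occupancy and visibility per time step
        b, t, _, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, 2, h, w))     # (B*T, F, H, W)
        f = feats.shape[1]
        # Treat each pixel as an independent sequence for the stacked GRU.
        seq = feats.reshape(b, t, f, h * w).permute(0, 3, 1, 2)  # (B, H*W, T, F)
        out, _ = self.gru(seq.reshape(b * h * w, t, f))          # (B*H*W, T, F)
        last = out[:, -1].reshape(b, h * w, f).permute(0, 2, 1)  # (B, F, H*W)
        flows = self.flow_head(last.reshape(b, f, h, w))         # (B, 4, H, W)
        return flows[:, :2], flows[:, 2:]   # forward flow, backward flow
```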
The second embodiment of the present application provides a robot comprising a robot body and a two-dimensional lidar mounted on the robot body. The two-dimensional lidar is used to obtain scan data for the current frame, and the robot body is used to perform the following operations:
obtaining, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
based on the feature map of the current frame, capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame;
obtaining a location service based on at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
A third embodiment of the present application provides a method for obtaining optical flow. Please refer to fig. 4, which is a flowchart of the third embodiment. The method comprises the following steps:
Step S401: obtain an occupancy map and a visibility map of obstacle positions for the current frame.
This step obtains an occupancy map and a visibility map of obstacle positions for the current frame, and includes:
scanning obstacles to obtain scan data;
generating, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame.
In this embodiment, a two-dimensional lidar may be used to scan obstacles and obtain the scan data.
Step S402: perform feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame.
This step extracts features from the occupancy map and the visibility map to obtain the feature map of the current frame, and includes:
performing feature extraction on the occupancy map and the visibility map using the sliding convolution of a feature detector in a first convolutional layer, to obtain the feature map of the current frame.
The feature detector may be a 3 × 3 matrix; sliding this matrix over the occupancy map and the visibility map and convolving at each position yields the feature map of the current frame.
Step S403: based on the feature map of the current frame, capture the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network.
This step captures the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network, based on the feature map of the current frame, and includes:
inputting the feature map of the current frame into the gated recurrent unit neural network;
using the memory property of the gated recurrent unit neural network, performing feature mining between the feature maps of past frames and the feature map of the current frame, and capturing the dependency between past frames and the current frame.
The gated recurrent unit neural network may be constructed by stacking three gated recurrent units.
Step S404: obtain at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
Obtaining the forward optical flow and the backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame includes:
obtaining optical flow information between the current frame and the next frame from the dependency between past frames before the current frame and the current frame;
performing feature extraction on the optical flow information using a second convolutional layer, to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
The method for obtaining optical flow further includes:
obtaining estimated feature information of the next frame using the forward optical flow;
obtaining a first difference between the true feature information of the next frame and the estimated feature information of the next frame;
obtaining optimized parameters of the gated recurrent unit neural network by minimizing the first difference.
The method for obtaining optical flow further includes:
obtaining estimated feature information of the current frame using the backward optical flow;
obtaining a second difference between the true feature information of the current frame and the estimated feature information of the current frame;
obtaining optimized parameters of the gated recurrent unit neural network by minimizing the second difference.
To train the proposed neural network effectively, this embodiment provides a self-supervised strategy that avoids manual data labeling. Specifically, for the forward optical flow estimated by the network, the current frame is warped to produce an estimate of the next frame; the difference from the true next frame is computed, and the network parameters are updated to minimize that difference. For the backward optical flow, the next frame is warped to produce an estimate of the current frame; the difference from the true current frame is computed, and the network parameters are updated to minimize that difference.
The above embodiments provide a method for obtaining optical flow; correspondingly, the present application also provides an apparatus for obtaining optical flow. Please refer to fig. 5, which is a structural diagram of the apparatus embodiment. Since this fourth embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment. The apparatus embodiment described below is merely illustrative.
A fourth embodiment of the present application provides an apparatus for obtaining optical flow, comprising:
a binary map obtaining unit 501, configured to obtain an occupancy map and a visibility map of obstacle positions for the current frame;
a current frame feature extraction unit 502, configured to perform feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
a dependency capturing unit 503, configured to capture, based on the feature map of the current frame, the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
an optical flow obtaining unit 504, configured to obtain at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
In this embodiment, the binary map obtaining unit is specifically configured to:
scan obstacles to obtain scan data;
generate, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame.
In this embodiment, the current frame feature extraction unit is specifically configured to:
perform feature extraction on the occupancy map and the visibility map using the sliding convolution of a feature detector in a first convolutional layer, to obtain the feature map of the current frame.
In this embodiment, the dependency capturing unit is specifically configured to:
input the feature map of the current frame into the gated recurrent unit neural network;
use the memory property of the gated recurrent unit neural network to perform feature mining between the feature maps of past frames and the feature map of the current frame, and capture the dependency between past frames and the current frame.
In this embodiment, the apparatus for obtaining optical flow further includes a first training unit, configured to:
obtain estimated feature information of the next frame using the forward optical flow;
obtain a first difference between the true feature information of the next frame and the estimated feature information of the next frame;
obtain optimized parameters of the gated recurrent unit neural network by minimizing the first difference.
In this embodiment, the apparatus for obtaining optical flow further includes a second training unit, configured to:
obtain estimated feature information of the current frame using the backward optical flow;
obtain a second difference between the true feature information of the current frame and the estimated feature information of the current frame;
obtain optimized parameters of the gated recurrent unit neural network by minimizing the second difference.
Optionally, the optical flow obtaining unit is specifically configured to:
obtain optical flow information between the current frame and the next frame from the dependency between past frames before the current frame and the current frame;
perform feature extraction on the optical flow information using a second convolutional layer, to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
A fifth embodiment of the present application provides a robot comprising a robot body and a two-dimensional lidar mounted on the robot body. The two-dimensional lidar is used to obtain scan data for the current frame, and the robot body is used to perform the following operations:
obtaining, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
based on the feature map of the current frame, capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
A sixth embodiment of the present application provides an electronic device, comprising:
a processor;
a memory for storing a program which, when read and executed by the processor, performs the following operations:
obtaining an occupancy map and a visibility map of obstacle positions for the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
based on the feature map of the current frame, capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
A seventh embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the following steps:
obtaining an occupancy map and a visibility map of obstacle positions for the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
based on the feature map of the current frame, capturing the dependency between past frames before the current frame and the current frame using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
An eighth embodiment of the present application provides a method for a robot to transport goods, comprising:
Step S601: the robot obtains goods to be transported and determines destination information for the goods.
In this step, the robot obtains the goods that need to be transported and determines the destination information of the goods. The destination information includes geographic location information of the destination, such as longitude and latitude or two-dimensional map data.
Step S602: the robot obtains a feature map of the current frame representing feature information of obstacle positions.
In this step, the robot obtains a feature map of the current frame that represents feature information of obstacle positions.
The robot scans obstacles to obtain scan data. In this embodiment, the robot may scan obstacles with a two-dimensional lidar; the current frame is the image frame obtained by the lidar at the current time. The robot generates, from the scan data, an occupancy map and a visibility map of obstacle positions for the current frame, then performs feature extraction on the occupancy map and the visibility map to obtain the feature map of the current frame. The feature map includes feature information of obstacles acquired at the current time, such as the distance between an obstacle and the robot.
Step S603: based on the feature map, the robot captures the dependency between past frames and the current frame using a gated recurrent unit neural network.
In this step, the robot captures the dependency between past frames and the current frame using a gated recurrent unit neural network, based on the feature map.
A past frame is a frame preceding the current frame of step S602. For example, if the two-dimensional lidar obtains current frame A at the current time 14:07:30, then frame B, obtained at the earlier time 14:07:28, is a past frame. The dependency between frame A and frame B is the change between the obstacle position features in frame A and the obstacle position features in frame B.
Step S604: the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
In this step, the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame from the dependency between past frames before the current frame and the current frame.
Optical flow describes the motion of each pixel between two consecutive two-dimensional images a and b: for the current image a, the forward optical flow f records the position, in image b, of each point of image a; the backward optical flow b records the position, in the current image a, of each point of the next image b. For example, suppose the robot acquires two consecutive image frames C and D, frame C at 15:07:20 and frame D at 15:07:22. Frames C and D reflect the obstacle information acquired by the robot with the two-dimensional lidar at these two times. The forward optical flow between C and D records the position of each point of frame C in frame D, reflecting the forward change in the obstacle positions seen by the robot between the two times; the backward optical flow records the position of each point of frame D in frame C, reflecting the backward change.
Step S605: the robot generates a travel route from its current position to the destination indicated by the destination information, based on the destination information of the goods and on at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
From at least one of the forward and backward optical flows between the current frame and the next frame, the robot can obtain information about how obstacles change, and can therefore avoid them.
Step S606: the robot transports the goods along the travel route.
Since the method by which the robot obtains a location service in this embodiment is essentially the same as in the first embodiment, please refer to the detailed description of the first embodiment.
Although the present application has been described with reference to preferred embodiments, they do not limit the application. Those skilled in the art can make possible variations and modifications without departing from the spirit and scope of the application; the scope of protection is therefore defined by the appended claims.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, a network interface, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.

Claims (27)

1. A method for a robot to obtain location services, comprising:
the robot obtains an occupancy map and a visibility map for the obstacle position of the current frame;
the robot performs feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
the robot captures, according to the feature map of the current frame, the dependency relationship between a previous frame before the current frame and the current frame by using a gated recurrent unit neural network;
the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and a next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
the robot obtains a location service according to at least one of a forward optical flow and a backward optical flow between the current frame and a next frame.
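By way of illustration only, and not as part of the claims, a compact Python/PyTorch sketch of the pipeline recited in claim 1: a first convolution layer over the stacked occupancy and visibility maps, a gated recurrent unit over per-frame features, and a head producing forward and backward flows. All layer sizes, tensor shapes, and names are assumptions; the individual stages are sketched again under claims 2 to 8 below.

```python
import torch
import torch.nn as nn

class FlowEstimator(nn.Module):
    """Sketch: conv encoder -> GRU over frames -> forward/backward flow head."""

    def __init__(self, feat=32, hidden=256):
        super().__init__()
        # first convolution layer over the 2 stacked binary maps
        self.encoder = nn.Conv2d(2, feat, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(16)
        # GRU captures the dependency between past frames and the current frame
        self.gru = nn.GRU(feat * 16 * 16, hidden, batch_first=True)
        # head emits 4 channels: 2 for forward flow, 2 for backward flow
        self.flow_head = nn.Sequential(
            nn.Linear(hidden, 4 * 16 * 16),
            nn.Unflatten(-1, (4, 16, 16)),
        )

    def forward(self, maps):                    # maps: (B, T, 2, H, W)
        b, t = maps.shape[:2]
        f = torch.relu(self.encoder(maps.flatten(0, 1)))
        f = self.pool(f).flatten(1)             # one feature vector per frame
        h, _ = self.gru(f.view(b, t, -1))       # (B, T, hidden)
        flows = self.flow_head(h[:, -1])        # prediction for current frame
        return flows[:, :2], flows[:, 2:]       # forward flow, backward flow

fwd, bwd = FlowEstimator()(torch.rand(1, 5, 2, 64, 64))   # 5-frame sequence
```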
2. The method for obtaining location services of claim 1, wherein obtaining an occupancy map and a visibility map for the obstacle position of the current frame comprises:
the robot scans the obstacle to obtain scanning data;
the robot generates, from the scanning data, the occupancy map and the visibility map for the obstacle position of the current frame.
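As an illustration of claim 2, a minimal NumPy sketch, assuming the scanning data arrive as (angle, range) pairs from the two-dimensional laser radar and the two maps are square grids centered on the robot; the resolution, range limit, and all names are assumptions.

```python
import numpy as np

def scan_to_maps(angles, ranges, grid=64, cell=0.1, max_range=6.0):
    """Build the two binary maps of the current frame from one 2D scan:
    occupancy map: cells containing a lidar return (an obstacle surface);
    visibility map: cells the beam traversed before reaching the return."""
    occupancy = np.zeros((grid, grid), dtype=np.uint8)
    visibility = np.zeros((grid, grid), dtype=np.uint8)
    c = grid // 2                                   # robot at grid center
    for a, r in zip(angles, ranges):
        r = min(r, max_range)
        # sample points along the beam up to the return
        for s in np.arange(0.0, r, cell / 2):
            x = c + int((s * np.cos(a)) / cell)
            y = c + int((s * np.sin(a)) / cell)
            if 0 <= x < grid and 0 <= y < grid:
                visibility[y, x] = 1
        if r < max_range:                           # a return: mark endpoint
            x = c + int((r * np.cos(a)) / cell)
            y = c + int((r * np.sin(a)) / cell)
            if 0 <= x < grid and 0 <= y < grid:
                occupancy[y, x] = 1
    return occupancy, visibility
```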
3. The method for obtaining location services of claim 1, wherein performing feature extraction on the occupancy map and the visibility map to obtain the feature map of the current frame comprises:
performing feature extraction on the occupancy map and the visibility map by using the sliding (translating) convolution operation of the feature detectors in a first convolution layer to obtain the feature map of the current frame.
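A sketch of claim 3's sliding convolution: the "feature detectors" are the convolution kernels, and translating them across the two stacked binary maps is a single Conv2d call. The channel counts below are assumptions.

```python
import torch
import torch.nn as nn

# 2 input channels (occupancy map, visibility map) -> 32 feature channels;
# each 3x3 kernel is a feature detector translated (slid) across the maps.
first_conv = nn.Conv2d(in_channels=2, out_channels=32, kernel_size=3, padding=1)

maps = torch.randint(0, 2, (1, 2, 64, 64)).float()   # stand-in binary maps
feature_map = first_conv(maps)                       # (1, 32, 64, 64)
```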
4. The method for obtaining location services of claim 1, wherein capturing, according to the feature map of the current frame, the dependency relationship between a previous frame before the current frame and the current frame by using a gated recurrent unit neural network comprises:
inputting the feature map of the current frame into the gated recurrent unit neural network;
using the memory property of the gated recurrent unit neural network to perform feature mining between the feature map of the previous frame and the feature map of the current frame, thereby capturing the dependency relationship between the previous frame and the current frame.
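A sketch of the memory property relied on in claim 4: the GRU's hidden state summarizes the feature maps of past frames, so feeding in the current frame's (flattened) feature map mines it against what the network remembers of the previous frame. Sizes are assumptions.

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=512, hidden_size=256, batch_first=True)

hidden = None                                # empty memory before frame 0
for feature_map in torch.rand(5, 1, 512):    # 5 frames of flattened features
    out, hidden = gru(feature_map.unsqueeze(1), hidden)
# 'hidden' now summarizes all past frames together with the current frame;
# this running summary is the captured dependency relationship.
```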
5. The method of obtaining location services according to claim 1, further comprising:
obtaining estimated feature information of the next frame by using the forward optical flow;
obtaining a first difference between real feature information of the next frame and the estimated feature information of the next frame;
obtaining optimized parameters of the gated recurrent unit neural network by taking minimization of the first difference as the objective.
6. The method of obtaining location services according to claim 1, further comprising:
obtaining estimated feature information of the current frame by using the backward optical flow;
obtaining a second difference between real feature information of the current frame and the estimated feature information of the current frame;
obtaining optimized parameters of the gated recurrent unit neural network by taking minimization of the second difference as the objective.
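One plausible reading of the training objectives of claims 5 and 6, sketched under two assumptions not dictated by the claims: the estimated feature information is obtained by warping a feature map with the predicted flow (via grid_sample), and the differences are L1 losses.

```python
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """Warp a feature map (B, C, H, W) by a flow field (B, 2, H, W) given
    in pixels; channel 0 = x displacement, channel 1 = y displacement."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    gx = (xs + flow[:, 0]) * 2.0 / (w - 1) - 1.0      # normalize to [-1, 1]
    gy = (ys + flow[:, 1]) * 2.0 / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)              # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)

feat_current = torch.rand(1, 32, 16, 16)   # real feature info, current frame
feat_next = torch.rand(1, 32, 16, 16)      # real feature info, next frame
flow_fwd = torch.zeros(1, 2, 16, 16, requires_grad=True)
flow_bwd = torch.zeros(1, 2, 16, 16, requires_grad=True)

# first difference (claim 5): estimate the next frame from the current one
loss_fwd = F.l1_loss(warp(feat_current, flow_fwd), feat_next)
# second difference (claim 6): estimate the current frame from the next one
loss_bwd = F.l1_loss(warp(feat_next, flow_bwd), feat_current)
(loss_fwd + loss_bwd).backward()           # minimized to optimize the network
```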
7. The method of claim 1, wherein the robot obtaining at least one of a forward optical flow and a backward optical flow between the current frame and a next frame according to the dependency relationship between the previous frame before the current frame and the current frame comprises:
the robot obtains optical flow information between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
the robot performs feature extraction on the optical flow information by using a second convolution layer to obtain at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
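A sketch of claim 7's second convolution layer, assuming the optical flow information is a spatial tensor and that the forward and backward flows occupy two output channels each; both are assumptions.

```python
import torch
import torch.nn as nn

# optical flow information (here 64 channels, e.g. reshaped GRU output)
flow_info = torch.rand(1, 64, 16, 16)

# second convolution layer: 4 output channels = 2 (forward) + 2 (backward)
second_conv = nn.Conv2d(in_channels=64, out_channels=4, kernel_size=3, padding=1)
flows = second_conv(flow_info)                        # (1, 4, 16, 16)
forward_flow, backward_flow = flows[:, :2], flows[:, 2:]
```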
8. The method of claim 1, wherein the robot obtaining a location service according to the forward optical flow and the backward optical flow between the current frame and the next frame comprises:
the robot obtains position information of the obstacle according to the forward optical flow and the backward optical flow between the current frame and the next frame;
the robot generates travel route information according to the position information of the obstacle.
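For illustration of claim 8's route generation, a small breadth-first planner over a grid whose blocked cells come from the obstacle position information; the grid representation and all names are hypothetical.

```python
from collections import deque

def plan_route(blocked, start, goal):
    """Breadth-first search on a 2D grid; 'blocked' marks cells where the
    obstacle position information says an obstacle is (or will be)."""
    h, w = len(blocked), len(blocked[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # rebuild route back to start
            route = []
            while cell is not None:
                route.append(cell)
                cell = prev[cell]
            return route[::-1]
        y, x = cell
        for step in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            ny, nx = step
            if 0 <= ny < h and 0 <= nx < w and not blocked[ny][nx] \
                    and step not in prev:
                prev[step] = cell
                queue.append(step)
    return None                               # no obstacle-free route exists

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]      # 1 = predicted obstacle cell
print(plan_route(grid, (0, 0), (2, 0)))       # routes around the obstacles
```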
9. A robot, comprising a robot body and a two-dimensional laser radar arranged on the robot body, wherein the two-dimensional laser radar is configured to acquire scanning data of a current frame, and the robot body is configured to perform the following operations:
obtaining an occupancy map and a visibility map for the obstacle position of the current frame according to the scanning data;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing, according to the feature map of the current frame, the dependency relationship between a previous frame before the current frame and the current frame by using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and a next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
obtaining a location service according to at least one of a forward optical flow and a backward optical flow between the current frame and a next frame.
10. A method for obtaining an optical flow, comprising:
obtaining an occupancy map and a visibility map for the obstacle position of the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing, according to the feature map of the current frame, the dependency relationship between a previous frame before the current frame and the current frame by using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and a next frame according to the dependency relationship between the previous frame before the current frame and the current frame.
11. The method for obtaining optical flow of claim 10, wherein obtaining the occupancy map and the visibility map for the obstacle position of the current frame comprises:
scanning the obstacle to obtain scanning data;
generating, from the scanning data, the occupancy map and the visibility map for the obstacle position of the current frame.
12. The method for obtaining optical flow of claim 10, wherein performing feature extraction on the occupancy map and the visibility map to obtain the feature map of the current frame comprises:
performing feature extraction on the occupancy map and the visibility map by using the sliding (translating) convolution operation of the feature detectors in a first convolution layer to obtain the feature map of the current frame.
13. The method for obtaining optical flow of claim 10, wherein capturing, according to the feature map of the current frame, the dependency relationship between a previous frame before the current frame and the current frame by using a gated recurrent unit neural network comprises:
inputting the feature map of the current frame into the gated recurrent unit neural network;
using the memory property of the gated recurrent unit neural network to perform feature mining between the feature map of the previous frame and the feature map of the current frame, thereby capturing the dependency relationship between the previous frame and the current frame.
14. The method of obtaining optical flow according to claim 10, further comprising:
obtaining estimated feature information of the next frame by using the forward optical flow;
obtaining a first difference between real feature information of the next frame and the estimated feature information of the next frame;
obtaining optimized parameters of the gated recurrent unit neural network by taking minimization of the first difference as the objective.
15. The method of obtaining optical flow according to claim 10, further comprising:
obtaining estimated feature information of the current frame by using the backward optical flow;
obtaining a second difference between real feature information of the current frame and the estimated feature information of the current frame;
obtaining optimized parameters of the gated recurrent unit neural network by taking minimization of the second difference as the objective.
16. The method for obtaining optical flow of claim 10, wherein obtaining the forward optical flow and the backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame comprises:
obtaining optical flow information between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
performing feature extraction on the optical flow information by using a second convolution layer to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
17. An optical flow obtaining apparatus, comprising:
a binary image obtaining unit, configured to obtain an occupancy map and a visibility map for the obstacle position of the current frame;
a current frame feature extraction unit, configured to perform feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
a dependency relationship capturing unit, configured to capture, according to the feature map of the current frame, the dependency relationship between a previous frame before the current frame and the current frame by using a gated recurrent unit neural network;
an optical flow obtaining unit, configured to obtain at least one of a forward optical flow and a backward optical flow between the current frame and a next frame according to the dependency relationship between the previous frame before the current frame and the current frame.
18. The optical flow obtaining apparatus of claim 17, wherein the binary image obtaining unit is specifically configured to:
scan the obstacle to obtain scanning data;
generate, from the scanning data, the occupancy map and the visibility map for the obstacle position of the current frame.
19. The optical flow obtaining apparatus according to claim 17, wherein the current frame feature extraction unit is specifically configured to:
perform feature extraction on the occupancy map and the visibility map by using the sliding (translating) convolution operation of the feature detectors in a first convolution layer to obtain the feature map of the current frame.
20. The optical flow obtaining apparatus of claim 17, wherein the dependency relationship capturing unit is specifically configured to:
input the feature map of the current frame into the gated recurrent unit neural network;
use the memory property of the gated recurrent unit neural network to perform feature mining between the feature map of the previous frame and the feature map of the current frame, thereby capturing the dependency relationship between the previous frame and the current frame.
21. The optical flow obtaining apparatus according to claim 17, further comprising a first training unit configured to:
obtain estimated feature information of the next frame by using the forward optical flow;
obtain a first difference between real feature information of the next frame and the estimated feature information of the next frame;
obtain optimized parameters of the gated recurrent unit neural network by taking minimization of the first difference as the objective.
22. The optical flow obtaining apparatus according to claim 17, further comprising a second training unit configured to:
obtain estimated feature information of the current frame by using the backward optical flow;
obtain a second difference between real feature information of the current frame and the estimated feature information of the current frame;
obtain optimized parameters of the gated recurrent unit neural network by taking minimization of the second difference as the objective.
23. The optical flow obtaining apparatus of claim 17, wherein the optical flow obtaining unit is specifically configured to:
obtain optical flow information between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
perform feature extraction on the optical flow information by using a second convolution layer to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
24. A robot, comprising a robot body and a two-dimensional laser radar arranged on the robot body, wherein the two-dimensional laser radar is configured to acquire scanning data of a current frame, and the robot body is configured to perform the following operations:
obtaining an occupancy map and a visibility map for the obstacle position of the current frame according to the scanning data;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing, according to the feature map of the current frame, the dependency relationship between a previous frame before the current frame and the current frame by using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame.
25. An electronic device, comprising:
a processor;
a memory for storing a program that, when read and executed by the processor, performs the following:
obtaining an occupancy map and a visibility map for the obstacle position of the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing, according to the feature map of the current frame, the dependency relationship between a previous frame before the current frame and the current frame by using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame.
26. A computer-readable storage medium having a computer program stored thereon, the program, when executed by a processor, performing the steps of:
obtaining an occupancy map and a visibility map for the obstacle position of the current frame;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing, according to the feature map of the current frame, the dependency relationship between a previous frame before the current frame and the current frame by using a gated recurrent unit neural network;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame.
27. A method of robotically transporting goods, comprising:
the robot obtains goods to be transported and determines destination information of the goods;
the robot obtains a feature map of the current frame representing feature information of the obstacle position;
the robot captures, according to the feature map, the dependency relationship between a previous frame before the current frame and the current frame by using a gated recurrent unit neural network;
the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and a next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
the robot generates a travel route from the current position of the robot to the destination indicated by the destination information, based on the destination information of the goods and at least one of the forward optical flow and the backward optical flow between the current frame and the next frame;
the robot transports the goods according to the travel route.
CN201811294619.2A 2018-11-01 2018-11-01 Method for robot to obtain position service and robot Active CN111113404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811294619.2A CN111113404B (en) 2018-11-01 2018-11-01 Method for robot to obtain position service and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811294619.2A CN111113404B (en) 2018-11-01 2018-11-01 Method for robot to obtain position service and robot

Publications (2)

Publication Number Publication Date
CN111113404A true CN111113404A (en) 2020-05-08
CN111113404B CN111113404B (en) 2023-07-04

Family

ID=70494333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811294619.2A Active CN111113404B (en) 2018-11-01 2018-11-01 Method for robot to obtain position service and robot

Country Status (1)

Country Link
CN (1) CN111113404B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170064204A1 * 2015-08-26 2017-03-02 Duke University Systems and methods for burst image deblurring
CN106681353A (en) * 2016-11-29 2017-05-17 南京航空航天大学 Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion
CN106934347A (en) * 2017-02-10 2017-07-07 百度在线网络技术(北京)有限公司 Obstacle recognition method and device, computer equipment and computer-readable recording medium
US20170262995A1 (en) * 2016-03-11 2017-09-14 Qualcomm Incorporated Video analysis with convolutional attention recurrent neural networks
CN107292912A * 2017-05-26 2017-10-24 浙江大学 An optical flow estimation method based on multi-scale correspondence structured learning
CN108204812A * 2016-12-16 2018-06-26 中国航天科工飞航技术研究院 A UAV speed estimation method
CN108647646A * 2018-05-11 2018-10-12 北京理工大学 Optimized detection method and device for low obstacles based on low-beam lidar


Also Published As

Publication number Publication date
CN111113404B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
Chen et al. 3d point cloud processing and learning for autonomous driving: Impacting map creation, localization, and perception
Liu et al. Bevfusion: Multi-task multi-sensor fusion with unified bird's-eye view representation
Hou et al. Multiview detection with feature perspective transformation
Hinzmann et al. Mapping on the fly: Real-time 3D dense reconstruction, digital surface map and incremental orthomosaic generation for unmanned aerial vehicles
JP2019532433A (en) Laser scanner with real-time online egomotion estimation
Sarlin et al. Lamar: Benchmarking localization and mapping for augmented reality
WO2014114923A1 (en) A method of detecting structural parts of a scene
McManus et al. Towards appearance-based methods for lidar sensors
US10810783B2 (en) Dynamic real-time texture alignment for 3D models
Bu et al. Pedestrian planar LiDAR pose (PPLP) network for oriented pedestrian detection based on planar LiDAR and monocular images
Yao et al. Radar-camera fusion for object detection and semantic segmentation in autonomous driving: A comprehensive review
Suleymanov et al. Online inference and detection of curbs in partially occluded scenes with sparse lidar
CN106504274A (en) A kind of visual tracking method and system based under infrared camera
Löffler et al. Evaluation criteria for inside-out indoor positioning systems based on machine learning
Nguyen et al. ROI-based LiDAR sampling algorithm in on-road environment for autonomous driving
Mohamed et al. Towards dynamic monocular visual odometry based on an event camera and IMU sensor
Forechi et al. Visual global localization with a hybrid WNN-CNN approach
Sun et al. TransFusionOdom: Transformer-based LiDAR-Inertial Fusion Odometry Estimation
CN111113405B (en) Method for robot to obtain position service and robot
CN106558069A (en) A kind of method for tracking target and system based under video monitoring
CN111598927B (en) Positioning reconstruction method and device
Birk et al. Simultaneous localization and mapping (SLAM)
Kocur et al. Traffic camera calibration via vehicle vanishing point detection
CN111113404B (en) Method for robot to obtain position service and robot
Nardi et al. Generation of laser-quality 2D navigation maps from RGB-D sensors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230719

Address after: Room 437, Floor 4, Building 3, No. 969, Wenyi West Road, Wuchang Subdistrict, Yuhang District, Hangzhou City, Zhejiang Province

Patentee after: Wuzhou Online E-Commerce (Beijing) Co.,Ltd.

Address before: Fourth Floor, One Capital Place, P.O. Box 847, George Town, Grand Cayman, Cayman Islands

Patentee before: ALIBABA GROUP HOLDING Ltd.
