CN111113404B - Method for robot to obtain position service and robot - Google Patents


Publication number
CN111113404B
Authority
CN
China
Prior art keywords: current frame, optical flow, frame, map, obtaining
Prior art date
Legal status
Active
Application number
CN201811294619.2A
Other languages
Chinese (zh)
Other versions
CN111113404A (en)
Inventor
宋亚斐
郑艺强
李名杨
Current Assignee
Wuzhou Online E Commerce Beijing Co ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201811294619.2A
Publication of CN111113404A
Application granted
Publication of CN111113404B


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by motion, path, trajectory planning
    • B25J9/1666: Avoiding collision or forbidden zones
    • B25J11/00: Manipulators not otherwise provided for
    • B25J11/008: Manipulators for service tasks

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a method for a robot to obtain a location service, and a robot. The method comprises the following steps: the robot obtains an occupancy map and a visibility map of the current frame for obstacle positions; the robot performs feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame; the robot captures the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame; and the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to that dependency. The method solves the problem that prior-art optical flow estimation for two-dimensional lidar can estimate only the forward optical flow.

Description

Method for robot to obtain position service and robot
Technical Field
The present application relates to the field of robotics, and in particular to a method for a robot to obtain a location service, and to a robot.
Background
With the rapid development of intelligent technology, robots are increasingly used in commercial activities, for example for logistics delivery. Two-dimensional lidar is a common means for a robot to measure the distance to obstacles.
Optical flow describes the motion of every pixel between two consecutive two-dimensional images a and b. For the current image a, the forward optical flow f records the position in image b of each point of a; the backward optical flow g records the position in the current image a of each point of the next image b. Predicting the robot's motion trajectory from two-dimensional lidar optical flow provides support for localization, navigation, and obstacle avoidance.
For optical flow estimation from two-dimensional lidar, the prior art uses a recurrent network to estimate the forward optical flow. That approach can estimate only the forward optical flow, not the backward optical flow.
Disclosure of Invention
The present application provides a method for a robot to obtain a location service, and a robot, to solve the problem that prior-art optical flow estimation for two-dimensional lidar can estimate only the forward optical flow.
The method for a robot to obtain a location service comprises the following steps:
the robot obtains an occupancy map and a visibility map of the current frame for obstacle positions;
the robot performs feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
the robot captures the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame;
the robot obtains a location service from at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
Optionally, the robot obtaining an occupancy map and a visibility map of the current frame for obstacle positions comprises:
the robot scanning for obstacles to obtain scan data;
the robot generating the occupancy map and the visibility map of the current frame from the scan data.
Optionally, the robot performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame comprises:
performing feature extraction on the occupancy map and the visibility map using the translational convolution operation of feature detectors in a first convolution layer, to obtain the feature map of the current frame.
Optionally, the robot capturing the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame comprises:
inputting the feature map of the current frame into the GRU neural network;
using the memory property of the GRU neural network, performing feature mining between the feature map of the previous frame and the feature map of the current frame, and capturing the dependency between the previous frame and the current frame.
Optionally, the method for obtaining a location service further comprises:
obtaining estimated feature information of the next frame using the forward optical flow;
obtaining a first difference between the real feature information of the next frame and the estimated feature information of the next frame;
obtaining optimized parameters of the GRU neural network by minimizing the first difference.
Optionally, the method for obtaining a location service further comprises:
obtaining estimated feature information of the current frame using the backward optical flow;
obtaining a second difference between the real feature information of the current frame and the estimated feature information of the current frame;
obtaining optimized parameters of the GRU neural network by minimizing the second difference.
Optionally, the robot obtaining the forward optical flow and the backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame comprises:
the robot obtaining optical flow information between the current frame and the next frame according to the dependency between the previous frame and the current frame;
the robot performing feature extraction on the optical flow information using a second convolution layer to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
Optionally, the robot obtaining a location service according to the forward optical flow and the backward optical flow between the current frame and the next frame comprises:
the robot obtaining position information of obstacles from the forward optical flow and the backward optical flow between the current frame and the next frame;
generating travel route information for the robot from the position information of the obstacles.
The application provides a robot, comprising a robot body and a two-dimensional lidar mounted on the robot body. The two-dimensional lidar is configured to acquire scan data of the current frame, and the robot body is configured to perform the following operations:
obtaining an occupancy map and a visibility map of the current frame for obstacle positions from the scan data;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame;
obtaining a location service from at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
The application provides a method for obtaining optical flow, comprising the following steps:
obtaining an occupancy map and a visibility map of the current frame for obstacle positions;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
Optionally, obtaining the occupancy map and the visibility map of the current frame for obstacle positions comprises:
scanning for obstacles to obtain scan data;
generating the occupancy map and the visibility map of the current frame from the scan data.
Optionally, performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame comprises:
performing feature extraction on the occupancy map and the visibility map using the translational convolution operation of feature detectors in a first convolution layer, to obtain the feature map of the current frame.
Optionally, capturing the dependency between the previous frame and the current frame by using the GRU neural network according to the feature map of the current frame comprises:
inputting the feature map of the current frame into the GRU neural network;
using the memory property of the GRU neural network, performing feature mining between the feature map of the previous frame and the feature map of the current frame, and capturing the dependency between the previous frame and the current frame.
Optionally, the method for obtaining optical flow further comprises:
obtaining estimated feature information of the next frame using the forward optical flow;
obtaining a first difference between the real feature information of the next frame and the estimated feature information of the next frame;
obtaining optimized parameters of the GRU neural network by minimizing the first difference.
Optionally, the method for obtaining optical flow further comprises:
obtaining estimated feature information of the current frame using the backward optical flow;
obtaining a second difference between the real feature information of the current frame and the estimated feature information of the current frame;
obtaining optimized parameters of the GRU neural network by minimizing the second difference.
Optionally, obtaining the forward optical flow and the backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame comprises:
obtaining optical flow information between the current frame and the next frame according to the dependency between the previous frame and the current frame;
performing feature extraction on the optical flow information using a second convolution layer to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
The application provides an optical flow obtaining device, comprising:
a binary map obtaining unit, configured to obtain an occupancy map and a visibility map of the current frame for obstacle positions;
a current frame feature extraction unit, configured to perform feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
a dependency capturing unit, configured to capture the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
an optical flow obtaining unit, configured to obtain at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
Optionally, the binary map obtaining unit is specifically configured to:
scan for obstacles to obtain scan data;
generate the occupancy map and the visibility map of the current frame from the scan data.
Optionally, the current frame feature extraction unit is specifically configured to:
perform feature extraction on the occupancy map and the visibility map using the translational convolution operation of feature detectors in a first convolution layer, to obtain the feature map of the current frame.
Optionally, the dependency capturing unit is specifically configured to:
input the feature map of the current frame into the GRU neural network;
capture the dependency between the previous frame and the current frame by exploiting the property that the output of the GRU neural network at the current time depends on the input at the current time and on the hidden-layer state at the previous time.
Optionally, the optical flow obtaining device further includes a first training unit, configured to:
obtain estimated feature information of the next frame using the forward optical flow;
obtain a first difference between the real feature information of the next frame and the estimated feature information of the next frame;
obtain optimized parameters of the GRU neural network by minimizing the first difference.
Optionally, the optical flow obtaining device further includes a second training unit, configured to:
obtain estimated feature information of the current frame using the backward optical flow;
obtain a second difference between the real feature information of the current frame and the estimated feature information of the current frame;
obtain optimized parameters of the GRU neural network by minimizing the second difference.
Optionally, the optical flow obtaining unit is specifically configured to:
obtain optical flow information between the current frame and the next frame according to the dependency between the previous frame and the current frame;
perform feature extraction on the optical flow information using a second convolution layer to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
The application provides a robot, comprising a robot body and a two-dimensional lidar mounted on the robot body. The two-dimensional lidar is configured to acquire scan data of the current frame, and the robot body is configured to perform the following operations:
obtaining an occupancy map and a visibility map of the current frame for obstacle positions from the scan data;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
The application provides an electronic device, comprising:
a processor;
a memory for storing a program which, when read and executed by the processor, performs the following operations:
obtaining an occupancy map and a visibility map of the current frame for obstacle positions;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
The present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps:
obtaining an occupancy map and a visibility map of the current frame for obstacle positions;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
The application provides a method for transporting articles by a robot, comprising the following steps:
the robot obtains an article to be transported and determines destination information of the article;
the robot obtains a feature map of the current frame representing feature information of obstacle positions;
the robot captures the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map;
the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame;
the robot generates a travel route from its current position to the destination indicated by the destination information, according to at least one of the forward optical flow and the backward optical flow between the current frame and the next frame and the destination information of the article;
the robot transports the article along the travel route.
With the method for a robot to obtain a location service provided by the application, a gated recurrent unit neural network estimates the forward optical flow and the backward optical flow between the current frame and the next frame from the data of the current frame and the previous frame, solving the problem that prior-art optical flow estimation for two-dimensional lidar can estimate only the forward optical flow.
Drawings
Fig. 1 is a flowchart of a first embodiment of a method for obtaining a location service by a robot provided in the present application.
Fig. 2 is a diagram of the internal structure of a GRU according to the first embodiment of the present application.
Fig. 3 is a schematic diagram of an application example related to the first embodiment of the present application.
Fig. 4 is a flowchart of a third embodiment of a method for obtaining an optical flow provided in the present application.
Fig. 5 is a block diagram of a fourth embodiment of an optical flow obtaining apparatus provided in the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be embodied in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific embodiments disclosed below.
The first embodiment of the application provides a method for a robot to obtain a location service. Referring to fig. 1, a flowchart of a first embodiment of the present application is shown. A method for obtaining a location service by a robot according to a first embodiment of the present application will be described in detail with reference to fig. 1. The method comprises the following steps:
step S101: the robot obtains an occupancy map and a visible map for the obstacle location of the current frame.
This step is used for the robot to obtain an occupancy map and a visible map for the obstacle location of the current frame.
The robot obtains an occupancy map and a visible map of a current frame for an obstacle location, comprising:
the robot scans for obstacles to obtain scan data. In this embodiment, the robot may scan the obstacle using the two-dimensional laser radar to obtain scan data. The two-dimensional laser radar is also called as a single-line laser radar, and the working principle of the two-dimensional laser radar is that the distance between an obstacle and the laser radar is measured by emitting laser and receiving laser signals reflected by the obstacle. Only one laser beam can be emitted at a single moment of the two-dimensional laser radar, and one scanning is completed by shifting the emitting angle of the laser radar. The scanning angle of the two-dimensional laser radar is changed only on one plane, so that the distance between the laser radar and surrounding objects on one plane can be measured.
The robot generates an occupancy map and a visible map of the current frame for the obstacle position according to the scan data. The two-dimensional laser radar can measure and obtain the distance from the laser radar to the obstacle at a plurality of angles, and a one-dimensional vector is obtained. On the basis, according to the working principle of the laser radar, the measurement result is converted into two binary images, in the occupied image, the point occupied by the obstacle is set as 1, and the rest points are set as 0; in the visible graph, a point which can be observed by the laser radar is set to be 1, and a point which is blocked by an obstacle is set to be 0.
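A minimal NumPy sketch of this conversion (the 64x64 grid, 0.1 m cell size, and sensor-at-center placement are illustrative assumptions; the patent fixes none of these values):

```python
import numpy as np

def scan_to_maps(ranges, angles, grid=64, cell=0.1):
    """Rasterize one 2D-lidar scan (one range per angle) into the two binary
    images described above: an occupancy map (1 where a beam hit an obstacle)
    and a visibility map (1 for every cell a beam traversed)."""
    occ = np.zeros((grid, grid), dtype=np.uint8)
    vis = np.zeros((grid, grid), dtype=np.uint8)
    cx = cy = grid // 2  # sensor assumed at the grid center
    for r, a in zip(ranges, angles):
        # march along the beam, marking traversed (visible) cells
        for d in np.arange(0.0, r, cell / 2):
            i = int(cy + d * np.sin(a) / cell)
            j = int(cx + d * np.cos(a) / cell)
            if 0 <= i < grid and 0 <= j < grid:
                vis[i, j] = 1
        # mark the hit point itself as occupied (and visible)
        i = int(cy + r * np.sin(a) / cell)
        j = int(cx + r * np.cos(a) / cell)
        if 0 <= i < grid and 0 <= j < grid:
            occ[i, j] = vis[i, j] = 1
    return occ, vis
```

Cells beyond each hit point stay 0 in the visibility map, which is exactly the occlusion behavior the paragraph describes.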
Step S102: the robot performs feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame.
In this step, the robot extracts features from the occupancy map and the visibility map to obtain the feature map of the current frame.
The robot performing feature extraction on the occupancy map and the visibility map to obtain the feature map of the current frame comprises:
performing feature extraction on the occupancy map and the visibility map using the translational convolution operation of feature detectors in a first convolution layer, to obtain the feature map of the current frame.
The feature detector may be a 3x3 matrix; translating this matrix across the occupancy map and the visibility map and convolving at every position yields the feature map of the current frame.
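For illustration, a hedged PyTorch sketch of such a first convolution layer (the 16 output channels and the 64x64 input size are assumptions, not values taken from the patent):

```python
import torch
import torch.nn as nn

# The occupancy and visibility maps enter as two channels; each 3x3 feature
# detector translates across every position of the pair and produces one
# channel of the current frame's feature map.
first_conv = nn.Conv2d(in_channels=2, out_channels=16, kernel_size=3, padding=1)
pair = torch.randn(1, 2, 64, 64)   # stacked occupancy + visibility maps
feature_map = first_conv(pair)     # shape (1, 16, 64, 64)
```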
Step S103: the robot captures the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network, according to the feature map of the current frame.
In this step, the robot captures the dependency between the previous frame and the current frame using the GRU neural network, according to the feature map of the current frame.
This comprises:
inputting the feature map of the current frame into the GRU neural network;
using the memory property of the GRU neural network, performing feature mining between the feature map of the previous frame and the feature map of the current frame, and capturing the dependency between the previous frame and the current frame.
The GRU neural network may be composed of a stack of three gated recurrent units. A gated recurrent unit (Gated Recurrent Unit) is abbreviated GRU. A GRU network is one kind of recurrent neural network (Recurrent Neural Network, RNN); compared with conventional feed-forward networks, an RNN handles time-series inputs better because it retains the effect of earlier inputs on the model and lets them participate in the next computation step. In theory an RNN can use time-series information of arbitrary length, but in practice gradients vanish quickly when the gap between two related inputs grows too large, making long-range dependencies hard to learn. As a variant of the RNN, the special gate structure of the GRU effectively addresses this problem across both long and short time spans.
Please refer to fig. 2, which shows the internal structure of a GRU. h_{t-1} denotes the state at the time step preceding the current time t; x_t and h_t denote the input and the output of the GRU at the current time. r_t and z_t are the two key structures of the GRU, the reset gate and the update gate. Each gate is a small neural network whose activation function is a sigmoid, so that the gate's output is fixed between 0 and 1.
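Written out, the standard GRU update that fig. 2 depicts is the following (the weight matrices W, U and biases b are the conventional names, not symbols taken from the patent):

```latex
r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)                              % reset gate
z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)                              % update gate
\tilde{h}_t = \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\big)   % candidate state
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t                  % new state
```

The reset gate r_t decides how much of the previous state enters the candidate, and the update gate z_t blends the previous state with the candidate; this is what lets the GRU keep or discard information over long time spans.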
GRUs may be stacked to form a multi-layer GRU neural network; in this embodiment, a 3-layer GRU network may be constructed.
Based on the memory property of the GRU neural network, feature mining is performed between the feature map of the previous frame and the feature map of the current frame, capturing the dependency between the two frames. In this embodiment, the feature mining includes mining the position features of obstacles.
Step S104: the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
In this step, the robot obtains at least one of the forward optical flow and the backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
Optical flow is the instantaneous velocity, on the observation imaging plane, of the pixels of an object moving in space. Optical flow methods use the temporal change of pixels across an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and from it compute the motion of objects between adjacent frames.
Optical flow describes the motion of every pixel between two consecutive two-dimensional images a and b: for the current image a, the forward optical flow f records the position in image b of each point of a; the backward optical flow g records the position in the current image a of each point of the next image b.
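Writing both flows as per-pixel displacement fields makes the two definitions concrete (a hedged restatement; the patent itself states no formulas):

```latex
a(x) \approx b\big(x + f(x)\big)   % forward flow f, defined on the current image a
b(x) \approx a\big(x + g(x)\big)   % backward flow g, defined on the next image b
```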
The robot obtaining the forward optical flow and the backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame comprises the following.
The robot obtains optical flow information between the current frame and the next frame according to the dependency between the previous frame and the current frame.
In step S103, the robot obtained the dependency between the previous frame and the current frame using the GRU neural network.
Within the GRU neural network, the feature information of the forward optical flow and the backward optical flow between the current frame and the next frame is obtained by prediction from that dependency.
The robot then performs feature extraction on the optical flow information using a second convolution layer, obtaining at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
Step S105: the robot obtains a location service from at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
In this step, the robot obtains a location service from at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
From the forward and backward optical flow, the movement trajectory of the robot can be predicted, providing support for the robot's localization, navigation, and obstacle avoidance.
The method for obtaining a location service further comprises:
obtaining estimated feature information of the next frame using the forward optical flow;
obtaining a first difference between the real feature information of the next frame and the estimated feature information of the next frame;
obtaining optimized parameters of the GRU neural network by minimizing the first difference.
The method for obtaining a location service further comprises:
obtaining estimated feature information of the current frame using the backward optical flow;
obtaining a second difference between the real feature information of the current frame and the estimated feature information of the current frame;
obtaining optimized parameters of the GRU neural network by minimizing the second difference.
To train the proposed neural network effectively, this embodiment adopts a self-supervised strategy that avoids manual labeling of data. Specifically, the current frame is warped according to the forward optical flow estimated by the network to produce an estimate of the next frame; the difference between this estimate and the ground-truth next frame is computed and minimized, thereby updating the network parameters and training the network. For the backward optical flow, the next-frame data is warped to produce an estimate of the current frame, and the difference between this estimate and the ground-truth current frame is minimized, likewise updating the parameters and training the network.
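A minimal PyTorch sketch of this self-supervision, under two assumptions the patent does not fix: bilinear backward sampling (torch.nn.functional.grid_sample) stands in for the unspecified warping operator, and an L1 photometric difference stands in for the unspecified difference measure. Under backward sampling, the same consistency constraint is expressed by reconstructing each frame from the other, so the two losses below correspond to the first and second differences of the text:

```python
import torch
import torch.nn.functional as F

def sample_with_flow(img, flow):
    """Sample img (B, C, H, W) at pixel coordinates displaced by flow (B, 2, H, W)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    base = torch.stack((xs, ys)).to(img.device)   # (2, H, W), x first, then y
    coords = base.unsqueeze(0) + flow             # flow-displaced coordinates
    # normalize to [-1, 1] as grid_sample requires
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)          # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def self_supervised_loss(cur, nxt, fwd_flow, bwd_flow):
    # forward flow: each point of the current frame should reappear in the
    # next frame at its flow-displaced position (no labels needed)
    first_diff = F.l1_loss(sample_with_flow(nxt, fwd_flow), cur)
    # backward flow: each point of the next frame should reappear in the
    # current frame at its flow-displaced position
    second_diff = F.l1_loss(sample_with_flow(cur, bwd_flow), nxt)
    return first_diff + second_diff
```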
Fig. 3 is a schematic diagram of an application example of the method for a robot to obtain a location service provided by this embodiment. In fig. 3, a stack of three gated recurrent unit layers constitutes the GRU neural network.
In fig. 3, lidar scan data is first obtained and converted into an occupancy map and a visibility map. The occupancy map and the visibility map are then fed into a convolution layer for feature extraction, producing the feature map of the current frame corresponding to the lidar scan. This feature map is input into the GRU neural network formed by stacking three GRU layers. Inside the GRU network, feature mining is performed between the feature map of the previous frame and that of the current frame using the network's memory property, capturing the dependency between the two frames; from this dependency, the network predicts the feature information of the forward and backward optical flow between the current frame and the next frame. A further convolution layer extracts features from this information, yielding the forward optical flow and the backward optical flow.
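An end-to-end sketch of this pipeline in PyTorch. The structure follows fig. 3 (first convolution layer, three stacked GRU layers, a second stage producing both flows), but all sizes, the flattening of feature maps before the GRU, and the linear projection back to a spatial map are illustrative assumptions not specified by the patent:

```python
import torch
import torch.nn as nn

class LidarFlowNet(nn.Module):
    def __init__(self, h=64, w=64, feat=16, hidden=256):
        super().__init__()
        self.h, self.w, self.feat = h, w, feat
        # first convolution layer: occupancy + visibility -> per-frame feature map
        self.encoder = nn.Conv2d(2, feat, kernel_size=3, padding=1)
        # three stacked GRU layers capture the frame-to-frame dependency
        self.gru = nn.GRU(feat * h * w, hidden, num_layers=3, batch_first=True)
        # project the GRU state back to a spatial map for the second conv layer
        self.project = nn.Linear(hidden, feat * h * w)
        # second convolution layer: 2 channels forward flow + 2 channels backward flow
        self.flow_head = nn.Conv2d(feat, 4, kernel_size=3, padding=1)

    def forward(self, frames, state=None):
        # frames: (B, T, 2, H, W) sequence of stacked occupancy/visibility pairs
        b, t, c, h, w = frames.shape
        x = self.encoder(frames.reshape(b * t, c, h, w))   # (B*T, feat, H, W)
        x = x.reshape(b, t, -1)                            # flatten each frame
        out, state = self.gru(x, state)                    # (B, T, hidden)
        y = self.project(out.reshape(b * t, -1))
        y = y.reshape(b * t, self.feat, h, w)
        flow = self.flow_head(y).reshape(b, t, 4, h, w)
        return flow[:, :, :2], flow[:, :, 2:], state       # forward, backward flow

# usage sketch: a two-frame sequence of occupancy/visibility pairs
net = LidarFlowNet()
fwd, bwd, _ = net(torch.randn(1, 2, 2, 64, 64))
```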
A second embodiment of the present application provides a robot comprising a robot body and a two-dimensional lidar mounted on the robot body. The two-dimensional lidar is configured to acquire scan data of the current frame, and the robot body is configured to perform the following operations:
obtaining an occupancy map and a visibility map of the current frame for obstacle positions from the scan data;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame;
obtaining a location service from at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
The third embodiment of the present application provides a method for obtaining optical flow. Please refer to fig. 4, which is a flowchart of the third embodiment. The method comprises the following steps:
Step S401: obtain an occupancy map and a visibility map of the current frame for obstacle positions.
This step obtains an occupancy map and a visibility map of the current frame for obstacle positions.
Obtaining the occupancy map and the visibility map of the current frame comprises:
scanning for obstacles to obtain scan data;
generating the occupancy map and the visibility map of the current frame from the scan data.
In this embodiment, a two-dimensional lidar may be used to scan obstacles and obtain the scan data.
Step S402: perform feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame.
This step extracts features from the occupancy map and the visibility map to obtain the feature map of the current frame.
Performing the feature extraction comprises:
performing feature extraction on the occupancy map and the visibility map using the translational convolution operation of feature detectors in a first convolution layer, to obtain the feature map of the current frame.
The feature detector may be a 3x3 matrix; translating this matrix across the occupancy map and the visibility map and convolving at every position yields the feature map of the current frame.
Step S403: capture the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network, according to the feature map of the current frame.
This step captures the dependency between the previous frame and the current frame using the GRU neural network, according to the feature map of the current frame.
Capturing the dependency comprises:
inputting the feature map of the current frame into the GRU neural network;
using the memory property of the GRU neural network, performing feature mining between the feature map of the previous frame and the feature map of the current frame, and capturing the dependency between the previous frame and the current frame.
The GRU neural network may be composed of a stack of three gated recurrent units.
Step S404: obtain at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
Obtaining the forward optical flow and the backward optical flow between the current frame and the next frame according to that dependency comprises:
obtaining optical flow information between the current frame and the next frame according to the dependency between the previous frame and the current frame;
performing feature extraction on the optical flow information using a second convolution layer to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
The method for obtaining optical flow further comprises:
obtaining estimated feature information of the next frame using the forward optical flow;
obtaining a first difference between the real feature information of the next frame and the estimated feature information of the next frame;
obtaining optimized parameters of the GRU neural network by minimizing the first difference.
The method for obtaining optical flow further comprises:
obtaining estimated feature information of the current frame using the backward optical flow;
obtaining a second difference between the real feature information of the current frame and the estimated feature information of the current frame;
obtaining optimized parameters of the GRU neural network by minimizing the second difference.
As in the first embodiment, this embodiment adopts a self-supervised training strategy that avoids manual labeling of data: the current frame is warped according to the estimated forward optical flow to produce an estimate of the next frame, and the difference between this estimate and the ground-truth next frame is minimized to update the network parameters; for the backward optical flow, the next frame is warped to produce an estimate of the current frame, and the difference between this estimate and the ground-truth current frame is minimized.
The above embodiments provide a method for obtaining optical flow; correspondingly, the application also provides an optical flow obtaining device. Refer to fig. 5, which is a block diagram of the optical flow obtaining device embodiment of the present application. Since this fourth embodiment is substantially similar to the method embodiments, the description is relatively brief; refer to the corresponding parts of the method embodiments for details. The device embodiment described below is merely illustrative.
A fourth embodiment of the present application provides an optical flow obtaining device, comprising:
a binary map obtaining unit 501, configured to obtain an occupancy map and a visibility map of the current frame for obstacle positions;
a current frame feature extraction unit 502, configured to perform feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
a dependency capturing unit 503, configured to capture the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
an optical flow obtaining unit 504, configured to obtain at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
In this embodiment, the binary map obtaining unit is specifically configured to:
scan for obstacles to obtain scan data;
generate the occupancy map and the visibility map of the current frame from the scan data.
In this embodiment, the current frame feature extraction unit is specifically configured to:
perform feature extraction on the occupancy map and the visibility map using the translational convolution operation of feature detectors in a first convolution layer, to obtain the feature map of the current frame.
In this embodiment, the dependency capturing unit is specifically configured to:
input the feature map of the current frame into the GRU neural network;
using the memory property of the GRU neural network, perform feature mining between the feature map of the previous frame and the feature map of the current frame, and capture the dependency between the previous frame and the current frame.
In this embodiment, the optical flow obtaining device further includes a first training unit, configured to:
obtain estimated feature information of the next frame using the forward optical flow;
obtain a first difference between the real feature information of the next frame and the estimated feature information of the next frame;
obtain optimized parameters of the GRU neural network by minimizing the first difference.
In this embodiment, the optical flow obtaining device further includes a second training unit, configured to:
obtain estimated feature information of the current frame using the backward optical flow;
obtain a second difference between the real feature information of the current frame and the estimated feature information of the current frame;
obtain optimized parameters of the GRU neural network by minimizing the second difference.
Optionally, the optical flow obtaining unit is specifically configured to:
obtain optical flow information between the current frame and the next frame according to the dependency between the previous frame and the current frame;
perform feature extraction on the optical flow information using a second convolution layer to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
A fifth embodiment of the present application provides a robot comprising a robot body and a two-dimensional lidar mounted on the robot body. The two-dimensional lidar is configured to acquire scan data of the current frame, and the robot body is configured to perform the following operations:
obtaining an occupancy map and a visibility map of the current frame for obstacle positions from the scan data;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
A sixth embodiment of the present application provides an electronic device, comprising:
a processor;
a memory for storing a program which, when read and executed by the processor, performs the following operations:
obtaining an occupancy map and a visibility map of the current frame for obstacle positions;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
A seventh embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps:
obtaining an occupancy map and a visibility map of the current frame for obstacle positions;
performing feature extraction on the occupancy map and the visibility map to obtain a feature map of the current frame;
capturing the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network according to the feature map of the current frame;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
An eighth embodiment of the present application provides a method for transporting articles by a robot, comprising:
Step S601: the robot obtains an article to be transported and determines destination information of the article.
This step is used for the robot to obtain the article to be transported and determine the destination information of the article. The destination information includes geographical location information of the destination, such as longitude and latitude or two-dimensional map data.
Step S602: the robot obtains a feature map of the current frame representing feature information of obstacle positions.
This step is used for the robot to obtain a feature map of the current frame representing feature information of obstacle positions.
The robot scans for obstacles to obtain scan data. In this embodiment, the robot may scan obstacles with a two-dimensional lidar. The current frame is the frame obtained by the two-dimensional lidar at the current moment. The robot generates an occupancy map and a visibility map of the current frame for obstacle positions from the scan data, and performs feature extraction on them to obtain the feature map of the current frame. The feature map contains feature information about the obstacles acquired at the current moment, such as the distance between an obstacle and the robot.
Step S603: the robot captures the dependency between the previous frame and the current frame by using a gated recurrent unit (GRU) neural network, according to the feature map.
This step is used for the robot to capture the dependency between the previous frame and the current frame using the GRU neural network, according to the feature map.
The previous frame is the frame immediately preceding the current frame of step S602. For example, if the two-dimensional lidar obtains the current frame A at the current time 14:07:30, then the frame B obtained by the lidar at the previous time 14:07:28 is the previous frame. The dependency between frame A and frame B is the change between the obstacle position features in frame A and those in frame B.
Step S604: the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
This step is used for the robot to obtain at least one of the forward optical flow and the backward optical flow between the current frame and the next frame according to the dependency between the previous frame and the current frame.
Optical flow describes the motion of every pixel between two consecutive two-dimensional images a and b: for the current image a, the forward optical flow records the position in image b of each point of a, and the backward optical flow records the position in the current image a of each point of the next image b. For example, suppose the robot acquires two consecutive image frames C and D, frame C at 15:07:20 and frame D at 15:07:22. The frames reflect the obstacle information acquired by the robot's two-dimensional lidar at those two moments. The forward optical flow between C and D records the position in D of each point of C, reflecting the forward change of obstacle positions between the two moments; the backward optical flow records the position in C of each point of D, reflecting the backward change.
Step S605: the robot generates a travel route from its current position to the destination indicated by the destination information, according to at least one of the forward optical flow and the backward optical flow between the current frame and the next frame and the destination information of the article.
From at least one of the forward optical flow and the backward optical flow between the current frame and the next frame, the robot can obtain the change information of obstacles and thereby avoid them.
Step S606: the robot transports the article along the travel route.
This step is used for the robot to transport the article along the travel route.
Since the way this embodiment obtains location information is substantially the same as in the first embodiment, refer to the detailed description of the first embodiment.
While preferred embodiments have been described above, they are not intended to limit the application. A person skilled in the art may make variations and modifications without departing from the spirit and scope of the application, so the scope of protection shall be defined by the claims of the present application.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media) such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.

Claims (27)

1. A method for a robot to obtain location services, comprising:
the robot obtains an occupancy map and a visible map of the current frame for the obstacle position;
the robot performs feature extraction on the occupancy map and the visible map to obtain a feature map of the current frame;
the robot captures the dependency relationship between the previous frame before the current frame and the current frame by using a gated recurrent unit neural network according to the feature map of the current frame;
the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
and the robot obtains a location service from the at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
2. The method of obtaining location services of claim 1, wherein the robot obtaining an occupancy map and a visible map for an obstacle location of a current frame comprises:
the robot scans for obstacles to obtain scanning data;
the robot generates an occupancy map and a visible map of the current frame for the obstacle position according to the scan data.
3. The method for obtaining location services according to claim 1, wherein the robot performs feature extraction on the occupancy map and the visible map to obtain a feature map of the current frame, comprising:
performing feature extraction on the occupancy map and the visible map by using a translational convolution operation of a feature detector in a first convolution layer to obtain the feature map of the current frame.
4. The method for obtaining location services according to claim 1, wherein the robot captures the dependency relationship between the previous frame before the current frame and the current frame by using a gated recurrent unit neural network according to the feature map of the current frame, comprising:
inputting the feature map of the current frame into the gated recurrent unit neural network;
and utilizing the memory characteristic of the gated recurrent unit neural network to perform feature mining between the feature map of the previous frame and the feature map of the current frame, thereby capturing the dependency relationship between the previous frame and the current frame.
5. The method of obtaining location services of claim 1, further comprising:
obtaining estimated characteristic information of the next frame by utilizing the forward optical flow;
acquiring a first difference value between real characteristic information of the next frame and the estimated characteristic information of the next frame;
and obtaining optimized parameters of the gated recurrent unit neural network by taking minimization of the first difference value as the objective.
6. The method of obtaining location services of claim 1, further comprising:
obtaining estimated characteristic information of the current frame by utilizing the backward optical flow;
acquiring a second difference value between the real characteristic information of the current frame and the estimated characteristic information of the current frame;
and obtaining optimized parameters of the gated recurrent unit neural network by taking minimization of the second difference value as the objective.
7. The method for obtaining location services according to claim 1, wherein the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame, comprising:
the robot obtains optical flow information between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
and the robot performs feature extraction on the optical flow information by using a second convolution layer to obtain the at least one of a forward optical flow and a backward optical flow between the current frame and the next frame.
8. The method of obtaining location services of claim 1, wherein the robot obtains the location service from the forward optical flow and the backward optical flow between the current frame and the next frame, comprising:
the robot obtains position information of the obstacle according to the forward optical flow and the backward optical flow between the current frame and the next frame;
and the robot generates travel route information according to the position information of the obstacle.
9. A robot, characterized by comprising a robot body and a two-dimensional laser radar arranged on the robot body, wherein the two-dimensional laser radar is used for acquiring scanning data of a current frame, and the robot body is used for executing the following operations:
obtaining an occupancy map and a visible map of the current frame for the obstacle position according to the scanning data;
performing feature extraction on the occupancy map and the visible map to obtain a feature map of the current frame;
capturing the dependency relationship between the previous frame before the current frame and the current frame by using a gated recurrent unit neural network according to the feature map of the current frame;
obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
and obtaining a location service from the at least one of the forward optical flow and the backward optical flow between the current frame and the next frame.
10. A method of obtaining optical flow, comprising:
obtaining an occupancy map and a visible map of the current frame for the obstacle position;
performing feature extraction on the occupancy map and the visible map to obtain a feature map of the current frame;
capturing the dependency relationship between the previous frame before the current frame and the current frame by using a gated recurrent unit neural network according to the feature map of the current frame;
and obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame.
11. The method of obtaining optical flow according to claim 10, characterized in that said obtaining an occupancy map and a visible map for an obstacle position of a current frame comprises:
scanning the obstacle to obtain scanning data;
and generating an occupancy map and a visible map of the current frame for the obstacle position according to the scanning data.
12. The method according to claim 10, wherein the performing feature extraction on the occupancy map and the visible map to obtain the feature map of the current frame comprises:
performing feature extraction on the occupancy map and the visible map by using a translational convolution operation of a feature detector in a first convolution layer to obtain the feature map of the current frame.
13. The method according to claim 10, wherein the capturing the dependency relationship between the previous frame before the current frame and the current frame by using a gated recurrent unit neural network according to the feature map of the current frame comprises:
inputting the feature map of the current frame into the gated recurrent unit neural network;
and utilizing the memory characteristic of the gated recurrent unit neural network to perform feature mining between the feature map of the previous frame and the feature map of the current frame, thereby capturing the dependency relationship between the previous frame and the current frame.
14. The method for obtaining optical flow according to claim 10, characterized by further comprising:
obtaining estimated characteristic information of the next frame by utilizing the forward optical flow;
acquiring a first difference value between real characteristic information of the next frame and the estimated characteristic information of the next frame;
and obtaining optimized parameters of the gated recurrent unit neural network by taking minimization of the first difference value as the objective.
15. The method for obtaining optical flow according to claim 10, characterized by further comprising:
obtaining estimated characteristic information of the current frame by utilizing the backward optical flow;
acquiring a second difference value between the real characteristic information of the current frame and the estimated characteristic information of the current frame;
and obtaining optimized parameters of the gated recurrent unit neural network by taking minimization of the second difference value as the objective.
16. The method according to claim 10, wherein the obtaining the forward optical flow and the backward optical flow between the current frame and the next frame based on the dependency relationship between the previous frame before the current frame and the current frame includes:
acquiring optical flow information between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
and performing feature extraction on the optical flow information by using a second convolution layer to obtain the forward optical flow and the backward optical flow between the current frame and the next frame.
17. An optical flow obtaining device, comprising:
a binary map obtaining unit, configured to obtain an occupancy map and a visible map of the current frame for the obstacle position;
a current frame feature extraction unit, configured to perform feature extraction on the occupancy map and the visible map to obtain a feature map of the current frame;
a dependency relationship capturing unit, configured to capture the dependency relationship between the previous frame before the current frame and the current frame by using a gated recurrent unit neural network according to the feature map of the current frame;
and an optical flow obtaining unit, configured to obtain at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame.
18. The optical flow obtaining device according to claim 17, wherein the binary map obtaining unit is specifically configured to:
scan the obstacle to obtain scanning data;
and generate the occupancy map and the visible map of the current frame for the obstacle position according to the scanning data.
19. The optical flow obtaining device according to claim 17, wherein the current frame feature extraction unit is specifically configured to:
perform feature extraction on the occupancy map and the visible map by using a translational convolution operation of a feature detector in a first convolution layer to obtain the feature map of the current frame.
20. The optical flow obtaining device according to claim 17, wherein the dependency relationship capturing unit is specifically configured to:
input the feature map of the current frame into the gated recurrent unit neural network;
and utilize the memory characteristic of the gated recurrent unit neural network to perform feature mining between the feature map of the previous frame and the feature map of the current frame, thereby capturing the dependency relationship between the previous frame and the current frame.
21. The optical flow obtaining device according to claim 17, further comprising a first training unit configured to:
obtain estimated characteristic information of the next frame by utilizing the forward optical flow;
acquire a first difference value between real characteristic information of the next frame and the estimated characteristic information of the next frame;
and obtain optimized parameters of the gated recurrent unit neural network by taking minimization of the first difference value as the objective.
22. The optical flow obtaining device according to claim 17, further comprising a second training unit configured to:
obtain estimated characteristic information of the current frame by utilizing the backward optical flow;
acquire a second difference value between real characteristic information of the current frame and the estimated characteristic information of the current frame;
and obtain optimized parameters of the gated recurrent unit neural network by taking minimization of the second difference value as the objective.
23. The optical flow obtaining device according to claim 17, characterized in that said optical flow obtaining unit is specifically configured to:
acquire optical flow information between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
and perform feature extraction on the optical flow information by using a second convolution layer to obtain a forward optical flow and a backward optical flow between the current frame and the next frame.
24. A robot, characterized by comprising a robot body and a two-dimensional laser radar arranged on the robot body, wherein the two-dimensional laser radar is used for acquiring scanning data of a current frame, and the robot body is used for executing the following operations:
obtaining an occupancy map and a visible map of the current frame for the obstacle position according to the scanning data;
performing feature extraction on the occupancy map and the visible map to obtain a feature map of the current frame;
capturing the dependency relationship between the previous frame before the current frame and the current frame by using a gated recurrent unit neural network according to the feature map of the current frame;
and obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame.
25. An electronic device, the electronic device comprising:
a processor;
a memory for storing a program which, when read and executed by the processor, performs the following operations:
obtaining an occupancy map and a visible map of the current frame for the obstacle position;
performing feature extraction on the occupancy map and the visible map to obtain a feature map of the current frame;
capturing the dependency relationship between the previous frame before the current frame and the current frame by using a gated recurrent unit neural network according to the feature map of the current frame;
and obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame.
26. A computer readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, realizes the steps of:
obtaining an occupancy map and a visible map of the current frame for the obstacle position;
performing feature extraction on the occupancy map and the visible map to obtain a feature map of the current frame;
capturing the dependency relationship between the previous frame before the current frame and the current frame by using a gated recurrent unit neural network according to the feature map of the current frame;
and obtaining at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame.
27. A method for transporting an article by a robot, comprising:
the robot obtains an article to be transported and determines destination information of the article;
the robot obtains a feature map of the current frame, the feature map representing feature information of the obstacle positions;
the robot captures the dependency relationship between the previous frame before the current frame and the current frame by using a gated recurrent unit neural network according to the feature map;
the robot obtains at least one of a forward optical flow and a backward optical flow between the current frame and the next frame according to the dependency relationship between the previous frame before the current frame and the current frame;
the robot generates a travel route from a current position of the robot to a destination indicated by the destination information according to the at least one of the forward optical flow and the backward optical flow between the current frame and the next frame and the destination information of the article;
the robot transports the article according to the travel route.
CN201811294619.2A 2018-11-01 2018-11-01 Method for robot to obtain position service and robot Active CN111113404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811294619.2A CN111113404B (en) 2018-11-01 2018-11-01 Method for robot to obtain position service and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811294619.2A CN111113404B (en) 2018-11-01 2018-11-01 Method for robot to obtain position service and robot

Publications (2)

Publication Number Publication Date
CN111113404A CN111113404A (en) 2020-05-08
CN111113404B true CN111113404B (en) 2023-07-04

Family

ID=70494333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811294619.2A Active CN111113404B (en) 2018-11-01 2018-11-01 Method for robot to obtain position service and robot

Country Status (1)

Country Link
CN (1) CN111113404B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9998666B2 (en) * 2015-08-26 2018-06-12 Duke University Systems and methods for burst image deblurring
US9830709B2 (en) * 2016-03-11 2017-11-28 Qualcomm Incorporated Video analysis with convolutional attention recurrent neural networks
CN106681353B (en) * 2016-11-29 2019-10-25 南京航空航天大学 The unmanned plane barrier-avoiding method and system merged based on binocular vision with light stream
CN108204812A (en) * 2016-12-16 2018-06-26 中国航天科工飞航技术研究院 A kind of unmanned plane speed estimation method
CN106934347B (en) * 2017-02-10 2021-03-19 百度在线网络技术(北京)有限公司 Obstacle identification method and device, computer equipment and readable medium
CN107292912B (en) * 2017-05-26 2020-08-18 浙江大学 Optical flow estimation method based on multi-scale corresponding structured learning
CN108647646B (en) * 2018-05-11 2019-12-13 北京理工大学 Low-beam radar-based short obstacle optimized detection method and device

Also Published As

Publication number Publication date
CN111113404A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
Cadena et al. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age
JP2019532433A (en) Laser scanner with real-time online egomotion estimation
CN106940186A (en) A kind of robot autonomous localization and air navigation aid and system
CN109902702A (en) The method and apparatus of target detection
CN104050710B (en) The method and system of 3D figure rendering is carried out with implicit solid
Maier et al. Submap-based bundle adjustment for 3D reconstruction from RGB-D data
JP6782903B2 (en) Self-motion estimation system, control method and program of self-motion estimation system
CN112734931B (en) Method and system for assisting point cloud target detection
JP2022547288A (en) Scene display using image processing
US20190304161A1 (en) Dynamic real-time texture alignment for 3d models
Löffler et al. Evaluation criteria for inside-out indoor positioning systems based on machine learning
Forechi et al. Visual global localization with a hybrid WNN-CNN approach
Kocur et al. Traffic camera calibration via vehicle vanishing point detection
Baur et al. Real-time 3D LiDAR flow for autonomous vehicles
Sun et al. TransFusionOdom: Transformer-based LiDAR-inertial fusion odometry estimation
CN111113404B (en) Method for robot to obtain position service and robot
CN111113405B (en) Method for robot to obtain position service and robot
Nardi et al. Generation of laser-quality 2D navigation maps from RGB-D sensors
Homeyer et al. Multi-view monocular depth and uncertainty prediction with deep sfm in dynamic environments
Patel et al. Collaborative mapping of archaeological sites using multiple uavs
Korthals et al. Semantical occupancy grid mapping framework
WO2022267444A1 (en) Method and device for camera calibration
Liu et al. LSFB: A low-cost and scalable framework for building large-scale localization benchmark
Kanojia et al. Patch-based detection of dynamic objects in CrowdCam images
JP2022138037A (en) Information processor, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230719

Address after: Room 437, Floor 4, Building 3, No. 969, Wenyi West Road, Wuchang Subdistrict, Yuhang District, Hangzhou City, Zhejiang Province

Patentee after: Wuzhou Online E-Commerce (Beijing) Co.,Ltd.

Address before: P.O. Box 847, Fourth Floor, Capital Building, Grand Cayman, Cayman Islands, UK

Patentee before: ALIBABA GROUP HOLDING Ltd.