CN114061586A - Method and product for generating navigation path of electronic device - Google Patents


Info

Publication number
CN114061586A
Authority
CN
China
Prior art keywords
map
scene
path
locations
objects
Prior art date
Legal status
Pending
Application number
CN202111327724.3A
Other languages
Chinese (zh)
Inventor
朱敏昭
赵冰蕾
孔涛
Current Assignee
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202111327724.3A priority Critical patent/CN114061586A/en
Publication of CN114061586A publication Critical patent/CN114061586A/en
Priority to PCT/CN2022/127124 priority patent/WO2023082985A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods


Abstract

Embodiments of the present disclosure relate to methods and products for navigation path planning for electronic devices. The method includes: generating a second map based on a first map, the first map describing the locations, within a scene, of a plurality of objects, the second map describing predicted distances from a plurality of locations in the scene to a target object among the plurality of objects. The method further includes: determining, based on the second map, a candidate path from a target location among the plurality of locations to the target object. The method further includes: selecting, from the candidate paths, a navigation path from the target location to the target object. With this method, prior knowledge of the spatial relationships between objects in a scene can be fully utilized, helping the electronic device find and reach the target object more efficiently.

Description

Method and product for generating navigation path of electronic device
Technical Field
Embodiments of the present disclosure relate to the field of path planning technologies, and more particularly, to a method, an apparatus, a device, a medium, and a program product for navigation path planning of an electronic device.
Background
As technology has evolved, many electronic devices (e.g., robots) have gained the ability to perform tasks automatically. For example, upon receiving a given task (e.g., adding water to a cup on a table), the robot will automatically plan a path, avoid obstacles, move along the planned path to the vicinity of the table, and then perform the subsequent water-adding operation. Tasks of this type pose challenges, because the environment in which the robot is located (e.g., the room it occupies) may be completely new to the robot, with no map that can be used directly. Moreover, even if a map describing the environment exists, the original map may no longer apply once the locations of items in the environment change.
To solve these problems, one idea is to create a map describing the environment and the positions of the various objects in it, and then to plan a path on that map. However, this style of path planning requires performing a large number of search operations and may traverse every point on the map, which incurs a large computational cost. It also ignores the relationships between objects in the environment: a table and a chair are typically placed together, and a cup is typically placed on a table.
Disclosure of Invention
Embodiments of the present disclosure provide a method, apparatus, device, medium, and program product for generating a navigation path of an electronic device.
In a first aspect of the disclosure, a method for generating a navigation path of an electronic device is provided. The method includes: generating a second map based on a first map, the first map describing the locations, within a scene, of a plurality of objects, the second map describing predicted distances from a plurality of locations in the scene to a target object among the plurality of objects; determining, based on the second map, a candidate path from a target location among the plurality of locations to the target object; and selecting, from the candidate paths, a navigation path from the target location to the target object.
In the first aspect of the disclosure, a method for training a neural network model is also provided. The method includes: acquiring a training data set comprising a plurality of scenes and a plurality of objects; acquiring training labels, the training labels comprising the locations, within the scenes, of the plurality of objects, the true distances from a plurality of locations in each scene to a target object among the plurality of objects, and the categories of the objects; and training a neural network model based on the training data set and the training labels, wherein the neural network model outputs a map describing predicted distances from a plurality of locations in a scene to a target object among the plurality of objects.
In a second aspect of the disclosure, an apparatus for generating a navigation path of an electronic device is provided. The apparatus includes: a map generation module configured to generate a second map based on a first map, the first map describing the locations, within a scene, of a plurality of objects, the second map describing predicted distances from a plurality of locations in the scene to a target object among the plurality of objects; a candidate path determination module configured to determine, based on the second map, a candidate path from a target location among the plurality of locations to the target object; and a navigation path selection module configured to select, from the candidate paths, a navigation path from the target location to the target object.
In the second aspect of the present disclosure, an apparatus for training a neural network model is also provided. The apparatus includes: a training data acquisition module configured to acquire a training data set comprising a plurality of scenes and a plurality of objects; a training label acquisition module configured to acquire training labels comprising the locations, within the scenes, of the plurality of objects, the true distances from a plurality of locations in each scene to a target object among the plurality of objects, and the categories of the objects; and a training module configured to train a neural network model based on the training data set and the training labels, wherein the neural network model outputs a map describing predicted distances from a plurality of locations in the scene to a target object among the plurality of objects.
In a third aspect of the present disclosure, an electronic device is provided. The electronic device includes: a memory and a processor; wherein the memory is for storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method according to the first aspect.
In a fourth aspect of the disclosure, a computer-readable storage medium is provided. The computer readable storage medium has stored thereon one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method according to the first aspect.
In a fifth aspect of the disclosure, a computer program product is provided. The computer program product comprises one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method according to the first aspect.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of a use environment for a method of generating a navigation path for an electronic device, in accordance with certain embodiments of the present disclosure;
FIG. 2 illustrates a flow diagram of a method for generating a navigation path for an electronic device, in accordance with certain embodiments of the present disclosure;
FIG. 3A illustrates a schematic diagram of a second map in which predicted distances are shown, in accordance with certain embodiments of the present disclosure;
FIG. 3B illustrates a schematic diagram of a second map in which certain objects are shown, in accordance with certain embodiments of the present disclosure;
FIG. 4 illustrates a flow diagram of a method for training a neural network model, in accordance with certain embodiments of the present disclosure;
FIG. 5 illustrates a schematic diagram of a sub-scenario in accordance with certain embodiments of the present disclosure;
FIG. 6 illustrates a block diagram of an apparatus for generating a navigation path for an electronic device, in accordance with certain embodiments of the present disclosure;
FIG. 7 illustrates a block diagram of an apparatus for training a neural network model, in accordance with certain embodiments of the present disclosure; and
FIG. 8 illustrates a block diagram of a computing system in which one or more embodiments of the disclosure may be implemented.
Throughout the drawings, the same or similar reference numbers refer to the same or similar elements.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
The term "map" as used in this disclosure refers to the result of modeling the environment/scene, an important link in path planning. The goal is to establish a model convenient for a computer to perform path planning on, that is, to abstract the actual physical space into an abstract space that an algorithm can process, realizing a mapping between the physical and the abstract.
The term "path" as used in this disclosure refers to a walking route found by applying a corresponding algorithm, on the basis of the environment model, in the path-searching stage. The walking path optimizes a function associated with a predetermined target; it need not lead directly to the target object, but may lead to an intermediate target selected on the way to the target object.
The term "training" or "learning" as used herein refers to a process of optimizing system performance using experience or data. For example, the neural network system may gradually optimize the performance of the predicted distance through a training or learning process, such as improving the accuracy of the predicted distance. In the context of the present disclosure, the terms "training" or "learning" may be used interchangeably for purposes of discussion convenience.
The term "method/model of generating a navigation path of an electronic device" as used herein refers to a method/model that is built from a priori knowledge associated with color information, depth information, object types, etc. in a particular environment/scene. The method/model may be used to find a target object in a navigation task of the electronic device and to bring the electronic device to the target object.
As used herein, the terms "comprises," comprising, "and variations thereof are intended to be open-ended, i.e.," including, but not limited to. The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment". Relevant definitions for other terms will be given in the following description.
The inventors have observed that existing mapping and navigation planning methods fail to meet the increasing demand for electronic devices to perform autonomous tasks. For example, when a domestic service robot performs a water-pouring task at home for the first time, it may not know where to find the cup. Moreover, in a conventional navigation task, a map of the environment is constructed in advance, and the navigation target is given as coordinates on that map. In the task above, however, there is no pre-constructed map, and the robot does not know the location of the target, only what the target is (e.g., a cup, which must be found before the water can be poured). Therefore, the robot must set target objects for itself, which may include a final target object (e.g., the cup) as well as intermediate target objects (e.g., the table near the cup or a chair beside the table), in order to reasonably plan a navigation path, avoid obstacles, and so on.
The inventors also found that conventional navigation planning methods do not exploit prior knowledge to provide a faster, more accurate, and more concise navigation path planning process. The spatial relationships and distances between objects satisfy certain rules in certain environments: in indoor scenes, especially home scenes, a chair is often placed near a desk, and a cup is usually on the desk. When the robot needs to search for a cup, it is likely to first find an object that is easier to locate (e.g., a more visually apparent table or chair).
According to an embodiment of the present disclosure, a map incorporating prior knowledge of the spatial relationships between objects (hereinafter also referred to as a "second map") is generated on the basis of a map describing the scene around the robot (hereinafter also referred to as a "first map"), to provide predicted distances from a plurality of locations in the scene to a target object among the plurality of objects. In this way, when determining a candidate path from a target location among the plurality of locations to the target object, it is easier to find a shorter path. That is, the embodiments described herein advantageously utilize the spatial relationships of the objects in the scene and directly use the distances from the locations to the target object, without first performing a search. Compared with conventional schemes, a better navigation path can be provided, so that the robot can move to the target object efficiently.
In the following description, certain embodiments will be discussed with reference to the working process of a robot, for example, a robot that provides a home life service, and the like. It should be understood that they are merely illustrative of the principles and concepts of the embodiments of the disclosure, and are not intended to limit the scope of the disclosure in any way. The embodiments described herein may be applicable in other scenarios as well.
FIG. 1 illustrates a schematic diagram of a use environment 100 for a method of generating a navigation path for an electronic device, in accordance with certain embodiments of the present disclosure. As shown, at an electronic device 101, such as a robot, color information (e.g., RGB images) and depth information (e.g., depth images) in a scene are acquired. The manner in which such information is acquired includes, but is not limited to, acquiring from a camera mounted on the electronic device, such as the RGBD camera 102. The camera may capture a depth distance of a space within a camera view angle, providing a three-dimensional image.
The electronic device 101 will be guided to the target object according to the navigation path. The electronic device 101 may then perform the task-required operation, e.g., the robot picks up the cup to fill with water, etc. The present disclosure is not limited with respect to the operations or actions that the subsequent electronic device will perform.
Fig. 2 illustrates a flow diagram of a method 200 for generating a navigation path for an electronic device, in accordance with certain embodiments of the present disclosure. For ease of description, the process of generating a navigation path for an electronic device implemented by method 200 will be described with the robot moving from its current location to the side of a table in an indoor home scenario as an example. As noted above, however, this is merely exemplary and is not intended to limit the scope of the present disclosure in any way. Embodiments of the method 200 described herein can be used in the navigation process of any other suitable electronic device as well.
At 201, a second map is generated based on the first map. For example, in this embodiment, the first map describes the locations of a plurality of objects in the living room.
At the electronic device 101, a first map may be generated using the acquired three-dimensional image and predetermined categories of the objects in the scene. One example of a first map is a semantic map, which uses the map as a carrier into which semantics are mapped. Here, semantics represent the categories of the individual objects in the scene. Categories refer to the names of objects, such as tables and chairs, and may be encoded as numbers or the like. Thus, the first map provides a simplified model. The "semantics" can be learned from the three-dimensional image through classification, detection, segmentation, and other models, but they can also be defined manually, as long as the definition is sufficiently general and concise.
The first map may be acquired by the method discussed above. In some embodiments, the first map is a two-dimensional map obtained by projecting the individual objects onto a plane, based on a color image and a depth image of the scene. More specifically, a two-dimensional overhead-view map can be obtained by projecting the scene within the robot's field of view, combined with information such as the robot's position and pose, the camera's intrinsic parameters, and the object categories. This is a relatively efficient abstraction for representing the various types of information in a scene.
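By way of non-limiting illustration (this sketch is not part of the original disclosure), the overhead projection just described might look as follows. The pinhole-camera model, the 5 cm cell size, the 240-cell grid, and all function names are assumptions, and robot pose compensation and height filtering are omitted:

```python
import numpy as np

def project_to_topdown(depth, labels, fx, cx, cell=0.05, size=240):
    # depth: (H, W) metres from the camera; labels: (H, W) integer object classes.
    h, w = depth.shape
    us, _ = np.meshgrid(np.arange(w), np.arange(h))
    z = depth                                        # forward distance
    x = (us - cx) * z / fx                           # lateral offset (pinhole model)
    grid = np.zeros((size, size), dtype=np.int32)    # 0 = unknown/unexplored
    gx = (x / cell + size // 2).astype(int)          # column in the top-down grid
    gz = (z / cell).astype(int)                      # row in the top-down grid
    ok = (z > 0) & (gx >= 0) & (gx < size) & (gz >= 0) & (gz < size)
    grid[gz[ok], gx[ok]] = labels[ok]                # stamp object categories
    return grid
```

Each depth pixel is cast to a lateral offset and a forward distance, quantized into grid cells, and stamped with its semantic label; cell value 0 is reserved for unexplored space.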
A second map generated based on the first map describes predicted distances from a plurality of locations in the scene to a target object among the plurality of objects. As an example, FIG. 3A shows a schematic diagram of a second map in which predicted distances are shown; the number in each grid cell represents the predicted distance. FIG. 3B shows a further example of the second map, in which a specific object is shown. It can be seen that a second map composed as a grid well reflects the state of the scene in which the robot is located. The area 301 is the boundary between the robot's already-searched area and the unknown area. When the robot finds a specific object (e.g., a door, i.e., the area between the walls 302 and 303), the angle θ to both sides of the door and the corresponding area may be preferentially considered, in order to better generate candidate paths and a navigation path.
Still taking the living room as an example, at this time, the second map may describe the minimum distance that a plurality of locations in the living room reach the table. For example, the distance from a couch to a table, the distance from a television to a table, etc. In a broader sense, multiple locations may refer to the distance from one pixel of the map to a pixel at the target object. The distance is obtained by prediction. For example, a neural network model may be trained to learn features of various objects of a relevant scene, and the neural network model may predict distances from multiple locations in the scene to a target object of the multiple objects. This distance is referred to herein as the predicted distance.
By the second map describing the predicted distance, the scene in which the robot is currently located, the positional relationship of each object in the scene and the target object can be generally known. And, the position relation takes into account the prior knowledge mentioned above, so it is more accurate.
The generation of the second map is discussed below with continued reference to FIG. 2. In some embodiments, the map may be divided into grid cells whose size is associated with the actual size of the scene. The grid cells facilitate data processing when generating candidate paths and navigation paths, and balance real-time performance and economy when the robot explores the scene.
For example, one grid cell may correspond to 5 cm of the living room. This simplifies calculation, saves computational resources, and improves efficiency. Note that any specific numerical values described here and elsewhere herein are merely exemplary and are not intended to limit the scope of the present disclosure.
Accordingly, each grid cell in the second map stores the predicted path length from that cell to the target object. Cells within the extent of the target object, or of its support or container (e.g., if the target object is a cup on a table, the table is the support), store a predicted distance of 0, and cells within the extent of an obstacle store a predicted distance of infinity.
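The predicted distances in the actual second map come from the neural network model; as an illustrative stand-in (and as one plausible way to compute the ground-truth distance labels used for training), a breadth-first search over the grid reproduces the conventions just described, with 0 at the target and infinity inside obstacles. The function name and the 4-connected neighborhood are assumptions:

```python
from collections import deque

def distance_map(grid, targets):
    # grid[r][c] == 1 marks an obstacle; 'targets' lists cells covered by the
    # target object (or its support), which hold distance 0.  Obstacle cells
    # keep infinity; every other cell gets its shortest 4-connected path
    # length to the nearest target cell.
    h, w = len(grid), len(grid[0])
    inf = float("inf")
    dist = [[inf] * w for _ in range(h)]
    q = deque()
    for r, c in targets:
        dist[r][c] = 0
        q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w
                    and grid[nr][nc] != 1 and dist[nr][nc] == inf):
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist
```

On a 3x3 grid with an obstacle in the center, the cell diagonally opposite a corner target correctly receives distance 4, while the obstacle cell stays at infinity.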
In some embodiments, the first map and the second map may be updated based on at least one of a movement time, a movement distance, and a movement angle of the electronic device 101 exceeding a threshold.
As will be appreciated from the above description, both the first map and the second map are built from the robot's perspective, which means that as the robot moves, the perspective changes and a previously planned path may no longer be suitable. Thus, to balance real-time performance and computational efficiency, thresholds may be set, such as thresholds on movement time, movement distance, and movement angle; when a threshold is exceeded, the first map and the second map are updated.
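A minimal sketch of such an update policy, with purely illustrative threshold values (the disclosure does not specify any):

```python
class MapUpdatePolicy:
    # Rebuild the first and second maps once the robot has moved far enough,
    # turned far enough, or travelled for long enough.  All threshold values
    # below are assumptions, not taken from the disclosure.
    def __init__(self, max_dist=1.0, max_angle=30.0, max_time=5.0):
        self.max_dist = max_dist      # metres moved since last update
        self.max_angle = max_angle    # degrees turned since last update
        self.max_time = max_time      # seconds elapsed since last update

    def should_update(self, moved_dist, turned_angle, elapsed):
        return (moved_dist > self.max_dist
                or turned_angle > self.max_angle
                or elapsed > self.max_time)
```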
In some embodiments, the predicted distance may be represented as a continuous distance value or a discrete distance value, where the discrete distance value corresponds to one interval of the continuous distance value.
Expressing the predicted distance as discrete values, where each discrete value corresponds to an interval of continuous values, is easier to implement. In some embodiments, the predicted distance may be represented by intervals numbered 0 to 12, with 0 representing a predicted distance of 0 to 1 meter, 1 representing 1 to 2 meters, and so on. This benefits calculation, processing, and storage, offering higher computation speed and lower storage requirements. It also brings a further advantage: since it is difficult to predict an accurate continuous value, discretization absorbs prediction error.
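A one-line sketch of this discretization, under the interval numbering described above (bin 0 for 0-1 m, bin 1 for 1-2 m, and so on, with everything beyond the last interval clamped into bin 12); the function name is hypothetical:

```python
def discretize_distance(d_meters, n_bins=13, bin_width=1.0):
    # Map a continuous predicted distance to one of 13 intervals:
    # bin 0 covers [0, 1) m, bin 1 covers [1, 2) m, ..., and any
    # distance beyond the last interval is clamped into bin 12.
    return min(int(d_meters // bin_width), n_bins - 1)
```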
In some embodiments, generating the second map may further comprise: dividing the scene in the first map into a plurality of sub-scenes; and generating the second map based on a plurality of maps describing the locations of objects in the sub-scenes.
When the scene is large, the idea of exploring local scenes (i.e., exploring and planning intermediate targets and paths within the field of view) and then synthesizing each local scene to obtain a global scene can be considered to complete the exploration of the whole scene. This enables the robot to perform tasks even when the new, unknown area is large and the target object is not in the explored area.
At 202, a candidate path to reach the target object from a target location from the plurality of locations is determined based on the second map.
In some embodiments, the target position may be a current position of the robot. Continuing with the living room scenario, the robot may move directly to the target object in order to reach the target object, but when blocked by an obstacle, such as a couch, the robot then faces the option of either bypassing the left or bypassing the right. For another example, if the target object is not in the living room, the robot faces the option of leaving the living room and entering another room. These selections all correspond to candidate paths. In particular, due to the limitations of computing resources and the field of view of the robot, it is possible that the robot cannot directly find the target object, but needs to search for it, or selects an intermediate target first and then reaches the target object through the intermediate target. To this end, embodiments of the present disclosure utilize candidate paths.
There may be a variety of ways to determine candidate paths. For example, in some embodiments, a path associated with the predicted distance from a boundary of the second map (e.g., the exploration boundary, i.e., the boundary between the already-mapped area and the not-yet-mapped area) to the target object may be selected as a candidate path. For convenience of description, such a candidate path is referred to as the "first path".
If the scene described by the second map is a living room, the first path is planned with the objective of minimizing the sum of the predicted distance from a point on the boundary of the already-explored area to the target object and the distance from the robot to that boundary point. In some embodiments, the target or intermediate target and the subsequent candidate path may be determined using the following formula:
p_goal = argmin_{p ∈ F} ( d(p_agent, p) + L_Dis(p) )

where p_goal denotes the intermediate target, d(p_agent, p) denotes the distance from the robot's current position to boundary point p, L_Dis(p) denotes the predicted distance at p, and F denotes the already-explored boundary of the second map. The purpose of this formula is to minimize the sum of the two terms in parentheses.
In this way, the generated planned path is the theoretical shortest path.
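A compact sketch of this first selection rule, assuming the frontier cells and the two per-cell terms d(p_agent, p) and L_Dis(p) have already been computed (the container names are hypothetical):

```python
def select_goal_first_path(frontier, agent_dist, predicted_dist):
    # Over all explored-boundary cells p, minimize d(p_agent, p) + L_Dis(p).
    # 'agent_dist' maps each cell to its distance from the robot's current
    # position; 'predicted_dist' maps each cell to its predicted distance
    # to the target object.
    return min(frontier, key=lambda p: agent_dist[p] + predicted_dist[p])
```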
In some embodiments, a path associated with the predicted distance from a location on the map boundary to the target object may be selected as a candidate path, without regard to the robot's current position. For convenience of description, such a candidate path is referred to as the "second path". If the scene depicted by the second map is a living room, the second path is planned with the objective of minimizing the predicted distance from a point on the boundary of the already-explored area to the target object. In some embodiments, the intermediate target and the subsequent candidate path may be determined using the following formula:
p_goal = argmin_{p ∈ F} L_Dis(p)

The purpose of this formula is to minimize L_Dis(p). It can be seen that, in this case, the selection of the intermediate target is more efficient, since the current position of the robot is not taken into account.
In some embodiments, a path associated with an angle or boundary of a target object to a predetermined particular object in the scene may be selected as a candidate path. For convenience of description, such a candidate path is also referred to as a "third path".
Assume that the scene depicted by the second map is a living room whose boundary contains a specific object (e.g., a door); note that the specific object should be predetermined. In this case, the position with the smallest predicted distance within the range of the door is preferentially selected as the intermediate target. The position of the door may span a range, so the intermediate target may lie within an angle θ_d (e.g., 120 degrees) from the target position to the specific object, or on the boundary of that range. In some embodiments, the target or intermediate target and the corresponding candidate path may be determined using the following formula:
[Equation (3): formula image BDA0003347814890000102, not reproduced in this text]
wherein the auxiliary definitions are given by formula images (BDA0003347814890000103 and BDA0003347814890000104) not reproduced in this text, and p_door indicates the probability of the presence of a door (or of another predetermined specific object, such as a hallway). The probability is obtained from a neural network model. In some embodiments, a cross-entropy loss may be used to train the category of the door (or other specific object) so that its probability is determined more accurately. It can be seen that this path causes the robot to temporarily skip objects that are unrelated to the target object (e.g., other rooms) and preferentially search objects that are related to the target object (e.g., the room in which the target object is located).
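As an illustration of the third-path strategy, the sketch below (hypothetical names; a simplified stand-in for the selection behind equation (3)) prefers candidate cells whose door probability exceeds a threshold and, among those, picks the one with the smallest predicted distance, falling back to all candidates when no door is detected:

```python
def select_door_target(predicted, door_prob, candidates, p_threshold=0.5):
    """Prefer cells likely to be a door (or other predetermined object).

    predicted   -- dict mapping (row, col) -> predicted distance
    door_prob   -- dict mapping (row, col) -> probability of a door
    candidates  -- list of candidate (row, col) cells
    p_threshold -- hypothetical cut-off on the door probability
    """
    doors = [c for c in candidates if door_prob[c] > p_threshold]
    # Restrict the search to likely doors when any exist; otherwise
    # fall back to the full candidate set.
    pool = doors if doors else candidates
    return min(pool, key=lambda c: predicted[c])
```

In this way unrelated rooms are skipped whenever a door-like region is available.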
Since the candidate paths are generated based on different strategies, these strategies provide the robot with a mechanism for determining a path when faced with multiple choices. These mechanisms provide intermediate targets on the way to the target object and paths to those intermediate targets; the target is reached by continuously updating the intermediate target. This enables the robot to provide navigation paths even in a new context (e.g., in a never-explored scene).
In particular, in some embodiments, the second map may be generated by a neural network model based on the first map. The neural network model takes the first map and the category of each object in the scene as input, and generates the second map.
In some embodiments, the neural network used to generate the second map may be trained with a data set consisting of first maps and the categories of the respective objects, which embodies the spatial relationships of the respective objects in the scene. An example embodiment of this aspect will be described below with reference to fig. 4.
With continued reference to FIG. 2, at 203, a navigation path from the target location to the target object is selected from the candidate paths.
In some embodiments, a navigation path of the electronic device is generated using a path planning algorithm based on at least one of the first path, the second path, and the third path, and based on the target location and the target object. It should therefore be understood that the scope of the present disclosure is not limited to the several examples of determining candidate paths described above; other suitable means may be used.
In some embodiments, the navigation path may be provided using the Fast Marching Method or the A* path planning algorithm based on one of the first path, the second path, and the third path. Other path planning algorithms may also be used to provide the navigation path; the present disclosure is not limited in this respect.
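For reference, a compact A* planner on a 4-connected occupancy grid might look as follows (an illustrative sketch, not the patent's implementation; the Fast Marching Method would be used analogously):

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: an admissible heuristic on a 4-connected grid.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path so far)
    best_g = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None
```

Running the planner toward an intermediate target selected by one of the strategies above yields the navigation path for that step.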
It can be seen that, according to embodiments of the present disclosure, the spatial position relationships between the various objects in a scene (i.e., a priori knowledge) are fully considered, resulting in a second map that describes this a priori knowledge more realistically. In the second map, the predicted distance of the path that the robot can actually travel can also be described in a simplified form (i.e., the predicted distance represented as discrete values), so that when candidate paths and the navigation path are generated, every point in the map no longer needs to be searched, saving a large amount of computing resources and improving efficiency. When facing a new scene or an unknown environment, navigation to a target object can be achieved by expanding the exploration from local to global. Because the map is updated when the movement time, movement distance, or movement angle exceeds a threshold, the robot achieves a good balance between real-time performance and economy.
As described above, in some embodiments, the second map may be generated based on the first map according to a neural network. Fig. 4 illustrates a flow diagram of a method 400 for training such a neural network model, in accordance with certain embodiments of the present disclosure. It will be appreciated that training and use of the neural network may occur in the same or different locations. That is, the method 200 and the method 400 may be performed by the same entity or may be performed by different entities.
At 401, a training data set comprising a plurality of scenes and a plurality of objects is acquired.
In some embodiments, the scenarios may be pre-established standard environments, each arranged as needed for the neural network model to learn the specified features. The category of the object may include various items that may be placed in a practical application, such as a bed, a sofa, a table, and so on.
At 402, training labels are obtained, the training labels including the locations, within each scene, of the plurality of objects in the plurality of scenes, the true distances from a plurality of locations in the scene to a target object among the plurality of objects, and the categories of the objects.
In some embodiments, the values for these locations, distances, and categories (referred to herein as true values or true distances) may be pre-labeled in the various scenes at 401. These values serve as sample labels with which the neural network model is trained. Since these sample labels are specifically set, contain the prior knowledge mentioned above, and imply the characteristics of the scene, the trained neural network model can generate a second map having the characteristics of the scene, improving the accuracy of the candidate paths and the navigation path.
In some embodiments, training the neural network model may further comprise: dividing a scene into a plurality of sub-scenes; and training the neural network model based on training labels comprising the locations of a plurality of objects in the sub-scene and the true distances from a plurality of locations in the sub-scene to a target object among the plurality of objects.
In some embodiments, training for the entire scene may be completed by training on sub-scenes of a specific size and then gradually expanding the exploration to the entire scene.
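A sub-scene split of this kind can be sketched as a simple tiling of the 2-D scene map (illustrative only; `split_into_subscenes` is a hypothetical name, and the scene size is assumed divisible by the tile size):

```python
def split_into_subscenes(scene_map, tile):
    """Divide a 2-D scene map (list of rows) into tile x tile sub-scenes.

    scene_map -- rectangular list of rows (any cell values)
    tile      -- side length of each sub-scene, assumed to divide both
                 dimensions of the scene exactly
    """
    rows, cols = len(scene_map), len(scene_map[0])
    subs = []
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            # Slice out one tile, row by row.
            subs.append([row[c0:c0 + tile] for row in scene_map[r0:r0 + tile]])
    return subs
```

Each returned tile can then be labeled and fed to the model independently before training proceeds to the full scene.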
At 403, a neural network model is trained based on the training dataset and the training labels, wherein the neural network model outputs a map that describes predicted distances to a target object of the plurality of objects from a plurality of locations in the scene.
In some embodiments, the neural network model may be a fully convolutional neural network, which may have 3 downsampling ResBlock layers and 3 upsampling ResBlock layers, with the lower-level feature map and the upsampled feature map concatenated at each layer. The output of the neural network is the predicted distance. The number of output channels may be set to n_b * n_T, where n_b is the number of discrete prediction-distance values, each corresponding to an area with a side length of, for example, 5 cm, and n_T indicates the number of target classes. In this way, every n_b channels form a group responsible for predicting the prediction distance of one target, so that multiple groups of object categories and output prediction distances can be trained and predicted together, improving efficiency.
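The grouped-channel output described above can be decoded per map cell roughly as follows (an illustrative sketch with hypothetical names; the 5 cm bin size is taken from the example above, and the network itself is omitted):

```python
def decode_distances(scores, n_b, n_t, bin_size=0.05):
    """Decode the flat channel scores of one map cell.

    Each consecutive group of n_b channels scores the discrete distance
    bins for one of n_t target classes; the argmax bin index times the
    bin size (e.g. 5 cm) gives that class's predicted distance.
    """
    distances = []
    for t in range(n_t):
        group = scores[t * n_b:(t + 1) * n_b]
        # Pick the highest-scoring distance bin for this target class.
        bin_index = max(range(n_b), key=lambda i: group[i])
        distances.append(bin_index * bin_size)
    return distances
```

Applying this to every cell yields one discrete-distance map per target class, as the second map requires.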
In some embodiments, training the neural network model further comprises: when the position of the target object is not in the scene, training the neural network model using the true distance from a target position of the plurality of positions to the boundary of the scene; and/or, when the position of the target object is not in the sub-scene, training the neural network model using the true distance from a target position of the plurality of positions to the boundary of the sub-scene.
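The boundary fallback for training labels might be sketched as follows (hypothetical helper; Manhattan distance is used purely for illustration, as the patent does not specify the distance metric):

```python
def distance_label(target_cell, query_cell, rows, cols):
    """True-distance training label for one query cell.

    Returns the Manhattan distance to the target if the target lies
    inside the (sub-)scene of size rows x cols; otherwise falls back
    to the distance from the query cell to the nearest scene boundary.
    """
    r, c = query_cell
    tr, tc = target_cell
    if 0 <= tr < rows and 0 <= tc < cols:
        return abs(r - tr) + abs(c - tc)
    # Target outside the (sub-)scene: label with the distance to the
    # closest boundary instead, so the model still gets a supervision signal.
    return min(r, c, rows - 1 - r, cols - 1 - c)
```

This keeps every cell supervised even when the target object lies outside the current sub-scene.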
It can be seen that the neural network model trained by the method 400 described above can accurately classify each object in the scene, and its prediction of the distance between the target position and the target object is not only accurate but also, by using discrete values, eliminates errors that potentially inaccurate continuous values may introduce, increasing the robustness of the robot in real application environments. Given the real-time requirements of robot movement, the higher computational efficiency of the neural network model also reduces the computational overhead of updating the map.
FIG. 5 illustrates a schematic diagram of a sub-scenario in accordance with certain embodiments of the present disclosure.
It can be seen that when the scene is large (for example, when the target object is not initially within the robot's field of view, or the full scene has not yet been explored), if the robot searches for a navigation path to the chair, the a priori knowledge provided by the second map may lead it to move preferentially to the side of the table. Practical applications can therefore realize the aforementioned advantages.
Fig. 6 illustrates a block diagram of an apparatus 600 for generating a navigation path for an electronic device, in accordance with certain embodiments of the present disclosure. The apparatus includes: a map generation module 601 configured to generate, at the electronic device 101, a second map based on a first map, the first map describing the locations, in a scene, of a plurality of objects in the scene, the second map describing predicted distances from a plurality of locations in the scene to a target object of the plurality of objects; a candidate path determination module 602 configured to determine, based on the second map, a candidate path from a target location of the plurality of locations to the target object; and a navigation path selection module 603 configured to select, from the candidate paths, a navigation path for reaching the target object from the target location.
In some embodiments, wherein determining the candidate path may comprise determining at least one of: a first path associated with a predicted distance of the target location to the target object; a second path associated with a predicted distance of a boundary of the second map to the target object; and a third path associated with an angle or boundary of the target object to a predetermined specific object in the scene.
In some embodiments, equation (1) may be used to determine a target or intermediate target related to the first path; equation (2) may be used to determine a target or intermediate target associated with the second path; and equation (3) may be used to determine a target or intermediate target associated with the third path. The formulas are described in detail in the description of the method 200.
In some embodiments, wherein selecting the navigation path may comprise: and generating a navigation path of the electronic equipment by using a path planning algorithm based on at least one path of the first path, the second path and the third path and based on the target position and the target object.
In some embodiments, wherein the first map is a two-dimensional map obtained by projecting individual objects onto a plane based on a color image and a depth image of the scene.
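A single row of the depth image could be projected onto the top-down first map along these lines (a minimal pinhole-model sketch with hypothetical names and parameters; the patent does not give the projection formula):

```python
def project_depth_row(depth_row, fx, cx, cell=0.05, grid_width=40):
    """Project one row of a depth image onto a row of a top-down 2-D grid.

    Pinhole model: a pixel u with depth z lands at lateral offset
    x = (u - cx) * z / fx; the cell it falls in is marked occupied (1).

    fx, cx     -- focal length and principal-point column of the camera
    cell       -- grid resolution in metres (5 cm here, for illustration)
    grid_width -- number of grid columns, centred on the camera axis
    """
    grid = [0] * grid_width
    for u, z in enumerate(depth_row):
        if z <= 0:  # no valid depth reading for this pixel
            continue
        x = (u - cx) * z / fx
        col = int(round(x / cell)) + grid_width // 2  # centre the grid
        if 0 <= col < grid_width:
            grid[col] = 1
    return grid
```

Repeating this over all image rows (and using the color image to assign object categories to the occupied cells) would accumulate the 2-D first map.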
In some embodiments, the apparatus may further include a map update module 604 configured to update the first map and the second map based on at least one of a movement time, a movement distance, and a movement angle of the electronic device 101 exceeding a threshold.
In some embodiments, the predicted distance may be represented as a continuous distance value or a discrete distance value, where the discrete distance value corresponds to one interval of the continuous distance value.
In some embodiments, wherein the second map generation module is further configured to: dividing a scene in a first map into a plurality of sub-scenes; a second map is generated based on a plurality of maps describing locations of a plurality of objects in the sub-scene.
In some embodiments, the second map is generated by a neural network model. The neural network model obtains a first map and the category of each object in the scene, and generates a second map.
For the specific implementation process of the apparatus 600, reference may be made to the description of the method 200; a detailed description is omitted here. It will be understood that the apparatus 600 of the present disclosure can achieve the same technical effects as the method 200, and thus at least one of the advantages of the method 200 for generating a navigation path of an electronic device described above can be achieved.
In some embodiments, the second map in the apparatus 600 may be generated by a neural network model trained using the apparatus 700 in fig. 7. Fig. 7 illustrates a block diagram of an apparatus 700 for training a neural network model, in accordance with certain embodiments of the present disclosure. The apparatus 700 includes a training data acquisition module 701 configured to acquire a training data set including a plurality of scenes and a plurality of objects. The apparatus also includes a training label acquisition module 702 configured to acquire training labels including the locations of a plurality of objects in the scene, the true distances from a plurality of locations in the scene to a target object among the plurality of objects, and the categories of the objects. The apparatus further comprises a training module 703 configured to train a neural network model based on the training data set and the training labels, wherein the neural network model outputs a map describing predicted distances from a plurality of locations in the scene to a target object of the plurality of objects.
In some embodiments, the training data acquisition module 701 is further configured to: a scene is segmented into a plurality of sub-scenes, and a training data set including the plurality of sub-scenes and a plurality of objects is obtained. The training label acquisition module 702 is further configured to: training labels are obtained that include locations of a plurality of objects in the sub-scene and true distances from the plurality of locations in the sub-scene to target objects in the plurality of objects.
In some embodiments, the training module 703 is further configured to: when the position of the target object is not in the scene, training a neural network model by using the real distance from the target position in the plurality of positions to the boundary of the scene; and/or when the location of the target object is not in the sub-scene, training the neural network model using the true distance of the target location of the plurality of locations to the boundary of the sub-scene.
It can be understood that the neural network model trained by the apparatus 700 described above not only solves the problem of navigation path planning when the robot performs a task, but can also provide an optimal path for the robot to explore a scene, making it possible to quickly grasp the full view of the scene in which the robot is located. Accordingly, at least one of the advantages of the method 400 and the other advantages described above can be provided.
FIG. 8 illustrates a block diagram of a computing system 800 in which one or more embodiments of the disclosure may be implemented. The methods 200 and 400 illustrated in fig. 2 and 4 may be implemented by the computing system 800. The computing system 800 shown in fig. 8 is only an example and should not be construed as limiting the scope of use or functionality of the implementations described herein.
As shown in fig. 8, computing system 800 is in the form of a general purpose computing device. Components of computing system 800 may include, but are not limited to, one or more processors or processing units 800, a memory 820, one or more input devices 830, one or more output devices 840, a storage 850, and one or more communication units 860. The processing unit 800 may be a real or virtual processor and is capable of performing various processes according to instructions stored in the memory 820. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power.
Computing system 800 typically includes a variety of computer-readable media. Such media may be any available media that are accessible by computing system 800, including, but not limited to, volatile and non-volatile media, and removable and non-removable media. The memory 820 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory), or some combination thereof. Storage 850 may be removable or non-removable, and may include machine-readable media, such as a flash drive, a magnetic disk, or any other medium that can be used to store information and that can be accessed within computing system 800.
The computing system 800 may further include additional removable/non-removable, volatile/nonvolatile computer system storage media. Although not shown in FIG. 8, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 820 may include at least one program product having (e.g., at least one) set of program modules that are configured to carry out the functions of various embodiments described herein.
A program/utility tool 822 having a set of one or more execution modules 824 may be stored, for example, in the memory 820. Execution module 824 may include, but is not limited to, an operating system, one or more application programs, other program modules, and program data. Each of these examples, or some combination thereof, may include an implementation of a networking environment. Execution module 824 generally performs the functions and/or methods of embodiments of the subject matter described herein, such as method 200.
The input unit 830 may be one or more of various input devices. For example, the input unit 830 may include a user device such as a mouse, a keyboard, a trackball, or the like. The communication unit 860 enables communication over a communication medium to another computing entity. Additionally, the functionality of the components of computing system 800 may be implemented in a single computing cluster or in multiple computing machines capable of communicating over communication connections. Thus, the computing system 800 may operate in a networked environment using logical connections to one or more other servers, network Personal Computers (PCs), or another general network node. By way of example, and not limitation, communication media include wired or wireless networking technologies.
Computing system 800 can also communicate with one or more external devices (not shown), such as storage devices, display devices, etc., as desired, one or more devices that enable a user to interact with computing system 800, or any device (e.g., network card, modem, etc.) that enables computing system 800 to communicate with one or more other computing devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
The functions described herein may be performed, at least in part, by one or more hardware logic components. By way of example, and not limitation, illustrative types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
Program code for implementing the methodologies of the subject matter described herein may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the subject matter described herein. Certain features that are described in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Some example implementations of the present disclosure are listed below.
In certain embodiments of the first aspect, a method for generating a navigation path for an electronic device is provided. The method comprises: generating a second map based on a first map, the first map describing the locations, in a scene, of a plurality of objects in the scene, the second map describing predicted distances from a plurality of locations in the scene to a target object of the plurality of objects; determining, based on the second map, a candidate path from a target location of the plurality of locations to the target object; and selecting, from the candidate paths, a navigation path for reaching the target object from the target location.
In certain embodiments, wherein determining the candidate path comprises determining at least one of: a first path associated with a predicted distance of the target location to the target object; a second path associated with a predicted distance of a boundary of the second map to the target object; and a third path associated with an angle or boundary of the target object to a predetermined specific object in the scene.
In some embodiments, wherein selecting the navigation path comprises: and generating a navigation path of the electronic equipment by using a path planning algorithm based on at least one path of the first path, the second path and the third path and based on the target position and the target object.
In some embodiments, wherein the first map is a two-dimensional map obtained by projecting individual objects onto a plane based on a color image and a depth image of the scene.
In certain embodiments, the method further comprises: updating the first map and the second map based on at least one of a movement time, a movement distance, and a movement angle of the electronic device exceeding a threshold.
In some embodiments, the predicted distance is represented as a continuous distance value or a discrete distance value, wherein the discrete distance value corresponds to an interval in the continuous distance value.
In some embodiments, wherein generating the second map further comprises: dividing a scene in a first map into a plurality of sub-scenes; a second map is generated based on a plurality of maps describing locations of a plurality of objects in the sub-scene.
In certain embodiments, wherein the second map is generated by a neural network model.
In certain embodiments, the neural network model is trained by the following method. The method comprises the following steps: acquiring a training data set comprising a plurality of scenes and a plurality of objects; acquiring training labels, wherein the training labels comprise positions of a plurality of objects in a plurality of scenes in the scenes, real distances from the positions in the scenes to target objects in the plurality of objects, and classes of the objects; a neural network model is trained based on a training dataset and training labels, wherein the neural network model outputs a map that describes predicted distances to a target object of a plurality of objects from a plurality of locations in a scene.
In some embodiments, wherein training the neural network model further comprises: dividing a scene in a first map into a plurality of sub-scenes; and training the neural network model based on training labels comprising locations of the plurality of objects in the sub-scene and true distances from the plurality of locations in the sub-scene to target objects in the plurality of objects.
In some embodiments, wherein training the neural network model further comprises: when the position of the target object is not in the scene, training a neural network model by using the real distance from the target position in the plurality of positions to the boundary of the scene; and/or when the location of the target object is not in the sub-scene, training the neural network model using the true distance of the target location of the plurality of locations to the boundary of the sub-scene.
In an embodiment of the second aspect, an apparatus for generating a navigation path of an electronic device is provided. The apparatus includes: a map generation module configured to generate a second map based on a first map, the first map describing the locations, in a scene, of a plurality of objects in the scene, the second map describing predicted distances from a plurality of locations in the scene to a target object of the plurality of objects; a candidate path determination module configured to determine, based on the second map, a candidate path from a target location of the plurality of locations to the target object; and a navigation path selection module configured to select, from the candidate paths, a navigation path for reaching the target object from the target location.
In certain embodiments, wherein determining the candidate path comprises determining at least one of: a first path associated with a predicted distance of the target location to the target object; a second path associated with a predicted distance of a boundary of the second map to the target object; and a third path associated with an angle or boundary of the target object to a predetermined specific object in the scene.
In some embodiments, wherein selecting the navigation path comprises: and generating a navigation path of the electronic equipment by using a path planning algorithm based on at least one path of the first path, the second path and the third path and based on the target position and the target object.
In some embodiments, wherein the first map is a two-dimensional map obtained by projecting individual objects onto a plane based on a color image and a depth image of the scene.
In certain embodiments, the apparatus further comprises: a map update module configured to update the first map and the second map based on at least one of a movement time, a movement distance, and a movement angle of the electronic device exceeding a threshold.
In some embodiments, the predicted distance is represented as a continuous distance value or a discrete distance value, wherein the discrete distance value corresponds to an interval in the continuous distance value.
In some embodiments, wherein the second map generation module is further configured to: dividing a scene in a first map into a plurality of sub-scenes; a second map is generated based on a plurality of maps describing locations of a plurality of objects in the sub-scene.
In certain embodiments, wherein the second map is generated by a neural network model.
In certain embodiments, wherein the neural network model is trained by a neural network device, the neural network device comprises: a training data acquisition module configured to acquire a training data set including a plurality of scenes and a plurality of objects; a training label acquisition module configured to acquire training labels, the training labels comprising the locations of a plurality of objects in the scene, the true distances from a plurality of locations in the scene to a target object among the plurality of objects, and the categories of the objects; and a training module configured to train a neural network model based on the training data set and the training labels, wherein the neural network model outputs a map describing predicted distances from a plurality of locations in the scene to a target object of the plurality of objects.
In certain embodiments, wherein the training data acquisition module is further configured to: dividing a scene into a plurality of sub-scenes, and acquiring a training data set comprising the plurality of sub-scenes and a plurality of objects; the training label acquisition module is further configured to: training labels are obtained that include locations of a plurality of objects in the sub-scene and true distances from the plurality of locations in the sub-scene to target objects in the plurality of objects.
In some embodiments, wherein the training module is further configured to: when the position of the target object is not in the scene, training a neural network model by using the real distance from the target position in the plurality of positions to the boundary of the scene; and/or when the location of the target object is not in the sub-scene, training the neural network model using the true distance of the target location of the plurality of locations to the boundary of the sub-scene.
In an embodiment of the third aspect, an electronic device is provided. The electronic device includes: a memory and a processor; wherein the memory is for storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method according to the first aspect.
In an embodiment of the fourth aspect, a computer-readable storage medium is provided. The computer readable storage medium has stored thereon one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method according to the first aspect.
In an embodiment of the fifth aspect, a computer program product is provided. The computer program product comprises one or more computer instructions which, when executed by a processor, implement the method according to the first aspect.
Although the disclosure has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. A method for generating a navigation path for an electronic device, comprising:
generating a second map based on a first map, the first map describing locations in the scene of a plurality of objects in a scene, the second map describing predicted distances from a plurality of locations in the scene to a target object of the plurality of objects;
determining a candidate path from a target location of the plurality of locations to the target object based on the second map; and
selecting, from the candidate paths, a navigation path from the target location to the target object.
2. The method of claim 1, wherein determining the candidate path comprises determining at least one of:
a first path associated with the predicted distance of the target location to the target object;
a second path associated with the predicted distance of a boundary of the second map to the target object; and
a third path associated with an angle or boundary of the target object to a predetermined specific object in the scene.
3. The method of claim 2, wherein selecting the navigation path comprises:
generating the navigation path using a path planning algorithm based on at least one of the first path, the second path, and the third path, and based on the target location and the target object.
4. The method of claim 1, wherein the first map is a two-dimensional map obtained by projecting the respective objects to a plane based on a color image and a depth image of the scene.
5. The method of claim 1, further comprising:
updating the first map and the second map based on at least one of a movement time, a movement distance, and a movement angle of the electronic device exceeding a threshold.
6. The method of claim 1, wherein the predicted distance is represented as a continuous distance value or a discrete distance value, wherein the discrete distance value corresponds to one interval of the continuous distance value.
7. The method of claim 1, wherein generating the second map further comprises:
dividing the scene in the first map into a plurality of sub-scenes; and
generating the second map based on a plurality of maps describing locations of a plurality of objects in the sub-scene.
8. The method of claim 1, wherein the second map is generated by a neural network model.
9. The method of claim 8, wherein the neural network model is trained by:
acquiring a training data set comprising a plurality of scenes and a plurality of objects;
obtaining training labels comprising locations in the scene of a plurality of objects in the plurality of scenes, true distances from the locations in the scene to target objects in the plurality of objects, and classes of the objects;
training the neural network model based on the training dataset and the training labels.
10. The method of claim 9, wherein the neural network model is further trained by:
dividing the scene in the first map into a plurality of sub-scenes; and
training the neural network model based on training labels comprising locations of a plurality of objects in the sub-scene and the true distances from the plurality of locations in the sub-scene to target objects in the plurality of objects.
11. The method of claim 9, wherein the neural network model is further trained by:
training the neural network model using the true distances of target locations of the plurality of locations to boundaries of the scene when locations of the target object are not in the scene; and/or
training the neural network model using the true distances of target locations of the plurality of locations to boundaries of the sub-scene when locations of the target object are not in the sub-scene.
12. An apparatus for generating a navigation path for an electronic device, comprising:
a map generation module configured to generate a second map based on a first map, the first map describing locations in the scene of a plurality of objects in a scene, the second map describing predicted distances from a plurality of locations in the scene to a target object of the plurality of objects;
a candidate path determination module configured to determine a candidate path to the target object from a target location of the plurality of locations based on the second map; and
a navigation path selection module configured to select, from the candidate paths, a navigation path from the target location to the target object.
13. The apparatus of claim 12, wherein determining the candidate path comprises determining at least one of:
a first path associated with the predicted distance of the target location to the target object;
a second path associated with the predicted distance of a boundary of the second map to the target object; and
a third path associated with an angle or boundary of the target object to a predetermined specific object in the scene.
14. The apparatus of claim 13, wherein selecting the navigation path comprises:
generating the navigation path using a path planning algorithm based on at least one of the first path, the second path, and the third path, and based on the target location and the target object.
15. The apparatus of claim 12, wherein the first map is a two-dimensional map obtained by projecting the respective objects to a plane based on a color image and a depth image of the scene.
16. The apparatus of claim 12, further comprising:
a map update module configured to update the first map and the second map based on at least one of a movement time, a movement distance, and a movement angle of the electronic device exceeding a threshold.
17. The apparatus of claim 12, wherein the second map is generated by a neural network model trained by a neural network device comprising:
a training data acquisition module configured to acquire a training data set including a plurality of scenes and a plurality of objects;
a training label acquisition module configured to acquire training labels including locations in the scene of a plurality of objects in the plurality of scenes, true distances from the plurality of locations in the scene to a target object in the plurality of objects, and categories of the objects;
a training module configured to train the neural network model based on the training dataset and the training labels.
18. An electronic device, comprising:
a memory and a processor;
wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions are to be executed by the processor to implement the method of any one of claims 1 to 11.
19. A computer readable storage medium having one or more computer instructions stored thereon, wherein the one or more computer instructions are executed by a processor to implement the method of any one of claims 1 to 11.
20. A computer program product comprising one or more computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 11.
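The discrete distance values of claim 6, each corresponding to one interval of continuous distance values, amount to simple interval binning. A minimal sketch follows; the bin edges are illustrative assumptions, not taken from the disclosure:

```python
import bisect

# Illustrative bin edges (in meters); the disclosure does not specify them.
BIN_EDGES = [0.5, 1.0, 2.0, 4.0, 8.0]

def discretize_distance(d):
    """Map a continuous predicted distance to the index of its interval,
    so each discrete value corresponds to one interval of continuous values."""
    return bisect.bisect_right(BIN_EDGES, d)
```

With these edges, a model predicting discrete values solves a classification problem over six interval classes instead of regressing a continuous distance.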
CN202111327724.3A 2021-11-10 2021-11-10 Method and product for generating navigation path of electronic device Pending CN114061586A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111327724.3A CN114061586A (en) 2021-11-10 2021-11-10 Method and product for generating navigation path of electronic device
PCT/CN2022/127124 WO2023082985A1 (en) 2021-11-10 2022-10-24 Method and product for generating navigation path for electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111327724.3A CN114061586A (en) 2021-11-10 2021-11-10 Method and product for generating navigation path of electronic device

Publications (1)

Publication Number Publication Date
CN114061586A (en) 2022-02-18

Family

ID=80274651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111327724.3A Pending CN114061586A (en) 2021-11-10 2021-11-10 Method and product for generating navigation path of electronic device

Country Status (2)

Country Link
CN (1) CN114061586A (en)
WO (1) WO2023082985A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023082985A1 (en) * 2021-11-10 2023-05-19 北京有竹居网络技术有限公司 Method and product for generating navigation path for electronic device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116798030B (en) * 2023-08-28 2023-11-14 中国建筑第六工程局有限公司 Curved surface sightseeing radar high tower acceptance method, system, device and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100049431A1 (en) * 2008-04-30 2010-02-25 Rafael Maya Zetune Navigation Using Portable Reading Machine
CN106289285A (en) * 2016-08-20 2017-01-04 南京理工大学 Map and construction method are scouted by a kind of robot associating scene
CN109931942A (en) * 2019-03-13 2019-06-25 浙江大华技术股份有限公司 Robot path generation method, device, robot and storage medium
US20200160178A1 * 2018-11-16 2020-05-21 Nvidia Corporation Learning to generate synthetic datasets for training neural networks
CN111982094A (en) * 2020-08-25 2020-11-24 北京京东乾石科技有限公司 Navigation method, device and system thereof and mobile equipment
US20210141383A1 (en) * 2019-11-07 2021-05-13 Naver Corporation Systems and methods for improving generalization in visual navigation
CN113048980A (en) * 2021-03-11 2021-06-29 浙江商汤科技开发有限公司 Pose optimization method and device, electronic equipment and storage medium
US20210207974A1 (en) * 2018-06-04 2021-07-08 The Research Foundation For The State University Of New York System and Method Associated with Expedient Determination of Location of One or More Object(s) Within a Bounded Perimeter of 3D Space Based on Mapping and Navigation to a Precise POI Destination Using a Smart Laser Pointer Device
CN113284240A (en) * 2021-06-18 2021-08-20 深圳市商汤科技有限公司 Map construction method and device, electronic equipment and storage medium
US20210281977A1 (en) * 2020-03-05 2021-09-09 Xerox Corporation Indoor positioning system for a mobile electronic device
US20210279907A1 (en) * 2020-03-05 2021-09-09 Xerox Corporation Methods and systems for sensing obstacles in an indoor environment
CN113570664A (en) * 2021-07-22 2021-10-29 北京百度网讯科技有限公司 Augmented reality navigation display method and device, electronic equipment and computer medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190354781A1 (en) * 2018-05-17 2019-11-21 GM Global Technology Operations LLC Method and system for determining an object location by using map information
CN111340766B (en) * 2020-02-21 2024-06-11 北京市商汤科技开发有限公司 Target object detection method, device, equipment and storage medium
CN114061586A (en) * 2021-11-10 2022-02-18 北京有竹居网络技术有限公司 Method and product for generating navigation path of electronic device


Also Published As

Publication number Publication date
WO2023082985A1 (en) 2023-05-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination