CN111739099B - Fall prevention method and device, and electronic equipment

Fall prevention method and device, and electronic equipment

Info

Publication number
CN111739099B
Authority
CN
China
Prior art keywords
image
result
road
trafficability
predicted
Prior art date
Legal status
Active
Application number
CN202010698510.6A
Other languages
Chinese (zh)
Other versions
CN111739099A (en)
Inventor
陈波
支涛
Current Assignee
Beijing Yunji Technology Co Ltd
Original Assignee
Beijing Yunji Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yunji Technology Co Ltd
Priority to CN202010698510.6A
Publication of CN111739099A
Application granted
Publication of CN111739099B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a fall prevention method, a fall prevention device, and an electronic device. The method comprises the following steps: acquiring an image to be detected, wherein the image to be detected is a local area of a first image; the first image is obtained by capturing the application environment the robot is about to pass through, and the local area is the position area, in the first image, of a first road included in the application environment; that is, the image to be detected is the image corresponding to the first road. The image to be detected is input into a pre-constructed trafficability prediction model, and the model obtains a predicted trafficability result based only on the features of the first road. Compared with the case where the image to be detected also contains images of roads the robot will not pass through, or images of non-road content, the result is more accurate, so the robot is prevented from falling.

Description

Fall prevention method and device, and electronic equipment
Technical Field
The application relates to the technical field of deep learning, in particular to a method and a device for preventing falling and electronic equipment.
Background
The robot may fall down during operation, for example, the robot may fall down from stairs when it travels along a path in front of the stairs. Once the robot falls, the robot may be damaged due to its high running speed and great weight.
Therefore, how to avoid the falling of the robot is an urgent problem to be solved by those skilled in the art.
Disclosure of Invention
In view of this, the present application provides a method and an apparatus for preventing a robot from falling, and an electronic device, so as to overcome the problem that a robot falls in the prior art.
In order to achieve the above purpose, the present application provides the following technical solutions:
a fall prevention method applied to a robot, the method comprising:
acquiring an image to be detected, wherein the image to be detected is a local area of a first image; the first image is obtained by acquiring an application environment through which the robot passes, and the local area is a position area of a first road included in the application environment in the first image;
inputting the image to be detected into a pre-constructed predicted trafficability model, and obtaining a predicted trafficability result through the predicted trafficability model, wherein the predicted trafficability result is an impassable result representing that the robot falls off if passing through the first road, or a passable result representing that the robot will not fall off if passing through the first road;
determining whether to pass the first road based on the predicted trafficability result.
A fall prevention device comprising:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring an image to be detected, and the image to be detected is a local area of a first image; the first image is obtained by acquiring an application environment through which the robot passes, and the local area is a position area of a first road included in the application environment in the first image;
the input module is used for inputting the image to be detected to a pre-constructed predicted trafficability model, and obtaining a predicted trafficability result through the predicted trafficability model, wherein the predicted trafficability result is an impassable result representing that the robot falls off if passing through the first road, or a passable result representing that the robot will not fall off if passing through the first road;
a determination module for determining whether to pass the first road based on the predicted trafficability result.
An electronic device, comprising:
a memory for storing a program;
a processor configured to execute the program, the program specifically configured to:
acquiring an image to be detected, wherein the image to be detected is a local area of a first image; the first image is obtained by acquiring an application environment through which the robot passes, and the local area is a position area of a first road included in the application environment in the first image;
inputting the image to be detected into a pre-constructed predicted trafficability model, and obtaining a predicted trafficability result through the predicted trafficability model, wherein the predicted trafficability result is an impassable result representing that the robot falls off if passing through the first road, or a passable result representing that the robot will not fall off if passing through the first road;
determining whether to pass the first road based on the predicted trafficability result.
A readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the fall prevention method according to any one of the preceding claims.
According to the technical scheme, in the fall prevention method, an image to be detected is acquired, wherein the image to be detected is a local area of a first image; the first image is obtained by capturing the application environment the robot is about to pass through, and the local area is the position area, in the first image, of a first road included in the application environment; that is, the image to be detected is the image corresponding to the first road. The image to be detected is input into a pre-constructed trafficability prediction model, and the model obtains a predicted trafficability result based only on the features of the first road. Compared with the case where the image to be detected also contains images of roads the robot will not pass through, or images of non-road content, the result is more accurate, so the robot is prevented from falling and the running safety of the robot is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of an implementation manner of a fall prevention method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a process of mapping any point in an application environment to a corresponding point in a first image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of one implementation of a robot traveling in a navigation path according to an embodiment of the present disclosure;
fig. 5a is a schematic diagram of acquiring an application scene to obtain a first image when an acquisition perspective provided by the embodiment of the present application is a first acquisition perspective;
fig. 5b is a schematic diagram of acquiring an application scene to obtain a first image when an acquisition perspective provided by the embodiment of the present application is a second acquisition perspective;
FIG. 6 is a flow chart of one implementation of a neural network training process provided by an embodiment of the present application;
fig. 7 is a block diagram of one implementation of a fall prevention device provided in an embodiment of the present application;
fig. 8 is a block diagram of an implementation manner of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application provides a method, a device, equipment and a storage medium for preventing falling.
Before describing the fall prevention method provided by the embodiment of the present application in detail, a brief description is given here to an implementation environment related to the embodiment of the present application.
Fig. 1 is a schematic diagram of an implementation environment provided in the embodiment of the present application. The falling prevention method provided by the embodiment of the application can be applied to the robot in fig. 1. The implementation environment includes: a robot 11 and an electronic device 12.
The robot 11 may be any robot having shooting and navigation functions, for example, a service robot such as a sweeping robot or a meal delivery robot.
Fig. 1 illustrates an example of a food delivery robot as a robot 11.
Optionally, the electronic device 12 may be a server, which may be one server, a server cluster composed of several servers, or a cloud computing service center.
It should be noted that fig. 1 illustrates the electronic device 12 as a server, but the electronic device 12 is not limited to be a server.
Alternatively, the robot 11 and the electronic device 12 may establish a connection and communicate via a wireless network.
The robot 11 is configured to acquire an image of a forward path and transmit the acquired image to the electronic device 12.
For example, the robot 11 captures an image of the front path in real time by a camera while traveling.
And the electronic device 12 is configured to obtain a predicted trafficability result of the front route based on the fall prevention method provided by the embodiment of the application, and feed the predicted trafficability result back to the robot 11.
And the robot 11 is used for executing corresponding behaviors based on the predicted trafficability result.
In an alternative embodiment, the robot 11 is also used to acquire sensor data, such as readings from a lidar and/or a depth camera, and transmit the sensor data to the electronic device 12. The electronic device 12 is further configured to obtain a target trafficability result based on the fall prevention method provided in the embodiment of the present application in combination with the sensor data, and feed the target trafficability result back to the robot 11, so as to implement navigation and prevent the robot 11 from falling.
The embodiment of the application also provides another implementation environment, which relates to a robot; a robot 11 for acquiring an image of a forward path; obtaining a predicted trafficability result of a front path based on the falling prevention method provided by the embodiment of the application; and performing a corresponding action based on the predicted trafficability result.
In an optional embodiment, the robot is further configured to acquire sensor data; the robot obtains a target trafficability result by combining sensor data based on the falling prevention method provided by the embodiment of the application, and executes corresponding behaviors based on the target trafficability result.
The fall prevention method, apparatus, and electronic device provided by the present application will be described below with reference to the above embodiments.
As shown in fig. 2, a flowchart of an implementation manner of a fall prevention method provided by an embodiment of the present application is provided, where the method includes:
step S201: and acquiring an image to be detected.
The image to be detected is a local area of the first image; the first image is obtained by acquiring an application environment to be passed by the robot, and the local area is a position area of a first road included in the application environment in the first image.
For the image to be measured, in this embodiment of the application, the first image is referred to as an image obtained by acquiring an application environment through which the robot will pass, and the first road is a road included in the application environment through which the robot will pass.
In an alternative embodiment, the manner of acquiring the image to be measured includes, but is not limited to, shooting with a mobile phone, a camera, a tablet, a computer, or other equipment equipped with a camera.
In an alternative embodiment, the application environment refers to the environment the robot is about to travel through. For example, in addition to the first road the robot is about to travel on, the application environment may include scenery (non-road), such as trees; it may also include obstacles (non-road), such as walls; it may also include a road A that the robot does not pass through, such as stairs. The content included in the application environment depends on the actual situation, which is not limited in the embodiment of the present application.
In an optional embodiment, a mapping relation exists between the application environment in the world coordinate system and the first image in the two-dimensional coordinate system, and the application environment in the three-dimensional space can be mapped into the first image in the two-dimensional space through the mapping relation; since the application environment contains the first road, the first road is also mapped to a local area on the first image.
It will be appreciated that other content contained in the application environment, such as scenery and/or obstacles and/or roads that the robot does not pass through, may also be mapped to the corresponding location area of the first image. However, in the embodiment of the present application, the image to be measured only includes the location area of the first road in the first image. The amount of data is reduced compared to the entire first image.
Step S202: and inputting the image to be detected into a pre-constructed predicted trafficability model, and obtaining a predicted trafficability result through the predicted trafficability model.
The predicted trafficability result is an impassable result representing that the robot falls off when passing through the first road, or a trafficable result representing that the robot does not fall off when passing through the first road.
In an alternative embodiment, the pre-constructed predictive trafficability model is obtained by training a neural network by using a plurality of sample images as input of the neural network. The sample images include positive sample images and negative sample images.
In an alternative embodiment, the sample image and the image to be measured are obtained in the same process, including: mapping the application environment of the three-dimensional space in the world coordinate system into a second image of the two-dimensional space, and acquiring a position area of a second road included in the application environment, which is mapped to the second image, wherein the image corresponding to the position area is the sample image. For the sample image, the second image is referred to as an image obtained by acquiring an application environment through which the robot will pass in the embodiment of the application, and the second road is a road included in the application environment through which the robot will pass.
The second road is known to be passable or impassable in advance, so the sample image can be marked, if the second road included in the sample image is passable, that is, the robot does not fall off when passing through the second road, the actual marking result of the mark is the passable result that the robot does not fall off when passing through the second road, and the sample image is called as a positive sample image in the embodiment of the application; if the second road included in the sample image is impassable, that is, the robot falls off when passing through the second road, the marked actual result is the impassable result that the robot falls off when passing through the second road, and such sample image is referred to as a negative sample image in the embodiment of the application.
Step S203: determining whether to pass the first road based on the predicted trafficability result.
It can be understood that the image to be measured only includes the image corresponding to the position area of the first road in the first image, so the accuracy of the trafficability prediction model is improved, for the following reasons:
if the application environment comprises a road which the robot does not pass through, if the image to be detected comprises an image corresponding to a road A which the robot does not pass through and an image corresponding to a first road, then, because the predicted trafficability model does not know whether the robot passes through the road A or the first road, the obtained predicted trafficability result may not be the predicted trafficability result corresponding to the first road during prediction.
For example, if the predicted trafficability results corresponding to the first road and road A are the same, a single predicted trafficability result may be output; if they are different, the predicted trafficability model may fail to output a result corresponding to the first road, or may even output the predicted trafficability result corresponding to road A, and the robot may then perform a behavior based on that result, leading to erroneous behavior. For example, if the first road is a staircase and road A is a flat road, and the passable result corresponding to road A is output by the trafficability prediction model, the robot may fall when it passes through the first road based on that passable result.
If the application environment includes scenery or obstacles, and the image to be detected includes both the image corresponding to the first road and the image corresponding to the scenery or obstacles, then when the trafficability prediction model extracts features from the image to be detected, the extracted features include not only the features of the first road but also the features of the scenery or obstacles. The features of the scenery or obstacles interfere with the judgment of the trafficability prediction model, so the predicted trafficability result output by the model is inaccurate.
In the fall prevention method provided by the embodiment of the present application, an image to be detected is acquired, wherein the image to be detected is a local area of a first image; the first image is obtained by capturing the application environment the robot is about to pass through, and the local area is the position area, in the first image, of the first road included in the application environment; that is, the image to be detected is the image corresponding to the first road. The image to be detected is input into the pre-constructed trafficability prediction model, and the model obtains the predicted trafficability result based only on the features of the first road. Compared with the case where the image to be detected also contains images of roads the robot will not pass through, or images of non-road content, the result is more accurate, thereby improving the running safety of the robot.
In order to make the person skilled in the art understand the mapping relationship between the first road included in the application environment and the local region in the first image in step 201, the mapping process between any position point P in the application environment and the corresponding point P' in the first image in the two-dimensional space is described below as an example.
Fig. 3 is a schematic diagram illustrating a process of mapping any point in the application environment to a corresponding point in the first image according to the embodiment of the present application.
The process of the robot shooting the application environment through the camera to obtain the first image can be simplified to a simple form, namely an aperture 30 and a physical imaging plane 31, the aperture represents the lens of the camera, the physical imaging plane is the plane bearing the first image, the aperture is located between the physical imaging plane and the application environment, and any light from the real world can reach the physical imaging plane 31 only through the aperture.
The three-dimensional coordinate system corresponding to the camera is a coordinate system with an optical center O as an origin and coordinate axes X, Y and Z, and the three-dimensional coordinate system is used for describing the spatial position of the position point P from the angle of the lens; the two-dimensional coordinate system corresponding to the physical imaging plane 31 is a coordinate system with the center o' of the physical imaging plane as an origin and coordinate axes of x and y.
Assume that point P is any position point in the application environment, with coordinates (X, Y, Z) in the three-dimensional coordinate system. A ray from P passes through the optical center of the lens and reaches an image point P' = (x, y) on the physical imaging plane. Since the optical axis is perpendicular to the physical imaging plane, the coordinates of the image point P' in the three-dimensional coordinate system are (x, y, f), where f is the distance between the optical center and the physical imaging plane, called the focal length. Then, according to the triangle similarity principle, as shown in the right diagram of fig. 3,
x / X = y / Y = f / Z,
which rearranges to
x = f·X / Z,  y = f·Y / Z.
Although the imaging process generally takes the center of the physical imaging plane as the origin of the two-dimensional coordinate system, it is customary in image processing to take the upper left corner of the physical imaging plane as the origin of the two-dimensional coordinate system. Therefore, the two-dimensional coordinates need to be appropriately translated and scaled to transform them into the pixel coordinate system. Let the axis of the pixel coordinate system in the horizontal direction be μ and the axis in the vertical direction be ν. The coordinates (x, y) of the physical imaging plane are scaled by α times in the horizontal direction and by β times in the vertical direction while being translated by (Cx, Cy); the coordinates (μ, ν) of the pixel coordinate system can then be obtained by the following formula:
μ = α·x + Cx,  ν = β·y + Cy.
Substituting x = f·X / Z and y = f·Y / Z, the formula written in homogeneous coordinates is:
[μ, ν, 1]ᵀ = (1/Z) · K · [X, Y, Z]ᵀ.
Defining K as the intrinsic parameter matrix, then
K = [α·f, 0, Cx; 0, β·f, Cy; 0, 0, 1].
Because the three-dimensional coordinate system with the optical center as the origin describes the position point P from the viewpoint of the lens, the position coordinates of point P change as the lens position changes and are not stable. To better describe the position of point P in three-dimensional space, a world coordinate system is introduced, under which the position of point P is fixed; the coordinates of point P in the world coordinate system are recorded as
Pw = (Xw, Yw, Zw)ᵀ.
The coordinate transformation between the world coordinate system and the three-dimensional coordinate system with the optical center as the origin can be represented by a rotation matrix R and a translation vector t:
Pc = R·Pw + t,
where R is a 3x3 rotation matrix and t is a 3x1 translation vector. Written in homogeneous coordinates this takes the form
[Pc; 1] = [R, t; 0, 1] · [Pw; 1].
Noting the extrinsic parameter matrix as T, then
T = [R, t; 0, 1].
By combining the intrinsic parameter matrix and the extrinsic parameter matrix, any point P in the application environment can be mapped to a point P' on the physical imaging plane; the pixel coordinates (μ, ν) of P' on the first image and the world coordinates (Xw, Yw, Zw) of point P satisfy the matrix conversion relation
Z · [μ, ν, 1]ᵀ = K · T · [Xw, Yw, Zw, 1]ᵀ.
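To make this relation concrete, the following is a minimal sketch (not part of the original disclosure) of projecting a world-coordinate point to pixel coordinates; the numerical values of K, R and t are hypothetical placeholders.

```python
import numpy as np

# Assumed (hypothetical) intrinsics: alpha*f, beta*f, principal point (Cx, Cy)
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed (hypothetical) extrinsics: rotation R and translation t between the
# world coordinate system and the camera coordinate system
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])
T = np.hstack([R, t.reshape(3, 1)])          # 3x4 matrix [R | t]

def world_to_pixel(P_w):
    """Map a world-coordinate point (Xw, Yw, Zw) to pixel coordinates (u, v)."""
    P_w_h = np.append(np.asarray(P_w, dtype=float), 1.0)   # homogeneous world point
    P_c = T @ P_w_h                                        # camera coordinates
    uv_h = K @ P_c                                         # Z * [u, v, 1]
    return uv_h[:2] / uv_h[2]                              # divide out the depth Z

# Example: a point 2 m in front of and 0.5 m below the camera
print(world_to_pixel([0.0, 0.5, 2.0]))
```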
Therefore, the specific implementation process of acquiring the image to be measured in step S201 is as follows:
mapping the acquired application environment to a physical imaging plane based on a mapping relation between a world coordinate system and the physical imaging plane to obtain a first image; determining a location area where the first road included in the application environment is mapped to the physical imaging plane to obtain the local area in the first image.
It can be understood that, through the above matrix conversion relationship, the three-dimensional coordinate sets of the first road included in the application environment in the world coordinate system may be mapped onto the physical imaging plane one by one, so as to obtain the pixel coordinate set of the three-dimensional coordinate set on the physical imaging plane, where the region formed by the pixel coordinate set is the above local region.
In an alternative embodiment, determining the location area where the first road maps to the physical imaging plane is implemented by:
the method comprises the following steps: and determining a three-dimensional coordinate set of the first road in a three-dimensional space based on the navigation path of the robot, the current address position of the robot and the collection visual angle for collecting the application scene, wherein the three-dimensional coordinate set comprises at least one three-dimensional coordinate.
In an alternative embodiment, the three-dimensional space mentioned in the first step may be a three-dimensional space in a world coordinate system.
In an alternative embodiment, the robot may locate its own geographic location to obtain the "current address location" mentioned above.
In an optional embodiment, the robot may collect the application scene through a camera carried by the robot, and the "collecting view angle for collecting the application scene" is a collecting view angle of the camera.
Fig. 4 is a schematic diagram of an implementation manner of a robot traveling in a navigation path according to an embodiment of the present application.
It can be understood that the position area of the first road included in the obtained first image is different when the shooting angle of the robot camera is different, that is, the collecting angle of view is different.
The field angle FOV range of the camera of the robot is shown by the dotted line in fig. 4, and the first image corresponding to the application scene 41 shown in fig. 4 can be captured by the camera.
As shown in fig. 5a and 5b, fig. 5a is a first image obtained by collecting the application scene 41 when the collection perspective is a first collection perspective, and a position area where a first road is located included in the first image is a position area 51 (shown by a dashed square frame in fig. 5 a); fig. 5b is a first image obtained by collecting the application scene 41 when the collection angle of view is the second collection angle of view, and a location area where the first road is located included in the first image is a location area 52 (as shown by a dashed box in fig. 5 b).
The first position area 51 and the second position area 52 are different in position area in the first image.
There are various implementations of determining the three-dimensional coordinate set of the first road in the three-dimensional space provided in the embodiments of the present application, and the embodiments of the present application provide, but are not limited to, the following.
The first method comprises the following steps: taking any point of the first road in the application environment 41 as a circle center, and taking the first preset distance as a radius, obtaining three-dimensional coordinates of each position point included in the circular area to obtain a three-dimensional coordinate set.
And the second method comprises the following steps: the method includes the steps of obtaining three-dimensional coordinates of each point included in a preset range with any point of a first road in an application environment 41 as a center to obtain a three-dimensional coordinate set, where the shape of the preset range may be a square shape, a diamond shape, a polygon shape, and the like, and the embodiment of the present application does not limit this.
Step two: acquiring a two-dimensional coordinate set mapped to the physical imaging plane by the three-dimensional coordinate set, wherein the two-dimensional coordinate set comprises at least one two-dimensional coordinate; and the position area is an area corresponding to the two-dimensional coordinate set.
And mapping all points contained in the three-dimensional coordinate set to the physical imaging plane one by one through a matrix conversion relation between the pixel coordinate in the physical imaging plane and the world coordinate system, wherein the obtained two-dimensional coordinate set is the pixel coordinate set, and thus, the image to be detected can be obtained.
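As an illustrative sketch of steps one and two, the road points around a navigation-path point could be sampled, projected with a world-to-pixel mapping such as the one above, and the enclosing pixel region cropped from the first image. The circular sampling, the ground-plane orientation and all helper names are assumptions for illustration only.

```python
import numpy as np

def road_region_crop(first_image, path_point_w, world_to_pixel, radius=0.5, step=0.05):
    """Crop the local area of the first image corresponding to the first road.

    path_point_w: a 3D road point on the navigation path (world coordinates).
    world_to_pixel: projection function built from K and T (see sketch above).
    radius: assumed radius (in metres) of the circular road region around the point.
    """
    # Step one: sample 3D road points in a circular area around the path point
    # (assumes the ground plane is spanned by the X and Z axes of the world frame)
    offsets = np.arange(-radius, radius + step, step)
    pts_w = [path_point_w + np.array([dx, 0.0, dz])
             for dx in offsets for dz in offsets
             if dx * dx + dz * dz <= radius * radius]

    # Step two: map every 3D point to the imaging plane to get the pixel coordinate set
    pix = np.array([world_to_pixel(p) for p in pts_w])

    # The location area is the region covered by the pixel coordinate set;
    # here its bounding box is taken as the image to be detected
    h, w = first_image.shape[:2]
    u0, v0 = np.clip(pix.min(axis=0).astype(int), 0, [w - 1, h - 1])
    u1, v1 = np.clip(pix.max(axis=0).astype(int), 0, [w - 1, h - 1])
    return first_image[v0:v1 + 1, u0:u1 + 1]
```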
The following describes a neural network training process included in the predictive traffic model in the embodiment of the present application, and as shown in fig. 6, is a flowchart of an implementation manner of the neural network training process provided in the embodiment of the present application.
Step S601: acquiring a plurality of sample images, wherein different sample images correspond to different positions in a navigation path of the robot, and one sample image is a local area of a second image; the second image is obtained by acquiring the application environment corresponding to the corresponding position in the navigation path, and the local area of the second image is the position area of the second road in the second image, which is included in the application environment corresponding to the corresponding position in the navigation path.
For the sample image, the second image is referred to as an image obtained by acquiring an application environment through which the robot will pass in the embodiment of the application, and the second road is a road included in the application environment through which the robot will pass.
The manner of obtaining the second image based on the application environment is the same as the manner of obtaining the first image based on the application environment, and is not described herein again.
The process of obtaining the sample image from the second image is the same as the process of obtaining the image to be measured from the first image, and is not repeated here.
Optionally, the different sample images may be sample images shot by different robots at different positions, or sample images shot by robots located at the same position through different collection perspectives.
The sample images include positive sample images and negative sample images.
In an alternative embodiment, the acquisition of the positive sample image is as follows:
the method comprises the following steps: and setting the current mode as a first mode for automatically marking the actual result as a passable result.
Step two: determining a sample image acquired in the first mode as a positive sample image.
Step three: and setting the actual labeling result corresponding to the positive sample image as a passable result.
In an alternative embodiment, the navigation path of the robot may be set such that the navigation path does not include a road on which the robot can fall; or, the navigation path includes roads on which the robot can fall, but virtual walls are provided at the roads on which the robot can fall; the virtual wall functions to prevent the robot from passing a road where the robot can fall.
For example, if the navigation path of the robot includes a downward staircase, in order to prevent the robot from falling by passing onto the staircase, a virtual wall may be set at the staircase entrance to prevent the robot from passing. Alternatively, the navigation path of the robot is altered so that it does not include the entrance to the downward staircase.
By setting the first mode in this way, the actual labeling result does not need to be manually annotated for the positive sample images.
In an alternative embodiment, the negative sample image is acquired as follows:
the method comprises the following steps: and setting the current mode as a second mode for automatically marking the actual result as the unviable result.
Step two: determining a sample image acquired in the second mode as a negative sample image.
Step three: and setting the actual labeling result corresponding to the negative sample image as a non-passable result.
The navigation path corresponding to the negative sample image comprises a road on which the robot falls.
In an alternative embodiment, the robot may be manually pushed to a position in front of a road where it can fall, so that the robot can capture negative sample images.
By setting the second mode, the actual labeling result does not need to be manually annotated for the negative sample images.
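A minimal sketch of how the two acquisition modes could attach the actual labeling result automatically; the mode names and label values (0 = passable, 1 = impassable) are assumptions rather than part of the original disclosure.

```python
from enum import Enum

class AcquisitionMode(Enum):
    PASSABLE = 0      # first mode: the navigation path contains no fall-risk road
    IMPASSABLE = 1    # second mode: the robot is placed in front of a fall-risk road

def label_sample(image, mode):
    """Return (sample image, actual labeling result) without manual annotation."""
    # Every image captured while a mode is active inherits that mode's label
    return image, mode.value
```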
Step S602: and taking a plurality of sample images as the input of a neural network, taking the comparison result of the predicted trafficability result corresponding to each of the plurality of sample images output by the neural network and the labeled actual result corresponding to the corresponding sample image as a loss function, and training the neural network to obtain the predicted trafficability model.
The actual labeling result corresponding to one sample image comprises an impassable result representing that the robot falls off if passing through the second road, or a passable result representing that the robot will not fall off if passing through the second road.
In an alternative embodiment, the training end condition can be reached quickly by training the neural network with the loss function by means of transfer learning.
In an alternative embodiment, a pre-constructed residual error network Resnet18 is selected as a neural network for training to obtain the predicted trafficability model.
The residual error network is obtained by learning the residual errors between the input and the output of a plurality of parameter network layers. Assuming that the input is x and a certain network layer with parameters is set as H, the output of the layer with x as the input will be H (x), and the difference H (x) -x between the input and the output is recorded as residual error.
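For illustration, a minimal residual unit could look like the following sketch (batch normalization and downsampling are omitted); it is not the exact block used in Resnet18, only the residual idea H(x) = F(x) + x.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual unit: the parameter layers learn F(x) = H(x) - x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        fx = self.conv2(self.relu(self.conv1(x)))   # F(x): the learned residual
        return self.relu(fx + x)                    # H(x) = F(x) + x
```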
The residual network Resnet18 includes convolutional layers and fully connected layers.
The convolutional layers are used to extract feature vectors of the corresponding image data in the sample image. Through convolution operations, different features of the sample image can be extracted and the dimensionality of the feature vector can be reduced. Except for the first convolutional layer, which adopts a 7 × 7 convolution kernel to extract general features of the input image, the other layers adopt 3 × 3 convolution kernels to further extract deeper image features of the input sample image. It should be noted that the 3 × 3 convolution kernel reduces the number of parameters, simplifies the calculation, and better preserves the properties of the sample image.
The fully connected layers (fc) integrate the image features extracted by the convolutional layers into global features in the form of a weighted sum and output them in the form of a one-dimensional feature vector.
Since the trafficability prediction model needs to predict the input sample image to obtain a predicted trafficability result, it acts as a binary classifier, so the output type of the fully connected layer is set to 2, i.e. the model can output a passable result or an impassable result.
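The following sketch shows one way to obtain such a two-class model, under the assumption that PyTorch/torchvision is used; the original disclosure does not name a framework, so the API calls and weight names are assumptions.

```python
import torch.nn as nn
from torchvision import models

def build_trafficability_model():
    """ResNet18 backbone with a 2-way fully connected head (passable / impassable)."""
    # Transfer learning: start from ImageNet weights (argument spelling varies by
    # torchvision version; "weights=..." in recent releases, "pretrained=True" in older ones)
    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 2)   # output type of the fc layer set to 2
    return model
```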
After a plurality of sample images are input into the Resnet18 network, the image features of the sample images are extracted and calculated by the convolutional layers contained in the neural network, and the results are classified by the fully connected layer and then output to obtain predicted trafficability results. The obtained predicted trafficability result is compared with the actual labeling result corresponding to the sample image, and the comparison result is used as the loss function to retrain the Resnet18 network to obtain the predicted trafficability model. The predicted trafficability result output by the model is recorded as ŷ, and the labeled actual result is recorded as y.
The comparison result of the predicted trafficability result and the labeled actual result corresponding to the sample image is expressed in many forms, for example, the difference between two corresponding values.
In binary classification, the common loss function is the 0-1 loss function, i.e. taking 0 for a correctly classified estimate and 1 for an incorrectly classified estimate:
L(y, ŷ) = 0 if ŷ = y, and L(y, ŷ) = 1 if ŷ ≠ y.
However, the 0-1 loss function is a discontinuous piecewise function, which is not favorable for solving the minimization problem, so a proxy loss can be constructed in application. A proxy loss is a loss function consistent with the original loss function, and the model parameters obtained by minimizing the proxy loss are also a solution of the original loss function. Common proxy losses are: the hinge loss function,
L(y, ŷ) = max(0, 1 − y·ŷ);
the cross-entropy loss function,
L(y, ŷ) = −[y·log ŷ + (1 − y)·log(1 − ŷ)];
and the exponential loss function,
L(y, ŷ) = exp(−y·ŷ).
The simplest 0-1 loss function is taken as an example to illustrate how the loss function is used to train the neural network. Assume that when the predicted trafficability result obtained by the neural network is a passable result, ŷ is 0, and when the predicted trafficability result is an impassable result, ŷ is 1. Combining the actual labeling result y corresponding to the sample image, the loss function
L(y, ŷ) = 0 if ŷ = y, otherwise 1
is calculated, the numerical result is taken as the comparison result mentioned above, and the neural network is trained, i.e. the parameters in the neural network are updated, until the training end condition is reached, so as to obtain a predicted trafficability model with more accurate prediction.
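A minimal training-loop sketch under the same PyTorch assumption, using cross-entropy as the proxy loss discussed above; the data loader, optimizer choice, learning rate and epoch count are assumptions.

```python
import torch
import torch.nn as nn

def train(model, sample_loader, epochs=10, lr=1e-4, device="cpu"):
    """Train the trafficability model on labeled samples (0 = passable, 1 = impassable)."""
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()            # proxy loss for the 0-1 loss
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in sample_loader:     # labels are the actual labeling results
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```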
In an optional embodiment, a pre-constructed classical ImageNet feature extraction network, such as a VGG16 network, can also be selected as the neural network for training to obtain the predictive trafficability model.
In an optional embodiment, in order to enable the robot to pass through the first road more safely, a ranging sensor, such as a lidar and/or a depth camera, also needs to be combined to determine whether a fall area exists on the first road. The specific implementation process is as follows:
the method comprises the following steps: acquiring a data set, wherein the data set comprises first data and/or second data, and the first data is a reading of a laser radar contained in the robot for the first road; the second data is a reading of the first road by a depth camera included with the robot.
Here, "the first data and/or the second data" means any one of: the first data alone, the second data alone, or both the first data and the second data.
A lidar is a sensor for obtaining accurate position information; it can determine the position, size, and the like of a target object, and consists of a transmitting system, a receiving system, and an information processing part. Its working principle is to send a detection signal (a laser beam) toward the target object and then compare the signal reflected by the target (the target echo) with the emitted signal; after appropriate processing, relevant information about the target object can be obtained, such as distance, direction, height, speed, posture, and even shape, so that the target object can be detected, tracked, and identified. In robot applications, a lidar is often used to emit infrared light to detect whether there is an obstacle on the path ahead: when there is a difference between the emitted signal and the received signal, i.e. when the lidar has a reading, there is an obstacle ahead and passage should be avoided.
It can be understood that obtaining the first data means obtaining the reading of the lidar for the first road. A reading from the lidar indicates that an obstacle exists on the first road and the road cannot be passed; no reading indicates that no obstacle exists on the first road and it can be safely passed.
The depth camera, also called a 3D camera, can detect the depth distance of the shooting space, i.e., can be used for depth measurement, thereby more conveniently and accurately sensing the surrounding environment and changes. There are three types of depth cameras in common use today: binocular depth cameras, structured light depth cameras, and tof (time of flight) depth cameras. The binocular depth camera measures the arrangement similar to that of two eyes of a human, observes the same environment through two 2D cameras with calibrated positions, matches characteristic points according to image content and further calculates the depth; the structured light depth camera actively projects structured light with special texture features, and depth measurement is carried out from feature deformation in feedback; the Tof depth camera performs depth calculation by actively projecting laser light, calculating the time of flight from emission to reception of the light.
In the embodiment of the present application, the depth camera is used to measure the depth of the first road and detect whether the first road has a region with a certain depth, such as a downward staircase or a pit. When the depth camera has a reading, the first road is characterized as having a region with a certain depth and is impassable; when there is no reading, the first road is characterized as having no region with a certain depth and can be passed.
Step two: determining a target trafficability result based on the data set and the predicted trafficability result.
It should be noted that the predicted trafficability result here is a predicted trafficability result obtained by inputting the image to be measured into the predicted trafficability model.
When the predicted trafficability result is a passable result and the lidar and/or the depth camera have no reading, the target trafficability result is determined to be a passable result; when the predicted trafficability result is an impassable result and the lidar and/or the depth camera have a reading, the target trafficability result is determined to be an impassable result.
When the predicted trafficability result is a passable result but the lidar and/or the depth camera have a reading, the target trafficability result is determined to be an impassable result.
When the predicted trafficability result is an impassable result but the lidar and/or the depth camera have no reading, the target trafficability result is determined to be a passable result.
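The combination rules above could be expressed as the following sketch; the boolean convention (True means passable / sensor has a reading) and all names are assumptions.

```python
def target_trafficability(predicted_passable, lidar_has_reading, depth_has_reading):
    """Fuse the model prediction with ranging-sensor readings (rules of step two).

    Returns (target_passable, prediction_was_wrong); the second flag marks images
    that can be relabeled with the target result and used for retraining.
    """
    sensor_blocked = lidar_has_reading or depth_has_reading
    target_passable = not sensor_blocked          # the sensor reading decides the target result
    prediction_was_wrong = (predicted_passable != target_passable)
    return target_passable, prediction_was_wrong
```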
When the predicted trafficability result is inconsistent with the readings of the ranging sensor, the result predicted by the trafficability prediction model can be considered wrong, i.e. the predicted trafficability result obtained by the model for the image to be measured is wrong.
In an optional embodiment, the image to be measured can be labeled through a target trafficability result, and the predicted trafficability model is trained by taking a comparison result of the predicted trafficability result and the target trafficability result as a loss function. Thereby making the output of the predictive traffic model more accurate.
Optionally, according to the target trafficability result of the image to be measured, the image to be measured is used as a sample image and added to the training set of the trafficability prediction model, and the model is trained again, so that the output of the trafficability prediction model becomes more and more accurate.
Step three: determining whether to pass the first road based on the target trafficability result.
It can be understood that when the target trafficability result is a trafficable result, the first road is characterized by having no barrier and no falling area, and can be safely trafficked; when the target trafficability result is an impassable result, the first road is characterized to have either an obstacle or a falling area, and the first road is impassable.
The method is described in detail in the embodiments disclosed in the present application, and the method of the present application can be implemented by various types of apparatuses, so that an apparatus is also disclosed in the present application, and the following detailed description is given of specific embodiments.
As shown in fig. 7, which is a block diagram of an implementation manner of a fall prevention device provided in an embodiment of the present application, the device includes:
a first obtaining module 71, configured to obtain an image to be detected, where the image to be detected is a local area of a first image; the first image is obtained by acquiring an application environment to be passed by the robot, and the local area is a position area of a first road included in the application environment in the first image.
The input module 72 is configured to input the image to be detected to a pre-constructed predicted trafficability model, and obtain a predicted trafficability result through the predicted trafficability model, where the predicted trafficability result is an impassable result that indicates that the robot will fall off if passing through the first road, or a passable result that indicates that the robot will not fall off if passing through the first road.
A determining module 73, configured to determine whether to pass through the first road based on the predicted trafficability result.
Optionally, the first obtaining module includes:
the mapping unit is used for mapping the acquired application environment to a physical imaging plane so as to obtain the first image;
a first determining unit, configured to determine a location area where the first road is mapped to the physical imaging plane, so as to obtain the local area in the first image.
Optionally, the first obtaining module further includes:
the second determining unit is used for determining a three-dimensional coordinate set of the first road in a three-dimensional space based on a navigation path of the robot, the current address position of the robot and a collecting visual angle for collecting the application scene, wherein the three-dimensional coordinate set comprises at least one three-dimensional coordinate;
a first obtaining unit, configured to obtain a two-dimensional coordinate set in which the three-dimensional coordinate set is mapped to the physical imaging plane, where the two-dimensional coordinate set includes at least one two-dimensional coordinate; and the position area is an area corresponding to the two-dimensional coordinate set.
Optionally, the method further includes:
the second acquisition module is used for acquiring a plurality of sample images, different sample images correspond to different positions in a navigation path of the robot, and one sample image is a local area of the second image; the second image is obtained by acquiring an application environment corresponding to a corresponding position in the navigation path, and the local area of the second image is a position area of a second road in the second image, wherein the second road is included in the application environment corresponding to the corresponding position in the navigation path;
the training module is used for taking a plurality of sample images as input of a neural network, taking a comparison result of predicted trafficability results respectively corresponding to the plurality of sample images output by the neural network and labeled actual results corresponding to the corresponding sample images as a loss function, and training the neural network to obtain the predicted trafficability model;
the actual labeling result corresponding to one sample image comprises an impassable result representing that the robot falls off if passing through the second road, or a passable result representing that the robot will not fall off if passing through the second road.
Optionally, the determining module includes:
the second acquisition unit is used for acquiring a data set, wherein the data set comprises first data and/or second data, and the first data is a reading of a laser radar contained in the robot for the first road; the second data is a reading of the first road by a depth camera included with the robot;
a third determination unit for determining a target trafficability result based on the data set and the predicted trafficability result;
a fourth determination unit configured to determine whether to pass the first road based on the target trafficability result.
Optionally, the determining module further includes:
the setting unit is used for setting the actual labeling result corresponding to the image to be detected as a target trafficability result if the target trafficability result is different from the predicted trafficability result;
and the training unit is used for training the predicted trafficability model by taking the comparison result of the predicted trafficability result and the target trafficability result as a loss function.
As shown in fig. 8, which is a structural diagram of an implementation manner of an electronic device provided in an embodiment of the present application, the electronic device includes:
a memory 81 for storing a program;
a processor 82 configured to execute the program, the program being specifically configured to:
acquiring an image to be detected, wherein the image to be detected is a local area of a first image; the first image is obtained by acquiring an application environment through which the robot passes, and the local area is a position area of a first road included in the application environment in the first image;
inputting the image to be detected into a pre-constructed predicted trafficability model, and obtaining a predicted trafficability result through the predicted trafficability model, wherein the predicted trafficability result is an impassable result representing that the robot falls off if passing through the first road, or a passable result representing that the robot will not fall off if passing through the first road;
determining whether to pass the first road based on the predicted trafficability result.
The processor 82 may be a central processing unit CPU or an Application Specific Integrated Circuit (ASIC).
The electronic device may further comprise a communication interface 83 and a communication bus 84, wherein the memory 81, the processor 82 and the communication interface 83 communicate with each other via the communication bus 84.
Note that the features described in the embodiments in the present specification may be replaced with or combined with each other. For the device or system type embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A fall prevention method, applied to a robot, the method comprising:
acquiring an image to be detected, wherein the image to be detected is a local area of a first image; the first image is obtained by acquiring an application environment through which the robot passes, and the local area is the position area, in the first image, of a first road included in the application environment;
inputting the image to be detected, which contains the characteristics of the road to be passed, into a pre-constructed predicted trafficability model, and obtaining a predicted trafficability result through the predicted trafficability model, wherein the predicted trafficability result is an impassable result representing that the robot will fall off if it passes through the first road, or a passable result representing that the robot will not fall off if it passes through the first road; the predicted trafficability model obtains the predicted trafficability result based only on the characteristics of the first road;
determining whether to pass the first road based on the predicted trafficability result.
2. The fall prevention method according to claim 1, wherein the acquiring the image to be measured includes:
mapping the acquired application environment to a physical imaging plane to obtain the first image;
determining a location area where the first road maps to the physical imaging plane to obtain the local area in the first image.
3. The fall prevention method according to claim 2, wherein the determining the location area where the first road maps to the physical imaging plane comprises:
determining a three-dimensional coordinate set of the first road in three-dimensional space based on a navigation path of the robot, a current position of the robot and an acquisition view angle used when acquiring the application environment, wherein the three-dimensional coordinate set comprises at least one three-dimensional coordinate;
acquiring a two-dimensional coordinate set mapped to the physical imaging plane by the three-dimensional coordinate set, wherein the two-dimensional coordinate set comprises at least one two-dimensional coordinate; and the position area is an area corresponding to the two-dimensional coordinate set.
4. The fall prevention method according to any one of claims 1 to 3, further comprising:
acquiring a plurality of sample images, wherein different sample images correspond to different positions in a navigation path of the robot, and one sample image is a local area of a second image; the second image is obtained by acquiring an application environment corresponding to a corresponding position in the navigation path, and the local area of the second image is a position area of a second road in the second image, wherein the second road is included in the application environment corresponding to the corresponding position in the navigation path;
taking the plurality of sample images as the input of a neural network, taking the comparison result between the predicted trafficability results output by the neural network for the plurality of sample images and the labeled actual results corresponding to the respective sample images as the loss function, and training the neural network to obtain the predicted trafficability model;
the actual labeling result corresponding to one sample image comprises an impassable result representing that the robot will fall off if it passes through the second road, or a passable result representing that the robot will not fall off if it passes through the second road.
5. The fall prevention method according to claim 4, wherein the plurality of sample images include a positive sample image, further comprising:
setting the current mode to a first mode in which the actual labeling result is automatically marked as a passable result;
determining a sample image acquired in the first mode as a positive sample image;
and setting the actual labeling result corresponding to the positive sample image as a passable result.
6. The fall prevention method according to claim 4 or 5, wherein the plurality of sample images include a negative sample image, further comprising:
setting the current mode to a second mode in which the actual labeling result is automatically marked as an impassable result;
determining a sample image acquired in the second mode as a negative sample image;
and setting the actual labeling result corresponding to the negative sample image as a non-passable result.
7. The fall prevention method according to claim 1, wherein the determining whether to pass the first road based on the predicted trafficability result comprises:
acquiring a data set, wherein the data set comprises first data and/or second data; the first data is a reading of the first road from a laser radar included in the robot, and the second data is a reading of the first road from a depth camera included in the robot;
determining a target trafficability result based on the data set and the predicted trafficability result;
determining whether to pass the first road based on the target trafficability result.
8. The fall prevention method according to claim 7, further comprising:
if the target trafficability result is different from the predicted trafficability result, setting the actual labeling result corresponding to the image to be detected to the target trafficability result;
and training the predicted trafficability model by taking the comparison result of the predicted trafficability result and the target trafficability result as the loss function.
9. A fall prevention device, comprising:
a first acquisition module, used for acquiring an image to be detected, wherein the image to be detected is a local area of a first image; the first image is obtained by acquiring an application environment through which the robot passes, and the local area is the position area, in the first image, of a first road included in the application environment;
an input module, used for inputting the image to be detected, which contains the characteristics of the road to be passed, into a pre-constructed predicted trafficability model, and obtaining a predicted trafficability result through the predicted trafficability model, wherein the predicted trafficability result is an impassable result representing that the robot will fall off if it passes through the first road, or a passable result representing that the robot will not fall off if it passes through the first road; the predicted trafficability model obtains the predicted trafficability result based only on the characteristics of the first road;
a determination module for determining whether to pass the first road based on the predicted trafficability result.
10. An electronic device, comprising:
a memory for storing a program;
a processor configured to execute the program, the program specifically configured to:
acquiring an image to be detected, wherein the image to be detected is a local area of a first image; the first image is obtained by acquiring an application environment through which the robot passes, and the local area is the position area, in the first image, of a first road included in the application environment;
inputting the image to be detected, which contains the characteristics of the road to be passed, into a pre-constructed predicted trafficability model, and obtaining a predicted trafficability result through the predicted trafficability model, wherein the predicted trafficability result is an impassable result representing that the robot will fall off if it passes through the first road, or a passable result representing that the robot will not fall off if it passes through the first road; the predicted trafficability model obtains the predicted trafficability result based only on the characteristics of the first road;
determining whether to pass the first road based on the predicted trafficability result.
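As an illustration of the mapping recited in claims 2 and 3, a pinhole-camera projection of the first road's three-dimensional coordinate set onto the physical imaging plane could look like the following sketch. The intrinsics K, the world-to-camera rotation R and translation t, the NumPy representation, and the bounding-box helper are assumptions; the claims do not fix a particular camera model.

```python
import numpy as np

def project_road_to_image(points_3d, K, R, t):
    """points_3d: (N, 3) road coordinates in the world frame, derived from the navigation
    path, the robot's current position and the acquisition view angle.
    K: 3x3 intrinsics; R, t: world-to-camera rotation (3x3) and translation (3,).
    Returns the (N, 2) two-dimensional coordinate set on the physical imaging plane."""
    cam = R @ points_3d.T + t.reshape(3, 1)     # world frame -> camera frame
    uv = K @ cam                                # camera frame -> homogeneous pixel coords
    uv = uv[:2] / uv[2:3]                       # perspective division
    return uv.T

def location_area(points_2d):
    """The first road's location area in the first image, taken here as the axis-aligned
    box enclosing the projected two-dimensional coordinate set."""
    u_min, v_min = points_2d.min(axis=0)
    u_max, v_max = points_2d.max(axis=0)
    return int(u_min), int(v_min), int(u_max), int(v_max)
```

The box returned by location_area is one possible way to crop the image to be detected from the first image before it is fed to the predicted trafficability model.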
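Similarly, the training procedure of claims 4 to 6 can be illustrated with the following hedged sketch, in which the acquisition mode automatically labels each sample image and the comparison between the network output and that label serves as the loss. The cross-entropy criterion, the Adam optimizer and the helper names are illustrative choices, not requirements of the claims.

```python
import torch
import torch.nn as nn

PASSABLE, IMPASSABLE = 1, 0

def label_for_mode(mode):
    """Automatic labeling: samples collected in the first mode are positive (passable),
    samples collected in the second mode are negative (impassable)."""
    return PASSABLE if mode == "first" else IMPASSABLE

def train_trafficability_model(model, samples, epochs=10, lr=1e-3):
    """samples: list of (image_tensor [3, H, W], mode) pairs collected at different
    positions along the robot's navigation path."""
    criterion = nn.CrossEntropyLoss()                 # compares prediction with labeled result
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for image, mode in samples:
            target = torch.tensor([label_for_mode(mode)])
            logits = model(image.unsqueeze(0))        # predicted trafficability result
            loss = criterion(logits, target)          # comparison result used as the loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```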
CN202010698510.6A 2020-07-20 2020-07-20 Falling prevention method and device and electronic equipment Active CN111739099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010698510.6A CN111739099B (en) 2020-07-20 2020-07-20 Falling prevention method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010698510.6A CN111739099B (en) 2020-07-20 2020-07-20 Falling prevention method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111739099A CN111739099A (en) 2020-10-02
CN111739099B (en) 2020-12-11

Family

ID=72655069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010698510.6A Active CN111739099B (en) 2020-07-20 2020-07-20 Falling prevention method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111739099B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112462772A (en) * 2020-11-26 2021-03-09 深圳优地科技有限公司 Robot traveling method, device, equipment and storage medium
CN112926632B (en) * 2021-02-01 2023-04-18 广州赛特智能科技有限公司 Method for detecting height difference between elevator and floor
CN114200935A (en) * 2021-12-06 2022-03-18 北京云迹科技股份有限公司 Robot anti-falling method and device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3615702B2 (en) * 1999-11-25 2005-02-02 ソニー株式会社 Motion control device and motion control method for legged mobile robot, and legged mobile robot
CN103335658B (en) * 2013-06-19 2016-09-14 华南农业大学 A kind of autonomous vehicle barrier-avoiding method generated based on arc path
CN103605368A (en) * 2013-12-04 2014-02-26 苏州大学张家港工业技术研究院 Method and device for route programming in dynamic unknown environment
CN106597843B (en) * 2015-10-20 2019-08-09 沈阳新松机器人自动化股份有限公司 A kind of front driving wheel formula robot security control method and system
CN109506641A (en) * 2017-09-14 2019-03-22 深圳乐动机器人有限公司 The pose loss detection and relocation system and robot of mobile robot
CN107861508B (en) * 2017-10-20 2021-04-20 纳恩博(北京)科技有限公司 Local motion planning method and device for mobile robot

Also Published As

Publication number Publication date
CN111739099A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
CN112417967B (en) Obstacle detection method, obstacle detection device, computer device, and storage medium
CN111739099B (en) Falling prevention method and device and electronic equipment
KR102210715B1 (en) Method, apparatus and device for determining lane lines in road
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
JP2019215853A (en) Method for positioning, device for positioning, device, and computer readable storage medium
KR102052114B1 (en) Object change detection system for high definition electronic map upgrade and method thereof
JP2020527500A (en) Methods and equipment for calibrating external parameters of onboard sensors
CN110674705B (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN113673282A (en) Target detection method and device
JP7042905B2 (en) Methods and devices for generating inverse sensor models, as well as methods for detecting obstacles
CN110799989A (en) Obstacle detection method, equipment, movable platform and storage medium
JPWO2020090428A1 (en) Feature detection device, feature detection method and feature detection program
US11062475B2 (en) Location estimating apparatus and method, learning apparatus and method, and computer program products
RU2018132850A (en) Methods and systems for computer determining the presence of objects
KR102167835B1 (en) Apparatus and method of processing image
WO2021195886A1 (en) Distance determination method, mobile platform, and computer-readable storage medium
CN113111513B (en) Sensor configuration scheme determining method and device, computer equipment and storage medium
US11373328B2 (en) Method, device and storage medium for positioning object
CN114556445A (en) Object recognition method, device, movable platform and storage medium
CN111381585A (en) Method and device for constructing occupation grid map and related equipment
JP6919764B2 (en) Radar image processing device, radar image processing method, and program
CN111553342B (en) Visual positioning method, visual positioning device, computer equipment and storage medium
CN112405526A (en) Robot positioning method and device, equipment and storage medium
CN114425774B (en) Robot walking road recognition method, robot walking road recognition device, and storage medium
CN113014899B (en) Binocular image parallax determination method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 702, 7th floor, NO.67, Beisihuan West Road, Haidian District, Beijing 100080
Patentee after: Beijing Yunji Technology Co.,Ltd.
Address before: Room 702, 7th floor, NO.67, Beisihuan West Road, Haidian District, Beijing 100080
Patentee before: BEIJING YUNJI TECHNOLOGY Co.,Ltd.