CN115797817A - Obstacle identification method, obstacle display method, related equipment and system - Google Patents


Info

Publication number
CN115797817A
Authority
CN
China
Prior art keywords
obstacle
map
target
image
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310075235.6A
Other languages
Chinese (zh)
Other versions
CN115797817B (en)
Inventor
孙境廷
李华清
张圆
钟锟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN202310075235.6A priority Critical patent/CN115797817B/en
Publication of CN115797817A publication Critical patent/CN115797817A/en
Application granted granted Critical
Publication of CN115797817B publication Critical patent/CN115797817B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides an obstacle identification method, an obstacle display method, related equipment and a system. The obstacle identification method can identify the type of an obstacle in a robot's use scene, determine the footprint area of the identified obstacle in a map, and send map information containing this information to a display device for display. A user can thus learn the obstacle's type, its position in the map and its footprint size, i.e. the specific situation of the obstacle. Displaying this specific information helps dispel the user's doubts about the robot's working capability and thereby strengthens the user's trust in it; it can also help the user find lost articles, giving a better user experience.

Description

Obstacle identification method, obstacle display method, related equipment and system
Technical Field
The invention relates to the technical field of intelligent obstacle avoidance, in particular to an obstacle identification method, an obstacle display method, related equipment and a system.
Background
With the development of artificial intelligence technology, robots (such as cleaning robots) have gained functions such as intelligent obstacle avoidance, automatic charging and autonomous navigation path planning, which greatly improve their degree of intelligence. Intelligent obstacle avoidance means that the robot can automatically avoid any obstacle it encounters while moving.
While the robot is in use, identifying the obstacles in its advancing direction and displaying them in a map can strengthen the user's trust in the robot's working capability and help the user find lost articles, thereby improving the product's user experience.
The existing obstacle identification and display method can determine the point position of an obstacle in a two-dimensional map and, when the map is shown on a display device, indicate the obstacle to the user by drawing a point at that position. However, the user then only knows where the obstacle is in the displayed map, not its specific situation, and without that information the user may question the robot's working capability; for example, when the robot detours widely around an obstacle, the user may suspect that its cleaning path does not fully cover the area.
Disclosure of Invention
In view of the above, the present invention provides an obstacle identification method, an obstacle display method, related equipment and a system, so as to solve the problem that the current obstacle identification and display method cannot let a user know the specific situation of an obstacle, which leads the user to question the robot's working capability. The technical scheme is as follows:
an obstacle identification method applied to a processing device, the method comprising:
in the moving process of the robot, acquiring an image and distance information of an obstacle in the advancing direction of the robot to obtain an obstacle image and obstacle distance information corresponding to the obstacle image;
identifying a type of an obstacle contained in the obstacle image based on the obstacle image and the obstacle distance information;
determining a footprint area of a target obstacle in a map based on an obstacle image and/or obstacle distance information of the target obstacle, wherein the target obstacle is an identified type of obstacle, and the footprint area is capable of indicating a position and a footprint size of the target obstacle in the map;
and sending map information containing the type of the target obstacle and the occupied area of the target obstacle in a map to display equipment so that the display equipment can display the type of the target obstacle and the occupied area of the target obstacle in the map.
Optionally, the identifying the type of the obstacle included in the obstacle image based on the obstacle image and the obstacle distance information includes:
predicting the type of an obstacle contained in the obstacle image based on an obstacle type prediction model obtained by pre-training by combining obstacle distance information corresponding to the obstacle image;
the obstacle type prediction model is obtained by training a training obstacle image marked with an obstacle type and obstacle distance information corresponding to the training obstacle image.
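As a hedged sketch of such a type predictor (not the patent's actual model), an image feature vector could be late-fused with the measured obstacle distance before scoring each preset type; the feature dimensions, weight matrix `W`, bias `b` and type list below are all hypothetical placeholders:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_obstacle_type(image_feat, distance, W, b, types):
    """Score each preset obstacle type from an image feature vector
    fused with the robot-to-obstacle distance (late fusion)."""
    x = np.concatenate([image_feat, [distance]])  # append distance to image features
    probs = softmax(W @ x + b)
    return types[int(np.argmax(probs))], probs

# Hypothetical example: 2-dim image feature plus 1 distance, 3 preset types.
types = ["sofa", "wire", "shoe"]
W = np.array([[5.0, 0.0, 0.0],
              [0.0, 5.0, 0.0],
              [0.0, 0.0, 5.0]])
b = np.zeros(3)
label, probs = predict_obstacle_type(np.array([1.0, 0.0]), 0.5, W, b, types)
```

In a trained model the weights would come from the labelled training obstacle images and their distance information; here they are fixed only to make the sketch self-contained.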
Optionally, the obstacle distance information of the target obstacle includes obstacle distance information corresponding to a plurality of angles, respectively, and the obstacle distance information corresponding to one angle is distance information measured at the angle by using a distance measuring device arranged on the robot for a measurable point on the target obstacle;
determining a footprint area of a target obstacle in a map based on obstacle distance information of the target obstacle, comprising:
determining local map information corresponding to the plurality of angles respectively based on obstacle distance information corresponding to the plurality of angles respectively, wherein the local map information comprises shape information of the target obstacle in a local map;
and determining the occupied area of the target obstacle in the map based on the local map information respectively corresponding to the plurality of angles.
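A minimal sketch of the distance-based step above, assuming a 2-D occupancy grid and a robot pose `(x, y, heading)` (the names, frame convention and cell size are assumptions, not the patent's):

```python
import math

def obstacle_cells(robot_pose, measurements, cell_size=0.25):
    """Project per-angle distance measurements of the obstacle's
    measurable points into map coordinates and collect the grid
    cells they fall in; their union approximates the footprint."""
    x, y, heading = robot_pose
    cells = set()
    for angle, dist in measurements:
        px = x + dist * math.cos(heading + angle)
        py = y + dist * math.sin(heading + angle)
        cells.add((int(px // cell_size), int(py // cell_size)))
    return cells

# Robot at the origin facing +x; two measured points on the obstacle.
cells = obstacle_cells((0.0, 0.0, 0.0), [(0.0, 1.0), (math.pi / 2, 1.0)])
```

Each `(angle, distance)` pair plays the role of the local map information for one angle; merging the cells across angles corresponds to combining the per-angle local maps.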
Optionally, the obstacle image of the target obstacle includes a plurality of obstacle images corresponding to a plurality of angles, respectively, and an obstacle image corresponding to one angle is an image including the target obstacle acquired at the angle by using an image acquisition device disposed on the robot;
determining a footprint of a target obstacle in a map based on an obstacle image of the target obstacle, comprising:
inputting the obstacle image corresponding to each angle into a pre-established map information prediction model to obtain local map information output by the map information prediction model so as to obtain local map information corresponding to a plurality of angles; the local map information comprises shape information of the target obstacle in a local map, and the map information prediction model is obtained by training a training obstacle image and local map information corresponding to the training obstacle image;
and determining the occupied area of the target obstacle in the map based on the local map information corresponding to the plurality of angles respectively.
Optionally, the obstacle image of the target obstacle includes a plurality of obstacle images corresponding to a plurality of angles, respectively, and an obstacle image corresponding to one angle is an image including the target obstacle acquired at the angle by using an image acquisition device disposed on the robot;
the obstacle distance information of the target obstacle comprises obstacle distance information corresponding to the plurality of angles respectively, and the obstacle distance information corresponding to one angle is the obstacle distance information corresponding to the obstacle image corresponding to the angle;
determining a footprint area of a target obstacle in a map based on an obstacle image of the target obstacle and obstacle distance information, comprising:
inputting the obstacle image and the obstacle distance information corresponding to each angle into a pre-established map information prediction model to obtain local map information output by the map information prediction model so as to obtain local map information corresponding to the plurality of angles respectively; the local map information comprises shape information of the target obstacle in a local map, and the map information prediction model is obtained by training a training obstacle image, obstacle distance information corresponding to the training obstacle image and local map information corresponding to the training obstacle image;
and determining the occupied area of the target obstacle in the map based on the local map information respectively corresponding to the plurality of angles.
Optionally, the determining, based on the local map information corresponding to the plurality of angles, a floor area of the target obstacle in the map includes:
determining shape information of the target obstacle in the whole map based on the local map information, the pose of the robot and the position of the robot in the map, wherein the local map information corresponds to the plurality of angles respectively;
and determining the occupied area of the target obstacle in the map based on the shape information of the target obstacle in the whole map.
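One way to read this step, as an illustrative sketch: shape points expressed in the robot's local frame can be rotated by the robot's heading and translated by its map position (function name and frame conventions are assumptions):

```python
import math

def local_to_global(points_local, robot_pose):
    """Transform obstacle shape points from the robot's local map frame
    into the whole-map frame using the robot's pose (x, y, heading)."""
    x, y, th = robot_pose
    c, s = math.cos(th), math.sin(th)
    return [(x + c * px - s * py, y + s * px + c * py)
            for px, py in points_local]

# A point 1 m ahead of a robot at (1, 2) facing +y lands at (1, 3).
global_pts = local_to_global([(1.0, 0.0)], (1.0, 2.0, math.pi / 2))
```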
Optionally, the determining the occupied area of the target obstacle in the map based on the shape information of the target obstacle in the whole map includes:
determining the outline of the target obstacle in the map based on the shape information of the target obstacle in the whole map;
and determining an area surrounded by the outline of the target obstacle in the map as a floor area of the target obstacle in the map.
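As an illustrative sketch of "the area surrounded by the outline", a grid cell can be counted as part of the footprint when its centre lies inside the contour polygon, using a standard ray-casting test (the contour representation here is an assumption):

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is pt inside the closed polygon poly?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def fill_footprint(contour, grid_w, grid_h):
    """Mark every grid cell whose centre lies inside the obstacle contour."""
    return {(i, j) for i in range(grid_w) for j in range(grid_h)
            if point_in_polygon((i + 0.5, j + 0.5), contour)}

# A 4x4 square contour on a 6x6 grid encloses 16 cell centres.
contour = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
footprint = fill_footprint(contour, 6, 6)
```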
Optionally, the obstacle identification method further includes:
after the type of the target obstacle and the footprint area of the target obstacle in the map are obtained, each time an image containing the target obstacle is obtained, performing the following for each grid in the map associated with the currently obtained image:
acquiring the probability that the grid has obstacles of various set types and does not have obstacles determined based on the currently acquired image as the probability that the currently acquired image corresponds to the grid;
fusing the probability of the currently obtained image corresponding to the grid with the probabilities of a plurality of historical obstacle images corresponding to the grid respectively, wherein the probability of the historical obstacle image corresponding to the grid is the probability of the grid having obstacles of various set types and the probability of no obstacles determined based on the historical obstacle image;
determining whether the grid has an obstacle or not and the type of the obstacle when the obstacle exists on the basis of the fused probability, wherein the type of the obstacle is used as a recognition result corresponding to the grid;
and updating the type of the target obstacle and the occupied area of the target obstacle in the map based on the identification result corresponding to each determined grid.
An obstacle display method is applied to a display device and comprises the following steps:
receiving map information which is sent by a processing device and contains a type of a target obstacle and a floor area of the target obstacle in a map, wherein the target obstacle is the obstacle of which the type is identified by the processing device, and the floor area can indicate the position and the floor size of the target obstacle in the map;
and displaying a map based on the map information, and displaying the type and the occupied area of the target obstacle in the map.
Optionally, the obstacle display method further includes:
searching an obstacle indication map matched with the type of the target obstacle and the occupied area of the target obstacle in a map in an obstacle map library to serve as a target obstacle indication map;
and displaying the target obstacle indication map in the occupied area of the target obstacle in a map.
Optionally, the obstacle map library includes a map set and an icon set;
the searching an obstacle indication map matched with the type of the target obstacle and the occupation area of the target obstacle in a map in an obstacle map library comprises:
determining a target atlas from the set of tiles and the set of icons based on a zoom size of a map;
if the target atlas is the map atlas, searching a map which is matched with the type of the target obstacle and the occupied area of the target obstacle in a map from the map atlas;
if the target atlas is the icon set, searching icons matched with the type of the target obstacle and the occupied area of the target obstacle in the map from the icon set.
Optionally, if the target obstacle indication map is a target map, displaying the target obstacle indication map in a floor area of the target obstacle in the map, where the displaying includes:
processing the target map into a map that matches a footprint size and/or orientation of the target obstacle in a map;
and displaying the processed target map in the occupied area of the target obstacle in the map.
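A sketch of such processing, under the assumption that the target map is rendered as a sprite centred on the footprint; the function and its parameters are illustrative, not the patent's API:

```python
import math

def place_indication_map(icon_size, footprint_center, footprint_size, angle):
    """Compute the transformed corner points of a stock target map so it
    matches the footprint's size and orientation; a real renderer would
    apply the same scale and rotation as a sprite transform."""
    iw, ih = icon_size
    # Uniform scale so the scaled map stays inside the footprint.
    scale = min(footprint_size[0] / iw, footprint_size[1] / ih)
    cx, cy = footprint_center
    c, s = math.cos(angle), math.sin(angle)
    corners = [(-iw / 2, -ih / 2), (iw / 2, -ih / 2),
               (iw / 2, ih / 2), (-iw / 2, ih / 2)]
    return [(cx + scale * (c * x - s * y), cy + scale * (s * x + c * y))
            for x, y in corners]

# A 2x2 map scaled into a 4x4 footprint centred at (5, 5), no rotation.
corners = place_indication_map((2.0, 2.0), (5.0, 5.0), (4.0, 4.0), 0.0)
```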
Optionally, the obstacle display method further includes:
when the user zooms the map, adjusting the type of the target obstacle indication map based on the zoomed size of the map;
and/or, when the user translates and/or rotates the map, adjusting an orientation and/or a position of the target obstacle indication map in the map based on a translation distance and/or a rotation angle of the map.
An obstacle recognition apparatus applied to a processing device, the apparatus comprising: the system comprises an obstacle data acquisition module, an obstacle type identification module, an obstacle occupied area determination module and a map information sending module;
the obstacle data acquisition module is used for acquiring images and distance information of obstacles in the advancing direction of the robot in the moving process of the robot so as to obtain obstacle images and obstacle distance information corresponding to the obstacle images;
the obstacle type identification module is used for identifying the type of an obstacle contained in the obstacle image based on the obstacle image and the obstacle distance information;
the obstacle floor area determination module is used for determining the floor area of a target obstacle in a map based on an obstacle image and/or obstacle distance information of the target obstacle, wherein the target obstacle is an identified type of obstacle, and the floor area can indicate the position and the floor area size of the target obstacle in the map;
the map information sending module is used for sending the map information containing the type of the target obstacle and the occupied area of the target obstacle in the map to the display device, so that the display device can display the type of the target obstacle and the occupied area of the target obstacle in the map.
An obstacle display apparatus applied to a display device, the apparatus comprising: the map information display device comprises a map information receiving module and a map information display module;
the map information receiving module is used for receiving map information which is sent by a processing device and contains the type of a target obstacle and the occupied area of the target obstacle in a map, wherein the target obstacle is the obstacle of which the type is identified by the processing device, and the occupied area can indicate the position and the occupied size of the target obstacle in the map;
and the map information display module is used for displaying a map based on the map information and displaying the type and the occupied area of the target barrier in the map.
A processing device, comprising: a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the steps of any one of the obstacle identification methods.
A display device, comprising: a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the steps of the obstacle display method according to any one of the above-described embodiments.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the individual steps of the obstacle identification method of any of the above and/or the individual steps of the obstacle display method of any of the above.
A processing system, comprising: a processing device and a display device;
the processing device is used for acquiring images and distance information of obstacles in the advancing direction of the robot in the moving process of the robot to obtain obstacle images and obstacle distance information corresponding to the obstacle images, identifying the types of the obstacles contained in the obstacle images based on the obstacle images and the obstacle distance information, determining the floor area of a target obstacle in a map based on the obstacle images and/or the obstacle distance information of the target obstacle, and sending map information containing the types of the target obstacle and the floor area of the target obstacle in the map to the display device; wherein the target obstacle is an identified type of obstacle, and the footprint area is capable of indicating a location and a footprint size of the target obstacle in a map;
and the display equipment is used for displaying a map according to the map information after receiving the map information, and displaying the type and the occupied area of the target barrier in the map.
According to the obstacle identification method provided by the invention, an image and distance information of an obstacle in the robot's advancing direction are first acquired during the robot's movement to obtain an obstacle image and corresponding obstacle distance information; the type of the obstacle contained in the obstacle image is then identified based on the obstacle image and the obstacle distance information; the footprint area of the identified obstacle in a map is then determined based on the obstacle image and/or the obstacle distance information; and finally, map information containing the identified type and the footprint area in the map is sent to a display device, which displays them in the map using the obstacle display method provided by the invention. The obstacle identification method can thus identify the type of an obstacle in the robot's use scene and determine its footprint area in a map, and the obstacle display method can display that type and footprint area in the map. From the displayed information a user can learn the obstacle's type, its position in the map and its footprint size, i.e. the specific situation of the obstacle, which helps dispel the user's doubts about the robot's working capability and strengthens the user's trust in it; displaying this specific information can also help the user find lost articles, giving a better user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an obstacle identification method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of the process of determining the occupied area of the target obstacle in the map based on the obstacle distance information of the target obstacle according to the embodiment of the present invention;
fig. 3 is a schematic flowchart of determining a floor area of a target obstacle in a map based on obstacle image information of the target obstacle according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of determining a floor area of a target obstacle in a map based on an obstacle image of the target obstacle and obstacle distance information according to an embodiment of the present invention;
fig. 5 is a schematic flow chart illustrating an obstacle display method according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a land area of a target obstacle displayed on a map according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an embodiment of the present invention showing only target obstacle indication maps;
fig. 8 is a schematic diagram illustrating a land occupation area of a target obstacle and a target obstacle indication map simultaneously displayed according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an embodiment of the present invention showing both a target obstacle indication map and the target obstacle's footprint area;
fig. 10 is a schematic structural diagram of an obstacle recognition device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an obstacle display device according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The existing obstacle identification and display method only enables a user to know where an obstacle exists in a map, but cannot enable the user to know the specific situation of the obstacle, and the specific situation of the obstacle cannot be known, so that the user can question the working capacity of the robot.
In view of the shortcomings of the existing obstacle identification and display method, the inventors carried out research and, through continued study, finally proposed an obstacle identification method and an obstacle display method. The proposed obstacle identification method can identify the type of an obstacle in the scene where the robot is located as well as the obstacle's footprint area in a map; the proposed obstacle display method can display, in the map, the type and the footprint area identified by the obstacle identification method, so that a user can learn from the displayed type and footprint area what the obstacle is, where it is in the map, and how large an area it occupies.
Before describing the obstacle recognition method and the obstacle display method provided by the present invention, a hardware architecture related to the present invention will be described.
In one possible implementation, the hardware architecture related to the present invention may include: a robot and a display device, wherein the robot and the display device are communicable.
By way of example, the robot may be, but is not limited to, a cleaning robot, a transfer robot, and the like.
By way of example, the display device may be, but is not limited to, a PC, notebook, smart tv, PAD, cell phone, etc.
During the robot's movement, the robot acquires data relating to obstacles in its advancing direction and identifies the acquired data to obtain the types of the obstacles in the scene where it is located and their footprint areas in a map. It then sends map information containing the obstacle types and footprint areas to the display device, which displays the map and shows the obstacle types and footprint areas in it.
In another possible implementation, the hardware architecture related to the present invention may include: the robot comprises a robot, a processing device and a display device, wherein the robot can be communicated with the processing device, and the processing device can be communicated with the display device.
By way of example, the robot may be, but is not limited to, a cleaning robot, a transfer robot, and the like.
For example, the processing device may be, but is not limited to, a server, and the server may be one server, a server cluster composed of multiple servers, or a cloud computing server center. The server may include a processor, memory, and a network interface, among others.
By way of example, the display device may be, but is not limited to, a PC, notebook, smart tv, PAD, cell phone, etc.
During the robot's movement, the robot acquires data relating to obstacles in its advancing direction and sends the acquired data to a processing device (such as a server). The processing device identifies and processes the received data to obtain the types of the obstacles in the scene where the robot is located and their footprint areas in a map, and then sends map information containing the obstacle types and footprint areas to the display device, which displays the map and shows the obstacle types and footprint areas in it.
It will be understood by those skilled in the art that the robots, processing devices and display devices described above are merely examples, and that other existing devices or devices that may appear in the future, if applicable to the present invention, should also be included within the protection scope of the present invention and are hereby incorporated by reference.
Next, the obstacle recognition method and the obstacle display method according to the present invention will be described by the following embodiments.
Referring to fig. 1, a flowchart of an obstacle identification method according to an embodiment of the present invention is shown, where the obstacle identification method is applicable to a processing device, where the processing device may be a robot or other devices with data processing capability, such as a server, and the obstacle identification method may include:
step S101: in the moving process of the robot, the image and the distance information of the obstacle in the advancing direction of the robot are obtained, so that the obstacle image and the obstacle distance information corresponding to the obstacle image are obtained.
The obstacle distance information is distance information between the obstacle and the robot.
Alternatively, the image of the obstacle in the advancing direction of the robot may be acquired based on an image capturing device provided on the robot. Illustratively, the image capture device may be a vision sensor.
Optionally, distance information of an obstacle in the advancing direction of the robot may be acquired based on distance measuring equipment arranged on the robot, so as to obtain obstacle distance information corresponding to an obstacle image. Illustratively, the ranging device may be a ranging sensor.
It should be noted that, in addition to obtaining the obstacle distance information based on the distance measuring device arranged on the robot, other methods may also be adopted, for example, depth information corresponding to an obstacle image may be predicted based on a depth information prediction model, and used as the obstacle distance information corresponding to the obstacle image.
Step S102: the type of the obstacle included in the obstacle image is identified based on the obstacle image and obstacle distance information corresponding to the obstacle image.
The type of obstacle comprised in the obstacle image is identified, i.e. which types of obstacles are comprised in the obstacle image. In this embodiment, a plurality of obstacle types may be preset according to a usage scenario of the robot, and when the type of the obstacle is recognized, the type of the obstacle included in the obstacle image is determined from the plurality of obstacle types.
Step S103: determining a footprint of the target obstacle in the map based on the obstacle image and/or the obstacle distance information of the target obstacle.
The target obstacle is an obstacle of which the type is identified, the obstacle image of the target obstacle is an image including the target obstacle, and the obstacle distance information of the target obstacle is distance information between the robot and the target obstacle.
Assuming the obstacle contained in the obstacle image is recognized as a sofa, the target obstacle is the sofa, and step S103 determines the sofa's footprint area in the map based on the image containing the sofa and/or the information on the distance between the robot and the sofa.
It should be noted that the occupied area of the target obstacle in the map can indicate the position and the occupied size of the target obstacle in the map.
Optionally, after obtaining the type of the target obstacle and the floor area of the target obstacle in the map, each time an image including the target obstacle is obtained, the following process may be performed for each grid in the map that is associated with the currently obtained image: acquiring the probability of the existence of obstacles of each set type and the absence of obstacles in the grid determined based on the currently obtained image, and taking the probability as the probability of the currently obtained image corresponding to the grid; fusing the probability of the currently obtained image corresponding to the grid with the probabilities of a plurality of historical obstacle images corresponding to the grid respectively, wherein the probability of the historical obstacle image corresponding to the grid is the probability of the grid having obstacles of various set types and the probability of no obstacles determined based on the historical obstacle image; determining whether the grid has an obstacle or not and the type of the obstacle when the obstacle exists according to the fused probability, and taking the type of the obstacle as a corresponding recognition result of the grid; and updating the type of the target barrier and the occupied area of the target barrier in the map according to the determined identification result corresponding to each grid.
Illustratively, the currently obtained image is I1, and the historical obstacle images are I2 and I3 (I1, I2 and I3 all contain the target obstacle). Suppose m-1 obstacle types are set; with the non-obstacle type added, there are m types in total. The probability of I1 corresponding to the grid is P1 = {p11, p12, …, p1m}, the probability of I2 corresponding to the grid is P2 = {p21, p22, …, p2m}, and the probability of I3 corresponding to the grid is P3 = {p31, p32, …, p3m}. Then p11, p21 and p31 are fused (for example, by weighted summation) to obtain a fused probability p1; p12, p22 and p32 are fused (for example, by weighted summation) to obtain a fused probability p2; …; and p1m, p2m and p3m are fused (for example, by weighted summation) to obtain a fused probability pm. After p1 to pm are obtained, the type corresponding to the maximum probability among p1 to pm is determined as the type corresponding to the grid; if the type corresponding to the grid is a certain obstacle type, it indicates that an obstacle of that type exists in the grid.
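The per-grid probability fusion described above can be sketched as follows. This is a minimal illustration only; the equal fusion weights, the class labels and all probability values are assumptions made for the example, not values from this patent.

```python
# Class indices 0..m-2 are set obstacle types; index m-1 means "no obstacle".

def fuse_grid_probabilities(per_image_probs, weights=None):
    """Fuse the class-probability vectors that several images assign to one
    grid cell (weighted summation) and return the winning class index."""
    n = len(per_image_probs)
    m = len(per_image_probs[0])
    if weights is None:
        weights = [1.0 / n] * n  # equal weighting by default (an assumption)
    fused = [0.0] * m
    for w, probs in zip(weights, per_image_probs):
        for k in range(m):
            fused[k] += w * probs[k]
    # the type with the maximum fused probability becomes the grid's result
    best = max(range(m), key=lambda k: fused[k])
    return best, fused

# Example: current image I1 plus historical images I2, I3, with m = 3
# (hypothetical types: 0 = sofa, 1 = wire, 2 = no obstacle).
P1 = [0.6, 0.1, 0.3]   # probabilities from the currently obtained image
P2 = [0.7, 0.2, 0.1]   # probabilities from historical obstacle image I2
P3 = [0.5, 0.2, 0.3]   # probabilities from historical obstacle image I3
best, fused = fuse_grid_probabilities([P1, P2, P3])
```

With these made-up inputs the sofa class wins, so the grid would be marked as containing a sofa.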
Step S104: and sending map information containing the type of the target obstacle and the occupied area of the target obstacle in the map to a display device so that the display device displays the type of the target obstacle and the occupied area of the target obstacle in the map.
The display device may be any device with a display function, such as a PC, a notebook computer, a mobile phone, a smart television, a tablet (PAD), and the like.
The map information may also include movement track information of the robot. After obtaining the map information, the display device displays the map, and displays in the map the type of the target obstacle, the occupied area of the target obstacle and the movement track of the robot. Through the displayed information, a user can learn what obstacle the robot has encountered, as well as the position of the obstacle and the size of its occupied area.
The obstacle identification method provided by the embodiment of the invention can identify the type of an obstacle in the use scene of the robot, and can also determine the occupied area, in the map, of the obstacle whose type is identified. After map information containing this information is sent to the display device for display, a user can learn the type of the obstacle as well as its position and occupied-area size in the map. Since the user can thus understand the specific situation of the obstacle, the user's doubts about the working capacity of the robot can be dispelled (for example, knowing the size of the obstacle's occupied area, the user can understand why the robot keeps a larger obstacle-avoiding distance when moving around it); that is, displaying the specific information of the obstacle helps strengthen the user's trust in the working capacity of the robot. In addition, displaying the specific information of the obstacle can also help the user find lost articles, so the user experience is better.
In another embodiment of the present invention, a specific implementation of "step S102: identifying the type of an obstacle included in the obstacle image based on the obstacle image and the obstacle distance information" will be described.
In one possible implementation, the process of identifying the type of obstacle included in the obstacle image based on the obstacle image and the obstacle distance information may include: and predicting the type of the obstacle contained in the obstacle image based on an obstacle type prediction model obtained by pre-training by combining obstacle distance information corresponding to the obstacle image.
Specifically, the obstacle image and the obstacle distance information corresponding to the obstacle image may be input to an obstacle type prediction model obtained through pre-training, so as to obtain the type of an obstacle included in the obstacle image output by the obstacle type prediction model.
It should be noted that, in the moving process of the robot, images and distance information of obstacles in the advancing direction of the robot are continuously acquired, and after each obstacle image and corresponding obstacle distance information are acquired, the currently acquired obstacle image and corresponding obstacle distance information are input into the obstacle type prediction model to obtain the type of the obstacle included in the currently acquired obstacle image. Of course, after obtaining the obstacle distance information corresponding to the plurality of obstacle images and the plurality of obstacle images, each obstacle image and the corresponding obstacle distance information may be input into the obstacle type prediction model.
The obstacle type prediction model in this embodiment is obtained by training a training obstacle image labeled with an obstacle type and obstacle distance information corresponding to the training obstacle image.
In a possible implementation manner, the obstacle type prediction model may be an obstacle type prediction model based on image semantic segmentation. In this implementation manner, the type of each pixel point in a training obstacle image needs to be labeled; that is, the labeling information of the training obstacle image is the type of each of its pixel points. The type of each pixel point is one of a plurality of types, and the plurality of types include a plurality of set obstacle types and a non-obstacle type.
After an obstacle type prediction model based on image semantic segmentation is obtained through training, the model can predict the probability that each pixel point in an input obstacle image belongs to an obstacle and a non-obstacle of each set obstacle type, the type of each pixel point in the input obstacle image can be determined on the basis, and then different types of obstacles can be judged and marked out through edge connectivity.
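As a hedged illustration of how different types of obstacles might be judged and marked out through edge connectivity after per-pixel classification, the sketch below groups 4-connected pixels of the same predicted class into individual obstacle regions. The tiny label map and class ids are invented for the example; they are not from this patent.

```python
from collections import deque

def connected_obstacle_regions(label_map, background=0):
    """Return a list of (class_id, set_of_pixels) for each 4-connected
    region whose class differs from the non-obstacle background label."""
    h, w = len(label_map), len(label_map[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            cls = label_map[r][c]
            if cls == background or seen[r][c]:
                continue
            # flood-fill one connected region of this class
            q, region = deque([(r, c)]), set()
            seen[r][c] = True
            while q:
                y, x = q.popleft()
                region.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx]
                            and label_map[ny][nx] == cls):
                        seen[ny][nx] = True
                        q.append((ny, nx))
            regions.append((cls, region))
    return regions

# 0 = non-obstacle; 1 and 2 = two set obstacle types (hypothetical labels)
label_map = [
    [1, 1, 0, 2],
    [1, 0, 0, 2],
    [0, 0, 0, 0],
]
regions = connected_obstacle_regions(label_map)
```

Here the two separated obstacle regions would be marked out individually, one per connected component.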
In another possible implementation, the obstacle type prediction model may be an obstacle type prediction model based on target detection, in which an obstacle needs to be framed in a training obstacle image, and a type of the framed obstacle is labeled, and the type of the framed obstacle is one of a plurality of set obstacle types.
After the model for predicting the type of the obstacle based on target detection is obtained through training, the model can frame the obstacle in the input obstacle image through a rectangular frame and give the type of the framed obstacle.
Regardless of the type of obstacle prediction model, after the obstacle image and the obstacle distance information corresponding to the obstacle image are input into the obstacle type prediction model, the obstacle type prediction model can determine the position of the obstacle in the obstacle image and the type of the obstacle.
In another embodiment of the present invention, an implementation of "step S103: determining the floor area of the target obstacle in the map based on the obstacle image and/or the obstacle distance information of the target obstacle" will be described.
Before describing a specific implementation of step S103, first, an obstacle image of a target obstacle and obstacle distance information of the target obstacle will be described.
In the process of moving the robot, the image capturing device disposed on the robot captures images including the target obstacle from a plurality of different angles, that is, the obstacle image of the target obstacle includes obstacle images corresponding to a plurality of angles, respectively (the obstacle image corresponding to one angle is an image including the target obstacle captured at the angle by the image capturing device disposed on the robot).
The obstacle distance information of the target obstacle includes obstacle distance information corresponding to each of the plurality of angles, where the obstacle distance information corresponding to one angle is the obstacle distance information corresponding to the obstacle image for that angle. When the robot is provided with a distance measuring sensor, the obstacle distance information corresponding to an angle may be obtained from that sensor (that is, it is the distance information measured at that angle, for measurable points on the target obstacle, by the distance measuring device arranged on the robot), or it may be predicted by a depth information prediction model based on the obstacle image corresponding to that angle. When the robot is not provided with a distance measuring sensor, the obstacle distance information corresponding to an angle may be predicted by the depth information prediction model based on the obstacle image corresponding to that angle.
Firstly, a realization process of determining the occupied area of the target obstacle in the map based on the obstacle distance information of the target obstacle is introduced.
Referring to fig. 2, a schematic flow chart illustrating a process of determining a floor area of a target obstacle in a map based on obstacle distance information of the target obstacle may include:
step S201: and determining local map information corresponding to the plurality of angles based on the obstacle distance information corresponding to the plurality of angles.
The obstacle distance information corresponding to each angle comprises the distance between the robot and a plurality of measurable point positions on the obstacle.
For each angle in the plurality of angles, the distances between the robot and a plurality of point locations on the obstacle are converted onto a map, and then local map information corresponding to the angle can be obtained. Wherein the local map information includes shape information of the target obstacle in the local map.
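A minimal sketch of converting the distances between the robot and several measurable points on the obstacle onto a map: each range reading taken at a bearing is projected into the robot-centred local frame and snapped to a grid cell. The beam angles, ranges and grid resolution below are assumed values for illustration.

```python
import math

def ranges_to_local_cells(bearings_rad, ranges_m, resolution_m=0.05):
    """Project (bearing, range) readings into local x/y coordinates and
    quantise them to grid cells (robot at the local origin)."""
    cells = set()
    for theta, d in zip(bearings_rad, ranges_m):
        x = d * math.cos(theta)
        y = d * math.sin(theta)
        cells.add((int(math.floor(x / resolution_m)),
                   int(math.floor(y / resolution_m))))
    return cells

# three readings fanned across the robot's forward direction (made-up values)
cells = ranges_to_local_cells([-0.1, 0.0, 0.1], [1.0, 0.98, 1.0])
```

The resulting occupied cells form the shape information of the obstacle in the local map.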
Step S202: and determining the occupied area of the target obstacle in the map based on the local map information corresponding to the plurality of angles respectively.
Specifically, the process of determining the occupied area of the target obstacle in the map based on the local map information corresponding to the plurality of angles may include:
step S2021: and determining the shape information of the target obstacle in the whole map based on the local map information respectively corresponding to the plurality of angles, the pose of the robot and the position of the robot in the map.
The local map information is not information in the map coordinate system, and therefore needs to be processed into information in the map coordinate system, and for this purpose, the local map information corresponding to each of the plurality of angles is processed into information in the map coordinate system in accordance with the pose of the robot and the position of the robot in the map, thereby obtaining shape information of the target obstacle in the entire map.
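Processing local information into the map coordinate system according to the robot's pose can be sketched as a planar rigid-body (SE(2)) transform: points in the robot-centred frame are rotated by the robot's heading and translated by its map position. The pose values used below are illustrative assumptions.

```python
import math

def local_to_map(points_local, robot_x, robot_y, robot_heading_rad):
    """Rotate local-frame points by the robot heading, then translate
    them by the robot's position in the map coordinate system."""
    c, s = math.cos(robot_heading_rad), math.sin(robot_heading_rad)
    return [(robot_x + c * x - s * y, robot_y + s * x + c * y)
            for x, y in points_local]

# a point 1 m directly ahead of a robot at (2, 3) heading +90 degrees
pts = local_to_map([(1.0, 0.0)], 2.0, 3.0, math.pi / 2)
```

Applying this transform to the local map information from every angle yields the shape information of the target obstacle in the whole map.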
Step S2022: and determining the occupied area of the target obstacle in the map based on the shape information of the target obstacle in the whole map.
Specifically, based on the shape information of the target obstacle in the entire map, the process of determining the occupied area of the target obstacle in the map may include: determining the outline of the target obstacle in the map based on the shape information of the target obstacle in the whole map; and determining an area surrounded by the outline of the target obstacle in the map as a floor area of the target obstacle in the map.
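The step of taking the area enclosed by the obstacle's outline as its floor area can be illustrated with the shoelace formula applied to the contour vertices; the rectangular sofa outline below is a made-up example.

```python
def contour_area(vertices):
    """Shoelace formula: area enclosed by a simple polygon given its
    vertices in order (map coordinates, metres)."""
    n = len(vertices)
    acc = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

# a 2 m x 0.8 m sofa outline in map coordinates (hypothetical contour)
area = contour_area([(0.0, 0.0), (2.0, 0.0), (2.0, 0.8), (0.0, 0.8)])
```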
Next, a description will be given of an implementation process of determining a floor area of a target obstacle in a map based on obstacle image information of the target obstacle.
Referring to fig. 3, a schematic flow chart illustrating a process of determining a floor area of a target obstacle in a map based on obstacle image information of the target obstacle may include:
step S301: and inputting the obstacle image corresponding to each angle into a pre-established map information prediction model to obtain local map information output by the map information prediction model so as to obtain local map information corresponding to a plurality of angles respectively.
Wherein the local map information includes shape information of the target obstacle in the local map.
The map information prediction model is trained using training obstacle images and the local map information corresponding to each training obstacle image, with the training objective that the local map information predicted from a training obstacle image tends to be consistent with the local map information corresponding to that image. It should be noted that the local map information corresponding to a training obstacle image includes the shape information, in the local map, of the obstacle contained in that image.
Step S302: and determining the occupied area of the target obstacle in the map based on the local map information corresponding to the plurality of angles respectively.
The specific implementation process of step S302 may refer to the implementation process of step S202, which is not described herein again in this embodiment.
Finally, a realization process of determining the occupied area of the target obstacle in the map based on the obstacle image of the target obstacle and the obstacle distance information is introduced.
Referring to fig. 4, a schematic flow chart illustrating a process of determining a floor area of a target obstacle in a map based on an obstacle image of the target obstacle and obstacle distance information may include:
step S401: and inputting the obstacle image and the obstacle distance information corresponding to each angle into a pre-established map information prediction model to obtain local map information output by the map information prediction model so as to obtain local map information corresponding to a plurality of angles respectively.
Wherein the local map information includes shape information of the target obstacle in the local map.
The map information prediction model is trained using training obstacle images, the obstacle distance information corresponding to each training obstacle image and the local map information corresponding to each training obstacle image, with the training objective that the local map information predicted from a training obstacle image and its corresponding obstacle distance information tends to be consistent with the local map information corresponding to that image.
Step S402: and determining the occupied area of the target obstacle in the map based on the local map information corresponding to the plurality of angles respectively.
For a specific implementation process of step S402, refer to the implementation process of step S202, which is not described herein again in this embodiment.
Through the method provided by the embodiment, the occupied area of the target obstacle in the map can be determined.
On the basis of the obstacle identification method provided by the above embodiment, an embodiment of the present invention provides an obstacle display method, where the obstacle display method is applied to a display device, please refer to fig. 5, which shows a flowchart of the obstacle display method, and may include:
step S501: and receiving map information which is sent by the processing equipment and contains the type of the target obstacle and the occupied area of the target obstacle in the map.
The target obstacle is an obstacle of which the type is identified by the processing equipment according to the image of the obstacle in the advancing direction of the robot and the distance information, and the occupied area of the target obstacle in the map can indicate the position and the occupied size of the target obstacle in the map. The type of the target obstacle and the occupied area of the target obstacle in the map are obtained by the processing device by adopting the obstacle identification method provided by the embodiment.
Step S502: and displaying a map based on the map information, and displaying the type and the occupied area of the target obstacle in the map.
Referring to fig. 6, an example of a floor area where a target obstacle is displayed in a map is shown, and a user can know the position and the floor area size of the obstacle through the floor area where the target obstacle is displayed in the displayed map, and can know what kind of obstacle is the floor area through the type of the target obstacle displayed in the map.
The map information may include the type of the target obstacle and the area occupied by the target obstacle in the map, and may also include the movement track information of the robot. The user can clearly observe whether the movement track of the robot is in accordance with expectation or not through the type of the obstacles, the occupied area of the obstacles and the movement track of the robot displayed in the map, and particularly for the cleaning robot, the user can more clearly observe and know the cleaning coverage rate of the cleaning robot.
Optionally, the obstacle display method provided in the embodiment of the present invention may further include:
step S503: and searching an obstacle indication map matched with the type of the target obstacle and the occupation area of the target obstacle in the map in an obstacle map library to serve as the target obstacle indication map.
In one possible implementation, the obstacle map library may include a map set including a plurality of obstacle maps of set obstacle types and an icon set including a plurality of obstacle icons of set obstacle types.
The process of finding an obstacle indication map in the obstacle map library that matches the type of the target obstacle and the footprint area of the target obstacle in the map may include:
step S5031 determines a target atlas from the set of maps and the set of icons based on the scaled size of the map.
The essence of step S5031 is to determine what type of obstacle indication map is displayed in the floor area of the target obstacle in the map, i.e., to determine whether a map or an icon is displayed at the floor area of the target obstacle in the map.
If the current zoom size of the map meets the use requirement of a map, the target atlas is determined to be the map set; if it does not, the target atlas is determined to be the icon set.
Step S5032a, if the target atlas is the map set, searching the map set for a map matching the type of the target obstacle and the floor area of the target obstacle in the map.
Illustratively, the type of the target obstacle is a sofa, a map of the sofa can be found from the map set, the map of the sofa can be a map of a single sofa, a map of a twin sofa, a map of a three-person sofa, a map of an L-shaped sofa, and the like, and a map matching the shape of the floor area of the sofa in the map can be further found from several maps of the sofa, such as a map of a twin sofa.
Step S5032b, if the target atlas is the icon set, searching for an icon matching the type of the target obstacle and the floor area of the target obstacle in the map from the icon set.
Illustratively, if the type of the target obstacle is a sofa, icons of sofas can be found from the icon set; the icon set may contain an icon of a single sofa, an icon of a twin sofa, an icon of a three-person sofa, an icon of an L-shaped sofa, and the like, and the icon matching the shape of the sofa's floor area in the map, such as the icon of a twin sofa, can be further selected from these sofa icons.
Step S504: and displaying a target obstacle indication map in the occupied area of the target obstacle in the map.
Optionally, if the target obstacle indication map is a target map, when displaying it in the occupied area of the target obstacle in the map, the target map may first be processed (for example, rotated, zoomed, and the like) into a map that matches the size and/or orientation of the occupied area of the target obstacle in the map, and the processed target map may then be displayed in that occupied area.
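A minimal sketch of matching the target map to the size and orientation of the occupied area: compute the scale factors from the indicator's pixel size to the footprint's metric size, together with the footprint's rotation. The indicator size, footprint size and rotation below are illustrative assumptions, and a real implementation would also resample the image itself.

```python
import math

def fit_indicator(indicator_w, indicator_h, footprint_w, footprint_h,
                  footprint_angle_rad):
    """Return (scale_x, scale_y, angle) that map the indicator picture
    onto the obstacle's floor area in the map."""
    return (footprint_w / indicator_w,
            footprint_h / indicator_h,
            footprint_angle_rad)

# a 100 x 40 pixel sofa picture fitted to a 2 m x 0.8 m footprint
# that is rotated 90 degrees in the map (all values hypothetical)
sx, sy, ang = fit_indicator(100, 40, 2.0, 0.8, math.pi / 2)
```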
Alternatively, when the target obstacle indication map is displayed in the occupied area of the target obstacle in the map, only the target obstacle indication map may be displayed, as shown in fig. 7, or the occupied area of the target obstacle and the target obstacle indication map may be displayed at the same time, as shown in fig. 8. Displaying the occupied area of the target obstacle together with the target obstacle indication map lets a user intuitively learn both the type and the occupied area of the target obstacle, and lets the user clearly observe and understand the obstacle-avoiding movement track of the robot. As shown in fig. 9, the electric wire attached to a power strip is long; when the power strip is displayed as a map in the overall map, its own occupied area is small, and it is through the occupied area of the electric wire that the user can clearly observe and understand the obstacle-avoiding movement track of the robot.
Optionally, when the occupied area of the target obstacle and the target obstacle indication map are displayed simultaneously, the occupied area of the target obstacle and the target obstacle indication map may be displayed in a layered manner.
Optionally, the occupied area of the target obstacle in the map may be displayed semi-transparently, and the target obstacle indication map may also be displayed semi-transparently.
Optionally, when different obstacles intersect in the floor area of the map, the obstacle indication maps of the different obstacles may be displayed in layers. For example, if a weighing scale is placed under the bed, the map of the bed and the map of the weighing scale can be displayed in layers. The barrier indication diagrams of different barriers are displayed in a layered mode, so that on one hand, a user can be helped to find lost articles, on the other hand, the user can understand the motion trail of the robot conveniently, and particularly for the cleaning robot, the user can understand the obstacle avoidance motion trail of the robot during cleaning at the bottom of a sofa or a bed conveniently.
Optionally, the obstacle display method provided in the embodiment of the present invention may further include: when the user zooms the map, dynamically adjusting the type of the target obstacle indication map based on the zoomed size of the map. For example, the occupied area of the target obstacle in the map is displayed as a detailed map at first; then the user adjusts the zoom size of the map (the map becomes smaller) so that the adjusted size no longer meets the use requirement of a map, and the map displayed in the occupied area of the target obstacle is adjusted to an icon. For another example, the occupied area of the target obstacle in the map is displayed as an icon at first; then the user adjusts the zoom size of the map (the map becomes larger) so that the adjusted size meets the use requirement of a map, and the icon displayed in the occupied area of the target obstacle is adjusted to a map.
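The zoom-dependent switch between a detailed map and a compact icon can be sketched as a simple threshold rule; the threshold value is an assumed parameter, not something specified by this document.

```python
def indicator_type(map_zoom_scale, picture_min_scale=0.5):
    """Above the threshold the zoomed map is considered large enough to
    show the detailed obstacle picture; below it, fall back to an icon."""
    return "map" if map_zoom_scale >= picture_min_scale else "icon"

# zooming out past the threshold switches the displayed indicator
choice_zoomed_in = indicator_type(1.0)   # detailed picture
choice_zoomed_out = indicator_type(0.3)  # compact icon
```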
Optionally, the obstacle display method provided in the embodiment of the present invention may further include: when the user translates and/or rotates the map, dynamically adjusting the orientation and/or position of the target obstacle indication map in the map based on the translation distance and/or rotation angle of the map. For example, when the user rotates the map 90° counterclockwise, the target obstacle indication map displayed in the map is also rotated 90° counterclockwise. For another example, when the user drags the map 2 cm to the left, the target obstacle indication map also moves 2 cm to the left.
The obstacle display method provided by the embodiment of the invention can display the recognized obstacle type and the occupied area, in the map, of the obstacle of that type. Through the displayed information, a user can learn the type of the obstacle and its position and occupied area in the map. Since the user can thus understand the specific situation of the obstacle, the user's doubts about the working capacity of the robot can be dispelled; that is, displaying the specific information of the obstacle helps strengthen the user's trust in the working capacity of the robot. In addition, displaying the specific information of the obstacle can also help the user find lost articles, so the user experience is better.
The following describes the obstacle recognition apparatus provided in the embodiment of the present invention, and the obstacle recognition apparatus described below and the obstacle recognition method described above may be referred to in correspondence with each other.
Referring to fig. 10, a schematic structural diagram of an obstacle identification apparatus according to an embodiment of the present invention is shown, where the obstacle identification apparatus is applied to a processing device, and the obstacle identification apparatus may include: the system comprises an obstacle data acquisition module 1001, an obstacle type identification module 1002, an obstacle occupation area determination module 1003 and a map information sending module 1004.
The obstacle data acquiring module 1001 is configured to acquire an image and distance information of an obstacle in a forward direction of the robot in a moving process of the robot, so as to obtain an obstacle image and obstacle distance information corresponding to the obstacle image.
An obstacle type identifying module 1002, configured to identify a type of an obstacle included in the obstacle image based on the obstacle image and the obstacle distance information.
An obstacle floor area determination module 1003, configured to determine a floor area of the target obstacle in the map based on the obstacle image and/or the obstacle distance information of the target obstacle.
The target obstacle is an obstacle of which the type is identified, and the occupied area can indicate the position and the occupied size of the target obstacle in the map. A map information sending module 1004, configured to send map information including the type of the target obstacle and the occupied area of the target obstacle in the map to the display device, so that the display device displays the type of the target obstacle and the occupied area of the target obstacle in the map.
Optionally, when the obstacle type identifying module 1002 identifies the type of the obstacle included in the obstacle image based on the obstacle image and the obstacle distance information, the obstacle type identifying module is specifically configured to:
and predicting the type of the obstacle contained in the obstacle image based on an obstacle type prediction model obtained by pre-training by combining obstacle distance information corresponding to the obstacle image.
The obstacle type prediction model is obtained by training a training obstacle image marked with an obstacle type and obstacle distance information corresponding to the training obstacle image.
Optionally, the obstacle distance information of the target obstacle includes obstacle distance information corresponding to a plurality of angles, and the obstacle distance information corresponding to one angle is distance information measured at the angle by using a distance measuring device arranged on the robot for a measurable point on the target obstacle.
The obstacle occupied area determining module 1003 is specifically configured to, when determining the occupied area of the target obstacle in the map based on the obstacle distance information of the target obstacle:
determining local map information corresponding to the plurality of angles respectively based on the obstacle distance information corresponding to the plurality of angles respectively, wherein the local map information comprises shape information of the target obstacle in a local map;
and determining the occupied area of the target barrier in the map based on the local map information corresponding to the plurality of angles respectively.
Optionally, the obstacle image of the target obstacle includes obstacle images corresponding to a plurality of angles, and the obstacle image corresponding to one angle is an image including the target obstacle acquired at the angle by using an image acquisition device disposed on the robot.
The obstacle occupied area determining module 1003 is specifically configured to, when determining the occupied area of the target obstacle in the map based on the obstacle image of the target obstacle,:
inputting the obstacle image corresponding to each angle into a pre-established map information prediction model to obtain local map information output by the map information prediction model so as to obtain local map information corresponding to a plurality of angles; the local map information comprises shape information of a target obstacle in a local map, and the map information prediction model is obtained by training a training obstacle image and local map information corresponding to the training obstacle image;
and determining the occupied area of the target obstacle in the map based on the local map information respectively corresponding to the plurality of angles.
Optionally, the obstacle image of the target obstacle includes a plurality of obstacle images corresponding to a plurality of angles, respectively, and the obstacle image corresponding to one angle is an image including the target obstacle acquired at the angle by using an image acquisition device disposed on the robot; the obstacle distance information of the target obstacle comprises obstacle distance information corresponding to a plurality of angles respectively, and the obstacle distance information corresponding to one angle is the obstacle distance information corresponding to the obstacle image corresponding to the angle.
The obstacle occupied area determining module 1003 is specifically configured to, when determining the occupied area of the target obstacle in the map based on the obstacle image of the target obstacle and the obstacle distance information:
inputting the obstacle image and the obstacle distance information corresponding to each angle into a pre-established map information prediction model to obtain local map information output by the map information prediction model so as to obtain local map information corresponding to a plurality of angles; the local map information comprises shape information of a target obstacle in a local map, and the map information prediction model is obtained by training a training obstacle image, obstacle distance information corresponding to the training obstacle image and local map information corresponding to the training obstacle image;
and determining the occupied area of the target barrier in the map based on the local map information corresponding to the plurality of angles respectively.
Optionally, when the occupied area of the target obstacle in the map is determined based on the local map information corresponding to the multiple angles, the obstacle occupied area determining module 1003 is specifically configured to:
determining shape information of the target obstacle in the whole map based on local map information, the pose of the robot and the position of the robot in the map, wherein the local map information corresponds to a plurality of angles respectively;
and determining the occupied area of the target obstacle in the map based on the shape information of the target obstacle in the whole map.
Optionally, when the obstacle occupied area determining module 1003 determines the occupied area of the target obstacle in the map based on the shape information of the target obstacle in the entire map, the method is specifically configured to:
determining the outline of the target obstacle in the map based on the shape information of the target obstacle in the whole map;
and determining an area surrounded by the outline of the target obstacle in the map as a floor area of the target obstacle in the map.
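For a grid map, the "area surrounded by the outline" can be realised by marking every cell whose center lies inside the contour polygon. A minimal ray-casting sketch (the grid size and contour coordinates are illustrative, not taken from the embodiment):

```python
def point_in_polygon(px, py, poly):
    """Ray-casting test: count edge crossings of a ray going left from (px, py)."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):  # edge straddles the horizontal line through py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def footprint_cells(contour, width, height):
    """Grid cells whose centers fall inside the obstacle outline."""
    return [(x, y) for y in range(height) for x in range(width)
            if point_in_polygon(x + 0.5, y + 0.5, contour)]
```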
Optionally, the obstacle avoidance device provided in the embodiment of the present invention may further include: and identifying a result optimization module. An identification result optimization module to:
after obtaining the type of the target obstacle and the floor area of the target obstacle in the map, each time an image containing the target obstacle is obtained, for each grid in the map associated with the currently obtained image:
acquiring, as the probabilities of the currently obtained image for the grid, the probabilities, determined based on the currently obtained image, that the grid contains an obstacle of each set type and that the grid contains no obstacle;
fusing the probabilities of the currently obtained image for the grid with the probabilities of a plurality of historical obstacle images for the grid, wherein the probabilities of a historical obstacle image for the grid are the probabilities, determined based on that historical obstacle image, that the grid contains an obstacle of each set type and that the grid contains no obstacle;
determining, based on the fused probabilities, whether the grid contains an obstacle and, if so, the type of the obstacle, as the recognition result corresponding to the grid;
and updating the type of the target obstacle and the occupied area of the target obstacle in the map according to the recognition result determined for each grid.
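The per-grid fusion described above can be sketched as follows, assuming each image contributes a probability vector over the set obstacle types plus a final "no obstacle" slot. The independence-style product fusion and the example class names are illustrative choices, not mandated by the embodiment:

```python
def fuse_grid_probabilities(current, history):
    """Fuse the current image's probability vector for a grid with the vectors
    from historical obstacle images of the same grid: multiply element-wise
    (treating observations as independent), then renormalise."""
    fused = list(current)
    for past in history:
        fused = [f * p for f, p in zip(fused, past)]
    total = sum(fused)
    return [f / total for f in fused] if total > 0 else fused

def grid_decision(fused, class_names):
    """Pick the most probable class; the last slot means 'no obstacle'."""
    best = max(range(len(fused)), key=lambda i: fused[i])
    return class_names[best]
```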
The obstacle identification device provided by the embodiment of the invention can identify the type of an obstacle in the use scene of the robot, and can also determine the occupied area, in the map, of an obstacle whose type has been identified. After map information containing this information is sent to the display device for display, the user can learn the type of the obstacle and its position and occupied size in the map, and thus understand the specific situation of the obstacle. This helps to dispel the user's doubts about the working capacity of the robot (for example, once the user knows the occupied size of an obstacle, the user can understand why the robot adopts a larger detour distance around it); that is, displaying the specific information of obstacles helps to strengthen the user's trust in the working capacity of the robot. In addition, displaying the specific information of obstacles can also help the user find lost articles, giving a better user experience.
The embodiment of the invention also provides an obstacle display device, which is described below, and the obstacle display device described below and the obstacle display method described above can be referred to correspondingly.
Referring to fig. 11, a schematic structural diagram of an obstacle display apparatus according to an embodiment of the present invention is shown, where the obstacle display apparatus is applied to a display device, and the obstacle display apparatus may include: a map information receiving module 1101 and a map information display module 1102.
The map information receiving module 1101 is configured to receive map information, sent by the processing device, that includes the type of the target obstacle and the occupied area of the target obstacle in the map.
The target obstacle is an obstacle of which the type is recognized by the processing device, and the occupied area of the target obstacle in the map can indicate the position and the occupied size of the target obstacle in the map.
And a map information display module 1102 for displaying a map based on the map information, and displaying the type and the occupied area of the target obstacle in the map.
Optionally, the obstacle display device provided in the embodiment of the present invention may further include: and an obstacle indication map display module. The obstacle indication map display module is to:
searching an obstacle indication map matched with the type of the target obstacle and the occupied area of the target obstacle in a map in an obstacle map library to serve as a target obstacle indication map; and displaying a target obstacle indication map in the occupied area of the target obstacle in the map.
Optionally, the obstacle map library includes a map set and an icon set.
When the obstacle indication map display module searches an obstacle indication map matched with the type of the target obstacle and the occupied area of the target obstacle in the map in the obstacle map library, the obstacle indication map display module is specifically configured to:
determining a target atlas from the map set and the icon set according to the zoom size of the map;
if the target atlas is the map atlas, searching a map which is matched with the type of the target obstacle and the occupied area of the target obstacle in the map from the map atlas; and if the target atlas is the icon set, searching icons matched with the type of the target obstacle and the occupied area of the target obstacle in the map from the icon set.
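The zoom-dependent lookup above can be sketched as follows; the 0.5 zoom threshold, the library layout, and the minimum-area matching rule are illustrative assumptions rather than values from the embodiment:

```python
def choose_atlas(zoom, threshold=0.5):
    """At larger zoom levels the detailed map set is used; when zoomed far out,
    the simpler icon set is used (threshold value is illustrative)."""
    return "map_set" if zoom >= threshold else "icon_set"

def find_indication_entry(atlas, obstacle_type, footprint, library):
    """Look up an entry matching the obstacle type; footprint matching is
    simplified here to a minimum-area check."""
    for entry in library[atlas]:
        if entry["type"] == obstacle_type and entry["min_area"] <= footprint:
            return entry
    return None
```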
Optionally, if the target obstacle indication map is the target map, the obstacle indication map display module is specifically configured to, when displaying the target obstacle indication map in the occupied area of the target obstacle in the map:
processing the target map into a map matched with the size and/or orientation of the occupied area of the target obstacle in the map;
and displaying the processed target map in the occupied area of the target obstacle in the map.
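Matching the target map to the size and/or orientation of the occupied area amounts to computing a scale and a rotation; a geometric sketch only, since the actual image resampling would be performed by the display framework:

```python
def fit_map_to_footprint(map_size, footprint_size, footprint_angle):
    """Return the scale factors and rotation needed so the indication map covers
    the footprint area and matches its orientation.

    map_size / footprint_size are (width, height); footprint_angle is in degrees.
    """
    sx = footprint_size[0] / map_size[0]
    sy = footprint_size[1] / map_size[1]
    return {"scale_x": sx, "scale_y": sy, "rotate_deg": footprint_angle}
```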
Optionally, the obstacle display device provided in the embodiment of the present invention may further include: and an adjusting module. The adjustment module is used for:
when the user zooms the map in or out, adjusting the type of the target obstacle indication map according to the zoom size of the map;
and/or, when the user translates and/or rotates the map, adjusting the orientation and/or position of the target obstacle indication map in the map according to the translation distance and/or rotation angle of the map.
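Keeping the indication map in step with a pan or rotation of the underlying map is a simple state update; a sketch where the indication map's state is an illustrative dict of position and orientation:

```python
def adjust_indication_map(state, pan=(0.0, 0.0), rotate_deg=0.0):
    """Update the indication map's position and orientation when the user
    translates and/or rotates the map (state holds 'x', 'y', 'angle_deg')."""
    return {"x": state["x"] + pan[0],
            "y": state["y"] + pan[1],
            "angle_deg": (state["angle_deg"] + rotate_deg) % 360.0}
```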
The obstacle display device provided by the embodiment of the invention can display the identified obstacle type and the occupied area, in the map, of the obstacle whose type has been identified. Through the displayed information, the user can learn the type of the obstacle and its position and occupied size in the map, and thus understand the specific situation of the obstacle, which helps to dispel the user's doubts about the working capacity of the robot; that is, displaying the specific information of obstacles helps to strengthen the user's trust in the working capacity of the robot. In addition, displaying the specific information of obstacles can also help the user find lost articles, giving a better user experience.
An embodiment of the present invention further provides a processing device, please refer to fig. 12, which shows a schematic structural diagram of the processing device, where the processing device may include: a processor 1201, a communication interface 1202, a memory 1203, and a communication bus 1204;
in the embodiment of the present invention, the number of the processor 1201, the communication interface 1202, the memory 1203 and the communication bus 1204 is at least one, and the processor 1201, the communication interface 1202 and the memory 1203 complete mutual communication through the communication bus 1204;
the processor 1201 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention;
the memory 1203 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory;
wherein the memory stores a program and the processor can call the program stored in the memory, the program for:
in the moving process of the robot, acquiring an image and distance information of an obstacle in the advancing direction of the robot to obtain an obstacle image and obstacle distance information corresponding to the obstacle image;
identifying the type of an obstacle contained in the obstacle image based on the obstacle image and the obstacle distance information;
determining the occupied area of the target obstacle in the map based on the obstacle image and/or the obstacle distance information of the target obstacle, wherein the target obstacle is an identified type of obstacle, and the occupied area of the target obstacle in the map can indicate the position and the occupied size of the target obstacle in the map;
and sending map information containing the type of the target obstacle and the occupied area of the target obstacle in the map to a display device so that the display device displays the type of the target obstacle and the occupied area of the target obstacle in the map.
Alternatively, the detailed function and the extended function of the program may be as described above.
An embodiment of the present invention further provides a computer-readable storage medium, where a program suitable for being executed by a processor is stored, where the program is configured to:
in the moving process of the robot, acquiring an image and distance information of an obstacle in the advancing direction of the robot to obtain an obstacle image and obstacle distance information corresponding to the obstacle image;
identifying a type of an obstacle contained in the obstacle image based on the obstacle image and the obstacle distance information;
determining the occupied area of the target obstacle in the map based on the obstacle image and/or the obstacle distance information of the target obstacle, wherein the target obstacle is an identified type of obstacle, and the occupied area of the target obstacle in the map can indicate the position and the occupied size of the target obstacle in the map;
and sending map information containing the type of the target obstacle and the occupied area of the target obstacle in the map to a display device so that the display device displays the type of the target obstacle and the occupied area of the target obstacle in the map.
Alternatively, the detailed function and the extended function of the program may be as described above.
An embodiment of the present invention further provides a display device, where the display device may include: the device comprises a display unit, a processor, a communication interface, a memory and a communication bus;
in the embodiment of the invention, the number of the display unit, the processor, the communication interface, the memory and the communication bus is at least one, and the display unit, the processor, the communication interface and the memory complete mutual communication through the communication bus;
the processor may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention;
the memory may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory;
wherein the memory stores a program and the processor can call the program stored in the memory, the program for:
receiving map information which is sent by a processing device and contains the type of a target obstacle and the occupied area of the target obstacle in a map, wherein the target obstacle is the obstacle of which the type is identified by the processing device, and the occupied area can indicate the position and the occupied size of the target obstacle in the map;
and displaying the map according to the map information, and displaying the type and the occupied area of the target obstacle in the map.
Alternatively, the detailed function and the extended function of the program may be as described above.
An embodiment of the present invention further provides a computer-readable storage medium, where a program suitable for being executed by a processor is stored, where the program is configured to:
receiving map information which is sent by processing equipment and contains types of target obstacles and occupied areas of the target obstacles in a map, wherein the target obstacles are the obstacles of which the types are identified by the processing equipment, and the occupied areas can indicate the positions and the occupied sizes of the target obstacles in the map;
and displaying the map according to the map information, and displaying the type and the occupied area of the target obstacle in the map.
Alternatively, the detailed function and the extended function of the program may be as described above.
An embodiment of the present invention further provides a processing system, where the processing system may include: a processing device and a display device.
The processing device is used for acquiring images and distance information of obstacles in the advancing direction of the robot in the moving process of the robot so as to obtain obstacle distance information corresponding to the obstacle images and the obstacle images, identifying the types of the obstacles contained in the obstacle images on the basis of the obstacle images and the obstacle distance information, determining the occupied area of the target obstacle in the map on the basis of the obstacle images and/or the obstacle distance information of the target obstacle, and sending the types of the target obstacle and the map information of the occupied area of the target obstacle in the map to the display device.
The target obstacle is an obstacle of which the type is identified, and the occupied area of the target obstacle in the map can indicate the position and the occupied size of the target obstacle in the map.
For the specific implementation process of the processing device for identifying the type of the obstacle and the occupied area of the obstacle in the map, reference may be made to relevant parts in the above-described obstacle identification method embodiment, which is not described herein again.
Optionally, the processing device may be a robot, or may also be other devices with data processing capability, for example, a server capable of communicating with the robot, where the server may be one server, a server cluster composed of multiple servers, or a cloud computing server center.
And the display equipment is used for displaying the map according to the map information after receiving the map information and displaying the type and the occupied area of the target obstacle in the map.
The display device may be, but is not limited to, a PC, a notebook computer, a smart TV, a PAD, a mobile phone, and the like.
Optionally, the display device is further configured to search, in the obstacle map library, an obstacle indication map matching the type of the target obstacle and a floor area of the target obstacle in the map, as a target obstacle indication map, and display the target obstacle indication map in the floor area of the target obstacle in the map.
Optionally, the obstacle map library includes a map set and an icon set, when the display device searches for an obstacle indication map matching the type of the target obstacle and the occupied area of the target obstacle in the map in the obstacle map library, the display device may determine the target map set from the map set and the icon set according to the zoom size of the map, if the target map set is the map set, search for a map matching the type of the target obstacle and the occupied area of the target obstacle in the map from the map set, and if the target map set is the icon set, search for an icon matching the type of the target obstacle and the occupied area of the target obstacle in the map from the icon set.
Optionally, if the target obstacle indication map is a target map, when the display device displays the target obstacle indication map in the occupied area of the target obstacle in the map, the target map may be processed into a map that matches the size and/or orientation of the occupied area of the target obstacle in the map, and the processed target map is displayed in the occupied area of the target obstacle in the map.
Optionally, the display device is further configured to, when the user zooms in or out the map, adjust the type of the target obstacle indication map according to a zoom size of the map; and/or, when the user translates and/or rotates the map, adjusting the orientation and/or position of the target obstacle indication map in the map according to the translation distance and/or rotation angle of the map.
For a more specific manner and related description of information display performed by the display device, reference may be made to related parts in the foregoing embodiment of the obstacle display method, which is not described herein again.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (19)

1. An obstacle recognition method applied to a processing device, the method comprising:
in the moving process of the robot, acquiring an image and distance information of an obstacle in the advancing direction of the robot to obtain an obstacle image and obstacle distance information corresponding to the obstacle image;
identifying a type of an obstacle contained in the obstacle image based on the obstacle image and the obstacle distance information;
determining a footprint area of a target obstacle in a map based on an obstacle image and/or obstacle distance information of the target obstacle, wherein the target obstacle is an identified type of obstacle, and the footprint area is capable of indicating a position and a footprint size of the target obstacle in the map;
and sending map information containing the type of the target obstacle and the occupied area of the target obstacle in a map to display equipment so that the display equipment can display the type of the target obstacle and the occupied area of the target obstacle in the map.
2. The obstacle identifying method according to claim 1, wherein the identifying a type of an obstacle contained in the obstacle image based on the obstacle image and the obstacle distance information includes:
predicting the type of an obstacle contained in the obstacle image based on an obstacle type prediction model obtained by pre-training by combining obstacle distance information corresponding to the obstacle image;
the obstacle type prediction model is obtained by training a training obstacle image marked with an obstacle type and obstacle distance information corresponding to the training obstacle image.
3. The obstacle recognition method according to claim 1, wherein the obstacle distance information of the target obstacle includes obstacle distance information corresponding to a plurality of angles, respectively, and the obstacle distance information corresponding to one angle is distance information measured at the angle with respect to a measurable point on the target obstacle using a distance measuring device provided on the robot;
determining a footprint area of a target obstacle in a map based on obstacle distance information of the target obstacle, comprising:
determining local map information corresponding to the plurality of angles respectively based on obstacle distance information corresponding to the plurality of angles respectively, wherein the local map information comprises shape information of the target obstacle in a local map;
and determining the occupied area of the target obstacle in the map based on the local map information respectively corresponding to the plurality of angles.
4. The obstacle recognition method according to claim 1, wherein the obstacle image of the target obstacle includes obstacle images corresponding to a plurality of angles, respectively, and the obstacle image corresponding to one angle is an image including the target obstacle acquired at the angle using an image acquisition device provided on the robot;
determining a footprint of a target obstacle in a map based on an obstacle image of the target obstacle, comprising:
inputting the obstacle image corresponding to each angle into a pre-established map information prediction model to obtain local map information output by the map information prediction model so as to obtain local map information corresponding to a plurality of angles; the local map information comprises shape information of the target obstacle in a local map, and the map information prediction model is obtained by training a training obstacle image and local map information corresponding to the training obstacle image;
and determining the occupied area of the target obstacle in the map based on the local map information respectively corresponding to the plurality of angles.
5. The obstacle recognition method according to claim 1, wherein the obstacle image of the target obstacle includes obstacle images corresponding to a plurality of angles, respectively, and the obstacle image corresponding to one angle is an image including the target obstacle acquired at the angle using an image acquisition device provided on the robot;
the obstacle distance information of the target obstacle comprises obstacle distance information corresponding to the plurality of angles respectively, and the obstacle distance information corresponding to one angle is the obstacle distance information corresponding to the obstacle image corresponding to the angle;
determining a footprint area of a target obstacle in a map based on an obstacle image of the target obstacle and obstacle distance information, comprising:
inputting the obstacle image and the obstacle distance information corresponding to each angle into a pre-established map information prediction model to obtain local map information output by the map information prediction model so as to obtain local map information corresponding to the plurality of angles respectively; the local map information comprises shape information of the target obstacle in a local map, and the map information prediction model is obtained by training a training obstacle image, obstacle distance information corresponding to the training obstacle image and local map information corresponding to the training obstacle image;
and determining the occupied area of the target obstacle in the map based on the local map information respectively corresponding to the plurality of angles.
6. The obstacle recognition method according to any one of claims 3 to 5, wherein the determining the footprint area of the target obstacle in the map based on the local map information corresponding to the plurality of angles comprises:
determining shape information of the target obstacle in the whole map based on the local map information, the pose of the robot and the position of the robot in the map, wherein the local map information corresponds to the plurality of angles respectively;
and determining the occupied area of the target obstacle in the map based on the shape information of the target obstacle in the whole map.
7. The obstacle identification method according to claim 6, wherein the determining a footprint area of the target obstacle in the map based on shape information of the target obstacle in the entire map comprises:
determining the outline of the target obstacle in the map based on the shape information of the target obstacle in the whole map;
and determining an area surrounded by the outline of the target obstacle in the map as a floor area of the target obstacle in the map.
8. The obstacle recognition method according to claim 2, further comprising:
after obtaining the type of the target obstacle and the floor area of the target obstacle in the map, each time an image containing the target obstacle is obtained, for each grid in the map associated with the currently obtained image:
acquiring, as the probabilities of the currently obtained image for the grid, the probabilities, determined based on the currently obtained image, that the grid contains an obstacle of each set type and that the grid contains no obstacle;
fusing the probabilities of the currently obtained image for the grid with the probabilities of a plurality of historical obstacle images for the grid, wherein the probabilities of a historical obstacle image for the grid are the probabilities, determined based on that historical obstacle image, that the grid contains an obstacle of each set type and that the grid contains no obstacle;
determining, based on the fused probabilities, whether the grid contains an obstacle and, if so, the type of the obstacle, as the recognition result corresponding to the grid;
and updating the type of the target obstacle and the floor area of the target obstacle in the map based on the recognition result determined for each grid.
9. An obstacle display method applied to a display device, comprising:
receiving map information which is sent by a processing device and contains a type of a target obstacle and a floor area of the target obstacle in a map, wherein the target obstacle is the obstacle of which the type is identified by the processing device, and the floor area can indicate the position and the floor size of the target obstacle in the map;
and displaying a map based on the map information, and displaying the type and the occupied area of the target obstacle in the map.
10. The obstacle display method according to claim 9, further comprising:
searching an obstacle indication map matched with the type of the target obstacle and the occupied area of the target obstacle in a map in an obstacle map library to serve as a target obstacle indication map;
and displaying the target obstacle indication map in the occupied area of the target obstacle in a map.
11. The obstacle display method according to claim 10, wherein the obstacle map library includes a map set and an icon set;
the searching an obstacle indication map matched with the type of the target obstacle and the occupation area of the target obstacle in a map in an obstacle map library comprises:
determining a target atlas from the map set and the icon set based on a zoom size of the map;
if the target atlas is the map atlas, searching a map which is matched with the type of the target obstacle and the occupied area of the target obstacle in a map from the map atlas;
if the target atlas is the icon set, searching icons matched with the type of the target obstacle and the occupied area of the target obstacle in the map from the icon set.
12. The obstacle display method according to claim 11, wherein if the target obstacle indication map is a target map, the displaying the target obstacle indication map in a footprint area of the target obstacle on a map includes:
processing the target map into a map that matches a footprint size and/or orientation of the target obstacle in a map;
and displaying the processed target map in the occupied area of the target obstacle in the map.
13. The obstacle display method according to any one of claims 10 to 12, further comprising:
when a user zooms the map, adjusting the type of the target obstacle indication map according to the zooming size of the map;
and/or when the user translates and/or rotates the map, adjusting the orientation and/or position of the target obstacle indication map in the map according to the translation distance and/or the rotation angle of the map.
14. An obstacle recognition apparatus, applied to a processing device, the apparatus comprising: the system comprises an obstacle data acquisition module, an obstacle type identification module, an obstacle occupied area determination module and a map information sending module;
the obstacle data acquisition module is used for acquiring images and distance information of obstacles in the advancing direction of the robot in the moving process of the robot so as to obtain obstacle images and obstacle distance information corresponding to the obstacle images;
the obstacle type identification module is used for identifying the type of an obstacle contained in the obstacle image based on the obstacle image and the obstacle distance information;
the obstacle floor area determination module is used for determining the floor area of a target obstacle in a map based on an obstacle image and/or obstacle distance information of the target obstacle, wherein the target obstacle is an identified type of obstacle, and the floor area can indicate the position and the floor area size of the target obstacle in the map;
the map information sending module is used for sending the map information containing the type of the target obstacle and the occupied area of the target obstacle in the map to the display device, so that the display device can display the type of the target obstacle and the occupied area of the target obstacle in the map.
15. An obstacle display apparatus, applied to a display device, the apparatus comprising: the map information display device comprises a map information receiving module and a map information display module;
the map information receiving module is used for receiving map information which is sent by a processing device and contains the type of a target obstacle and the occupied area of the target obstacle in a map, wherein the target obstacle is the obstacle of which the type is identified by the processing device, and the occupied area can indicate the position and the occupied size of the target obstacle in the map;
and the map information display module is used for displaying a map according to the map information and displaying the type and the occupied area of the target barrier in the map.
16. A processing device, comprising: a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the steps of the obstacle identification method according to any one of claims 1 to 8.
17. A display device, comprising: a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the steps of the obstacle display method according to any one of claims 9 to 13.
18. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, carries out the steps of the obstacle identification method according to any one of claims 1 to 8 and/or the steps of the obstacle display method according to any one of claims 9 to 13.
19. A processing system, comprising: a processing device and a display device;
the processing device is used for acquiring, while the robot is moving, images of and distance information about obstacles in the robot's advancing direction to obtain obstacle images and obstacle distance information corresponding to the obstacle images, identifying the type of an obstacle contained in an obstacle image based on the obstacle image and the obstacle distance information, determining the floor area of a target obstacle in a map based on the obstacle image and/or the obstacle distance information of the target obstacle, and sending map information containing the type of the target obstacle and the floor area of the target obstacle in the map to the display device; wherein the target obstacle is an obstacle whose type has been identified, and the floor area indicates the position and occupied size of the target obstacle in the map;
and the display device is used for, after receiving the map information, displaying a map according to the map information and displaying the type and the floor area of the target obstacle in the map.
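The division of labor in claim 19, with the processing device packaging map information and the display device rendering it, can be sketched as a simple sender/receiver pair. The JSON message schema, function names, and example values below are assumptions for illustration only; the patent does not specify a wire format.

```python
import json

def processing_device_send(obstacle_type: str, position: list, size_m2: float) -> str:
    """Processing device side: package the identified obstacle type and its
    footprint (position and occupied size in the map) as map information."""
    return json.dumps({
        "obstacles": [{
            "type": obstacle_type,
            "footprint": {"position": position, "size_m2": size_m2},
        }]
    })

def display_device_render(map_info_json: str) -> list:
    """Display device side: on receiving map information, decode it and
    produce one display line per target obstacle (type, position, size)."""
    map_info = json.loads(map_info_json)
    lines = []
    for ob in map_info["obstacles"]:
        fp = ob["footprint"]
        lines.append(f"{ob['type']} at {tuple(fp['position'])}, {fp['size_m2']} m^2")
    return lines

msg = processing_device_send("slipper", [1.5, 2.0], 0.04)
print(display_device_render(msg))
```

Because the message carries the type alongside the footprint, the display device can show a user both what the obstacle is and exactly where and how much space it occupies, which is the user-facing benefit the abstract emphasizes.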
CN202310075235.6A 2023-02-07 2023-02-07 Obstacle recognition method, obstacle display method, related equipment and system Active CN115797817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310075235.6A CN115797817B (en) 2023-02-07 2023-02-07 Obstacle recognition method, obstacle display method, related equipment and system

Publications (2)

Publication Number Publication Date
CN115797817A (en) 2023-03-14
CN115797817B (en) 2023-05-30

Family

ID=85430271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310075235.6A Active CN115797817B (en) 2023-02-07 2023-02-07 Obstacle recognition method, obstacle display method, related equipment and system

Country Status (1)

Country Link
CN (1) CN115797817B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050264557A1 (en) * 2004-06-01 2005-12-01 Fuji Jukogyo Kabushiki Kaisha Three-dimensional object recognizing system
CN108898669A (en) * 2018-07-17 2018-11-27 网易(杭州)网络有限公司 Data processing method, device, medium and calculating equipment
CN109190485A (en) * 2018-07-31 2019-01-11 李明 Data processing method, device, computer equipment and storage medium
CN110968083A (en) * 2018-09-30 2020-04-07 科沃斯机器人股份有限公司 Method for constructing grid map, method, device and medium for avoiding obstacles
CN111336984A (en) * 2020-03-20 2020-06-26 北京百度网讯科技有限公司 Obstacle ranging method, device, equipment and medium
CN111462192A (en) * 2020-02-24 2020-07-28 江苏大学 Space-time double-current fusion convolutional neural network dynamic obstacle avoidance method for sidewalk sweeping robot
CN111931765A (en) * 2020-07-24 2020-11-13 上海明略人工智能(集团)有限公司 Food sorting method, system and computer readable storage medium
US20210272304A1 (en) * 2018-12-28 2021-09-02 Nvidia Corporation Distance to obstacle detection in autonomous machine applications
CN113358110A (en) * 2021-06-15 2021-09-07 云鲸智能(深圳)有限公司 Method and device for constructing robot obstacle map, robot and storage medium
US20210341603A1 (en) * 2020-05-04 2021-11-04 Hyundai Motor Company Obstacle recognition device, vehicle system including the same, and method thereof
US20220057806A1 (en) * 2020-08-18 2022-02-24 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for obstacle detection using a neural network model, depth maps, and segmentation maps
US20220066456A1 (en) * 2016-02-29 2022-03-03 AI Incorporated Obstacle recognition method for autonomous robots
CN114532918A (en) * 2022-01-26 2022-05-27 深圳市杉川机器人有限公司 Cleaning robot, target detection method and device thereof, and storage medium
CN114750696A (en) * 2022-04-18 2022-07-15 智道网联科技(北京)有限公司 Vehicle vision presenting method, vehicle-mounted equipment and vehicle
CN114913338A (en) * 2022-04-19 2022-08-16 支付宝(杭州)信息技术有限公司 Segmentation model training method and device, and image recognition method and device
CN115376109A (en) * 2022-10-25 2022-11-22 杭州华橙软件技术有限公司 Obstacle detection method, obstacle detection device, and storage medium
CN115469312A (en) * 2022-09-15 2022-12-13 重庆长安汽车股份有限公司 Method and device for detecting passable area of vehicle, electronic device and storage medium
CN115629612A (en) * 2022-12-19 2023-01-20 科大讯飞股份有限公司 Obstacle avoidance method, device, equipment and storage medium
CN115690739A (en) * 2022-10-31 2023-02-03 阿波罗智能技术(北京)有限公司 Multi-sensor fusion obstacle existence detection method and automatic driving vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Gu Yuwan et al.: "Research on robot obstacle avoidance methods in continuous state space" *

Also Published As

Publication number Publication date
CN115797817B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
US10580206B2 (en) Method and apparatus for constructing three-dimensional map
CN109785368B (en) Target tracking method and device
EP3134870B1 (en) Electronic device localization based on imagery
CN108198044B (en) Commodity information display method, commodity information display device, commodity information display medium and electronic equipment
CN109035304B (en) Target tracking method, medium, computing device and apparatus
JP4478510B2 (en) Camera system, camera, and camera control method
CN111340864A (en) Monocular estimation-based three-dimensional scene fusion method and device
CN109325456B (en) Target identification method, target identification device, target identification equipment and storage medium
CN110111388B (en) Three-dimensional object pose parameter estimation method and visual equipment
EP2405393B1 (en) Device, method and program for creating information for object position estimation
CN110986969B (en) Map fusion method and device, equipment and storage medium
CN112050810B (en) Indoor positioning navigation method and system based on computer vision
CN111814752B (en) Indoor positioning realization method, server, intelligent mobile device and storage medium
KR20110104431A (en) Control apparatus, control method and program
US20220375220A1 (en) Visual localization method and apparatus
CN113447923A (en) Target detection method, device, system, electronic equipment and storage medium
KR20180039436A (en) Cleaning robot for airport and method thereof
JP2022542413A (en) Projection method and projection system
CN113910224A (en) Robot following method and device and electronic equipment
CN107193820A (en) Location information acquisition method, device and equipment
WO2021164000A1 (en) Image processing method, apparatus, device and medium
CN115797817A (en) Obstacle identification method, obstacle display method, related equipment and system
CN112085842B (en) Depth value determining method and device, electronic equipment and storage medium
CN116051636A (en) Pose calculation method, device and equipment
Kawaji et al. An image-based indoor positioning for digital museum applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant