CN108521808B - Obstacle information display method, display device, unmanned aerial vehicle and system - Google Patents


Publication number
CN108521808B
CN108521808B (application number CN201780006058.9A)
Authority
CN
China
Prior art keywords
obstacle
image
type
aerial vehicle
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201780006058.9A
Other languages
Chinese (zh)
Other versions
CN108521808A
Inventor
周游
刘洁
唐克坦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN108521808A publication Critical patent/CN108521808A/en
Application granted granted Critical
Publication of CN108521808B publication Critical patent/CN108521808B/en

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0011 — Control associated with a remote control arrangement
    • G05D1/0038 — Control associated with a remote control arrangement by providing the operator with simple or augmented images from one or more cameras located onboard the vehicle, e.g. tele-operation
    • G05D1/10 — Simultaneous control of position or course in three dimensions
    • G05D1/101 — Simultaneous control of position or course in three dimensions specially adapted for aircraft

Abstract

An obstacle information display method, a display device, an unmanned aerial vehicle and a system are provided, wherein the method comprises the following steps: a display device receives a first type of image sent by the unmanned aerial vehicle, wherein the first type of image is obtained by shooting through a shooting device mounted on the unmanned aerial vehicle; determining obstacle information corresponding to the first type of image, wherein the obstacle information comprises information of obstacle points in the flight direction of the unmanned aerial vehicle; identifying the obstacle point on the first type of image according to the obstacle information; and displaying the first type of image identifying the obstacle point, so that the obstacle avoidance difficulty can be reduced to a certain extent.

Description

Obstacle information display method, display device, unmanned aerial vehicle and system
Technical Field
The invention relates to the technical field of image processing, in particular to an obstacle information display method, a display device, an unmanned aerial vehicle and a system.
Background
An unmanned aerial vehicle (UAV), i.e., a drone, can perform tasks that are dangerous or highly repetitive compared with manned aircraft, and is currently widely used in fields such as the military, agriculture, surveying and mapping, aerial photography, and plant protection.
A drone can perform obstacle detection during flight under automatic control. In most cases, however, people are still accustomed to avoiding obstacles by manually controlling the drone through a display device such as a remote controller or a terminal. At present, when a drone is controlled manually, the operator relies on the viewing angle provided by the drone's camera and completes obstacle avoidance based on his or her own operating ability and flight experience.
However, the drone's camera is in most cases used for shooting scenery, and its field of view differs considerably from a person's. Avoiding obstacles by visually inspecting the images the camera provides is therefore rather difficult.
Disclosure of Invention
The embodiment of the invention discloses an obstacle information display method, an obstacle information display device, an unmanned aerial vehicle and a system, which can reduce the obstacle avoidance difficulty to a certain extent.
The first aspect of the embodiments of the present invention discloses an obstacle information display method, which is applied to a display device, wherein the display device is connected with an unmanned aerial vehicle through a wireless link, and the method includes:
receiving a first type of image sent by the unmanned aerial vehicle, wherein the first type of image is obtained by shooting by a shooting device mounted on the unmanned aerial vehicle;
determining obstacle information corresponding to the first type of image, wherein the obstacle information comprises information of obstacle points in the flight direction of the unmanned aerial vehicle;
identifying the obstacle point on the first type of image according to the obstacle information;
displaying a first type of image identifying the obstacle point.
In some possible embodiments, the drone may determine information of an obstacle point in the flight direction of the drone, identify the obstacle point on the first type of image, and send the first type of image with the identified obstacle point to the display device for display.
The second aspect of the embodiments of the present invention discloses a display device, which is connected to an unmanned aerial vehicle via a wireless link, and includes: a memory and a processor;
the memory is configured to store program instructions;
the processor is configured to execute the program instructions stored in the memory, and when executed, is configured to:
receiving a first type of image sent by the unmanned aerial vehicle, wherein the first type of image is obtained by shooting by a shooting device mounted on the unmanned aerial vehicle;
determining obstacle information corresponding to the first type of image, wherein the obstacle information comprises information of obstacle points in the flight direction of the unmanned aerial vehicle;
identifying the obstacle point on the first type of image according to the obstacle information;
displaying a first type of image identifying the obstacle point.
The third aspect of the embodiments of the present invention discloses an unmanned aerial vehicle on which a shooting device is mounted, the unmanned aerial vehicle comprising a communication element, a memory and a processor;
the communication element is used for communicating with the display device;
the memory is configured to store program instructions;
the processor is configured to execute the program instructions stored in the memory, and when executed, is configured to:
acquiring a first type of image through the shooting device;
obtaining obstacle data, wherein the obstacle data comprises data of obstacle points in the flight direction of the unmanned aerial vehicle;
and sending the first type of images and the obstacle data to a display device through the communication element, so that the display device determines obstacle information corresponding to the first type of images according to the obstacle data and identifies the obstacle point on the first type of images according to the obstacle information.
A fourth aspect of the embodiments of the present invention discloses a system, including:
the display device according to the second aspect;
a drone as described in the third aspect above.
In the embodiment of the invention, the display device can receive the first type of image sent by the unmanned aerial vehicle, determine the obstacle information in the flight direction of the unmanned aerial vehicle, mark the obstacle points on the first type of image according to the obstacle information, and display the first type of image with the obstacle points marked. Because the obstacle points are marked and displayed directly on the first type of image, the condition of obstacles ahead can be judged more accurately than by an operator visually inspecting the image shot by the camera, and the obstacle avoidance difficulty is reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is an overall architecture diagram for displaying obstacle information according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating an obstacle information display method according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating another obstacle information display method according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of another obstacle information display according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart illustrating a display of obstacle information according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a display device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a system according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
During automatic flight, a drone can complete obstacle detection and obstacle circumvention through automatic operation of the aircraft.
However, in most cases operators are still accustomed to manually operating the drone to perform obstacle avoidance. When manually operating the drone, an operator can receive the images (i.e., the first type of images) collected by a camera (such as a camera or an aerial camera) mounted on the drone through a display device with a display screen, such as a terminal or a remote controller, and complete the drone's obstacle avoidance based on his or her own operating ability and flight experience.
However, the camera mounted on the drone is usually used for shooting scenes, and its field of view (FOV) is larger than a person's line-of-sight angle, so when an operator watches the images collected by the camera, the distances between objects, and which points in the images are obstacle points, cannot be judged reasonably.
In order to solve the technical problem, the invention provides an obstacle information display method and a display device. For a more detailed explanation, the overall architecture of the present application is first described below.
Fig. 1 is a general architecture diagram for displaying obstacle information according to an embodiment of the present invention. The overall architecture shown in fig. 1 includes an unmanned aerial vehicle and a display device.
It should be noted that the unmanned aerial vehicle may specifically be an aircraft, an aerial photography device, an agricultural drone, a military drone, and the like, which is not limited by the embodiment of the present invention.
It should be further noted that the unmanned aerial vehicle can mount a camera device, and the camera device can be a camera, an aerial camera, etc., and can be used for shooting a scene image (i.e., a first type of image) and displaying the scene image on a display device.
It should be further noted that the drone may further mount an obstacle detection device (not shown on the drone in fig. 1) for obstacle detection, and the obstacle detection device may be any one or more of a binocular vision system, a radar, an infrared camera, and a red, green and blue camera.
It should be noted that the display device may be a device with a display screen, such as a terminal or a remote controller. The terminal may be a smartphone, a tablet computer, a wearable device, and the like; the remote controller may be one used for remotely controlling the flight of the drone. It should be understood that the above display devices are merely examples and not exhaustive; the display device includes, but is not limited to, the alternatives above.
It should also be noted that the display device may be connected to the drone via a wireless link. The display device can acquire a flight path planned in advance by the unmanned aerial vehicle, and the flight path comprises a flight direction.
In one embodiment, the unmanned aerial vehicle can acquire the first type of images in real time through the shooting device, can acquire obstacle data in real time through the obstacle detection device, and sends the first type of images and the acquired obstacle data to the display device through the wireless link.
In one embodiment, the obstacle detection device is a binocular vision system, and the obstacle data acquired by the unmanned aerial vehicle is data of obstacle points determined according to the depth image. Or, this obstacle detection device is infrared camera, and the obstacle data that unmanned aerial vehicle obtained are the data of the obstacle point that confirm according to infrared image. Or, the obstacle detection device is a red-green-blue camera, and the obstacle data acquired by the unmanned aerial vehicle is the data of the obstacle points determined according to the red-green-blue images. Or the obstacle detection device is a radar, and the obstacle data of the unmanned aerial vehicle is data of an obstacle point obtained according to the time difference between the emission and the reflection of a radar signal.
For example, when the obstacle detection device is a binocular vision system, the unmanned aerial vehicle may obtain a depth image, and determine an obstacle point (for example, a point having a depth value greater than a preset depth threshold value) that may collide with the unmanned aerial vehicle in the flight direction according to depth information provided by the depth image, where data of the obstacle point that may collide with the unmanned aerial vehicle in the flight direction is the obstacle data.
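The depth-threshold rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the data layout, and the direction of the threshold comparison (the text says "greater than a preset depth threshold") are assumptions taken directly from the wording.

```python
import numpy as np

def extract_obstacle_points(depth_image, depth_threshold):
    """Flag candidate obstacle points in a depth image.

    Follows the text's rule of comparing each depth value against a preset
    threshold; returns (row, col, depth) triples as the obstacle data.
    """
    rows, cols = np.nonzero(depth_image > depth_threshold)
    return [(int(r), int(c), float(depth_image[r, c])) for r, c in zip(rows, cols)]

# Example: a 3x3 depth map in which one small region exceeds the threshold.
depth = np.array([[1.0, 1.0, 1.0],
                  [1.0, 5.0, 5.0],
                  [1.0, 1.0, 1.0]])
points = extract_obstacle_points(depth, 4.0)
# points → [(1, 1, 5.0), (1, 2, 5.0)]
```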
In one embodiment, the drone may also determine a correspondence of the obstacle data to the first type of image. For example, the unmanned aerial vehicle may determine the corresponding relationship between the obstacle data and the first type of image according to the pose relationship. For example, if it is determined from the pose relationship that the depth image corresponding to the obstacle data is captured at the same time as the first type of image, the obstacle data corresponds to the first type of image. After determining the correspondence, the drone may send the correspondence to a display device.
In one embodiment, the unmanned aerial vehicle may asynchronously send the obstacle data and the first type of image to the display device, that is, when the display device receives the first type of image, the display device may not receive the obstacle data determined simultaneously with the first type of image, and therefore, the display device may match the currently received first type of image with the obstacle data received from the unmanned aerial vehicle before according to the correspondence between the obstacle data and the first type of image, determine target obstacle data with the highest degree of correlation with the currently received first type of image, and determine obstacle information corresponding to the currently received first type of image according to the target obstacle data.
In one embodiment, the display device may first perform segmentation processing on the first type of image when it is received. Since operators are accustomed to first distinguishing each object in an image (i.e., each feature object) and then observing which object the drone may collide with, the display device may, to match this habit, divide the first type of image into at least one segmented region based on image recognition and graph segmentation techniques, with each segmented region representing one feature object in the image.
In one embodiment, the display device may determine obstacle information corresponding to the currently received first type image according to the received obstacle data and the correspondence between the received obstacle data and the first type image after performing the segmentation processing on the first type image. For example, obstacle data closest to the time stamp of the currently received first type image is selected as target obstacle data, and then obstacle information corresponding to the currently received first type image is determined from the target obstacle data.
Further, the display device may identify the obstacle points on the currently received first type of image according to the obstacle information. Specifically, the display device may identify the obstacle points in the segmented regions corresponding to the first type of image. If the number of obstacle points in segmented region A exceeds a preset threshold, the display device may highlight segmented region A and may simultaneously display the information of the obstacle points related to region A, to prompt the operator that the collision possibility in region A is high.
In one embodiment, the information about the relevant obstacle points in the partitioned area a may be any one of a distance value of the drone from the obstacle point, a collision time predicted to collide with the obstacle point, and a feature identification of the obstacle point; the feature identifier of the obstacle point may include a feature object name corresponding to the obstacle point.
Therefore, by identifying and displaying the obstacle points in the first type of image through the display device, the operator can be assisted in judging the drone's flight obstacle situation, is no longer limited to judging the distances between objects, and hence the obstacle points, by observing the first type of image with the naked eye, and the obstacle avoidance difficulty when manually operating the drone is reduced.
For better illustration, method embodiments of the present application are described below.
Fig. 2 is a schematic flow chart of a method for displaying obstacle information according to an embodiment of the present invention. The obstacle information display method described in this embodiment includes:
s201, receiving the first type of image sent by the unmanned aerial vehicle.
It should be noted that the executing entity of the embodiment of the present invention may be a display device, and the display device is connected to the drone through a wireless link. The display device may be, for example, a virtual reality device, an intelligent terminal, a remote controller, a ground station, a mobile phone with a control APP, or a tablet computer, which is not limited by the embodiment of the present invention.
The first type of images are obtained by shooting through a shooting device mounted on the unmanned aerial vehicle.
It should be noted that, the shooting device of the unmanned aerial vehicle may be, for example, a main camera mounted on the unmanned aerial vehicle, specifically, the main camera may be an aerial camera, a digital camera, a camera, and the like, which is not limited in this respect in the embodiment of the present invention.
It should also be noted that the first type of image may be transmitted to the display device in real time for display.
S202, determining obstacle information corresponding to the first type of image.
Wherein the obstacle information includes information of obstacle points in a flight direction of the unmanned aerial vehicle.
It should be noted that the information of the obstacle point in the flight direction of the drone may include the obstacle point in the flight direction of the drone and information associated with the obstacle point.
Wherein the information associated with the obstacle point may include: any one of a distance value of the unmanned aerial vehicle from the obstacle point, a collision time of the unmanned aerial vehicle colliding with the obstacle point, and a feature identifier of the obstacle point; the feature identifier of the obstacle point includes a feature object name corresponding to the obstacle point.
It should be noted that the feature object name may refer to a name of an object in the first type image. For example, if there is a tree in the first type of image and the obstacle point is a point on the tree, then the feature identifier of the obstacle point may be the name of the tree.
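The "collision time" mentioned among the associated information could be obtained in several ways; the patent does not specify one. A simple hedged sketch, assuming the predicted collision time is the remaining distance divided by the closing speed (both the function name and this formula are illustrative assumptions):

```python
def predict_collision_time(distance_m, closing_speed_mps):
    """Estimate the time until the drone collides with an obstacle point.

    Illustrative only: divides the remaining distance by the closing speed;
    a non-positive closing speed means the drone is not approaching.
    """
    if closing_speed_mps <= 0:
        return float("inf")  # not approaching the obstacle point
    return distance_m / closing_speed_mps

# A drone 30 m from an obstacle point, closing at 5 m/s, collides in 6 s.
t = predict_collision_time(30.0, 5.0)
# t → 6.0
```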
In one embodiment, the obstacle information corresponding to the first type of image is determined from obstacle data of the drone.
Wherein the obstacle data may include data of obstacle points obtained by the drone according to a time difference between the radar signal emission and the reflection.
The obstacle data may further include data of obstacle points determined by the unmanned aerial vehicle according to a second type of image, wherein the second type of image includes any one or more of a depth image, an infrared image, a red, green and blue image.
In one embodiment, the second type of image is acquired by an obstacle detection device on the drone other than said camera. For example, the obstacle detection device may be a binocular vision system, a radar device, an infrared camera, a red, green, and blue camera, and the like, which is not limited in this embodiment of the present invention.
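For the radar case above, the obstacle data comes from the time difference between signal emission and reflection. A minimal sketch of the standard time-of-flight range equation (the constant and halving for the round trip are standard physics, not patent-specific details):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def radar_range(time_diff_s):
    """Distance to a reflecting obstacle from the emit-to-echo time difference.

    The signal travels out and back, so the one-way range is c * dt / 2.
    """
    return SPEED_OF_LIGHT * time_diff_s / 2.0

# An echo arriving 2 microseconds after emission ≈ 299.79 m away.
d = radar_range(2e-6)
# d → 299.792458
```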
In one embodiment, determining the obstacle information corresponding to the first type of image includes: receiving obstacle data determined by the unmanned aerial vehicle and the corresponding relation between the obstacle data and the first type of images; and determining obstacle information corresponding to the first type of image according to the obstacle data and the corresponding relation.
In some possible embodiments, the drone may continuously acquire first type images and send them to the display device for display; that is, there may be multiple first type images. At the same time, the drone may also continuously acquire obstacle data; that is, there may be multiple sets of obstacle data. Each set of obstacle data may correspond to one first type image, and the correspondence between the obstacle data and the first type images may be determined according to timestamps.
For example, obstacle data A is obtained by the drone from a depth image captured at 12:00, so the timestamp of obstacle data A may be 12:00; obstacle data B is obtained from a depth image captured at 12:01, so the timestamp of obstacle data B may be 12:01. First type image A is captured by the drone at 12:00, so its timestamp may be 12:00; first type image B is captured at 12:01, so its timestamp may be 12:01. The correspondence between the obstacle data and the first type images may then be: obstacle data A corresponds to first type image A, with timestamp 12:00; obstacle data B corresponds to first type image B, with timestamp 12:01.
In some possible embodiments, the unmanned aerial vehicle may send the first type image a and simultaneously send the obstacle data a corresponding to the first type image a, that is, the first type image and the obstacle data are sent synchronously, in this case, the display device may receive the obstacle data a and the corresponding relationship between the obstacle data a and the first type image a synchronously when receiving the first type image a, and the display device may determine the obstacle information corresponding to the first type image according to the obstacle data a when determining that the obstacle data a is the obstacle data corresponding to the first type image.
In some possible embodiments, the drone may also asynchronously send the first type of image and the obstacle data. For example, the unmanned aerial vehicle may encapsulate the first type of image a and the obstacle data a in different data packets, and send the data packets to the display device at the same time, encapsulate the first type of image B and the obstacle data B in different data packets, and send the data packets to the display device. Due to asynchronous transmission, the first type image a and the obstacle data a may not arrive at the display device at the same time, the first type image B and the obstacle data B may not arrive at the display device at the same time, and the display device may determine the obstacle information corresponding to the first type image B according to the previously received obstacle data (for example, the obstacle data a) and the corresponding relationship between the obstacle data and the first type image when receiving the first type image B.
In one embodiment, the display device may determine the degree of correlation between the plurality of sets of obstacle data and the currently received first type image according to the correspondence; and selecting the target obstacle data with the highest correlation degree from the multiple groups of obstacle data, and predicting obstacle information corresponding to the currently received first-class images according to the target obstacle data.
It should be noted that the degree of correlation between the sets of obstacle data and the currently received first-type image may be determined according to the time stamps in the correspondence relationship, and the closer the time stamps are, the higher the degree of correlation is.
For example, the plurality of sets of obstacle data are, for example, obstacle data x and obstacle data y, the currently received first-class image is the first-class image Z, the timestamp of the obstacle data x is farther from the timestamp of the first-class image Z, and the timestamp of the obstacle data y is closer to the timestamp of the first-class image Z, so that the display device can select the obstacle data y with the highest correlation degree as the target obstacle data and predict obstacle information corresponding to the currently received first-class image according to the target obstacle data.
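The selection described above — judging correlation by timestamp closeness and picking the most correlated set as the target obstacle data — can be sketched as follows. Names and the (timestamp, data) pairing are illustrative assumptions:

```python
def select_target_obstacle_data(obstacle_data_sets, image_timestamp):
    """Pick the obstacle data set most correlated with the current image.

    Per the text, correlation is judged by timestamp closeness: the set
    whose timestamp is nearest the image's timestamp wins.
    Each element of obstacle_data_sets is a (timestamp, data) pair.
    """
    return min(obstacle_data_sets,
               key=lambda entry: abs(entry[0] - image_timestamp))

# Image Z at t=101 s; data x at t=90 s, data y at t=100 s -> y is chosen.
sets = [(90.0, "obstacle data x"), (100.0, "obstacle data y")]
target = select_target_obstacle_data(sets, 101.0)
# target → (100.0, "obstacle data y")
```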
In one embodiment, predicting obstacle information corresponding to the currently received first-class image according to the target obstacle data may include: performing matching processing according to the target obstacle data and the currently received first-class image to determine the position parameters of the target obstacle data in the first-class image; and determining obstacle information corresponding to the currently received first-class image according to the position parameter and the target obstacle data.
The matching process may be an image recognition process.
It should be noted that the position parameter may be a position coordinate.
In some possible embodiments, the display device may perform image recognition on the second type image corresponding to the target obstacle data and the currently received first type image to determine the position parameter of the target obstacle data in the first type image.
For example, the target obstacle data is determined from a depth image 1 captured by the binocular vision system at time 1, in which an obstacle object 1 exists; the shooting device captures a first type image 2 at time 2, and the first type image 2 is the first type of image currently received by the display device; the display device can then use an image recognition method to identify, in the first type image 2, an object with the same features as obstacle object 1 in depth image 1.
In some possible embodiments, the display device may determine the obstacle information corresponding to the currently received first type image according to the position parameter and the target obstacle data, that is, the obstacle information corresponding to the currently received first type image may include a position parameter of an obstacle point in the target obstacle data in the first type image.
S203, identifying the obstacle point on the first-class image according to the obstacle information.
It should be noted that the obstacle information may have a position parameter of the obstacle point in the target obstacle data in the first type image, and the display device may identify the obstacle point on the first type image according to the position parameter.
In one embodiment, before identifying the obstacle point on the first type of image, the method further comprises: performing segmentation processing on the first type of image so as to divide it into at least one segmented region.
And the difference value of adjacent pixel values in the segmentation region is within a preset variation range, and the segmentation region is used for identifying the characteristic object in the first-class image.
The display device may perform the segmentation processing on the first type of image based on a graph segmentation technique or an image recognition technique, for example an MST (minimum-spanning-tree-based) graph segmentation method.
For example, the first type of image has characteristic objects such as a wall, a person, a table, etc., and the display device may divide the first type of image into 3 divided regions based on a graph division technique or an image recognition technique, where one divided region is used for identifying the wall, one divided region is used for identifying the person, and one divided region is used for identifying the table.
In one embodiment, the display device may further determine a segmentation region corresponding to the obstacle point according to the position parameter, and project the obstacle point to the determined segmentation region; and if the number of obstacle points in the determined segmentation region is greater than a preset number value, highlight the determined segmentation region.
It should be noted that the preset quantity value may be any value preset by the display device, and the embodiment of the present invention does not limit this.
For example, the display device may determine, according to the position parameter, that the segmentation region of obstacle point 1 in the first type image is segmentation region 1; the display device may then project obstacle point 1 to the corresponding position in segmentation region 1, and if the number of obstacle points projected into segmentation region 1 is greater than a preset number value, the display device may highlight segmentation region 1 to prompt the user that the collision probability in segmentation region 1 is high.
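The projection-and-count logic can be sketched as follows, assuming a precomputed label map from the segmentation step (all names and values are illustrative):

```python
def regions_to_highlight(obstacle_pixels, labels, preset_count):
    """Count projected obstacle points per segmentation region and
    return the ids of regions whose count exceeds the preset number."""
    counts = {}
    for u, v in obstacle_pixels:
        region = labels[v][u]  # label map indexed [row][col]
        counts[region] = counts.get(region, 0) + 1
    return {r for r, n in counts.items() if n > preset_count}

labels = [[0, 0, 1],
          [0, 0, 1]]
pts = [(0, 0), (1, 0), (1, 1), (2, 0)]  # (u, v) pixel coordinates
print(sorted(regions_to_highlight(pts, labels, 2)))
```

Here three points fall into region 0 and one into region 1, so with a preset count of 2 only region 0 is flagged for highlighting.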
And S204, displaying the first type of image identifying the obstacle point.
In some possible embodiments, the display device may display information associated with the obstacle point on the first type image.
In some possible embodiments, the display device may highlight the determined segmentation region in the first type image together with the information associated with the obstacle points in that region, and identify the obstacle points.
It should be further noted that the manner of identifying the obstacle point may specifically be: the obstacle point is framed in the first type image, or an indication message of imminent collision is displayed at a position corresponding to the obstacle point, or the obstacle point is marked with a special color (for example, a color with high contrast with the main color of the image) in the first type image, and so on, which is not limited in this embodiment of the present invention.
In a possible embodiment, the display device may further display the segmentation region where the obstacle point is located and make that segmentation region vibrate in the first type image. Alternatively, the display device may display both the obstacle point and the segmentation region where it is located, and make them vibrate in the first type image.
Therefore, the display device receives the first type images from the unmanned aerial vehicle, determines the obstacle information corresponding to the first type images, and identifies and displays the obstacle points in the first type images. This assists the operator in judging the flight obstacle conditions of the unmanned aerial vehicle: the operator no longer needs to judge the distances between objects by observing the first type images with the naked eye in order to determine the obstacle points, which reduces the difficulty of obstacle avoidance when manually operating the unmanned aerial vehicle.
Please refer to fig. 3, which is a flowchart illustrating another obstacle information displaying method according to an embodiment of the present invention. The obstacle information display method described in this embodiment includes:
S301, receiving the first type of image sent by the unmanned aerial vehicle.
It should be noted that the execution subject of the embodiment of the present invention may be a display device, and the display device is connected to the unmanned aerial vehicle through a wireless link. The display device may be, for example, a virtual reality device, an intelligent terminal, a remote controller, a ground station, a mobile phone with a control APP, or a tablet computer; the embodiment of the present invention is not limited in this regard.
The first type of images are obtained by shooting through a shooting device mounted on the unmanned aerial vehicle.
S302, obstacle information corresponding to the first type of image is determined.
Wherein the obstacle information includes information of obstacle points in a flight direction of the unmanned aerial vehicle.
S303, identifying the obstacle point on the first-class image according to the obstacle information.
It should be noted that, for the specific implementation process of S301 to S303 shown in the embodiment of the present invention, reference may be made to the specific implementation process of S201 to S203 in the foregoing method embodiment, which is not described herein again.
S304, determining an obstacle avoidance path of the unmanned aerial vehicle according to the obstacle information, and identifying the obstacle avoidance path in the first type of image.
The obstacle avoidance path is used for avoiding the obstacle points indicated by the obstacle information.
It should be noted that the obstacle information may include a distance from the obstacle point to the drone, a time of collision to the obstacle point, and the like, and the display device may plan an obstacle avoidance path that can avoid the obstacle point according to any one or more of the distance, the time, a speed of the drone itself, and a current flight direction of the drone.
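One hedged way to sketch such planning is to generate left/right lateral-offset candidate waypoints in the drone's body frame. The clearance value, the frame convention (x forward, y right), and the rough ETA computation below are assumptions for illustration, not the patented planning method:

```python
def plan_avoidance_paths(obstacle_dist, speed, clearance=2.0):
    """Generate two candidate avoidance paths (left and right) as
    waypoint lists in the body frame (x forward, y right, metres),
    each waypoint annotated with a rough arrival time from the
    drone's current speed."""
    mid = obstacle_dist / 2.0
    candidates = []
    for side in (-1.0, 1.0):  # -1: swerve left, +1: swerve right
        waypoints = [
            (mid, side * clearance),            # swerve out
            (obstacle_dist, side * clearance),  # pass the obstacle
            (obstacle_dist + mid, 0.0),         # rejoin original heading
        ]
        eta = [round(x / speed, 2) for x, _ in waypoints]  # rough ETAs (s)
        candidates.append(list(zip(waypoints, eta)))
    return candidates

paths = plan_avoidance_paths(obstacle_dist=10.0, speed=5.0)
print(len(paths), paths[0][0])
```

Returning several candidates matches the note below that there may be more than one obstacle avoidance path for the user to choose from.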
It should be further noted that the display device can identify the planned obstacle avoidance path in the first type image.
It should be further noted that the obstacle avoidance path may be one or more, and the embodiment of the present invention does not limit this.
S305, controlling the unmanned aerial vehicle to fly according to the obstacle avoidance path, and displaying an obstacle avoidance prompt message on a display interface.
It should be noted that, after planning the obstacle avoidance path, the display device may control the unmanned aerial vehicle to change its original flight direction and fly according to the obstacle avoidance path, and may display an obstacle avoidance prompt message on the display interface.
It should be further noted that the obstacle avoidance prompting message is used to prompt the user that the unmanned aerial vehicle has performed obstacle avoidance processing.
In some possible embodiments, the display device may further highlight the obstacle point avoided by the obstacle avoidance path and the information associated with the obstacle point, and display the obstacle avoidance path at the same time.
In one embodiment, the obstacle avoidance path has a plurality of paths, and the determining an obstacle avoidance path of the drone according to the obstacle information and after identifying the obstacle avoidance path in the first type of image further includes: when a selection instruction for the obstacle avoidance path is received, determining the obstacle avoidance path indicated by the selection instruction as a target obstacle avoidance path; and controlling the unmanned aerial vehicle to fly according to the target obstacle avoidance path.
For example, the display device may display the plurality of obstacle avoidance paths on the first-class image for the user to select, and when a selection instruction of the user for one of the obstacle avoidance paths is received, the obstacle avoidance path selected by the user may be determined as a target obstacle avoidance path, and the unmanned aerial vehicle is controlled to fly according to the target obstacle avoidance path.
In one embodiment, the display device may also predict a virtual flight path according to the current flight direction of the drone.
It should be noted that the virtual flight path may be a path that the display device predicts will pass if the drone continues to fly according to the current flight direction.
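A simple dead-reckoning sketch of such a prediction, assuming constant velocity over the prediction horizon (the horizon and step values are illustrative):

```python
def predict_virtual_path(position, velocity, horizon_s, step_s=1.0):
    """Extrapolate the current velocity to predict where the drone
    will be if it keeps flying in its present direction."""
    x, y, z = position
    vx, vy, vz = velocity
    path = []
    t = step_s
    while t <= horizon_s:
        path.append((x + vx * t, y + vy * t, z + vz * t))
        t += step_s
    return path

# drone at 10 m altitude flying 5 m/s forward; predict 3 s ahead
print(predict_virtual_path((0, 0, 10), (5, 0, 0), horizon_s=3))
```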
S306, displaying the first type of image which identifies the obstacle point.
In one embodiment, displaying the first type of image that identifies the obstacle point includes: displaying a first type of image identifying the obstacle point and the virtual flight path.
In one embodiment, the display device may further perform a message prompt for the obstacle point, where the message prompt includes a voice prompt and/or a vibration prompt; and the vibration prompt comprises a vibration prompt according to the position relation between the obstacle point and the unmanned aerial vehicle.
It should be noted that, after the display device displays the obstacle point and the first type image of the virtual flight path, the display device may also perform message presentation on the obstacle point in the first type image. For example, information associated with an obstacle point is subjected to voice broadcast, or a display device is controlled to vibrate.
In some possible embodiments, controlling the display device to vibrate may be a vibration prompt according to the orientation relationship between the obstacle point and the drone. For example, if the world coordinates of the obstacle point with the highest collision probability indicate that the obstacle point is to the right of the drone, the display device may vibrate on its right side to prompt the user that the obstacle point is on the right.
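The bearing-to-vibration mapping could be sketched as follows, assuming a planar world frame with x forward and y to the right of the drone's heading (a convention chosen purely for illustration):

```python
import math

def vibration_direction(obstacle_world, drone_world, drone_yaw_rad):
    """Map an obstacle's bearing relative to the drone's heading to a
    coarse vibration side on the display device. Convention (assumed):
    x forward, y right, yaw measured from x toward y."""
    dx = obstacle_world[0] - drone_world[0]
    dy = obstacle_world[1] - drone_world[1]
    bearing = math.atan2(dy, dx) - drone_yaw_rad
    # normalise to (-pi, pi]
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    if abs(bearing) < math.pi / 4:
        return "front"
    if bearing >= 3 * math.pi / 4 or bearing <= -3 * math.pi / 4:
        return "rear"
    return "right" if bearing > 0 else "left"

# obstacle 5 m to the drone's right while heading along +x
print(vibration_direction((10, 5), (10, 0), drone_yaw_rad=0.0))
```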
Therefore, in the embodiment of the present invention, the display device may receive the first type image sent by the unmanned aerial vehicle, determine the obstacle information corresponding to the first type image, and identify the obstacle point on the first type image according to the obstacle information. The display device may further determine an obstacle avoidance path of the unmanned aerial vehicle according to the obstacle information, identify the obstacle avoidance path in the first type image, control the unmanned aerial vehicle to fly according to the obstacle avoidance path, and display an obstacle avoidance prompt message on the display interface. Finally, the display device displays the first type image identifying the obstacle point, so that the operator does not need to work out a path around the obstacle points by eye, which reduces the difficulty of obstacle avoidance when manually operating the unmanned aerial vehicle.
Please refer to fig. 4, which is a schematic flowchart of an obstacle information display method according to another embodiment of the present invention. The obstacle information display method described in this embodiment may be executed by an unmanned aerial vehicle on which a shooting device is mounted, and the method includes:
S401, acquiring a first type of image through the shooting device.
It should be noted that the camera may acquire the first type of image in real time, or the camera may acquire the first type of image periodically, which is not limited in this embodiment of the present invention.
S402, obstacle data are obtained.
Wherein the obstacle data includes data of obstacle points in a flight direction of the unmanned aerial vehicle.
In one embodiment, the unmanned aerial vehicle mounts an obstacle detection device, and the obstacle detection device is configured to obtain the obstacle data, where the obstacle data includes data of obstacle points determined according to a second type of image, where the second type of image includes any one or more of a depth image, an infrared image, and a red, green, and blue image.
In some possible embodiments, the obstacle data may also include data of obstacle points derived from a time difference between the emission and reflection of the radar signal.
For example, the drone may acquire the depth image and determine, according to the depth information provided by the depth image, that points whose depth values exceed a preset depth threshold are obstacle points with which the drone may collide in its flight direction; the data of these obstacle points may serve as the obstacle data.
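A minimal sketch of this extraction, following the patent's phrasing that values exceeding a preset threshold are flagged (a reading that fits disparity-style maps, where larger values mean nearer objects; the map and threshold are made up):

```python
def extract_obstacle_points(depth_map, threshold):
    """Flag pixels whose value exceeds the preset threshold as obstacle
    points, returning (u, v, value) triples."""
    points = []
    for v, row in enumerate(depth_map):
        for u, d in enumerate(row):
            if d > threshold:
                points.append((u, v, d))
    return points

depth = [[1, 9],
         [2, 8]]
print(extract_obstacle_points(depth, 5))
```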
S403, determining the corresponding relation between the obstacle data and the first type of image.
It should be noted that the unmanned aerial vehicle may acquire the first type of image through the photographing device and acquire the obstacle data through the obstacle detecting device at the same time, so that the unmanned aerial vehicle may determine the corresponding relationship between the obstacle data and the first type of image.
In one embodiment, the determining the corresponding relationship between the obstacle data and the first type of image includes: and determining the corresponding relation between the obstacle data and the first type of image according to the time stamp.
For example, if the time stamp of the obstacle data a is 12:00 and the time stamp of the first type image a is also 12:00, it may be determined that the obstacle data a corresponds to the first type image a.
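Timestamp matching of this kind can be sketched as nearest-neighbour pairing within a tolerance (the tolerance value and sample data are illustrative):

```python
def match_by_timestamp(obstacle_batches, images, tolerance_s=0.05):
    """Pair each obstacle-data batch with the image whose timestamp is
    closest, keeping the pair only if within tolerance.
    Inputs are lists of (timestamp_seconds, payload) tuples."""
    pairs = []
    for t_obs, obs in obstacle_batches:
        best_t, best_img = min(images, key=lambda im: abs(im[0] - t_obs))
        if abs(best_t - t_obs) <= tolerance_s:
            pairs.append((obs, best_img))
    return pairs

obstacles = [(12.00, "A"), (12.50, "B")]
frames = [(12.01, "img-a"), (12.49, "img-b"), (13.00, "img-c")]
print(match_by_timestamp(obstacles, frames))
```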
In one embodiment, the determining the corresponding relationship between the obstacle data and the first type of image includes: and determining the pose relationship between the shooting device and the obstacle detection device, and determining the corresponding relationship between the obstacle data and the first type of image according to the pose relationship.
Wherein the pose relationship includes a displacement relationship and a rotation relationship between the image pickup device and the obstacle detection device.
The displacement relationship may be obtained by consulting the design drawings of the unmanned aerial vehicle, or from the configuration information of the unmanned aerial vehicle; the unmanned aerial vehicle may obtain the rotation relationship from its own configuration information and send it to the display device. The rotation relationship may also be obtained by reading the gimbal (pan-tilt) angle.
For example, the obstacle data is obtained from a depth image, the unmanned aerial vehicle can determine, through the pose relationship, which first-type image the depth image corresponding to the obstacle data is taken with, and if the depth image J and the first-type image J are taken at the same time, which is obtained through the pose relationship, the unmanned aerial vehicle can determine that the obstacle data obtained from the depth image J corresponds to the first-type image J.
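The pose relationship can be applied as a rigid transform carrying obstacle points from the obstacle detection device's frame into the shooting device's frame. The rotation and displacement below are made-up numbers for illustration:

```python
import numpy as np

def to_camera_frame(points_obs, R, t):
    """Transform obstacle points from the detection device's frame into
    the shooting device's frame using the pose relationship:
    p_cam = R @ p_obs + t (R: rotation relationship, t: displacement)."""
    pts = np.asarray(points_obs, dtype=float)
    return (R @ pts.T).T + t

# 90-degree yaw between the two sensors and a 10 cm offset (made up)
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.1, 0.0, 0.0])
print(to_camera_frame([(1.0, 0.0, 2.0)], R, t))
```

After this transform, the points can be projected into the first type image to establish the correspondence.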
In one embodiment, the determining the corresponding relationship between the obstacle data and the first type of image according to the pose relationship includes: extracting feature points of the first type image and feature points of the second type image, and generating feature description according to the feature points of the first type image and the feature points of the second type image; and determining the corresponding relation between the obstacle data and the first type of image according to the feature description and the pose relation.
For example, the drone may use the Scale-Invariant Feature Transform (SIFT) algorithm to match the second type image and the first type image so as to determine the correspondence between the obstacle data and the first type image.
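The matching step commonly paired with SIFT descriptors is Lowe's ratio test; below is a dependency-free sketch with toy 2-D descriptors (real SIFT descriptors are 128-dimensional):

```python
def match_descriptors(desc1, desc2, ratio=0.75):
    """Lowe's ratio-test matching: for each descriptor in desc1, keep
    its nearest neighbour in desc2 only if it is clearly better than
    the second-nearest candidate."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for i, d1 in enumerate(desc1):
        ranked = sorted(range(len(desc2)), key=lambda j: dist(d1, desc2[j]))
        best, second = ranked[0], ranked[1]
        if dist(d1, desc2[best]) < ratio * dist(d1, desc2[second]):
            matches.append((i, best))  # (index in desc1, index in desc2)
    return matches

a = [(0.0, 0.0), (5.0, 5.0)]
b = [(0.1, 0.0), (9.0, 9.0), (5.0, 5.1)]
print(match_descriptors(a, b))
```

The resulting index pairs link feature points of the second type image to feature points of the first type image, from which the correspondence can be established.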
S404, sending the corresponding relation between the obstacle data and the first type of images to the display device.
It should be noted that the drone may communicate with the display device through a wireless link (e.g., bluetooth, WiFi, cellular data network, etc.), and send the corresponding relationship between the obstacle data and the first type of image to the display device.
S405, the first-class images and the obstacle data are sent to a display device.
In some possible embodiments, the unmanned aerial vehicle may send the first type of image, the obstacle data, and the correspondence between the obstacle data and the first type of image to a display device, so that the display device determines obstacle information corresponding to the first type of image according to the obstacle data, and identifies the obstacle point on the first type of image according to the obstacle information.
In some possible embodiments, the drone may determine information of an obstacle point in the flight direction of the drone, identify the obstacle point on the first type of image, and send the first type of image with the identified obstacle point to the display device for display.
In the embodiment of the invention, the unmanned aerial vehicle can acquire the first type image through the shooting device, acquire the obstacle data, determine the corresponding relationship between the obstacle data and the first type image, and finally send the first type image, the obstacle data, and the corresponding relationship to the display device. The display device can then display the first type image identifying the obstacle points, which reduces the difficulty of obstacle avoidance when manually operating the unmanned aerial vehicle.
The embodiment of the invention also provides an obstacle information display method, which is applied to a display device and an unmanned aerial vehicle, wherein the display device is connected with the unmanned aerial vehicle. Referring to fig. 5, a schematic flow chart of another obstacle information display method according to an embodiment of the present invention is shown, where the method includes:
S501, the unmanned aerial vehicle acquires a first type of image and obstacle data.
The first type of images are obtained by shooting through a shooting device mounted on the unmanned aerial vehicle, and the obstacle data comprise data of obstacle points in the flight direction of the unmanned aerial vehicle.
In one embodiment, the obstacle data includes data of obstacle points determined by the drone according to a second type of image, where the second type of image includes any one or more of a depth image, an infrared image, a red, green, and blue image.
In one embodiment, the second type of image is acquired by an obstacle detection device on the drone other than the camera.
S502, the unmanned aerial vehicle sends the first type of images and the obstacle data to the display device.
S503, the display device receives the first type of images of the unmanned aerial vehicle.
S504, the display device determines obstacle information corresponding to the first type of image according to the obstacle data.
Wherein the obstacle information includes information of obstacle points in a flight direction of the unmanned aerial vehicle.
And S505, the display device identifies the obstacle point on the first type of image according to the obstacle information.
In one embodiment, before the display device identifies the obstacle point on the first type of image according to the obstacle information, the method further includes: the display device carries out segmentation processing on the first type image so as to segment the first type image into at least one segmentation area.
In one embodiment, the difference value of the adjacent pixel values in the segmentation region is within a preset variation range, and the segmentation region is used for identifying the feature object in the first-class image.
In one embodiment, the method further comprises: the display device determines a segmentation area corresponding to the obstacle point according to the position parameter and projects the obstacle point to the determined segmentation area; and if the barrier points in the determined divided areas are larger than a preset number value, the display device highlights the determined divided areas.
S506, the display device displays the first type of image which identifies the obstacle point.
In one embodiment, the method further comprises: the display device determines an obstacle avoidance path of the unmanned aerial vehicle according to the obstacle information, and identifies the obstacle avoidance path in the first type of image, wherein the obstacle avoidance path is used for avoiding obstacles of obstacle points indicated by the obstacle information.
In one embodiment, the obstacle avoidance path has a plurality of paths, the display device determines the obstacle avoidance path of the unmanned aerial vehicle according to the obstacle information, and after the obstacle avoidance path is identified in the first type of image, the method further includes: when the display device receives a selection instruction for the obstacle avoidance path, determining the obstacle avoidance path indicated by the selection instruction as a target obstacle avoidance path; and the display device controls the unmanned aerial vehicle to fly according to the target obstacle avoidance path.
In one embodiment, after the display device determines the obstacle avoidance path of the drone according to the obstacle information, the method further includes: the display device controls the unmanned aerial vehicle to fly according to the obstacle avoidance path, and displays an obstacle avoidance prompt message on a display interface, wherein the obstacle avoidance prompt message is used for prompting a user that the unmanned aerial vehicle carries out obstacle avoidance processing.
In one embodiment, the method further comprises: and the display device predicts a virtual flight path according to the current flight direction of the unmanned aerial vehicle.
In one embodiment, the display device displays a first type of image identifying the obstacle point, including: the display device displays a first type of image identifying the obstacle point and the virtual flight path.
In one embodiment, the method further comprises: the display device carries out message prompt aiming at the obstacle point, wherein the message prompt comprises a voice prompt and/or a vibration prompt; and the vibration prompt comprises a vibration prompt according to the position relation between the obstacle point and the unmanned aerial vehicle.
In one embodiment, the information of the obstacle point in the flight direction includes: any one of a distance value of the unmanned aerial vehicle from the obstacle point, a collision time of a predicted collision to the obstacle point, and a feature identifier of the obstacle point; the feature identification of the obstacle point comprises a feature object name corresponding to the obstacle point.
In one embodiment, the display device displays a first type of image identifying the obstacle point, including: displaying the determined segmentation area and the obstacle point in the determined segmentation area, and enabling the determined segmentation area and/or the obstacle point in the determined segmentation area to vibrate in the first-class image;
alternatively, the displaying the first type of image identifying the obstacle point includes: and framing the obstacle point in the first type image, or displaying an imminent collision indication message at a position corresponding to the obstacle point, or marking the obstacle point by using color in the first type image.
It should be further noted that the principle and implementation of the display device and of the unmanned aerial vehicle are similar to those of the foregoing embodiments; reference may be made to the descriptions in the foregoing embodiments, and details are not repeated here for brevity.
The embodiment of the invention provides a display device. Referring to fig. 6, a schematic structural diagram of a display device according to an embodiment of the present invention is shown, where the display device described in this embodiment is connected to an unmanned aerial vehicle through a wireless link, and includes:
a memory 601, a processor 602;
the memory 601 is used for storing program instructions;
the processor 602 is configured to execute the program instructions stored in the memory 601, and when the program instructions are executed, the processor is configured to:
receiving a first type of image sent by the unmanned aerial vehicle, wherein the first type of image is obtained by shooting by a shooting device mounted on the unmanned aerial vehicle;
determining obstacle information corresponding to the first type of image, wherein the obstacle information comprises information of obstacle points in the flight direction of the unmanned aerial vehicle;
identifying the obstacle point on the first type of image according to the obstacle information;
displaying a first type of image identifying the obstacle point.
In one embodiment, the obstacle information corresponding to the first type of image is determined from obstacle data of the drone; the obstacle data comprises data of obstacle points determined by the unmanned aerial vehicle according to a second type of image, wherein the second type of image comprises any one or more of a depth image, an infrared image and a red, green and blue image.
In one embodiment, the second type of image is acquired by an obstacle detection device on the drone other than the camera.
In an embodiment, when the processor 602 is configured to determine the obstacle information corresponding to the first type of image, it is specifically configured to: receiving obstacle data determined by the unmanned aerial vehicle and the corresponding relation between the obstacle data and the first type of images; and determining obstacle information corresponding to the first type of image according to the obstacle data and the corresponding relation.
In one embodiment, the obstacle data includes a plurality of sets of obstacle data received by the display device; the processor 602 is configured to, when determining obstacle information corresponding to the first type of image according to the obstacle data and the correspondence, specifically: determining the degree of correlation between the multiple groups of obstacle data and the currently received first-class image according to the corresponding relation; and selecting the target obstacle data with the highest correlation degree from the multiple groups of obstacle data, and predicting obstacle information corresponding to the currently received first-class images according to the target obstacle data.
In an embodiment, the processor 602, when predicting obstacle information corresponding to the currently received first type image according to the target obstacle data, is specifically configured to: perform re-projection processing on the target obstacle data and the currently received first type image to determine the position parameters of the target obstacle data in the first type image; and determine obstacle information corresponding to the currently received first type image according to the position parameters and the target obstacle data.
In one embodiment, the processor 602 is further configured to: and carrying out segmentation processing on the first-class image so as to segment the first-class image into at least one segmentation region.
In one embodiment, the difference value of the adjacent pixel values in the segmentation region is within a preset variation range, and the segmentation region is used for identifying the feature object in the first-class image.
In one embodiment, the processor 602 is further configured to: determining a segmentation area corresponding to the obstacle point according to the position parameter, and projecting the obstacle point to the determined segmentation area; and if the barrier points in the determined divided areas are larger than a preset number value, highlighting the determined divided areas.
In one embodiment, the processor 602 is further configured to: and determining an obstacle avoidance path of the unmanned aerial vehicle according to the obstacle information, and identifying the obstacle avoidance path in the first type of image, wherein the obstacle avoidance path is used for avoiding obstacles of obstacle points indicated by the obstacle information.
In an embodiment, the obstacle avoidance path has a plurality of paths, and the processor 602 is configured to determine the obstacle avoidance path of the drone according to the obstacle information, and after the obstacle avoidance path is identified in the first type of image, further configured to: when a selection instruction for the obstacle avoidance path is received, determining the obstacle avoidance path indicated by the selection instruction as a target obstacle avoidance path; and controlling the unmanned aerial vehicle to fly according to the target obstacle avoidance path.
In one embodiment, after determining the obstacle avoidance path of the drone according to the obstacle information, the processor 602 is further configured to: and controlling the unmanned aerial vehicle to fly according to the obstacle avoidance path, and displaying an obstacle avoidance prompt message on a display interface, wherein the obstacle avoidance prompt message is used for prompting a user that the unmanned aerial vehicle carries out obstacle avoidance processing.
In one embodiment, the processor 602 is further configured to: and predicting a virtual flight path according to the current flight direction of the unmanned aerial vehicle.
In an embodiment, the processor 602, when being configured to display the first type of image identifying the obstacle point, is specifically configured to: displaying a first type of image identifying the obstacle point and the virtual flight path.
In one embodiment, the processor 602 is further configured to: performing message prompt aiming at the obstacle point, wherein the message prompt comprises a voice prompt and/or a vibration prompt; and the vibration prompt comprises a vibration prompt according to the position relation between the obstacle point and the unmanned aerial vehicle.
In one embodiment, the information of the obstacle point in the flight direction includes: any one of a distance value of the unmanned aerial vehicle from the obstacle point, a collision time of a predicted collision to the obstacle point, and a feature identifier of the obstacle point; the feature identification of the obstacle point comprises a feature object name corresponding to the obstacle point.
The embodiment of the invention provides an unmanned aerial vehicle. Please refer to fig. 7, which is a schematic structural diagram of an unmanned aerial vehicle according to an embodiment of the present invention, where the unmanned aerial vehicle described in this embodiment includes an unmanned aerial vehicle body and a shooting device mounted on the unmanned aerial vehicle body, and the unmanned aerial vehicle further includes:
memory 701, processor 702, and communication element 703;
the communication element 703 is used for communicating with a display device;
the memory 701 is used for storing program instructions;
the processor 702 is configured to execute the program instructions stored in the memory 701, and when executed, is configured to:
acquiring a first type of image through the shooting device;
obtaining obstacle data, wherein the obstacle data comprises data of obstacle points in the flight direction of the unmanned aerial vehicle;
the first type of image and the obstacle data are sent to a display device through the communication element 703, so that the display device determines obstacle information corresponding to the first type of image according to the obstacle data, and identifies the obstacle point on the first type of image according to the obstacle information.
In one embodiment, the unmanned aerial vehicle mounts an obstacle detection device, and the obstacle detection device is configured to acquire obstacle data, where the obstacle data includes data of obstacle points determined according to a second type of image, where the second type of image includes any one or more of a depth image, an infrared image, and a red, green, and blue image.
In one embodiment, the processor 702 is further configured to: determine the corresponding relationship between the obstacle data and the first type image; and send the corresponding relationship between the obstacle data and the first type image to the display device through the communication element 703.
In an embodiment, when the processor 702 is configured to determine the corresponding relationship between the obstacle data and the first type of image, it is specifically configured to: determine the pose relationship between the shooting device and the obstacle detection device, and determine the corresponding relationship between the obstacle data and the first type of image according to the pose relationship.
In one embodiment, the pose relationship includes a displacement relationship and a rotation relationship between the shooting device and the obstacle detection device.
In an embodiment, when the processor 702 is configured to determine the corresponding relationship between the obstacle data and the first type of image according to the pose relationship, it is specifically configured to: extracting feature points of the first type image and feature points of the second type image, and generating feature description according to the feature points of the first type image and the feature points of the second type image; and determining the corresponding relation between the obstacle data and the first type of image according to the feature description and the pose relation.
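One way the pose relationship (rotation and displacement between the two devices) can be used to bring an obstacle point into correspondence with the first-type image is a standard rigid transform followed by a pinhole projection. The sketch below is a hedged illustration under assumed values: the rotation `R`, displacement `t`, and intrinsics (`fx`, `fy`, `cx`, `cy`) are invented for the example and are not taken from the patent.

```python
def transform_point(R, t, p):
    """Apply the pose relationship: p' = R @ p + t.
    R is a row-major 3x3 rotation, t and p are 3-vectors."""
    return tuple(
        sum(R[i][j] * p[j] for j in range(3)) + t[i]
        for i in range(3)
    )

def project_to_pixel(p_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3-D point (in the shooting device's
    frame, metres) to pixel coordinates; assumes p_cam[2] > 0."""
    x, y, z = p_cam
    return (fx * x / z + cx, fy * y / z + cy)

# Illustrative pose: identity rotation, 10 cm displacement along x,
# as if the two devices were mounted side by side on the airframe.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = (0.1, 0.0, 0.0)

# An obstacle point measured in the obstacle detection device's frame.
obstacle_point = (0.5, 0.0, 5.0)
p_cam = transform_point(R, t, obstacle_point)
u, v = project_to_pixel(p_cam, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

The resulting pixel coordinates (u, v) give the position of the obstacle point in the first-type image, which is the correspondence the display device needs in order to identify the obstacle point on the image. The feature-description matching mentioned above could then refine this geometric estimate.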
The embodiment of the invention provides a system. Fig. 8 is a schematic structural diagram of a system according to an embodiment of the present invention. As shown in fig. 8, the system includes: display device 801 and unmanned aerial vehicle 802.
The display device 801 is the display device disclosed in the embodiment of the present invention, and the unmanned aerial vehicle 802 is the unmanned aerial vehicle disclosed in the embodiment of the present invention.
Specifically, the display device 801 and the drone 802 may be connected through a wireless link, and the display device 801 may receive, in real time, the first type of image sent by the drone 802, and display, after determining the obstacle point corresponding to the first type of image, the first type of image identifying the obstacle point.
In one embodiment, the display device 801 may be used as a remote control device of the drone 802 to control the flight of the drone 802, so as to implement obstacle avoidance processing for the drone 802.
Note that the unmanned aerial vehicle 802 may be equipped with a shooting device and an obstacle detection device (not shown in fig. 8), and the shooting device may be mounted on the main body of the unmanned aerial vehicle 802 via a gimbal (pan/tilt head) or other mounting equipment. The shooting device is used for capturing images or videos during the flight of the drone 802, and includes but is not limited to a multispectral imager, a hyperspectral imager, a visible light camera, and the like. The obstacle detection device is used for performing obstacle detection to acquire obstacle data, and may be, but is not limited to, a binocular vision system, a radar, an infrared camera, a red, green and blue camera, and the like.
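For the binocular vision case mentioned above, depth to an obstacle is commonly recovered from stereo disparity via Z = f · B / d, where f is the focal length in pixels, B the baseline between the two lenses in metres, and d the disparity in pixels. The numbers below are illustrative, not values from the patent.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth (metres) from stereo disparity: Z = f * B / d.
    A larger disparity means a closer obstacle."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 600 px focal length, 10 cm baseline, 12 px disparity
# places the obstacle point at roughly 5 metres.
depth_m = stereo_depth(focal_px=600.0, baseline_m=0.1, disparity_px=12.0)
```

Each pixel with a valid disparity yields one depth value, which is how a binocular system produces the depth image named among the second-type images.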
In one embodiment, the unmanned aerial vehicle 802 may record its flight direction, acquire the first type of image in real time through the shooting device, and acquire the obstacle data, together with the corresponding relationship between the obstacle data and the first type of image, through the obstacle detection device; the unmanned aerial vehicle 802 may then transmit the first type of image, the obstacle data, and the corresponding relationship to the display device in real time, so that the display device displays the obstacle data.
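The patent does not fix a wire format for keeping the obstacle data in correspondence with the first-type images during real-time transmission. One plausible sketch, under the assumption that correspondence is expressed by timestamps (the field names and the timestamp-proximity rule are illustrative choices, not the patent's method), is:

```python
from dataclasses import dataclass

@dataclass
class FramePacket:
    """One real-time transmission unit: a first-type image plus the
    obstacle data held in correspondence with it. Field names are
    illustrative assumptions."""
    frame_id: int
    timestamp: float
    image: bytes
    obstacle_points: list

def closest_obstacle_set(image_ts, obstacle_sets):
    """Pick the set of obstacle data that corresponds most closely
    (here: in time) to the currently received first-type image."""
    return min(obstacle_sets, key=lambda s: abs(s["timestamp"] - image_ts))

# Two obstacle data sets straddling an image taken at t = 1.0 s.
sets = [
    {"timestamp": 0.95, "points": [(0.5, 0.0, 5.0)]},
    {"timestamp": 1.30, "points": []},
]
best = closest_obstacle_set(1.0, sets)
```

A timestamp-proximity rule like this would give the display device one concrete way to choose, among multiple received groups of obstacle data, the group with the highest degree of correlation to the current image.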
It should be further noted that the principle and implementation of the display device 801 are similar to those of the foregoing embodiments; reference may be made to the description of the display device in the foregoing embodiments, and details are not repeated here for brevity.
It should likewise be noted that the principle and implementation of the unmanned aerial vehicle 802 are similar to those of the foregoing embodiments; reference may be made to the description of the unmanned aerial vehicle in the foregoing embodiments, and details are not repeated here.
It should be noted that, for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts or combinations, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium. The storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The obstacle information display method, display device, unmanned aerial vehicle, and system provided by the embodiments of the present invention are described in detail above. Specific examples are applied herein to explain the principles and implementations of the present invention, and the description of the embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (54)

1. An obstacle information display method is applied to a display device, wherein the display device is connected with an unmanned aerial vehicle through a wireless link, and the method comprises the following steps:
receiving a first type of image sent by the unmanned aerial vehicle, wherein the first type of image is obtained by shooting by a shooting device mounted on the unmanned aerial vehicle;
determining obstacle information corresponding to the first type of image, wherein the obstacle information includes information of obstacle points in a flight direction of the unmanned aerial vehicle, the information of the obstacle points includes information associated with the obstacle points, and the information associated with the obstacle points includes one or more of: a predicted time of collision with the obstacle point, and a feature identification of the obstacle point; the feature identification of the obstacle point comprises a feature object name corresponding to the obstacle point;
identifying the obstacle point on the first type of image according to the obstacle information;
predicting a virtual flight path according to the current flight direction of the unmanned aerial vehicle, wherein the virtual flight path is used for indicating the path along which the unmanned aerial vehicle will pass if it continues to fly in the current flight direction;
displaying a first type of image identifying the obstacle point and the virtual flight path, and displaying the information associated with the obstacle point on the first type of image identifying the obstacle point and the virtual flight path.
2. The method of claim 1, wherein the obstacle information corresponding to the first type of image is determined from obstacle data of the drone;
the obstacle data comprises data of obstacle points determined by the unmanned aerial vehicle according to a second type of image, wherein the second type of image comprises any one or more of a depth image, an infrared image and a red, green and blue image.
3. The method of claim 2, wherein the second type of image is acquired by an obstacle detection device on the drone other than the shooting device.
4. The method of claim 2 or 3, wherein said determining obstacle information corresponding to the first type of image comprises:
receiving obstacle data determined by the unmanned aerial vehicle and the corresponding relation between the obstacle data and the first type of images;
and determining obstacle information corresponding to the first type of image according to the obstacle data and the corresponding relation.
5. The method of claim 4, wherein the obstacle data includes a plurality of sets of obstacle data received by the display device;
the determining obstacle information corresponding to the first type of image according to the obstacle data and the corresponding relation includes:
determining the degree of correlation between the multiple groups of obstacle data and the currently received first type of image according to the corresponding relation;
and selecting the target obstacle data with the highest degree of correlation from the multiple groups of obstacle data, and predicting obstacle information corresponding to the currently received first type of image according to the target obstacle data.
6. The method of claim 5, wherein predicting obstacle information corresponding to the currently received first type of image based on the target obstacle data comprises:
performing first matching processing according to the target obstacle data and the currently received first type of image to determine the position parameters of the target obstacle data in the first type of image;
and determining obstacle information corresponding to the currently received first type of image according to the position parameters and the target obstacle data.
7. The method of claim 6, wherein prior to identifying the obstacle point on the first type of image based on the obstacle information, further comprising:
and carrying out segmentation processing on the first type of image so as to segment the first type of image into at least one segmentation region.
8. The method according to claim 7, wherein the difference between adjacent pixel values in the segmentation region is within a preset variation range, and the segmentation region is used for identifying the feature object in the first type of image.
9. The method of claim 7 or 8, wherein the method further comprises:
determining a segmentation area corresponding to the obstacle point according to the position parameter, and projecting the obstacle point to the determined segmentation area;
and if the number of obstacle points in the determined segmentation area is greater than a preset number threshold, highlighting the determined segmentation area.
10. The method of claim 1, wherein the method further comprises:
and determining an obstacle avoidance path of the unmanned aerial vehicle according to the obstacle information, and identifying the obstacle avoidance path in the first type of image, wherein the obstacle avoidance path is used for avoiding obstacles of obstacle points indicated by the obstacle information.
11. The method of claim 10, wherein there are a plurality of obstacle avoidance paths, and wherein, after determining the obstacle avoidance path of the drone according to the obstacle information and identifying the obstacle avoidance path in the first type of image, the method further comprises:
when a selection instruction for the obstacle avoidance path is received, determining the obstacle avoidance path indicated by the selection instruction as a target obstacle avoidance path;
and controlling the unmanned aerial vehicle to fly according to the target obstacle avoidance path.
12. The method of claim 10, wherein after determining an obstacle avoidance path for the drone based on the obstacle information, further comprising:
and controlling the unmanned aerial vehicle to fly according to the obstacle avoidance path, and displaying an obstacle avoidance prompt message on a display interface, wherein the obstacle avoidance prompt message is used for prompting a user that the unmanned aerial vehicle carries out obstacle avoidance processing.
13. The method of claim 1, wherein the method further comprises:
performing message prompt aiming at the obstacle point, wherein the message prompt comprises a voice prompt and/or a vibration prompt;
and the vibration prompt comprises a vibration prompt according to the position relation between the obstacle point and the unmanned aerial vehicle.
14. The method of claim 1, wherein the information associated with the obstacle point further includes a distance value of the drone from the obstacle point.
15. The method of claim 9, wherein said displaying a first type of image that identifies the obstacle point comprises:
displaying the determined segmentation area and the obstacle point in the determined segmentation area, and causing the determined segmentation area and/or the obstacle point in the determined segmentation area to vibrate in the first type of image;
alternatively, the displaying the first type of image identifying the obstacle point includes: framing the obstacle point in the first type of image, or displaying an imminent collision indication message at a position corresponding to the obstacle point, or marking the obstacle point with color in the first type of image.
16. An obstacle information display method, applied to an unmanned aerial vehicle on which a shooting device is mounted, the method comprising:
acquiring a first type of image through the shooting device;
obtaining obstacle data, wherein the obstacle data comprises data of obstacle points in the flight direction of the unmanned aerial vehicle;
sending the first type of images and the obstacle data to a display device, so that the display device can determine obstacle information corresponding to the first type of images according to the obstacle data, identify the obstacle point on the first type of images according to the obstacle information, predict a virtual flight path according to the current flight direction of the unmanned aerial vehicle, display the first type of images identifying the obstacle point and the virtual flight path, and display the information related to the obstacle point on the first type of images identifying the obstacle point and the virtual flight path;
the virtual flight path is used for indicating the path along which the unmanned aerial vehicle will pass if it continues to fly in the current flight direction; the obstacle information includes information of obstacle points in a flight direction of the unmanned aerial vehicle, the information of the obstacle points includes information associated with the obstacle points, and the information associated with the obstacle points includes one or more of: a predicted time of collision with the obstacle point, and a feature identification of the obstacle point; the feature identification of the obstacle point comprises a feature object name corresponding to the obstacle point.
17. The method of claim 16, wherein the drone mounts an obstacle detection device for obtaining obstacle data including data of obstacle points determined from a second type of image, wherein the second type of image includes any one or more of depth images, infrared images, red, green, and blue images.
18. The method of claim 17, wherein the method further comprises:
determining the corresponding relation between the obstacle data and the first type of image;
and sending the corresponding relation between the obstacle data and the first type of images to the display device.
19. The method of claim 18, wherein said determining a correspondence of said obstacle data to said first type of image comprises:
and determining the pose relationship between the shooting device and the obstacle detection device, and determining the corresponding relationship between the obstacle data and the first type of image according to the pose relationship.
20. The method according to claim 19, wherein the pose relationship includes a displacement relationship and a rotation relationship between the photographing device and the obstacle detecting device.
21. The method according to claim 19 or 20, wherein the determining the correspondence of the obstacle data to the first type of image according to the pose relationship includes:
extracting feature points of the first type image and feature points of the second type image, and generating feature description according to the feature points of the first type image and the feature points of the second type image;
and determining the corresponding relation between the obstacle data and the first type of image according to the feature description and the pose relation.
22. An obstacle information display method, applied to a display device and an unmanned aerial vehicle, wherein the display device is connected with the unmanned aerial vehicle, the method comprising:
the unmanned aerial vehicle acquires a first type of image and obstacle data, wherein the first type of image is acquired by shooting through a shooting device mounted on the unmanned aerial vehicle, and the obstacle data comprises data of obstacle points in the flight direction of the unmanned aerial vehicle;
the unmanned aerial vehicle sends the first type of images and the obstacle data to the display device;
the display device receives a first type of image of the unmanned aerial vehicle;
the display device determines obstacle information corresponding to the first type of image according to the obstacle data, wherein the obstacle information comprises information of obstacle points in the flight direction of the unmanned aerial vehicle, the information of the obstacle points comprises information associated with the obstacle points, and the information associated with the obstacle points comprises one or more of: a predicted time of collision with the obstacle point, and a feature identification of the obstacle point; the feature identification of the obstacle point comprises a feature object name corresponding to the obstacle point;
the display device identifies the obstacle point on the first type of image according to the obstacle information;
the display device predicts a virtual flight path according to the current flight direction of the unmanned aerial vehicle, and the virtual flight path is used for indicating the path along which the unmanned aerial vehicle will pass if it continues to fly in the current flight direction;
the display device displays a first type image which identifies the obstacle point and the virtual flight path, and displays the information associated with the obstacle point on the first type image which identifies the obstacle point and the virtual flight path.
23. The method of claim 22,
the obstacle data comprises data of obstacle points determined by the unmanned aerial vehicle according to a second type of image, wherein the second type of image comprises any one or more of a depth image, an infrared image and a red, green and blue image.
24. The method of claim 23, wherein the second type of image is acquired by an obstacle detection device on the drone other than the shooting device.
25. The method of any of claims 22-24, wherein the display device, prior to identifying the obstacle point on the first type of image based on the obstacle information, further comprises:
the display device carries out segmentation processing on the first type image so as to segment the first type image into at least one segmentation area.
26. The method according to claim 25, wherein the difference between adjacent pixel values in the segmented region is within a preset variation range, and the segmented region is used for identifying the feature object in the first type image.
27. The method of claim 25, wherein the method further comprises:
the display device determines a segmentation area corresponding to the obstacle point according to the position parameter and projects the obstacle point to the determined segmentation area;
and if the number of obstacle points in the determined segmentation area is greater than a preset number threshold, the display device highlights the determined segmentation area.
28. The method of claim 22, wherein the method further comprises:
the display device determines an obstacle avoidance path of the unmanned aerial vehicle according to the obstacle information, and identifies the obstacle avoidance path in the first type of image, wherein the obstacle avoidance path is used for avoiding obstacles of obstacle points indicated by the obstacle information.
29. The method of claim 28, wherein there are a plurality of obstacle avoidance paths, and wherein, after the display device determines the obstacle avoidance path of the drone according to the obstacle information and identifies the obstacle avoidance path in the first type of image, the method further comprises:
when the display device receives a selection instruction for the obstacle avoidance path, determining the obstacle avoidance path indicated by the selection instruction as a target obstacle avoidance path;
and the display device controls the unmanned aerial vehicle to fly according to the target obstacle avoidance path.
30. The method of claim 28, wherein after the display device determines the obstacle avoidance path of the drone according to the obstacle information, further comprising:
the display device controls the unmanned aerial vehicle to fly according to the obstacle avoidance path, and displays an obstacle avoidance prompt message on a display interface, wherein the obstacle avoidance prompt message is used for prompting a user that the unmanned aerial vehicle carries out obstacle avoidance processing.
31. The method of claim 22, wherein the method further comprises:
the display device carries out message prompt aiming at the obstacle point, wherein the message prompt comprises a voice prompt and/or a vibration prompt;
and the vibration prompt comprises a vibration prompt according to the position relation between the obstacle point and the unmanned aerial vehicle.
32. The method of claim 22, wherein the information associated with the obstacle point further includes a distance value of the drone from the obstacle point; the feature identification of the obstacle point comprises a feature object name corresponding to the obstacle point.
33. The method of claim 27, wherein the display device displays a first type of image identifying the obstacle point, comprising:
displaying the determined segmentation area and the obstacle point in the determined segmentation area, and causing the determined segmentation area and/or the obstacle point in the determined segmentation area to vibrate in the first type of image;
alternatively, the displaying the first type of image identifying the obstacle point includes: framing the obstacle point in the first type of image, or displaying an imminent collision indication message at a position corresponding to the obstacle point, or marking the obstacle point with color in the first type of image.
34. A display device, wherein the display device is connected with an unmanned aerial vehicle through a wireless link, and comprises: a memory and a processor;
the memory to store program instructions;
the processor is configured to execute the program instructions stored in the memory, and when executed, is configured to:
receiving a first type of image sent by the unmanned aerial vehicle, wherein the first type of image is obtained by shooting by a shooting device mounted on the unmanned aerial vehicle;
determining obstacle information corresponding to the first type of image, wherein the obstacle information includes information of obstacle points in a flight direction of the unmanned aerial vehicle, the information of the obstacle points includes information associated with the obstacle points, and the information associated with the obstacle points includes one or more of: a predicted time of collision with the obstacle point, and a feature identification of the obstacle point; the feature identification of the obstacle point comprises a feature object name corresponding to the obstacle point;
identifying the obstacle point on the first type of image according to the obstacle information;
predicting a virtual flight path according to the current flight direction of the unmanned aerial vehicle, wherein the virtual flight path is used for indicating the path along which the unmanned aerial vehicle will pass if it continues to fly in the current flight direction;
displaying a first type of image identifying the obstacle point and the virtual flight path, and displaying the information associated with the obstacle point on the first type of image identifying the obstacle point and the virtual flight path.
35. The apparatus of claim 34, wherein the obstacle information corresponding to the first type of image is determined from obstacle data of the drone;
the obstacle data comprises data of obstacle points determined by the unmanned aerial vehicle according to a second type of image, wherein the second type of image comprises any one or more of a depth image, an infrared image and a red, green and blue image.
36. The apparatus of claim 35, wherein the second type of image is acquired by an obstacle detection device on the drone other than the shooting device.
37. The apparatus according to claim 35 or 36, wherein the processor, when determining the obstacle information corresponding to the first type of image, is specifically configured to:
receiving obstacle data determined by the unmanned aerial vehicle and the corresponding relation between the obstacle data and the first type of images;
and determining obstacle information corresponding to the first type of image according to the obstacle data and the corresponding relation.
38. The apparatus of claim 36, wherein the obstacle data comprises a plurality of sets of obstacle data received by the display device;
the processor is specifically configured to, when determining obstacle information corresponding to the first type of image according to the obstacle data and the correspondence relationship:
determining the degree of correlation between the multiple groups of obstacle data and the currently received first type of image according to the corresponding relation;
and selecting the target obstacle data with the highest degree of correlation from the multiple groups of obstacle data, and predicting obstacle information corresponding to the currently received first type of image according to the target obstacle data.
39. The apparatus as claimed in claim 38, wherein the processor, when configured to predict the obstacle information corresponding to the currently received first type of image according to the target obstacle data, is specifically configured to:
carry out first re-projection processing on the target obstacle data and the currently received first type of image to determine the position parameters of the target obstacle data in the first type of image;
and determine obstacle information corresponding to the currently received first type of image according to the position parameters and the target obstacle data.
40. The apparatus of claim 39, wherein the processor is further configured to:
and carrying out segmentation processing on the first type of image so as to segment the first type of image into at least one segmentation region.
41. The apparatus of claim 40, wherein the difference between adjacent pixel values in the segmentation region is within a preset variation range, and the segmentation region is used for identifying the feature object in the first type of image.
42. The apparatus of claim 40 or 41, wherein the processor is further configured to:
determining a segmentation area corresponding to the obstacle point according to the position parameter, and projecting the obstacle point to the determined segmentation area;
and if the number of obstacle points in the determined segmentation area is greater than a preset number threshold, highlighting the determined segmentation area.
43. The apparatus of claim 34, wherein the processor is further configured to:
and determining an obstacle avoidance path of the unmanned aerial vehicle according to the obstacle information, and identifying the obstacle avoidance path in the first type of image, wherein the obstacle avoidance path is used for avoiding obstacles of obstacle points indicated by the obstacle information.
44. The apparatus of claim 43, wherein there are a plurality of obstacle avoidance paths, and wherein the processor, after determining an obstacle avoidance path for the drone from the obstacle information and identifying the obstacle avoidance path in the first type of image, is further configured to:
when a selection instruction for the obstacle avoidance path is received, determining the obstacle avoidance path indicated by the selection instruction as a target obstacle avoidance path;
and controlling the unmanned aerial vehicle to fly according to the target obstacle avoidance path.
45. The apparatus of claim 43, wherein the processor, after determining an obstacle avoidance path for the drone based on the obstacle information, is further configured to:
and controlling the unmanned aerial vehicle to fly according to the obstacle avoidance path, and displaying an obstacle avoidance prompt message on a display interface, wherein the obstacle avoidance prompt message is used for prompting a user that the unmanned aerial vehicle carries out obstacle avoidance processing.
46. The apparatus of claim 34, wherein the processor is further configured to:
performing message prompt aiming at the obstacle point, wherein the message prompt comprises a voice prompt and/or a vibration prompt;
and the vibration prompt comprises a vibration prompt according to the position relation between the obstacle point and the unmanned aerial vehicle.
47. The apparatus of claim 34, wherein the information associated with the obstacle point further comprises a distance value of the drone from the obstacle point; the feature identification of the obstacle point comprises a feature object name corresponding to the obstacle point.
48. An unmanned aerial vehicle on which a shooting device is mounted, wherein the unmanned aerial vehicle comprises a communication element, a memory, and a processor;
the communication element is used for communicating with the display device;
the memory to store program instructions;
the processor is configured to execute the program instructions stored in the memory, and when executed, is configured to:
acquiring a first type of image through the shooting device;
obtaining obstacle data, wherein the obstacle data comprises data of obstacle points in the flight direction of the unmanned aerial vehicle;
sending the first type of images and the obstacle data to a display device through the communication element, so that the display device determines obstacle information corresponding to the first type of images according to the obstacle data, identifies the obstacle point on the first type of images according to the obstacle information, predicts a virtual flight path according to the current flight direction of the unmanned aerial vehicle, displays the first type of images identifying the obstacle point and the virtual flight path, and displays the information associated with the obstacle point on the first type of images identifying the obstacle point and the virtual flight path;
the virtual flight path is used for indicating the path along which the unmanned aerial vehicle will pass if it continues to fly in the current flight direction; the obstacle information includes information of obstacle points in a flight direction of the unmanned aerial vehicle, the information of the obstacle points includes information associated with the obstacle points, and the information associated with the obstacle points includes one or more of: a predicted time of collision with the obstacle point, and a feature identification of the obstacle point.
49. The unmanned aerial vehicle of claim 48, wherein the unmanned aerial vehicle is equipped with an obstacle detection device configured to obtain the obstacle data, the obstacle data comprising data of obstacle points determined from a second type of image, wherein the second type of image comprises any one or more of a depth image, an infrared image, and a red-green-blue image.
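In the simplest case, determining obstacle points from a depth image as in claim 49 amounts to picking out pixels whose measured depth falls below a safety range. The sketch below assumes the depth image is a plain nested list of metre values with 0 meaning "no return"; the function name and threshold are illustrative additions, not the patent's method.

```python
def obstacle_points_from_depth(depth, max_range=5.0):
    """Collect (row, col, depth) candidates for obstacle points:
    every valid pixel closer than max_range metres, nearest first.
    A depth of 0 is treated as 'no measurement'."""
    points = []
    for r, row in enumerate(depth):
        for c, d in enumerate(row):
            if 0 < d < max_range:
                points.append((r, c, d))
    points.sort(key=lambda p: p[2])  # nearest obstacles first
    return points
```

A real detector would additionally cluster neighbouring pixels into one obstacle and filter sensor noise; this sketch only shows the thresholding step.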
50. The unmanned aerial vehicle of claim 49, wherein the unmanned aerial vehicle is further configured to:
determine the correspondence between the obstacle data and the first type of image; and
send the correspondence between the obstacle data and the first type of image to the display device through the communication element.
51. The unmanned aerial vehicle of claim 50, wherein, when determining the correspondence between the obstacle data and the first type of image, the processor is specifically configured to:
determine the pose relationship between the shooting device and the obstacle detection device, and determine the correspondence between the obstacle data and the first type of image according to the pose relationship.
52. The unmanned aerial vehicle of claim 51, wherein the pose relationship comprises a displacement relationship and a rotation relationship between the shooting device and the obstacle detection device.
53. The unmanned aerial vehicle of claim 51 or 52, wherein, when determining the correspondence between the obstacle data and the first type of image according to the pose relationship, the processor is specifically configured to:
extract feature points of the first type of image and feature points of the second type of image, and generate feature descriptions according to the feature points of the first type of image and the feature points of the second type of image; and
determine the correspondence between the obstacle data and the first type of image according to the feature descriptions and the pose relationship.
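The pose relationship of claims 51-52 (a displacement plus a rotation between the shooting device and the obstacle detection device) is what lets an obstacle point measured in one device's coordinate frame be marked on the other device's image. A minimal sketch of that mapping, assuming a pinhole camera model with an intrinsic matrix K (K and the function name are illustrative assumptions, not part of the claims):

```python
def obstacle_to_pixel(point_det, R, t, K):
    """Map a 3-D obstacle point from the obstacle detection device's
    frame into pixel coordinates of the shooting device's image.
    R (3x3 rotation, list of rows) and t (displacement) encode the
    pose relationship; K is the shooting device's intrinsic matrix."""
    # Rigid transform into the shooting device's coordinate frame.
    p_cam = [sum(R[i][j] * point_det[j] for j in range(3)) + t[i]
             for i in range(3)]
    if p_cam[2] <= 0:
        return None  # behind the camera: not visible in the image
    # Pinhole projection: homogeneous coordinates, then divide by depth.
    uvw = [sum(K[i][j] * p_cam[j] for j in range(3)) for i in range(3)]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

For an aligned pair (identity rotation, zero displacement) with focal length 100 px and principal point (320, 240), a point at (1, 2, 10) metres maps to pixel (330, 260); the feature matching of claim 53 would then refine this geometric estimate.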
54. A system, comprising:
the display device of any one of claims 34-47;
the unmanned aerial vehicle of any one of claims 48-53.
CN201780006058.9A 2017-10-31 2017-10-31 Obstacle information display method, display device, unmanned aerial vehicle and system Expired - Fee Related CN108521808B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/108671 WO2019084797A1 (en) 2017-10-31 2017-10-31 Obstacle information display method, display device, unmanned aerial vehicle, and system

Publications (2)

Publication Number Publication Date
CN108521808A CN108521808A (en) 2018-09-11
CN108521808B true CN108521808B (en) 2021-12-07

Family

ID=63434477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780006058.9A Expired - Fee Related CN108521808B (en) 2017-10-31 2017-10-31 Obstacle information display method, display device, unmanned aerial vehicle and system

Country Status (2)

Country Link
CN (1) CN108521808B (en)
WO (1) WO2019084797A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020062356A1 (en) * 2018-09-30 2020-04-02 深圳市大疆创新科技有限公司 Control method, control apparatus, control terminal for unmanned aerial vehicle
CN110892353A (en) * 2018-09-30 2020-03-17 深圳市大疆创新科技有限公司 Control method, control device and control terminal of unmanned aerial vehicle
CN109583384A (en) * 2018-11-30 2019-04-05 百度在线网络技术(北京)有限公司 Barrier-avoiding method and device for automatic driving car
CN110244760A (en) * 2019-06-06 2019-09-17 深圳市道通智能航空技术有限公司 A kind of barrier-avoiding method, device and electronic equipment
CN111813142A (en) * 2019-07-18 2020-10-23 中国石油化工股份有限公司 Unmanned aerial vehicle autonomous obstacle avoidance control method for crude oil pipeline inspection
WO2021087782A1 (en) * 2019-11-05 2021-05-14 深圳市大疆创新科技有限公司 Obstacle detection method and system, ground end device, and autonomous mobile platform
CN111968376B (en) * 2020-08-28 2022-06-28 北京市商汤科技开发有限公司 Road condition prompting method and device, electronic equipment and storage medium
CN112357100B (en) * 2020-10-27 2022-06-28 苏州臻迪智能科技有限公司 Method, device and computer-readable storage medium for displaying obstacle information
CN112362077A (en) * 2020-11-13 2021-02-12 歌尔光学科技有限公司 Head-mounted display device, obstacle avoidance method thereof and computer-readable storage medium
CN112977441A (en) * 2021-03-03 2021-06-18 恒大新能源汽车投资控股集团有限公司 Driving decision method and device and electronic equipment
CN113612920A (en) * 2021-06-23 2021-11-05 广西电网有限责任公司电力科学研究院 Method and device for shooting power equipment image by unmanned aerial vehicle
WO2023115390A1 (en) * 2021-12-22 2023-06-29 深圳市大疆创新科技有限公司 Image processing method and device, movable platform, control terminal, and system
CN114415726B (en) * 2022-01-18 2023-01-03 江苏锐天智能科技股份有限公司 Unmanned aerial vehicle obstacle avoidance control system and method based on image analysis
WO2023184487A1 (en) * 2022-04-01 2023-10-05 深圳市大疆创新科技有限公司 Unmanned aerial vehicle obstacle avoidance method and apparatus, unmanned aerial vehicle, remote control device and storage medium
CN117170411B (en) * 2023-11-02 2024-02-02 山东环维游乐设备有限公司 Vision assistance-based auxiliary obstacle avoidance method for racing unmanned aerial vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1134896A (en) * 1995-02-09 1996-11-06 大宇电子株式会社 Method for avoiding collision of vehicle and apparatus for performing the same
CN104597910A (en) * 2014-11-27 2015-05-06 中国人民解放军国防科学技术大学 Instantaneous impact point based unmanned aerial vehicle non-collaborative real-time obstacle avoidance method
CN105955298A (en) * 2016-06-03 2016-09-21 腾讯科技(深圳)有限公司 Automatic obstacle avoidance method and apparatus for aircraft
CN106155091A (en) * 2016-08-31 2016-11-23 众芯汉创(北京)科技有限公司 A kind of unmanned plane barrier-avoiding method and device
CN106650708A (en) * 2017-01-19 2017-05-10 南京航空航天大学 Visual detection method and system for automatic driving obstacles
CN107077145A (en) * 2016-09-09 2017-08-18 深圳市大疆创新科技有限公司 Show the method and system of the obstacle detection of unmanned vehicle

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1105954C (en) * 1999-07-02 2003-04-16 贾敏忠 Route planning, terrain evading and fly environment warming system for general-purpose aviation
US6798423B2 (en) * 2001-10-11 2004-09-28 The Boeing Company Precision perspective flight guidance symbology system
EP1462767B1 (en) * 2003-02-27 2014-09-03 The Boeing Company Aircraft guidance system and method providing perspective flight guidance
JP2011065202A (en) * 2009-09-15 2011-03-31 Hitachi Ltd Autonomous mobile device
CN101923789B (en) * 2010-03-24 2011-11-16 北京航空航天大学 Safe airplane approach method based on multisensor information fusion
CN102259618B (en) * 2010-05-25 2015-04-22 德尔福(中国)科技研发中心有限公司 Warning treatment method for fusion of vehicle backward ultrasonic and camera
CN103072537B (en) * 2013-02-05 2014-03-26 湖南大学 Automotive collision avoidance safety protecting method based on infrared image processing
US20150106005A1 (en) * 2013-10-14 2015-04-16 Gulfstream Aerospace Corporation Methods and systems for avoiding a collision between an aircraft on a ground surface and an obstacle
US9558408B2 (en) * 2013-10-15 2017-01-31 Ford Global Technologies, Llc Traffic signal prediction
US9297668B2 (en) * 2014-01-27 2016-03-29 Honeywell International Inc. System and method for displaying flight path information in rotocraft
WO2016084359A1 (en) * 2014-11-26 2016-06-02 Ricoh Company, Ltd. Imaging device, object detector and mobile device control system
CN104808682B (en) * 2015-03-10 2017-12-29 成都优艾维智能科技有限责任公司 Small-sized rotor wing unmanned aerial vehicle automatic obstacle avoiding flight control method
CN105511478B (en) * 2016-02-23 2019-11-26 百度在线网络技术(北京)有限公司 Applied to the control method of sweeping robot, sweeping robot and terminal
CN106292704A (en) * 2016-09-07 2017-01-04 四川天辰智创科技有限公司 The method and device of avoiding barrier
CN106168810A (en) * 2016-09-18 2016-11-30 中国空气动力研究与发展中心高速空气动力研究所 A kind of unmanned plane during flying obstacle avoidance system based on RTK and method
WO2018053815A1 (en) * 2016-09-23 2018-03-29 深圳市大疆创新科技有限公司 Information notification method applicable to remote control, and remote control
CN106843251A (en) * 2017-02-20 2017-06-13 上海大学 Crowded crowd promptly dredges unmanned plane


Also Published As

Publication number Publication date
CN108521808A (en) 2018-09-11
WO2019084797A1 (en) 2019-05-09

Similar Documents

Publication Publication Date Title
CN108521808B (en) Obstacle information display method, display device, unmanned aerial vehicle and system
CN108702444B (en) Image processing method, unmanned aerial vehicle and system
US9965701B2 (en) Image processing apparatus and method
JP5740884B2 (en) AR navigation for repeated shooting and system, method and program for difference extraction
CN106406343B (en) Control method, device and system of unmanned aerial vehicle
US9412202B2 (en) Client terminal, server, and medium for providing a view from an indicated position
US20190356936A9 (en) System for georeferenced, geo-oriented realtime video streams
WO2018176376A1 (en) Environmental information collection method, ground station and aircraft
CN110910460B (en) Method and device for acquiring position information and calibration equipment
TW201823983A (en) Method and system for creating virtual message onto a moving object and searching the same
US9418299B2 (en) Surveillance process and apparatus
US20210112207A1 (en) Method, control apparatus and control system for controlling an image capture of movable device
CN112207821B (en) Target searching method of visual robot and robot
CN114943773A (en) Camera calibration method, device, equipment and storage medium
CN112399084A (en) Unmanned aerial vehicle aerial photography method and device, electronic equipment and readable storage medium
JP2016118994A (en) Monitoring system
TW201722145A (en) 3D video surveillance system capable of automatic camera dispatching function, and surveillance method for using the same
CN110930437B (en) Target tracking method and device
CN112613358A (en) Article identification method, article identification device, storage medium, and electronic device
CN112106112A (en) Point cloud fusion method, device and system and storage medium
KR101781158B1 (en) Apparatus for image matting using multi camera, and method for generating alpha map
JP7437930B2 (en) Mobile objects and imaging systems
KR101954711B1 (en) Modification Method Of Building Image
CN113597596A (en) Target calibration method, device and system and remote control terminal of movable platform
EP3287912A1 (en) Method for creating location-based space object, method for displaying space object, and application system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211207