WO2021146971A1 - Flight control method, apparatus and device based on passable airspace judgment - Google Patents

Flight control method, apparatus and device based on passable airspace judgment

Info

Publication number
WO2021146971A1
WO2021146971A1 (PCT/CN2020/073658, CN2020073658W)
Authority
WO
WIPO (PCT)
Prior art keywords
unmanned aerial vehicle
interest
airspace
image
Prior art date
Application number
PCT/CN2020/073658
Other languages
English (en)
French (fr)
Inventor
刘宝恩
李鑫超
王涛
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/073658 priority Critical patent/WO2021146971A1/zh
Priority to CN202080004232.8A priority patent/CN112585555A/zh
Publication of WO2021146971A1 publication Critical patent/WO2021146971A1/zh

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/08 - Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0808 - Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 - Simultaneous control of position or course in three dimensions
    • G05D1/101 - Simultaneous control of position or course in three dimensions specially adapted for aircraft

Definitions

  • This application relates to the field of flight technology, and in particular to a flight control method, device and equipment based on the judgment of the passable airspace.
  • Unmanned aerial vehicles can be used for street-scene shooting and power-line inspection.
  • The unmanned aerial vehicle is equipped with a radar sensor for obstacle detection; the position of an obstacle relative to the unmanned aerial vehicle can be obtained from the radar detection result, so that the unmanned aerial vehicle can avoid the obstacle during flight.
  • the UAV needs to judge whether it can pass.
  • the embodiments of the present application provide a flight control method, device, and equipment based on the judgment of the passable airspace to solve the problem of how to implement a low-cost flight control method that can avoid obstacles in the prior art.
  • an embodiment of the present application provides a flight control method based on a passable airspace judgment, the method including:
  • If there is no object of the preset category in the region of interest, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest.
  • an embodiment of the present application provides a flight control device based on a passable airspace judgment, the device including: a memory and a processor;
  • the memory is used to store program code
  • the processor calls the program code, and when the program code is executed, is used to perform the following operations:
  • If there is no object of the preset category in the region of interest, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest.
  • an embodiment of the present application provides an unmanned aerial vehicle, including: a fuselage, and a power system, an image acquisition device, and a flight control device provided on the fuselage;
  • the power system is used to provide power for the unmanned aerial vehicle
  • the image acquisition device is used to acquire environmental images of the passage direction of the unmanned aerial vehicle
  • the flight control device includes a memory and a processor
  • the memory is used to store program code
  • the processor calls the program code, and when the program code is executed, is used to perform the following operations:
  • If there is no object of the preset category in the region of interest, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest.
  • An embodiment of the present application provides a computer-readable storage medium that stores a computer program; the computer program includes at least one piece of code, and the at least one piece of code can be executed by a computer to control the computer to execute the method described in any one of the above-mentioned first aspects.
  • an embodiment of the present application provides a computer program, when the computer program is executed by a computer, it is used to implement the method described in any one of the above-mentioned first aspects.
  • the embodiments of the present application provide a flight control method, device and equipment based on the judgment of the passable airspace.
  • By acquiring an environment image of the passage direction of the unmanned aerial vehicle, the region of interest in the environment image corresponding to the airspace of interest used for the passage judgment of the unmanned aerial vehicle is determined, and it is determined whether there are objects of preset categories in the region of interest.
  • If there is no object of a preset category in the region of interest, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest, realizing obstacle-avoiding control based on the environment image of the unmanned aerial vehicle. Compared with flight control based on radar detection of obstacles, the cost of the image acquisition device is lower, and an image acquisition device is usually already provided on the unmanned aerial vehicle, thereby providing a low-cost flight control method that can avoid obstacles.
  • FIG. 1 is a schematic diagram of an application scenario of a flight control method based on a passable airspace judgment provided by an embodiment of the application;
  • FIG. 2 is a schematic flowchart of a flight control method based on a passable airspace judgment provided by an embodiment of this application;
  • FIG. 3 is a schematic diagram of an airspace of interest provided by an embodiment of this application.
  • FIG. 4 is a schematic flowchart of a flight control method based on a passable airspace judgment according to another embodiment of the application;
  • 5 to 6 are schematic diagrams of the relationship between the field of view and the airspace of interest of the image acquisition device provided by the embodiments of this application;
  • FIG. 7 is a schematic diagram of obstacles provided in an embodiment of the application affecting the upward flight of the unmanned aerial vehicle
  • FIG. 8 is a schematic diagram of an airspace of interest provided by another embodiment of this application.
  • FIG. 9 is a schematic diagram of an airspace of interest provided by another embodiment of this application.
  • FIG. 10A is an environment image provided by an embodiment of this application.
  • FIG. 10B is a schematic diagram of the semantic recognition result and the region of interest based on the environmental image shown in FIG. 10A;
  • FIG. 11 is a schematic flowchart of a flight control method based on a passable airspace judgment according to another embodiment of this application.
  • FIG. 12 is a schematic diagram of an unmanned aerial vehicle provided by an embodiment of the application rotating around a first position
  • FIG. 13 is a schematic structural diagram of a flight control device based on a passable airspace judgment according to an embodiment of the application;
  • FIG. 14 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the application.
  • the flight control method based on the judgment of the passable airspace provided in the embodiments of the present application can be applied to the unmanned aerial vehicle 10 shown in FIG. 1, and the unmanned aerial vehicle 10 may include an image acquisition device 11 and a controller 12.
  • the image acquisition device 11 can be used to collect environmental images of the passage direction of the unmanned aerial vehicle 10
  • The controller 12 can acquire the environment images of the passage direction of the unmanned aerial vehicle 10 from the image acquisition device 11, and process the acquired environment images using the method provided in the embodiments of this application to control the flight of the unmanned aerial vehicle 10.
  • the passage direction may refer to the target flight direction of the unmanned aerial vehicle 10, for example, the passage direction may be above the unmanned aerial vehicle.
  • The image acquisition device 11 may specifically be a camera, a video camera, or the like.
  • The controller 12 may implement the method provided in the embodiments of the present application; when it is determined that the airspace corresponding to the passage direction is passable, the unmanned aerial vehicle is controlled to pass.
  • For example, a flight control instruction sent by the control terminal of the unmanned aerial vehicle 10 may trigger the condition requiring the unmanned aerial vehicle 10 to fly along the passage direction. The flight control instruction may be, for example, a return-to-home instruction, and the control terminal may be, for example, a smart phone, a remote control, etc.
  • a specific event detected by the unmanned aerial vehicle 10 may trigger a condition that needs to control the unmanned aerial vehicle 10 to fly along the passage direction.
  • the specific event may be, for example, an obstacle event, an image transmission interruption event, and the like.
  • The foregoing takes the case where the UAV includes the image acquisition device 11 as an example. It is understandable that the UAV may not include the image acquisition device 11; instead, the image acquisition device 11 can be mounted on the unmanned aerial vehicle as a payload of the UAV.
  • the flight control method based on the judgment of the passable airspace is applied to an unmanned aerial vehicle as an example. It is understandable that the flight control method provided by the embodiments of the present application can also be applied to the control terminal of the unmanned aerial vehicle.
  • The control terminal can obtain the environment image of the passage direction of the unmanned aerial vehicle from the image acquisition device provided on the unmanned aerial vehicle, and process the acquired environment image using the method provided in the embodiments of the present application to control the flight of the unmanned aerial vehicle 10.
  • The flight control method based on passable-airspace judgment provided by the embodiments of the present application obtains an environment image of the passage direction of the unmanned aerial vehicle, determines the region of interest in the environment image corresponding to the airspace of interest used by the unmanned aerial vehicle for passage judgment, and determines whether there are objects of preset categories in the region of interest. If there are none, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest, thereby realizing flight control that avoids obstacles based on the environment image of the unmanned aerial vehicle.
  • FIG. 2 is a schematic flowchart of a flight control method based on a passable airspace judgment provided by an embodiment of this application.
  • the execution subject of this embodiment may be the controller in FIG. 1.
  • the method of this embodiment may include:
  • Step 201 Obtain an environment image of the traveling direction of the unmanned aerial vehicle.
  • the passage direction may refer to the target flight direction of the unmanned aerial vehicle. According to different flight requirements, the passage direction may be, for example, above the UAV, in front of the UAV, and so on.
  • the image acquisition device provided on the unmanned aerial vehicle may collect environmental images of the traveling direction of the unmanned aerial vehicle. Taking the passage direction above the unmanned aerial vehicle as an example, the image acquisition device may specifically be a top-view camera. Taking the passage direction in front of the unmanned aerial vehicle as an example, the image acquisition device may specifically be a front-view camera.
  • the image acquisition device may be set on an unmanned aerial vehicle through a pan/tilt, so that the image acquisition device can move relative to the body of the unmanned aerial vehicle to change the field of view of the image acquisition device.
  • For example, the unmanned aerial vehicle can be equipped with a camera capable of looking upward and forward.
  • the image acquisition device may specifically be a monocular camera, and the cost can be further reduced by using the image acquisition device as a monocular camera.
  • the direction of view of the image acquisition device may be consistent with the direction of passage of the unmanned aerial vehicle.
  • the view direction of the image acquisition device may be the same as the passage direction, or the view direction of the image acquisition device may be different from the passage direction by a certain angle.
  • In some embodiments, the environment image may be a single image acquired by the image acquisition device; when a single image cannot cover the entire airspace of interest, the environment image may instead be a combined image of multiple images with different viewing directions acquired by the image acquisition device.
  • Step 202 Determine an area of interest in the environment image that corresponds to the airspace of interest used by the UAV for passage determination.
  • The airspace of interest is the airspace used by the unmanned aerial vehicle for passage judgment. Based on the judgment result of the airspace of interest, it can be determined whether the unmanned aerial vehicle can pass through the airspace of interest, that is, whether it can fly in the direction corresponding to the airspace of interest. It should be noted that the direction corresponding to the airspace of interest may be the target flight direction of the unmanned aerial vehicle, or it may be a direction other than the target flight direction.
  • For example, the airspace corresponding to the target flight direction can be regarded as the airspace of interest. In addition, a passable airspace can be sought between obstacles, which is beneficial to improving the flexibility of the UAV in avoiding obstacles.
  • The judgment result of the airspace of interest may specifically be that the unmanned aerial vehicle can pass through the airspace of interest, or that it cannot. When the unmanned aerial vehicle can pass through the airspace of interest, the airspace of interest is a passable airspace; when it cannot, the airspace of interest is an impassable airspace.
  • the airspace of interest may be as shown in FIG. 3, for example, where O represents the position of the unmanned aerial vehicle, and the airspace within the curve is the airspace of interest. It should be noted that the shape of the airspace of interest in Fig. 3 is only an example.
  • the environment image may include not only the image corresponding to the airspace of interest, but also images outside the airspace of interest, it is necessary to determine the region of interest in the environment image corresponding to the airspace of interest.
  • the image of the region of interest in the environmental image can be understood as the image of the airspace of interest from the perspective of the UAV.
  • Step 203 Determine whether there is an object of a preset category in the region of interest.
  • the objects of the preset category may refer to objects belonging to the obstacle category that need to be detected.
  • the preset category may specifically be a tree category or a building category.
  • When there is no object of the preset category in the region of interest, the airspace of interest can be characterized as a passable airspace; when there are objects of the preset category in the region of interest, the airspace of interest can be characterized as an impassable airspace.
  • In one implementation, image recognition may first be performed on the entire environment image to obtain a recognition result of objects of the preset category in the environment image, and it is then determined from this recognition result whether there are objects of the preset category in the region of interest. In another implementation, image recognition may be performed directly on the image of the region of interest in the environment image to obtain a recognition result of objects of the preset category in the region of interest, so as to determine whether there are objects of the preset category in the region of interest.
  • Step 204 If there is no object of the preset category in the interest area, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest.
  • According to the result of the determination in step 203, if there is no object of the preset category in the region of interest, the unmanned aerial vehicle can be controlled to pass through the airspace of interest.
  • For example, the airspace of interest may specifically be the airspace above the unmanned aerial vehicle; when the airspace of interest is a passable airspace, the unmanned aerial vehicle can be controlled to fly upward. As another example, the airspace of interest may specifically be the airspace in front of the unmanned aerial vehicle; when the airspace of interest is a passable airspace, the unmanned aerial vehicle can be controlled to fly forward.
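The decision flow of steps 201 to 204 can be sketched as follows; this is an illustrative sketch only, and every callable name here is a hypothetical placeholder for the corresponding step described above, not an API from this application.

```python
def control_flight(acquire_image, find_roi, detect_preset_objects, fly, hover):
    """Sketch of steps 201-204; all callables are hypothetical stand-ins."""
    env_image = acquire_image()      # Step 201: environment image of the passage direction
    roi = find_roi(env_image)        # Step 202: region of interest for the airspace of interest
    if detect_preset_objects(roi):   # Step 203: preset-category objects present?
        hover()                      # e.g. hover or land (control here is not limited by this application)
        return False                 # the airspace of interest is impassable
    fly()                            # Step 204: passable airspace, pass through it
    return True

# Stubbed usage: no preset-category object is found, so the UAV passes.
passable = control_flight(
    acquire_image=lambda: "env_image",
    find_roi=lambda img: "roi",
    detect_preset_objects=lambda roi: False,
    fly=lambda: None,
    hover=lambda: None,
)
```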
  • In this embodiment, by acquiring an environment image of the passage direction of the unmanned aerial vehicle, the region of interest in the environment image corresponding to the airspace of interest used by the unmanned aerial vehicle for passage judgment is determined, and it is determined whether there is an object of a preset category in the region of interest. If there is none, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest, realizing flight control that avoids obstacles based on the environment image of the unmanned aerial vehicle. The cost of the image acquisition device is relatively low, and the UAV is usually already equipped with an image acquisition device, thus providing a low-cost flight control method that can avoid obstacles.
  • When the environment image is a combined image of multiple images with different viewing directions, the viewing range of the image acquisition device can be changed through the pan/tilt while acquiring the environment image of the passage direction, so as to obtain multiple images with different viewing directions; alternatively, multiple images with different viewing directions can be obtained by changing the attitude of the unmanned aerial vehicle.
  • step 201 may specifically include: acquiring an environment image of the traveling direction of the unmanned aerial vehicle during the flight of the unmanned aerial vehicle in a preset flight mode. Specifically, during the flight of the unmanned aerial vehicle in the preset flight mode, multiple images collected by the image acquisition device are acquired, and the multiple images are merged to obtain the environmental image. It is understandable that the purpose of the unmanned aerial vehicle flying in the preset flight mode is to obtain an environmental image including the corresponding image of the airspace of interest, and the preset flight mode can be flexibly implemented according to requirements.
  • FIG. 4 is a schematic flowchart of a flight control method based on a passable airspace judgment provided by another embodiment of this application.
  • This embodiment provides an optional implementation method on the basis of the embodiment shown in FIG. 2.
  • the method of this embodiment may include:
  • Step 401 Acquire an environmental image of the traveling direction of the unmanned aerial vehicle while the unmanned aerial vehicle rotates in situ by a first target angle at the first position.
  • the single image acquired by the image acquisition device may not contain the image of the entire airspace of interest, so the unmanned aerial vehicle can be rotated in situ at the first position.
  • the aforementioned preset flying mode includes rotating the first target angle in situ at the first position.
  • the relationship between the initial field of view of the image acquisition device of the unmanned aerial vehicle at the first position and the airspace of interest can be shown in Figure 5. It can be seen that a single image acquired by the image acquisition device can only contain part of the image of the airspace of interest. On the basis of FIG. 5, as shown in FIG. 6, during the process of the unmanned aerial vehicle rotating in situ at the first position, the field of view of the image acquisition device can be continuously changed to obtain images of other parts of the airspace of interest.
  • the direction of the arrow in FIGS. 5 and 6 may indicate the orientation of the unmanned aerial vehicle, and the change of the arrow direction indicates the change of the orientation when the unmanned aerial vehicle rotates in place.
  • the range corresponding to the two straight lines in FIG. 5 and FIG. 6 represents the field of view of the image acquisition device.
  • the clockwise rotation direction in Fig. 6 is only an example.
  • In this embodiment, the airspace of interest is a circular airspace as an example, so the first target angle can be 360°; that is, the environment image of the passage direction of the unmanned aerial vehicle is acquired while the unmanned aerial vehicle rotates one full circle in place at the first position. It is understandable that when the airspace of interest has another shape, such as a semicircular airspace, the first target angle may be less than 360°, for example 180°.
  • the unmanned aerial vehicle may respond to a flight control instruction to cause the unmanned aerial vehicle to fly in the preset flight mode.
  • The first position may be the position of the UAV when the flight control instruction is generated, and it may be any position during the entire flight of the UAV.
  • In one implementation, step 401 may specifically include: during the in-place rotation of the UAV, acquiring multiple first images collected at intervals of a preset angle, and merging the multiple first images to obtain the environment image.
  • The preset angle can be set flexibly according to requirements. Specifically, the larger the preset angle, the smaller the number of first images collected and the less the overlapping content between first images collected at adjacent moments; the smaller the preset angle, the greater the number of first images collected and the more the overlapping content between first images collected at adjacent moments.
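As a rough illustration of this trade-off, the number of first images collected over the first target angle at a given sampling interval can be computed as follows (the angle values in the example are assumptions, not values specified by this application):

```python
import math

def num_first_images(first_target_angle_deg: float, preset_angle_deg: float) -> int:
    """Number of first images collected at intervals of the preset angle
    while the UAV rotates in place through the first target angle."""
    return math.ceil(first_target_angle_deg / preset_angle_deg)

# A full in-place rotation (360°) sampled every 30° yields 12 images;
# halving the interval to 15° doubles the count, increasing the overlap
# between first images collected at adjacent moments.
print(num_first_images(360, 30))  # 12
print(num_first_images(360, 15))  # 24
```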
  • During the in-place rotation, the position of the image acquisition device relative to the fuselage of the UAV may remain unchanged.
  • Step 402 Determine an area of interest in the environment image corresponding to the airspace of interest used by the UAV for passage determination.
  • Since the positions of non-fixed obstacles (such as birds or other hovering drones) change during the flight of the unmanned aerial vehicle, while the environment image records the scene content of the passage direction of the UAV at a specific moment, the obstacles targeted by the method provided in the embodiments of the present application are mainly fixed obstacles (such as trees).
  • The airspace of interest may specifically be an annular airspace; the annular airspace may be as shown in FIG. 8, for example.
  • The maximum distance between the center of the fuselage of the unmanned aerial vehicle and the wing edge of the unmanned aerial vehicle determines the smallest airspace through which the unmanned aerial vehicle can pass, that is, the smallest possible passable airspace for the unmanned aerial vehicle under ideal conditions. Since UAVs are inevitably affected by air currents during flight and cannot fly along a perfectly straight line, the airspace of interest can include not only the airspace corresponding to the maximum distance, but also the airspace outside the airspace corresponding to the maximum distance.
  • the inner ring diameter of the airspace of interest may be equal to the maximum distance between the center of the fuselage of the unmanned aerial vehicle and the edge of the wing of the unmanned aerial vehicle.
  • the diameter of the inner ring of the airspace of interest is equal to the maximum distance between the center of the unmanned aerial vehicle's fuselage and the edge of the wing of the unmanned aerial vehicle, so that the requirement on the field of view of the image acquisition device can be reduced to the greatest extent.
  • In other embodiments, the inner ring diameter of the airspace of interest may also be smaller than the maximum distance. It can be understood that the smaller the inner ring diameter is relative to the maximum distance, the higher the requirement on the field of view of the image acquisition device and the greater the impact of non-fixed obstacles.
  • The diameter of the outer ring of the airspace of interest may be equal to the sum of the inner ring diameter of the airspace of interest and a certain distance; for example, it may be equal to the sum of the inner ring diameter and the disturbance distance.
  • the disturbance distance may refer to the distance that the UAV deviates from the flight direction due to the influence of the air flow and the like during the flight.
  • the magnitude of the disturbance distance may be related to the target flight distance of the unmanned aerial vehicle along the flight direction. Of course, the magnitude of the disturbance distance may also be related to other factors, which can be implemented flexibly according to specific requirements.
  • The outer ring diameter L1, the inner ring diameter L2, and the disturbance distance of the annular airspace of interest can be as shown in FIG. 9, where H may represent the unmanned aerial vehicle, D may represent the disturbance distance, and O may represent the position of the unmanned aerial vehicle corresponding to the environment image; O in this embodiment may specifically be the first position.
  • In other embodiments, the outer ring diameter of the airspace of interest may also be greater than the sum of the inner ring diameter of the airspace of interest and the disturbance distance. It is understandable that the more the outer ring diameter exceeds that sum, the higher the requirement on the field of view of the image acquisition device and the greater the impact of non-fixed obstacles. Based on the above judgment process, the presence of obstacles on the flight path of the unmanned aerial vehicle and the existence of a passable path can be judged at the same time; in particular, it becomes possible to directly judge whether the UAV can pass through a narrow passage.
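A minimal sketch of the ring sizing described above, assuming the inner ring diameter equals the maximum fuselage-center-to-wing-edge distance and the outer ring diameter equals the inner diameter plus the disturbance distance D; the proportional model relating D to the target flight distance is an assumption for illustration, not a formula from this application.

```python
def airspace_of_interest_diameters(max_fuselage_to_wing_edge: float,
                                   target_flight_distance: float,
                                   disturbance_per_meter: float = 0.01):
    """Inner/outer ring diameters (L2, L1) of the annular airspace of interest.

    Inner ring diameter L2: maximum distance between the fuselage center and
    the wing edge (the smallest passable airspace under ideal conditions).
    Outer ring diameter L1: L2 plus the disturbance distance D, which here is
    assumed proportional to the target flight distance along the passage
    direction.
    """
    inner = max_fuselage_to_wing_edge                             # L2
    disturbance = disturbance_per_meter * target_flight_distance  # D (assumed model)
    outer = inner + disturbance                                   # L1 = L2 + D
    return inner, outer

# Hypothetical example: 0.6 m fuselage-to-wing-edge span, 50 m target climb.
inner, outer = airspace_of_interest_diameters(0.6, 50.0)
# inner = 0.6, outer = 1.1
```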
  • Step 403 Determine whether there are objects of a preset category in the region of interest.
  • In one implementation, step 403 may specifically include: performing semantic recognition on the image of the region of interest in the environment image to obtain a semantic recognition result of the region of interest, and determining whether the semantic recognition result includes the semantics corresponding to the preset category. If the semantic recognition result includes the semantics corresponding to the preset category, there is an object of the preset category in the region of interest; if it does not, there is no object of the preset category in the region of interest.
  • the semantic recognition result of the region of interest may be obtained based on a pre-trained neural network model.
  • the neural network model may specifically be a convolutional neural network (Convolutional Neural Networks, CNN) model.
  • Obtaining the semantic recognition result of the region of interest based on the pre-trained neural network model may specifically include the following steps A and B.
  • Step A Input the image of the region of interest into the neural network model to obtain a model output result of the neural network model.
  • The model output result of the neural network model may include confidence feature maps respectively output by multiple output channels, where the multiple output channels correspond one-to-one to multiple object categories, and the pixel value of the confidence feature map of a single object category is used to characterize the probability that the pixel belongs to that object category.
  • Step B According to the model output result of the neural network model, a feature map containing the semantic information of the object is obtained.
  • Specifically, among the multiple confidence feature maps corresponding one-to-one to the multiple output channels, the object category corresponding to the confidence feature map with the largest pixel value at a given pixel position may be taken as the object category of that pixel position, so as to obtain the feature map.
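The per-pixel category selection described in step B amounts to an argmax across the per-channel confidence feature maps; a NumPy sketch with illustrative channel indices and a tiny 2×2 image (the category indices and confidence values are assumptions for the example):

```python
import numpy as np

# Confidence feature maps: one (H, W) map per output channel / object category.
# Illustrative channel indices: 0 = background, 1 = building, 2 = tree, 3 = grass.
confidence = np.stack([
    np.array([[0.90, 0.10], [0.20, 0.10]]),  # background
    np.array([[0.05, 0.70], [0.10, 0.20]]),  # building
    np.array([[0.03, 0.10], [0.60, 0.10]]),  # tree
    np.array([[0.02, 0.10], [0.10, 0.60]]),  # grass
])  # shape: (num_categories, H, W)

# For each pixel position, keep the category whose confidence feature map
# has the largest pixel value there.
feature_map = np.argmax(confidence, axis=0)
# feature_map == [[0, 1], [2, 3]]
```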
  • The feature map contains object semantic information in that each pixel value in the feature map represents the object semantics of the corresponding pixel, where the object semantics can include recognizable object categories, such as buildings, trees, grass, rivers, etc.
  • For example, a pixel value of 1 can represent buildings, a pixel value of 2 can represent trees, and a pixel value of 3 can represent grass. Accordingly, in the feature map obtained by processing the environment image, pixel positions with the pixel value 1 are recognized as buildings, pixel positions with the pixel value 2 are recognized as trees, and pixel positions with the pixel value 3 are recognized as grass.
  • The feature map can be understood as the semantic recognition result of the region of interest. Assuming that the preset category is the tree category and the pixel value 2 represents trees: if there is a pixel with the value 2 in the feature map, there is an object of the preset category in the region of interest; if there is no pixel with the value 2, there is no object of the preset category in the region of interest.
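The presence check can be sketched as an annular region-of-interest mask applied to the feature map; the category value 2 (trees) follows the example above, while the image size, center, and ring radii are assumptions for illustration:

```python
import numpy as np

def preset_category_in_annulus(feature_map, center, r_inner, r_outer, category):
    """True if any pixel of the given category lies inside the annular
    region of interest (between r_inner and r_outer around center)."""
    h, w = feature_map.shape
    ys, xs = np.ogrid[:h, :w]
    dist = np.sqrt((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
    ring = (dist >= r_inner) & (dist <= r_outer)  # annular ROI mask
    return bool(np.any(feature_map[ring] == category))

# Hypothetical 9x9 feature map with one "tree" pixel (value 2)
# at distance 3 from the assumed center (4, 4).
fmap = np.zeros((9, 9), dtype=int)
fmap[4, 7] = 2
print(preset_category_in_annulus(fmap, (4, 4), 2, 4, 2))  # True: a tree lies in the ring
print(preset_category_in_annulus(fmap, (4, 4), 2, 4, 1))  # False: no building found
```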
  • the environmental image may be processed based on a pre-trained neural network model to obtain the semantic recognition result of the environmental image, and according to the semantic recognition result of the environmental image, it is determined whether there is an object of a preset category in the region of interest.
  • the specific method for obtaining the semantic recognition result of the environment image based on the neural network model is similar to the foregoing and will not be repeated here. Assuming that the environment image is as shown in FIG. 10A, the semantic recognition result can be as shown in FIG. 10B. Further assuming that the region of interest is the ring-shaped region in FIG. 10B and the preset categories include trees, it can be determined that an object of the preset category exists in the region of interest.
  • Step 404: if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest.
  • this application does not limit how the unmanned aerial vehicle is controlled when objects of preset categories exist in the region of interest.
  • for example, the unmanned aerial vehicle can be controlled to hover, or the unmanned aerial vehicle can be controlled to land.
  • in this embodiment, while the unmanned aerial vehicle rotates in place at the first position, an environment image of its passing direction is acquired; the region of interest in the environment image corresponding to the airspace of interest used by the unmanned aerial vehicle for passability judgment is determined; and it is determined whether objects of preset categories exist in the region of interest.
  • if no such object exists, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through it, realizing control of the unmanned aerial vehicle based on its environment image.
  • this flight control method avoids obstacles and, by obtaining the environment image through in-place rotation at the first position, reduces the requirement on the field of view of the image acquisition device installed on the unmanned aerial vehicle, which is beneficial to cost saving.
  • when the region of interest is a ring-shaped region, the requirement on the field of view of the image acquisition device can be further reduced, which is beneficial to simplifying the implementation.
  • FIG. 11 is a schematic flowchart of a flight control method based on passable airspace judgment according to another embodiment of this application.
  • This embodiment provides an optional implementation method on the basis of the embodiment shown in FIG. 2.
  • the method of this embodiment may include:
  • Step 111 Acquire a plurality of first images collected at intervals of a preset angle while the unmanned aerial vehicle rotates the first target angle in situ at the first position.
  • for step 111, refer to the related description of the embodiment shown in FIG. 4, which will not be repeated here.
  • Step 112: while the unmanned aerial vehicle flies from the first position to a second position along a target direction and rotates by a second target angle around the first position as the center, acquire multiple second images collected at intervals of the preset angle.
  • limited by the viewing direction and field of view of the image acquisition device, the images obtained by rotating the UAV in place at the first position may still not contain the image of the entire region of interest; images are therefore collected at positions other than the first position to obtain an environment image that contains the entire region of interest.
  • the unmanned aerial vehicle's in-place rotation at the first position includes rotating about the unmanned aerial vehicle's gimbal camera as the rotation axis, keeping the position of the gimbal camera basically unchanged during the rotation; or rotating about a certain position of the drone, for example its geometric center or center of gravity, in which case the gimbal camera rotates with a small radius.
  • the above-mentioned preset flying manner may further include: flying from the first position to a second position in a target direction, and rotating the second target angle with the first position as a center.
  • the distance between the second position and the first position may be small, for example, a distance of several centimeters.
  • the second target angle may be 360°.
  • the target direction is different from the passing direction and its opposite direction. In order to obtain as much scene content in the passing direction as possible, the target direction may be perpendicular to the passing direction.
  • the unmanned aerial vehicle rotates around the first position; specifically, it may rotate in a plane perpendicular to the passing direction, with the first position as the center.
  • taking as an example the passing direction being above the UAV and vertically upward, the rotation may specifically be performed in a horizontal plane centered on the first position; a top view of the rotation centered on the first position can be as shown in FIG. 12, where O denotes the first position. It should be noted that the counterclockwise rotation in FIG. 12 is only an example.
  • while rotating around the first position, the position of the image acquisition device relative to the UAV fuselage can remain unchanged, and can be the same as its relative position during the in-place rotation.
  • Step 113: merge the plurality of first images and the plurality of second images to obtain the environment image.
  • the plurality of first images and the plurality of second images may first be merged separately to obtain a first merged image and a second merged image, where the first merged image corresponds to the first images and the second merged image corresponds to the second images.
  • the first merged image and the second merged image may then be merged to obtain the environment image.
  • the combined processing can also be performed in other ways, which is not limited in this application.
  • the environment image obtained based on step 113 has more scene content than the environment image obtained based on step 401.
  • specifically, the environment image obtained based on step 113 includes the scene content of the environment image obtained in step 401, and may include other scene content as well.
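As a rough illustration of the two-stage merge in step 113 (first images merged, second images merged, then the two partial results merged), the sketch below simply places same-height frames side by side so the data flow is visible. A real implementation would register and blend overlapping views; all names here are illustrative assumptions, not the patent's method.

```python
import numpy as np

def merge_images(images):
    """Simplified merge: concatenate same-height frames horizontally.

    A real system would align overlapping content before blending."""
    return np.concatenate(images, axis=1)

def build_environment_image(first_images, second_images):
    first_merged = merge_images(first_images)    # merge the in-place frames
    second_merged = merge_images(second_images)  # merge the orbiting frames
    # Merge the two partial panoramas into the environment image.
    return merge_images([first_merged, second_merged])

frames_a = [np.zeros((4, 6)) for _ in range(3)]    # three first images
frames_b = [np.zeros((4, 6)) for _ in range(2)]    # two second images
env = build_environment_image(frames_a, frames_b)  # shape (4, 30)
```

The resulting strip contains all the scene content of both capture passes, matching the observation that the step-113 image is a superset of the step-401 image.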
  • Step 114 Determine an area of interest in the environment image that corresponds to the airspace of interest used by the UAV for passing judgment.
  • the airspace of interest may specifically be the airspace used for passability judgment at the UAV's first position. Accordingly, after obtaining the multiple second images, the UAV can be controlled to return to the first position, so that when the airspace of interest is a passable airspace, the flight can start from the first position.
  • Step 115 Determine whether there is an object of a preset category in the region of interest.
  • Step 116: if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest.
  • for steps 114 to 116, refer to the related description of the foregoing embodiments, which will not be repeated here.
  • in this embodiment, multiple first images are acquired at preset-angle intervals while the UAV rotates in place at the first position by the first target angle.
  • multiple second images are acquired at preset-angle intervals while the UAV rotates by the second target angle around the first position as the center, and the first images and second images are merged to obtain the environment image.
  • on top of expanding the scene content of the environment image through the in-place rotation at the first position, the rotation centered on the first position expands the scene content further, so the requirement on the field of view of the image acquisition device installed on the unmanned aerial vehicle can be reduced further, which is conducive to further cost saving.
  • the foregoing method steps may be triggered by a specific condition; that is, the acquisition of the environment image in the UAV's passing direction can be triggered by a specific condition.
  • for example, when it is determined that the unmanned aerial vehicle needs to return to home, the environment image of its passing direction may be acquired.
  • by obtaining the environment image of the passing direction when the UAV needs to return home, judging the passability of the airspace of interest from the acquired image, and controlling the UAV according to the result, the safety of the return flight can be improved and the risk of damage to the UAV reduced, compared with the related art in which the UAV assumes the airspace above it is safe during return and is simply controlled to climb to a certain height.
  • for example, when it is determined that the unmanned aerial vehicle needs to avoid an obstacle, the environment image of the passing direction above the UAV may be acquired.
  • by obtaining the environment image of the passing direction above the UAV when obstacle avoidance is needed, judging the passability of the airspace of interest from the acquired image, and controlling the UAV according to the result, the UAV can choose a passable airspace and continue flying when an obstacle blocks its flight; compared with hovering after encountering an obstacle as in the related art, this improves the flexibility of UAV flight control.
  • FIG. 13 is a schematic structural diagram of a flight control device based on a passable airspace judgment provided by an embodiment of the application.
  • the device 130 may include a processor 131 and a memory 132.
  • the memory 132 is used to store program codes
  • the processor 131 calls the program code, and when the program code is executed, it is used to perform the following operations: acquiring an environment image of the passing direction of an unmanned aerial vehicle; determining, in the environment image, the region of interest corresponding to the airspace of interest used by the UAV for passability judgment; determining whether an object of a preset category exists in the region of interest; and, if no such object exists, the airspace of interest is a passable airspace, and controlling the unmanned aerial vehicle to pass through the airspace of interest.
  • the flight control device based on the passable airspace judgment provided in this embodiment can be used to implement the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar to those of the method embodiments, and will not be repeated here.
  • FIG. 14 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the application.
  • the unmanned aerial vehicle 140 may include: a fuselage 141, and a power system 142, an image acquisition device 143, and a flight control device 144 provided on the fuselage;
  • the power system 142 is used to provide power for the unmanned aerial vehicle
  • the image acquisition device 143 is used to acquire environmental images of the traveling direction of the unmanned aerial vehicle;
  • the flight control device 144 includes a memory and a processor
  • the memory is used to store program code
  • the processor calls the program code, and when the program code is executed, it is used to perform the following operations: acquiring, via the image acquisition device, an environment image of the passing direction of the unmanned aerial vehicle; determining, in the environment image, the region of interest corresponding to the airspace of interest used by the UAV for passability judgment; determining whether an object of a preset category exists in the region of interest; and, if no such object exists, the airspace of interest is a passable airspace, and controlling the unmanned aerial vehicle to pass through the airspace of interest.
  • the unmanned aerial vehicle 140 may further include a pan/tilt 145, and the image acquisition device 143 may be arranged on the fuselage 141 through the pan/tilt 145.
  • the unmanned aerial vehicle may also include other elements or devices, which are not listed here.
  • a person of ordinary skill in the art can understand that all or part of the steps in the foregoing method embodiments can be implemented by a program instructing relevant hardware.
  • the aforementioned program can be stored in a computer readable storage medium. When the program is executed, it executes the steps including the foregoing method embodiments; and the foregoing storage medium includes: ROM, RAM, magnetic disk, or optical disk and other media that can store program codes.


Abstract

A flight control method, apparatus (130, 144), and device based on passable-airspace judgment. The method includes: acquiring an environment image of the passing direction of an unmanned aerial vehicle (10, 140) (201); determining, in the environment image, a region of interest corresponding to the airspace of interest used by the unmanned aerial vehicle (10, 140) for passability judgment (202); determining whether an object of a preset category exists in the region of interest (203); and, if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and controlling the unmanned aerial vehicle (10, 140) to pass through the airspace of interest (204). The method provides a low-cost way of controlling flight that avoids obstacles.

Description

Flight control method, apparatus, and device based on passable-airspace judgment

Technical field

This application relates to the field of flight technology, and in particular to a flight control method, apparatus, and device based on passable-airspace judgment.

Background

In recent years, unmanned aerial vehicles (UAVs) have found increasingly wide application, for example in street-view photography and power-line inspection.

Usually, a UAV needs to avoid obstacles during flight. Specifically, a radar sensor for obstacle detection is installed on the UAV; from the sensor's measurements, the position of an obstacle relative to the UAV can be obtained, so that the UAV can avoid obstacles during flight.

However, radar sensors are costly, so how to realize a low-cost flight control scheme capable of avoiding obstacles has become a problem to be solved urgently.

When a UAV in flight, or in autonomous flight, encounters obstacles between which passable airspace exists, especially narrow-channel airspace, the UAV needs to judge whether it can pass through.
Summary of the invention

Embodiments of this application provide a flight control method, apparatus, and device based on passable-airspace judgment, to solve the problem in the prior art of how to realize a low-cost flight control scheme capable of avoiding obstacles.

In a first aspect, an embodiment of this application provides a flight control method based on passable-airspace judgment, the method including:

acquiring an environment image of the passing direction of an unmanned aerial vehicle;

determining, in the environment image, a region of interest corresponding to the airspace of interest used by the unmanned aerial vehicle for passability judgment;

determining whether an object of a preset category exists in the region of interest;

if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and controlling the unmanned aerial vehicle to pass through the airspace of interest.
In a second aspect, an embodiment of this application provides a flight control apparatus based on passable-airspace judgment, the apparatus including a memory and a processor;

the memory is configured to store program code;

the processor calls the program code and, when the program code is executed, performs the following operations:

acquiring an environment image of the passing direction of an unmanned aerial vehicle;

determining, in the environment image, a region of interest corresponding to the airspace of interest used by the unmanned aerial vehicle for passability judgment;

determining whether an object of a preset category exists in the region of interest;

if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and controlling the unmanned aerial vehicle to pass through the airspace of interest.
In a third aspect, an embodiment of this application provides an unmanned aerial vehicle, including: a fuselage, and a power system, an image acquisition device, and a flight control apparatus provided on the fuselage;

the power system is configured to provide power for the unmanned aerial vehicle;

the image acquisition device is configured to collect environment images of the passing direction of the unmanned aerial vehicle;

the flight control apparatus includes a memory and a processor;

the memory is configured to store program code;

the processor calls the program code and, when the program code is executed, performs the following operations:

acquiring, via the image acquisition device, an environment image of the passing direction of the unmanned aerial vehicle;

determining, in the environment image, a region of interest corresponding to the airspace of interest used by the unmanned aerial vehicle for passability judgment;

determining whether an object of a preset category exists in the region of interest;

if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and controlling the unmanned aerial vehicle to pass through the airspace of interest.

In a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program, the computer program containing at least one piece of code executable by a computer to control the computer to perform the method of any one of the implementations of the first aspect.

In a fifth aspect, an embodiment of this application provides a computer program which, when executed by a computer, is used to implement the method of any one of the implementations of the first aspect.

The embodiments of this application provide a flight control method, apparatus, and device based on passable-airspace judgment: an environment image of the UAV's passing direction is acquired, the region of interest in the image corresponding to the airspace of interest used for passability judgment is determined, and it is determined whether an object of a preset category exists in the region of interest; if no such object exists, the airspace of interest is a passable airspace and the UAV is controlled to pass through it. This realizes a flight control scheme that avoids obstacles based on the UAV's environment image; compared with a radar sensor, an image acquisition device is cheaper, and UAVs are usually already equipped with one, thereby providing a low-cost way of controlling flight that avoids obstacles.
Brief description of the drawings

To explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of this application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic diagram of an application scenario of the flight control method based on passable-airspace judgment provided by an embodiment of this application;

FIG. 2 is a schematic flowchart of a flight control method based on passable-airspace judgment provided by an embodiment of this application;

FIG. 3 is a schematic diagram of an airspace of interest provided by an embodiment of this application;

FIG. 4 is a schematic flowchart of a flight control method based on passable-airspace judgment provided by another embodiment of this application;

FIG. 5 and FIG. 6 are schematic diagrams of the relationship between the field of view of the image acquisition device and the airspace of interest provided by embodiments of this application;

FIG. 7 is a schematic diagram of an obstacle affecting the upward flight of a UAV provided by an embodiment of this application;

FIG. 8 is a schematic diagram of an airspace of interest provided by another embodiment of this application;

FIG. 9 is a schematic diagram of an airspace of interest provided by yet another embodiment of this application;

FIG. 10A is an environment image provided by an embodiment of this application;

FIG. 10B is a schematic diagram of the semantic recognition result and the region of interest based on the environment image shown in FIG. 10A;

FIG. 11 is a schematic flowchart of a flight control method based on passable-airspace judgment provided by yet another embodiment of this application;

FIG. 12 is a schematic diagram of a UAV rotating around a first position provided by an embodiment of this application;

FIG. 13 is a schematic structural diagram of a flight control apparatus based on passable-airspace judgment provided by an embodiment of this application;

FIG. 14 is a schematic structural diagram of a UAV provided by an embodiment of this application.
Detailed description

To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.

The flight control method based on passable-airspace judgment provided by the embodiments of this application can be applied to the UAV 10 shown in FIG. 1. The UAV 10 may include an image acquisition device 11 and a controller 12. The image acquisition device 11 may collect environment images of the passing direction of the UAV 10; the controller 12 may obtain these images from the image acquisition device 11 and process them using the method provided by the embodiments of this application to control the flight of the UAV 10.

Here, the passing direction may refer to the target flight direction of the UAV 10; for example, the passing direction may be above the UAV. The image acquisition device 11 may specifically be a camera.

When the UAV 10 needs to be controlled to fly in the passing direction but it is uncertain whether the airspace corresponding to that direction is passable, the controller 12 can, in the manner provided by the embodiments of this application, control the UAV to pass once the airspace corresponding to the passing direction is determined to be passable.

For example, the condition requiring the UAV 10 to fly in the passing direction may be triggered by a flight control instruction sent by the UAV's control terminal; the flight control instruction may for example be a return-to-home instruction, and the control terminal may for example be a smartphone or a remote controller.

For example, the condition requiring the UAV 10 to fly in the passing direction may also be triggered by the UAV 10 detecting a specific event, such as an obstacle event or an image-transmission interruption event.

It should be noted that FIG. 1 takes a UAV that includes the image acquisition device 11 as an example; it can be understood that the UAV may alternatively not include the image acquisition device 11, which may instead be mounted on the UAV as a payload.

It should also be noted that FIG. 1 takes the flight control method being applied to a UAV as an example. It can be understood that the method provided by the embodiments may also be applied to the UAV's control terminal: the control terminal may obtain the environment image of the UAV's passing direction from the image acquisition device on the UAV and process it using the method provided by the embodiments to control the flight of the UAV 10.

The flight control method based on passable-airspace judgment provided by the embodiments of this application acquires an environment image of the UAV's passing direction, determines the region of interest in the image corresponding to the airspace of interest used for passability judgment, and determines whether an object of a preset category exists in the region of interest; if no such object exists, the airspace of interest is a passable airspace and the UAV is controlled to pass through it, realizing a flight control scheme that avoids obstacles based on the UAV's environment image.

Some implementations of this application are described in detail below with reference to the drawings. Where no conflict arises, the following embodiments and their features may be combined with one another.
FIG. 2 is a schematic flowchart of a flight control method based on passable-airspace judgment provided by an embodiment of this application; the executing body of this embodiment may be the controller in FIG. 1. As shown in FIG. 2, the method of this embodiment may include:

Step 201: acquire an environment image of the passing direction of the unmanned aerial vehicle.

In this step, the passing direction may refer to the UAV's target flight direction. Depending on flight requirements, the passing direction may for example be above the UAV, in front of the UAV, and so on.

Specifically, the environment image of the UAV's passing direction may be collected by an image acquisition device mounted on the UAV. Taking the passing direction being above the UAV as an example, the image acquisition device may specifically be an upward-looking camera; taking the passing direction being in front of the UAV as an example, it may specifically be a forward-looking camera.

Optionally, the image acquisition device may be mounted on the UAV via a gimbal, so that it can move relative to the fuselage to change its field of view. On this basis, the UAV may be provided with a camera that can look both upward and forward.

For example, the image acquisition device may specifically be a monocular camera; using a monocular camera further reduces cost.

It should be noted that the viewing direction of the image acquisition device may be consistent with the UAV's passing direction. In practice, the viewing direction may be identical to the passing direction, or may differ from it by a certain angle.

It should be noted that when a single image captured by the image acquisition device can contain the image of the airspace of interest, the environment image may be that single image; when it cannot, the environment image may be a merged image of multiple images with different viewing directions captured by the device.
Step 202: determine, in the environment image, the region of interest corresponding to the airspace of interest used by the UAV for passability judgment.

In this step, the airspace of interest is the airspace the UAV uses for passability judgment. Based on the judgment result for the airspace of interest, it can be decided whether the UAV can pass through it, i.e. whether it can fly along the direction corresponding to that airspace. It should be noted that the direction corresponding to the airspace of interest may be the UAV's target flight direction, or some other direction.

In the former case, it can be determined whether the UAV can fly along the target flight direction; on this basis, the airspace corresponding to the target flight direction may be taken as the airspace of interest.

In the latter case, a region of the environment image not containing obstacles (for example, a region recognized as sky) may first be identified, and a direction of interest determined from the pixel distribution of that region; that direction is the direction corresponding to the airspace of interest, thereby deciding the airspace of interest. This makes it possible to find the airspace of interest from the environment image: when obstacles are encountered but passable airspace exists between them, a passable airspace can be found between the obstacles, which helps improve the flexibility of the UAV's obstacle avoidance.

The judgment result for the airspace of interest may specifically be that the UAV can pass through it or that it cannot. When the UAV can pass through the airspace of interest, the airspace of interest is a passable airspace; when it cannot, the airspace of interest is an impassable airspace. The airspace of interest may for example be as shown in FIG. 3, where O denotes the position of the UAV and the airspace inside the curve is the airspace of interest. It should be noted that the shape of the airspace of interest in FIG. 3 is only an example.

Since the environment image may contain both the image corresponding to the airspace of interest and imagery outside it, the region of interest corresponding to the airspace of interest needs to be determined in the environment image. The image of the region of interest in the environment image can be understood as the image of the airspace of interest from the UAV's point of view.

Step 203: determine whether an object of a preset category exists in the region of interest.

In this step, objects of the preset category may represent objects of obstacle categories that need to be detected; for example, the preset category may specifically be a tree category or a building category. When no object of the preset category exists in the region of interest, the region can be characterized as passable; when such an object exists, the region can be characterized as impassable.

Optionally, image recognition may first be performed on the environment image to obtain a recognition result for preset-category objects in the environment image, and whether such objects exist in the region of interest is then determined from that result. Alternatively, image recognition may be performed directly on the image of the region of interest in the environment image, to obtain a recognition result for preset-category objects in the region of interest and thereby determine whether such objects exist in it.
Step 204: if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest.

In this step, the absence of objects of the preset category inside the region of interest characterizes the airspace of interest as passable; therefore, when the judgment of step 203 finds no object of the preset category in the region of interest, the UAV may be controlled to pass through the airspace of interest.

Taking the passing direction being above the UAV as an example, the airspace of interest may specifically be the airspace above the UAV; when no object of the preset category exists in the region of interest, the UAV may be controlled to fly upward.

Taking the passing direction being in front of the UAV as an example, the airspace of interest may specifically be the airspace in front of the UAV; when no object of the preset category exists in the region of interest, the UAV may be controlled to fly forward.

In this embodiment, an environment image of the UAV's passing direction is acquired, the region of interest in the image corresponding to the airspace of interest used for passability judgment is determined, and it is determined whether objects of a preset category exist in the region of interest; if no such object exists, the airspace of interest is a passable airspace and the UAV is controlled to pass through it. This realizes a flight control scheme that avoids obstacles based on the UAV's environment image; compared with a radar sensor, an image acquisition device is cheaper and is usually already present on a UAV, providing a low-cost way of controlling flight that avoids obstacles.

In the foregoing method embodiment, when the environment image is a merged image of multiple images with different viewing directions, the field of view of the image acquisition device may be changed via the gimbal to obtain the multiple images when acquiring the environment image of the passing direction, or the multiple images may be obtained by changing the attitude of the UAV.

In the latter case, step 201 may specifically include: acquiring the environment image of the UAV's passing direction while the UAV flies in a preset flight manner. Specifically, while the UAV flies in the preset flight manner, multiple images captured by the image acquisition device are acquired and merged to obtain the environment image. It can be understood that the purpose of flying in the preset flight manner is to obtain an environment image containing the image corresponding to the airspace of interest; the preset flight manner can be implemented flexibly as required.

The method provided by the embodiments of this application is described in detail below, mainly taking the passing direction being above the UAV as an example.
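The overall flow of steps 201-204 described above can be sketched as a small control routine. This is only an illustration of the control flow; the injected callables are hypothetical stand-ins for the concrete sensing and actuation routines, which the text leaves to the implementation (it also does not mandate hovering when blocked, that is just one of the options it mentions).

```python
def flight_control_step(acquire_environment_image, find_region_of_interest,
                        contains_preset_category, pass_through, hover):
    """One pass of the method of FIG. 2, with the concrete routines injected.

    Returns True if the airspace of interest was judged passable."""
    env = acquire_environment_image()    # step 201
    roi = find_region_of_interest(env)   # step 202
    if contains_preset_category(roi):    # step 203
        hover()  # one possible behaviour when the airspace is impassable
        return False
    pass_through()                       # step 204: airspace is passable
    return True

# Wiring in trivial stand-ins makes the control flow visible:
log = []
passable = flight_control_step(
    acquire_environment_image=lambda: "environment image",
    find_region_of_interest=lambda env: "region of interest",
    contains_preset_category=lambda roi: False,  # no preset-category object
    pass_through=lambda: log.append("pass"),
    hover=lambda: log.append("hover"))
# passable is True and log == ["pass"]
```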
FIG. 4 is a schematic flowchart of a flight control method based on passable-airspace judgment provided by another embodiment of this application; this embodiment provides an optional implementation on the basis of the embodiment shown in FIG. 2. As shown in FIG. 4, the method of this embodiment may include:

Step 401: while the unmanned aerial vehicle rotates in place at a first position by a first target angle, acquire an environment image of the UAV's passing direction.

In this step, limited by the viewing direction and field of view of the image acquisition device, a single image captured by the device may not contain the image of the whole airspace of interest; the UAV can therefore rotate in place at the first position to obtain an environment image that contains the image of the whole region of interest. Accordingly, the preset flight manner includes rotating in place at the first position by the first target angle.

The relationship between the initial field of view of the image acquisition device at the first position and the airspace of interest may be as shown in FIG. 5: a single captured image contains only part of the airspace of interest. On the basis of FIG. 5, as shown in FIG. 6, while the UAV rotates in place at the first position the field of view keeps changing, so images of the other parts of the airspace of interest are obtained.

It should be noted that the arrows in FIG. 5 and FIG. 6 may indicate the heading of the UAV, the change of arrow direction indicating the change of heading as the UAV rotates in place. The range between the two straight lines in FIG. 5 and FIG. 6 indicates the field of view of the image acquisition device. The clockwise rotation in FIG. 6 is only an example.

It should be noted that FIG. 5 and FIG. 6 take a circular airspace of interest as an example; in that case the first target angle may be 360°, i.e. the environment image may be acquired while the UAV rotates in place through one full turn at the first position. It can be understood that for an airspace of interest of another shape, for example a semicircular airspace, the first target angle may be less than 360°, for example 180°.

The UAV may fly in the preset flight manner in response to a flight control instruction. The first position may be the position where the UAV is when the flight control instruction is generated, and may be any position along the UAV's flight.

For example, step 401 may specifically include: while the UAV rotates in place, acquiring multiple first images collected at intervals of a preset angle, and merging the multiple first images to obtain the environment image. The preset angle can be implemented flexibly as required. Specifically, the larger the preset angle, the fewer first images are collected and the less content overlaps between images captured at adjacent moments; the smaller the preset angle, the more first images are collected and the more content overlaps between images captured at adjacent moments.
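The trade-off between the preset angle and the number of captured frames can be made concrete with a little arithmetic. This sketch is illustrative and not from the patent: the horizontal field of view `hfov_deg` is an assumed lens parameter used to quantify the overlap between neighbouring captures.

```python
import math

def frames_for_rotation(target_angle_deg: float, preset_angle_deg: float) -> int:
    """Number of captures taken at fixed angular intervals over the rotation."""
    return math.ceil(target_angle_deg / preset_angle_deg)

def neighbour_overlap_deg(hfov_deg: float, preset_angle_deg: float) -> float:
    """Angular overlap between images captured at consecutive stops."""
    return max(0.0, hfov_deg - preset_angle_deg)

# A full 360-degree in-place turn sampled every 30 degrees with an
# 80-degree lens:
n = frames_for_rotation(360, 30)         # 12 frames
overlap = neighbour_overlap_deg(80, 30)  # 50 degrees of overlap
```

A larger preset angle (say 50°) drops the frame count to 8 but shrinks the overlap to 30°, matching the qualitative description above.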
It should be noted that while the UAV rotates in place at the first position to obtain the multiple first images, the position of the image acquisition device relative to the fuselage may remain unchanged.

Step 402: determine, in the environment image, the region of interest corresponding to the airspace of interest used by the UAV for passability judgment.

In this step: while the UAV is flying, the positions of non-fixed obstacles (for example birds) change, and non-fixed obstacles such as birds or hovering drones move away by themselves as the UAV climbs, or avoid collision through communication; an environment image, however, records the scene content of the passing direction at a particular moment. The obstacles targeted by the method of the embodiments of this application are therefore mainly fixed obstacles (for example trees). Moreover, fixed obstacles usually do not appear suspended in mid-air: in FIG. 7, for example, the branches above the UAV extend from the periphery above the UAV toward the centre O' above it. Therefore, to reduce the field-of-view requirement on the image acquisition device, the airspace of interest may specifically be a ring-shaped airspace, which helps simplify implementation. Optionally, to simplify further, the airspace of interest may specifically be a circular ring-shaped airspace, for example as shown in FIG. 8.

The maximum distance between the centre of the UAV's fuselage and the edge of its wings determines the minimum size of airspace the UAV can pass through, i.e. the smallest airspace it could ideally traverse. Moreover, since airflow and other factors inevitably prevent the UAV from flying along a straight line, the airspace of interest may take into account not only the airspace corresponding to that maximum distance but also the airspace around it.

On this basis, the inner-ring diameter of the airspace of interest may equal the maximum distance between the centre of the fuselage and the edge of the wings; making the inner-ring diameter equal to this maximum distance reduces the field-of-view requirement on the image acquisition device as much as possible.

It should be noted that in practice the inner-ring diameter may also be smaller than the maximum distance; it can be understood that the smaller it is relative to the maximum distance, the higher the field-of-view requirement on the image acquisition device and the greater the influence of non-fixed obstacles.

The outer-ring diameter of the airspace of interest may equal the inner-ring diameter plus a certain distance. Optionally, to minimize the field-of-view requirement on the image acquisition device, the outer-ring diameter may equal the inner-ring diameter plus a disturbance distance, where the disturbance distance refers to how far the UAV deviates from its flight direction during flight due to airflow and other factors. The disturbance distance may depend on the UAV's target flight distance along the flight direction; of course, it may also depend on other factors and is to be implemented flexibly as required.

Taking the passing direction being above the UAV and vertically upward as an example, the outer-ring diameter L1, inner-ring diameter L2, and disturbance distance of the ring-shaped region of interest may be as shown in FIG. 9, where H may denote the UAV's flight distance along the passing direction, D the disturbance distance, and O the UAV position corresponding to the environment image; in this embodiment, O may specifically be the first position.

It should be noted that in practice the outer-ring diameter may also be larger than the inner-ring diameter plus the disturbance distance; the larger it is beyond that sum, the higher the field-of-view requirement on the image acquisition device and the greater the influence of non-fixed obstacles. Based on the above judgment process, the situation where obstacles exist on the UAV's flight path while a passable path also exists can be judged, especially narrow passable paths for which it cannot be directly judged whether the UAV can pass.
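The ring geometry described above can be sketched numerically. Per the text, the inner-ring diameter may equal the maximum fuselage-centre-to-wing-edge distance and the outer-ring diameter that value plus the disturbance distance; the pixel mask below is an illustrative assumption, since the mapping from metres to pixel radii depends on the camera geometry and is left to the caller.

```python
import numpy as np

def ring_diameters(max_centre_to_wing_edge: float, disturbance: float):
    """(inner diameter, outer diameter) of the ring-shaped airspace of interest."""
    inner = max_centre_to_wing_edge
    return inner, inner + disturbance

def annular_mask(h: int, w: int, cx: float, cy: float,
                 r_inner: float, r_outer: float) -> np.ndarray:
    """Boolean mask of image pixels whose radius from (cx, cy) lies in the ring.

    Note this takes pixel *radii*; the diameters above must be halved and
    converted to pixels before use."""
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(xx - cx, yy - cy)
    return (radius >= r_inner) & (radius <= r_outer)

inner_d, outer_d = ring_diameters(1.0, 0.5)  # diameters 1.0 and 1.5 (metres)
mask = annular_mask(5, 5, 2, 2, r_inner=1, r_outer=2)
# The centre pixel is excluded; pixels one or two steps away are included.
```

Such a mask could serve as the `roi_mask` passed to a semantic-map membership test like the one sketched earlier in this document.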
Step 403: determine whether an object of a preset category exists in the region of interest.

In this step, whether an object of a preset category exists in the region of interest can be determined by performing semantic recognition on the image of the region of interest. For example, step 403 may specifically include: performing semantic recognition on the image of the region of interest in the environment image to obtain a semantic recognition result for the region of interest, and determining whether the result contains the semantics of the preset category. If the semantic recognition result contains semantics corresponding to the preset category, this indicates that an object of the preset category exists in the region of interest; if it does not, this indicates that no such object exists.

For example, the semantic recognition result of the region of interest may be obtained based on a pre-trained neural network model; the neural network model may specifically be a convolutional neural network (CNN) model. Obtaining the result based on the pre-trained model may specifically include the following steps A and B.

Step A: input the image of the region of interest into the neural network model to obtain the model's output.

The model output may include confidence feature maps output by multiple output channels, where the channels correspond one-to-one to multiple object categories, and the pixel values of the confidence feature map of a single object category represent the probability that the pixel belongs to that category.

Step B: obtain, from the model output, a feature map containing object semantic information.

For example, for each pixel location, the object category whose confidence feature map has the largest pixel value at that location, among the multiple confidence feature maps corresponding one-to-one to the multiple output channels, may be taken as the object category of that pixel location, thereby obtaining the feature map.

The feature map contains object semantic information in that each pixel value represents the object semantics of the corresponding pixel, where the object semantics may include recognizable object categories such as buildings, trees, grass, and rivers. For example, if a pixel value of 1 represents buildings, 2 trees, and 3 grass, then in the feature map obtained by processing the image, pixel positions with value 1 are recognized as buildings, positions with value 2 as trees, and positions with value 3 as grass.

The feature map can be understood as the semantic recognition result of the region of interest. Assuming the preset category is trees and pixel value 2 represents trees: if a pixel with value 2 exists in the feature map, an object of the preset category exists in the region of interest; if no such pixel exists, no object of the preset category exists in the region of interest.

Alternatively, the environment image may be processed based on the pre-trained neural network model to obtain a semantic recognition result for the whole environment image, from which it is determined whether objects of the preset category exist in the region of interest. The specific way of obtaining this result is similar to the above and is not repeated here. Assuming the environment image is as shown in FIG. 10A, its semantic recognition result may be as shown in FIG. 10B; further assuming the region of interest is the ring-shaped region in FIG. 10B and the preset categories include trees, it can be determined that an object of the preset category exists in the region of interest.

Step 404: if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest.

It should be noted that this application does not limit how the UAV is controlled when objects of the preset category exist in the region of interest; for example, the UAV may be controlled to hover, or to land.

In this embodiment, while the UAV rotates in place at the first position, an environment image of the passing direction is acquired; the region of interest in the image corresponding to the airspace of interest used for passability judgment is determined; and it is determined whether objects of the preset category exist in it. If no such object exists, the airspace of interest is a passable airspace and the UAV is controlled to pass through it, realizing obstacle-avoiding flight control based on the UAV's environment image. Acquiring the environment image by rotating in place at the first position reduces the field-of-view requirement on the image acquisition device installed on the UAV, which helps save cost; in addition, a ring-shaped region of interest further reduces that requirement, which helps simplify implementation.
FIG. 11 is a schematic flowchart of a flight control method based on passable-airspace judgment provided by yet another embodiment of this application; this embodiment provides an optional implementation on the basis of the embodiment shown in FIG. 2. As shown in FIG. 11, the method of this embodiment may include:

Step 111: while the unmanned aerial vehicle rotates in place at the first position by the first target angle, acquire multiple first images collected at intervals of a preset angle.

It should be noted that for step 111 refer to the related description of the embodiment shown in FIG. 4, which is not repeated here.

Step 112: while the UAV flies from the first position to a second position along a target direction and rotates by a second target angle around the first position as the center, acquire multiple second images collected at intervals of the preset angle.

In this step, limited by the viewing direction and field of view of the image acquisition device, the images obtained by rotating in place at the first position may still fail to contain the image of the whole region of interest; images are therefore collected at positions other than the first position to obtain an environment image containing the whole region of interest. It should be noted that rotating in place at the first position includes rotating about the UAV's gimbal camera as the rotation axis, keeping the gimbal camera's position basically unchanged during the rotation; or rotating about a certain position of the drone, for example its geometric center or center of gravity, in which case the gimbal camera rotates with a small radius. Accordingly, the preset flight manner may further include: flying from the first position to a second position along a target direction, and rotating by a second target angle around the first position as the center.

The distance between the second position and the first position may be small, for example a few centimetres. When the airspace of interest is circular or ring-shaped, the second target angle may be 360°. The target direction differs from the passing direction and its opposite direction; to capture as much scene content of the passing direction as possible, the target direction may be perpendicular to the passing direction.

Optionally, to capture as much scene content of the passing direction as possible, the UAV's rotation around the first position may specifically be performed in a plane perpendicular to the passing direction, centered on the first position. Taking the passing direction being above the UAV and vertically upward as an example, the rotation may specifically be performed in a horizontal plane centered on the first position; a top view of this rotation is shown in FIG. 12, where O denotes the first position. It should be noted that the counterclockwise rotation in FIG. 12 is only an example.

It should be noted that while the UAV rotates around the first position to obtain the multiple second images, the position of the image acquisition device relative to the fuselage may remain unchanged and may be the same as its relative position during the in-place rotation.
Step 113: merge the multiple first images and the multiple second images to obtain the environment image.

In this step, the multiple first images and the multiple second images may first be merged separately to obtain a first merged image (corresponding to the multiple first images) and a second merged image (corresponding to the multiple second images); the first merged image and the second merged image are then merged to obtain the environment image. Of course, in other embodiments the merging can also be performed in other ways, which this application does not limit.

It can be understood that the environment image obtained in step 113 contains more scene content than the one obtained in step 401: besides the scene content of the environment image of step 401, it may include other scene content.

Step 114: determine, in the environment image, the region of interest corresponding to the airspace of interest used by the UAV for passability judgment.

In this step, the airspace of interest may specifically be the airspace used for passability judgment at the UAV's first position. Accordingly, after the multiple second images are obtained, the UAV may be controlled to return to the first position, so that if the airspace of interest is a passable airspace, flight can start from the first position.

Step 115: determine whether an object of a preset category exists in the region of interest.

Step 116: if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and the unmanned aerial vehicle is controlled to pass through the airspace of interest.

It should be noted that for steps 114 to 116 refer to the related description of the foregoing embodiments, which is not repeated here.

In this embodiment, multiple first images are acquired at preset-angle intervals while the UAV rotates in place at the first position by the first target angle; multiple second images are acquired at preset-angle intervals while it rotates by the second target angle around the first position as the center; and the first and second images are merged to obtain the environment image. On top of expanding the environment image's scene content through the in-place rotation at the first position, the rotation centered on the first position expands the scene content further, so the field-of-view requirement on the image acquisition device installed on the UAV can be reduced further, which helps save further cost.

In the above embodiments, the foregoing method steps may be triggered by a specific condition; that is, acquisition of the environment image in the UAV's passing direction can be triggered by a specific condition. For example, when it is determined that the UAV needs to return to home, the environment image of the passing direction may be acquired. By acquiring the environment image of the passing direction when the UAV needs to return home, judging the passability of the airspace of interest from the acquired image, and controlling the UAV according to the result, the safety of the return flight can be improved and the risk of damage to the UAV reduced, compared with the related art in which the UAV assumes the airspace above it is safe during return and is simply controlled to climb to a certain height.

For example, when it is determined that the UAV needs to avoid an obstacle, the environment image of the passing direction above the UAV may be acquired. By acquiring this image when obstacle avoidance is needed, judging the passability of the airspace of interest from it, and controlling the UAV according to the result, the UAV can choose a passable airspace and continue flying when an obstacle blocks its flight; compared with hovering after encountering an obstacle as in the related art, this improves the flexibility of UAV flight control.
FIG. 13 is a schematic structural diagram of a flight control apparatus based on passable-airspace judgment provided by an embodiment of this application. As shown in FIG. 13, the apparatus 130 may include: a processor 131 and a memory 132.

The memory 132 is configured to store program code;

the processor 131 calls the program code and, when the program code is executed, performs the following operations:

acquiring an environment image of the passing direction of an unmanned aerial vehicle;

determining, in the environment image, the region of interest corresponding to the airspace of interest used by the UAV for passability judgment;

determining whether an object of a preset category exists in the region of interest;

if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and controlling the UAV to pass through the airspace of interest.

The flight control apparatus based on passable-airspace judgment provided in this embodiment can be used to execute the technical solutions of the foregoing method embodiments; its implementation principles and technical effects are similar to those of the method embodiments and are not repeated here.
FIG. 14 is a schematic structural diagram of a UAV provided by an embodiment of this application. As shown in FIG. 14, the UAV 140 may include: a fuselage 141, and a power system 142, an image acquisition device 143, and a flight control apparatus 144 provided on the fuselage;

the power system 142 is configured to provide power for the UAV;

the image acquisition device 143 is configured to collect environment images of the UAV's passing direction;

the flight control apparatus 144 includes a memory and a processor;

the memory is configured to store program code;

the processor calls the program code and, when the program code is executed, performs the following operations:

acquiring, via the image acquisition device, an environment image of the UAV's passing direction;

determining, in the environment image, the region of interest corresponding to the airspace of interest used by the UAV for passability judgment;

determining whether an object of a preset category exists in the region of interest;

if no object of the preset category exists in the region of interest, the airspace of interest is a passable airspace, and controlling the UAV to pass through the airspace of interest.

Optionally, the UAV 140 may further include a gimbal 145, and the image acquisition device 143 may be mounted on the fuselage 141 via the gimbal 145. Of course, besides the devices listed above, the UAV may also include other elements or devices, which are not enumerated here.

For the specific structure of the flight control apparatus in the UAV provided by this embodiment, refer to the related description of the flight control apparatus 130 based on passable-airspace judgment shown in FIG. 13, which is not repeated here.

A person of ordinary skill in the art can understand that all or part of the steps of the foregoing method embodiments can be implemented by hardware instructed by a program. The foregoing program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced with equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of this application.

Claims (41)

  1. A flight control method based on passable-airspace judgment, characterized in that the method comprises:
    acquiring an environment image of the passing direction of an unmanned aerial vehicle;
    determining, in the environment image, a region of interest corresponding to an airspace of interest used by the unmanned aerial vehicle for passability judgment;
    determining whether an object of a preset category exists in the region of interest;
    if no object of the preset category exists in the region of interest, the airspace of interest being a passable airspace, and controlling the unmanned aerial vehicle to pass through the airspace of interest.
  2. The method according to claim 1, characterized in that the acquiring an environment image of the passing direction of the unmanned aerial vehicle comprises:
    acquiring the environment image of the passing direction of the unmanned aerial vehicle while the unmanned aerial vehicle flies in a preset flight manner.
  3. The method according to claim 2, characterized in that the preset flight manner comprises rotating in place through one full turn at a first position.
  4. The method according to claim 3, characterized in that the acquiring an environment image of the passing direction of the unmanned aerial vehicle comprises:
    acquiring, while the unmanned aerial vehicle rotates in place, a plurality of first images collected at intervals of a preset angle;
    merging the plurality of first images to obtain the environment image.
  5. The method according to claim 4, characterized in that the preset flight manner further comprises: flying from the first position to a second position along a target direction, and rotating through one full turn around the first position as the center, the target direction being different from the passing direction and from its opposite direction.
  6. The method according to claim 5, characterized in that the target direction is perpendicular to the passing direction.
  7. The method according to claim 5, characterized in that the acquiring an environment image of the passing direction of the unmanned aerial vehicle further comprises:
    acquiring, while the unmanned aerial vehicle rotates around the first position as the center, a plurality of second images collected at intervals of a preset angle;
    the merging the plurality of first images to obtain the environment image comprises:
    merging the plurality of first images and the plurality of second images to obtain the environment image.
  8. The method according to any one of claims 1-7, characterized in that the airspace of interest comprises a ring-shaped airspace.
  9. The method according to claim 8, characterized in that the inner-ring diameter of the airspace of interest equals the maximum distance between the center of the fuselage of the unmanned aerial vehicle and the edge of its wings.
  10. The method according to claim 8, characterized in that the outer-ring diameter of the airspace of interest equals the sum of the inner-ring diameter of the region of interest and a disturbance distance.
  11. The method according to any one of claims 1-7, characterized in that the determining whether an object of a preset category exists in the region of interest comprises:
    performing semantic recognition on the image of the region of interest in the environment image to obtain a semantic recognition result of the region of interest;
    determining whether the semantic recognition result contains semantics of the preset category;
    if the semantic recognition result contains semantics corresponding to the preset category, this indicates that an object of the preset category exists in the region of interest; if the semantic recognition result of the region of interest does not contain semantics corresponding to the preset category, this indicates that no object of the preset category exists in the region of interest.
  12. The method according to any one of claims 1-7, characterized in that the acquiring an environment image of the passing direction of the unmanned aerial vehicle comprises:
    acquiring the environment image of the passing direction of the unmanned aerial vehicle when it is determined that the unmanned aerial vehicle needs to return to home.
  13. The method according to any one of claims 1-7, characterized in that the acquiring an environment image of the passing direction of the unmanned aerial vehicle comprises:
    acquiring the environment image of the passing direction above the unmanned aerial vehicle when it is determined that the unmanned aerial vehicle needs to avoid an obstacle.
  14. A flight control apparatus based on passable-airspace judgment, characterized in that the apparatus comprises: a memory and a processor;
    the memory is configured to store program code;
    the processor calls the program code and, when the program code is executed, performs the following operations:
    acquiring an environment image of the passing direction of an unmanned aerial vehicle;
    determining, in the environment image, a region of interest corresponding to an airspace of interest used by the unmanned aerial vehicle for passability judgment;
    determining whether an object of a preset category exists in the region of interest;
    if no object of the preset category exists in the region of interest, the airspace of interest being a passable airspace, and controlling the unmanned aerial vehicle to pass through the airspace of interest.
  15. The apparatus according to claim 14, characterized in that, for acquiring the environment image of the passing direction of the unmanned aerial vehicle, the processor is specifically configured to:
    acquire the environment image of the passing direction of the unmanned aerial vehicle while the unmanned aerial vehicle flies in a preset flight manner.
  16. The apparatus according to claim 15, characterized in that the preset flight manner comprises rotating in place through one full turn at a first position.
  17. The apparatus according to claim 16, characterized in that, for acquiring the environment image of the passing direction of the unmanned aerial vehicle, the processor is specifically configured to:
    acquire, while the unmanned aerial vehicle rotates in place, a plurality of first images collected at intervals of a preset angle;
    merge the plurality of first images to obtain the environment image.
  18. The apparatus according to claim 17, characterized in that the preset flight manner further comprises: flying from the first position to a second position along a target direction, and rotating through one full turn around the first position as the center, the target direction being different from the passing direction and from its opposite direction.
  19. The apparatus according to claim 18, characterized in that the target direction is perpendicular to the passing direction.
  20. The apparatus according to claim 18, characterized in that, for acquiring the environment image of the passing direction of the unmanned aerial vehicle, the processor is further configured to:
    acquire, while the unmanned aerial vehicle rotates around the first position as the center, a plurality of second images collected at intervals of a preset angle;
    the merging the plurality of first images to obtain the environment image comprises:
    merging the plurality of first images and the plurality of second images to obtain the environment image.
  21. The apparatus according to any one of claims 14-20, characterized in that the airspace of interest comprises a ring-shaped airspace.
  22. The apparatus according to claim 21, characterized in that the inner-ring diameter of the airspace of interest equals the maximum distance between the center of the fuselage of the unmanned aerial vehicle and the edge of its wings.
  23. The apparatus according to claim 21, characterized in that the outer-ring diameter of the airspace of interest equals the sum of the inner-ring diameter of the region of interest and a disturbance distance.
  24. The apparatus according to any one of claims 14-20, characterized in that, for determining whether an object of a preset category exists in the region of interest, the processor is specifically configured to:
    perform semantic recognition on the image of the region of interest in the environment image to obtain a semantic recognition result of the region of interest;
    determine whether the semantic recognition result contains semantics of the preset category;
    if the semantic recognition result contains semantics corresponding to the preset category, this indicates that an object of the preset category exists in the region of interest; if the semantic recognition result of the region of interest does not contain semantics corresponding to the preset category, this indicates that no object of the preset category exists in the region of interest.
  25. The apparatus according to any one of claims 14-20, characterized in that, for acquiring the environment image of the passing direction of the unmanned aerial vehicle, the processor is specifically configured to:
    acquire the environment image of the passing direction of the unmanned aerial vehicle when it is determined that the unmanned aerial vehicle needs to return to home.
  26. The apparatus according to any one of claims 14-20, characterized in that, for acquiring the environment image of the passing direction of the unmanned aerial vehicle, the processor is specifically configured to:
    acquire the environment image of the passing direction above the unmanned aerial vehicle when it is determined that the unmanned aerial vehicle needs to avoid an obstacle.
  27. An unmanned aerial vehicle, characterized by comprising: a fuselage, and a power system, an image acquisition device, and a flight control apparatus provided on the fuselage;
    the power system is configured to provide power for the unmanned aerial vehicle;
    the image acquisition device is configured to collect environment images of the passing direction of the unmanned aerial vehicle;
    the flight control apparatus comprises a memory and a processor;
    the memory is configured to store program code;
    the processor calls the program code and, when the program code is executed, performs the following operations:
    acquiring, via the image acquisition device, an environment image of the passing direction of the unmanned aerial vehicle;
    determining, in the environment image, a region of interest corresponding to an airspace of interest used by the unmanned aerial vehicle for passability judgment;
    determining whether an object of a preset category exists in the region of interest;
    if no object of the preset category exists in the region of interest, the airspace of interest being a passable airspace, and controlling the unmanned aerial vehicle to pass through the airspace of interest.
  28. The unmanned aerial vehicle according to claim 27, characterized in that, for acquiring the environment image of the passing direction of the unmanned aerial vehicle, the processor is specifically configured to:
    acquire the environment image of the passing direction of the unmanned aerial vehicle while the unmanned aerial vehicle flies in a preset flight manner.
  29. The unmanned aerial vehicle according to claim 28, characterized in that the preset flight manner comprises rotating in place through one full turn at a first position.
  30. The unmanned aerial vehicle according to claim 29, characterized in that, for acquiring the environment image of the passing direction of the unmanned aerial vehicle, the processor is specifically configured to:
    acquire, while the unmanned aerial vehicle rotates in place, a plurality of first images collected at intervals of a preset angle;
    merge the plurality of first images to obtain the environment image.
  31. The unmanned aerial vehicle according to claim 30, characterized in that the preset flight manner further comprises: flying from the first position to a second position along a target direction, and rotating through one full turn around the first position as the center, the target direction being different from the passing direction and from its opposite direction.
  32. The unmanned aerial vehicle according to claim 31, characterized in that the target direction is perpendicular to the passing direction.
  33. The unmanned aerial vehicle according to claim 31, characterized in that, for acquiring the environment image of the passing direction of the unmanned aerial vehicle, the processor is further configured to:
    acquire, while the unmanned aerial vehicle rotates around the first position as the center, a plurality of second images collected at intervals of a preset angle;
    the merging the plurality of first images to obtain the environment image comprises:
    merging the plurality of first images and the plurality of second images to obtain the environment image.
  34. The unmanned aerial vehicle according to any one of claims 27-33, characterized in that the airspace of interest comprises a ring-shaped airspace.
  35. The unmanned aerial vehicle according to claim 34, characterized in that the inner-ring diameter of the airspace of interest equals the maximum distance between the center of the fuselage of the unmanned aerial vehicle and the edge of its wings.
  36. The unmanned aerial vehicle according to claim 34, characterized in that the outer-ring diameter of the airspace of interest equals the sum of the inner-ring diameter of the region of interest and a disturbance distance.
  37. The unmanned aerial vehicle according to any one of claims 27-33, characterized in that, for determining whether an object of a preset category exists in the region of interest, the processor is specifically configured to:
    perform semantic recognition on the image of the region of interest in the environment image to obtain a semantic recognition result of the region of interest;
    determine whether the semantic recognition result contains semantics of the preset category;
    if the semantic recognition result contains semantics corresponding to the preset category, this indicates that an object of the preset category exists in the region of interest; if the semantic recognition result of the region of interest does not contain semantics corresponding to the preset category, this indicates that no object of the preset category exists in the region of interest.
  38. The unmanned aerial vehicle according to any one of claims 27-33, characterized in that, for acquiring the environment image of the passing direction of the unmanned aerial vehicle, the processor is specifically configured to:
    acquire the environment image of the passing direction of the unmanned aerial vehicle when it is determined that the unmanned aerial vehicle needs to return to home.
  39. The unmanned aerial vehicle according to any one of claims 27-33, characterized in that, for acquiring the environment image of the passing direction of the unmanned aerial vehicle, the processor is specifically configured to:
    acquire the environment image of the passing direction above the unmanned aerial vehicle when it is determined that the unmanned aerial vehicle needs to avoid an obstacle.
  40. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, the computer program containing at least one piece of code executable by a computer to control the computer to perform the method according to any one of claims 1-13.
  41. A computer program, characterized in that, when executed by a computer, it is used to implement the method according to any one of claims 1-13.
PCT/CN2020/073658 2020-01-21 2020-01-21 基于可通行空域判断的飞行控制方法、装置及设备 WO2021146971A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/073658 WO2021146971A1 (zh) 2020-01-21 2020-01-21 基于可通行空域判断的飞行控制方法、装置及设备
CN202080004232.8A CN112585555A (zh) 2020-01-21 2020-01-21 基于可通行空域判断的飞行控制方法、装置及设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073658 WO2021146971A1 (zh) 2020-01-21 2020-01-21 基于可通行空域判断的飞行控制方法、装置及设备

Publications (1)

Publication Number Publication Date
WO2021146971A1 true WO2021146971A1 (zh) 2021-07-29

Family

ID=75145416

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073658 WO2021146971A1 (zh) 2020-01-21 2020-01-21 基于可通行空域判断的飞行控制方法、装置及设备

Country Status (2)

Country Link
CN (1) CN112585555A (zh)
WO (1) WO2021146971A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105974938A (zh) * 2016-06-16 2016-09-28 零度智控(北京)智能科技有限公司 Obstacle avoidance method, device, carrier and unmanned aerial vehicle
CN106444837A (zh) * 2016-10-17 2017-02-22 北京理工大学 Unmanned aerial vehicle obstacle avoidance method and system
CN106647805A (zh) * 2016-12-27 2017-05-10 深圳市道通智能航空技术有限公司 Method and device for autonomous flight of unmanned aerial vehicle, and unmanned aerial vehicle
CN107703951A (zh) * 2017-07-27 2018-02-16 上海拓攻机器人有限公司 Binocular vision-based obstacle avoidance method and system for unmanned aerial vehicle
KR20180034000A (ko) * 2016-09-27 2018-04-04 대한민국(행정안전부 국립재난안전연구원장) Collision avoidance method for unmanned aerial vehicle
CN109074490A (zh) * 2018-07-06 2018-12-21 深圳前海达闼云端智能科技有限公司 Passage detection method, related device, and computer-readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106275470B (zh) * 2015-06-29 2019-01-01 优利科技有限公司 Aircraft and obstacle avoidance method and system thereof
CN105959576A (zh) * 2016-07-13 2016-09-21 北京博瑞爱飞科技发展有限公司 Method and device for capturing panoramic image by unmanned aerial vehicle
CN108229287B (zh) * 2017-05-31 2020-05-22 北京市商汤科技开发有限公司 Image recognition method and device, electronic device, and computer storage medium
CN108171116B (zh) * 2017-12-01 2021-03-26 北京臻迪科技股份有限公司 Aircraft assisted obstacle avoidance method and device, and assisted obstacle avoidance system
JP2019188965A (ja) * 2018-04-23 2019-10-31 株式会社イームズラボ Unmanned aerial vehicle, learning result information, unmanned flight method, and unmanned flight program

Also Published As

Publication number Publication date
CN112585555A (zh) 2021-03-30

Similar Documents

Publication Publication Date Title
US11361665B2 (en) Unmanned aerial vehicle privacy controls
US11693428B2 (en) Methods and system for autonomous landing
US11727814B2 (en) Drone flight operations
US20220388656A1 (en) Unmanned Aerial Vehicle Area Surveying
US9632509B1 (en) Operating a UAV with a narrow obstacle-sensor field-of-view
US20200258400A1 (en) Ground-aware uav flight planning and operation system
CA2920388C (en) Unmanned vehicle searches
AU2021204188A1 (en) A backup navigation system for unmanned aerial vehicles
US20200026720A1 (en) Construction and update of elevation maps
TWI817962B (zh) Method with adjustable object-avoidance proximity threshold based on predictability of the environment, robotic vehicle, and processing device
WO2020103110A1 (zh) Point cloud map-based image boundary acquisition method and device, and aircraft
JP2020098567A (ja) Adaptive detect-and-avoid system
WO2017139282A1 (en) Unmanned aerial vehicle privacy controls
WO2020143576A1 (zh) Method and apparatus for adjusting main detection direction of airborne radar, and unmanned aerial vehicle
Escobar‐Alvarez et al. R‐ADVANCE: rapid adaptive prediction for vision‐based autonomous navigation, control, and evasion
US20180348766A1 (en) System and Method for Mission Planning and Flight Automation for Unmanned Aircraft
WO2022095067A1 (zh) Path planning method, path planning apparatus, path planning system, and medium
US20220392353A1 (en) Unmanned aerial vehicle privacy controls
WO2021238743A1 (zh) Unmanned aerial vehicle flight control method and apparatus, and unmanned aerial vehicle
WO2021146973A1 (zh) Control method and device for return flight of unmanned aerial vehicle, movable platform, and storage medium
WO2021168810A1 (zh) Unmanned aerial vehicle control method and apparatus, and unmanned aerial vehicle
WO2021146971A1 (zh) Flight control method, device and equipment based on passable airspace determination
Bakirci et al. Surveillance, reconnaissance and detection services for disaster operations of IoT-based eVTOL UAVs with swarm intelligence
CN115602003A (zh) Unmanned aerial vehicle flight obstacle avoidance method and system, and readable storage medium
WO2022061902A1 (zh) Flight control method and apparatus, unmanned aerial vehicle, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915338

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915338

Country of ref document: EP

Kind code of ref document: A1