WO2020135797A1 - Depth map processing method and device, and unmanned aerial vehicle - Google Patents

Depth map processing method and device, and unmanned aerial vehicle

Info

Publication number
WO2020135797A1
WO2020135797A1 (PCT/CN2019/129562)
Authority
WO
WIPO (PCT)
Prior art keywords
thread
image
hardware acceleration
binocular
resolution
Prior art date
Application number
PCT/CN2019/129562
Other languages
English (en)
French (fr)
Inventor
李昭早
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司
Priority to EP19905206.9A (EP3889544B1)
Publication of WO2020135797A1
Priority to US17/361,694 (US20210325909A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64CAEROPLANES; HELICOPTERS
    • B64C39/00Aircraft not otherwise provided for
    • B64C39/02Aircraft not otherwise provided for characterised by special use
    • B64C39/024Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64DEQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D47/00Equipment not otherwise provided for
    • B64D47/08Arrangements of cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U20/00Constructional aspects of UAVs
    • B64U20/80Arrangement of on-board electronics, e.g. avionics systems or wiring
    • B64U20/87Mounting of imaging devices, e.g. mounting of gimbals
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00UAVs specially adapted for particular uses or applications
    • B64U2101/30UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • Embodiments of the present invention relate to the technical field of drones, and in particular to a depth map processing method and device, and a drone.
  • the UAV can take obstacle avoidance measures according to the positions of the obstacles.
  • vision systems such as monocular vision system, binocular vision system, etc.
  • a camera device is used to capture images of the area around the drone, and the images are processed to determine the location information of surrounding obstacles.
  • the unmanned aerial vehicle then takes obstacle avoidance measures such as detouring, decelerating or pausing, based on its own speed and attitude and the position information of the obstacles, to avoid the obstacles.
  • the related art has at least the following problem: when the UAV performs image processing and takes obstacle avoidance measures according to the processing results, a high image acquisition frame rate is likely to cause image processing blocking, resulting in a large delay.
  • the purpose of the embodiments of the present invention is to provide a depth map processing method and device and a drone, which can alleviate the problems of image processing blocking and large delay when the drone uses a vision system.
  • an embodiment of the present invention provides a depth map processing method for a UAV controller, the UAV further includes an image acquisition device, and the image acquisition device is communicatively connected to the controller, The method includes:
  • the method also includes:
  • before performing the step S1, the step S2 and the step S3, acquiring the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3;
  • according to the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3, at least two threads and at least one ring queue are established, and the at least two threads respectively execute the step S1, the step S2 and the step S3 to reduce the total execution time;
  • of two threads executing adjacent steps, the processing result of the thread executing the former step is sent to the ring queue, and the thread executing the latter step takes the processing result from the ring queue and executes the latter step according to the processing result.
  • the establishing of at least two threads and at least one ring queue, the at least two threads respectively executing the step S1, the step S2 and the step S3 to reduce the total execution time, includes:
  • of the first thread and the second thread, the processing result of the thread executing the former step is sent to the first ring queue, and the thread executing the latter step takes the processing result from the first ring queue and executes the latter step according to the processing result.
  • the first preset condition is:
  • the sum of the execution time of the step S1, the step S2 and the step S3 is greater than a preset value.
  • the preset value is 1000/P, where P is the image frame rate.
  • the first thread executes two of the step S1, the step S2 and the step S3, and the second thread executes the remaining one of the step S1, the step S2 and the step S3;
  • the establishing, if the sum of the execution times of the step S1, the step S2 and the step S3 meets the first preset condition, of a first thread, a second thread and a first ring queue, the first thread and the second thread respectively executing the step S1, the step S2 and the step S3, includes:
  • of the first thread and the third thread, the processing result of the thread executing the former step is sent to the second ring queue, and the thread executing the latter step takes the processing result from the second ring queue and executes the latter step according to the processing result.
  • the second preset condition is that the sum of the execution time of the two steps executed by the first thread is greater than a preset value.
  • the preset value is 1000/P, where P is the image frame rate.
  • the controller includes a hardware acceleration channel, and the image acquisition device includes at least two sets of binocular units;
  • the binocular matching of the image to obtain the depth map of the target area includes:
  • the images captured by the at least two groups of binocular units are sent to the hardware acceleration channel, and binocular matching is performed on the images captured by the at least two groups of binocular units by time-division multiplexing the hardware acceleration channel, to obtain the depth map.
  • the number of the hardware acceleration channels is at least two, and the images captured by the at least two groups of binocular units include an image group of a first resolution and an image group of a second resolution, wherein the second resolution is greater than the first resolution;
  • the binocular matching of the image to obtain the depth map of the target area includes:
  • the image group with the first resolution is sent to the other one of the at least two hardware acceleration channels for binocular matching to obtain a depth map corresponding thereto.
  • the number of the hardware acceleration channels is 4, which are the first hardware acceleration channel, the second hardware acceleration channel, the third hardware acceleration channel, and the fourth hardware acceleration channel, respectively;
  • the image acquisition device includes 6 groups of binocular units; among the 6 groups, the image groups captured by 4 groups of binocular units are image groups of the first resolution, and the image groups captured by 2 groups of binocular units are image groups of the second resolution;
  • the sending of the image group of the second resolution to one of the at least two hardware acceleration channels for binocular matching, to obtain a depth map corresponding thereto, includes:
  • the sending of the image group of the first resolution to another of the at least two hardware acceleration channels for binocular matching, to obtain a depth map corresponding thereto, includes:
  • an embodiment of the present invention provides a depth map processing device for a controller of a drone, the drone further includes an image acquisition device, and the image acquisition device is communicatively connected to the controller,
  • the device includes:
  • An image correction module configured to perform step S1 to correct the image of the target area collected by the image collection device
  • a depth map acquisition module configured to perform step S2 and perform binocular matching on the image to obtain a depth map of the target area
  • Obstacle distribution acquisition module used to perform step S3, and obtain the obstacle distribution around the drone according to the depth map
  • the device also includes:
  • a time obtaining module, configured to obtain the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3 before the step S1, the step S2 and the step S3 are performed;
  • a thread and ring queue establishment module, configured to establish at least two threads and at least one ring queue according to the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3, the at least two threads respectively executing the step S1, the step S2 and the step S3 to reduce the total execution time;
  • of two threads executing adjacent steps, the processing result of the thread executing the former step is sent to the ring queue, and the thread executing the latter step takes the processing result from the ring queue and executes the latter step according to the processing result.
  • the thread and ring queue establishment module includes:
  • a judgment sub-module, used to judge whether the sum of the execution times of the step S1, the step S2 and the step S3 satisfies the first preset condition;
  • a thread and ring queue establishment sub-module, used to establish a first thread, a second thread and a first ring queue if the sum of the execution times of the step S1, the step S2 and the step S3 meets the first preset condition, the first thread and the second thread respectively executing the step S1, the step S2 and the step S3;
  • of the first thread and the second thread, the processing result of the thread executing the former step is sent to the first ring queue, and the thread executing the latter step takes the processing result from the first ring queue and executes the latter step according to the processing result.
  • the first preset condition is:
  • the sum of the execution time of the step S1, the step S2 and the step S3 is greater than a preset value.
  • the preset value is 1000/P, where P is the image frame rate.
  • the first thread executes two of the step S1, the step S2 and the step S3, and the second thread executes one of the step S1, the step S2 and the step S3;
  • the thread and ring queue establishment sub-module is specifically used for:
  • of the first thread and the third thread, the processing result of the thread executing the former step is sent to the second ring queue, and the thread executing the latter step takes the processing result from the second ring queue and executes the latter step according to the processing result.
  • the second preset condition is that the sum of the execution time of the two steps executed by the first thread is greater than a preset value.
  • the preset value is 1000/P, where P is the image frame rate.
  • the controller includes a hardware acceleration channel, and the image acquisition device includes at least two sets of binocular units;
  • the depth map acquisition module is specifically used to:
  • the images captured by the at least two groups of binocular units are sent to the hardware acceleration channel, and binocular matching is performed on the images captured by the at least two groups of binocular units by time-division multiplexing the hardware acceleration channel, to obtain the depth map.
  • the number of the hardware acceleration channels is at least two, and the images captured by the at least two groups of binocular units include an image group of a first resolution and an image group of a second resolution, wherein the second resolution is greater than the first resolution;
  • the depth map acquisition module is specifically used to:
  • the image group with the first resolution is sent to the other one of the at least two hardware acceleration channels for binocular matching to obtain a depth map corresponding thereto.
  • the number of the hardware acceleration channels is 4, which are the first hardware acceleration channel, the second hardware acceleration channel, the third hardware acceleration channel, and the fourth hardware acceleration channel, respectively;
  • the image acquisition device includes 6 groups of binocular units; among the 6 groups, the image groups captured by 4 groups of binocular units are image groups of the first resolution, and the image groups captured by 2 groups of binocular units are image groups of the second resolution;
  • the depth map acquisition module is specifically used to:
  • an embodiment of the present invention provides a drone.
  • the drone includes:
  • an arm connected to the fuselage;
  • a power device provided on the arm;
  • an image acquisition device provided on the fuselage and used to acquire target images of the target area of the drone;
  • a vision chip is provided on the body, and the vision chip is in communication connection with the image acquisition device;
  • the vision chip includes:
  • At least one processor and
  • a memory communicatively connected to the at least one processor, the memory storing instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to execute the method described above.
  • an embodiment of the present invention provides a non-volatile computer-readable storage medium that stores computer-executable instructions.
  • the computer-executable instructions, when executed by a drone, cause the drone to perform the method described above.
  • the depth map processing method and device and the drone establish at least two threads and at least one ring queue according to the execution time of each step of the depth map processing performed by the drone controller, and the at least two threads respectively execute the steps of the depth map processing; each thread can obtain the processing results of other threads through the at least one ring queue.
  • FIG. 1 is a schematic diagram of an application scenario of a depth map processing method and device according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of the hardware structure of an embodiment of the drone of the present invention.
  • FIG. 3 is a schematic flowchart of an embodiment of a depth map processing method of the present invention.
  • FIG. 4 is a schematic diagram of applying a hardware acceleration channel in an embodiment of the depth map processing method of the present invention.
  • FIG. 5 is a schematic structural diagram of an embodiment of a depth map processing device of the present invention.
  • FIG. 6 is a schematic structural diagram of an embodiment of a depth map processing device of the present invention.
  • FIG. 7 is a schematic diagram of the hardware structure of the vision chip in an embodiment of the drone of the present invention.
  • the depth map processing method, device and drone provided in the embodiments of the present invention can be applied to the application scenario shown in FIG. 1.
  • the application scenario includes a drone 100 and an obstacle 200, wherein the drone 100 may be any suitable unmanned aerial vehicle, including fixed-wing and rotary-wing unmanned aerial vehicles, such as helicopters, quadcopters, and aircraft with other numbers of rotors and/or other rotor configurations.
  • the UAV 100 may also be another movable object, such as a manned aircraft, a model aircraft, an unmanned airship or an unmanned hot-air balloon.
  • the obstacle 200 may be, for example, a building, a mountain, a tree, a forest, a signal tower, or another movable or immovable object (only one obstacle is shown in FIG. 1; in practical applications there may be more obstacles or no obstacles).
  • referring to FIG. 2 (FIG. 2 only partially shows the structure of the drone 100),
  • the drone 100 includes a fuselage 10, an arm connected to the fuselage 10, a power device, and a control system provided on the fuselage 10.
  • the power device is used to provide the thrust and lift for the drone 100 to fly.
  • the control system is the nerve center of the drone 100 and may include multiple functional units, such as a flight control system, a vision system, and other systems with specific functions.
  • the vision system includes an image acquisition device 30 and a vision chip 20, etc.
  • the flight control system includes various sensors (such as gyroscopes, accelerometers) and flight control chips.
  • the drone 100 may detect the position information of obstacles around the drone 100 through the image acquisition device 30 and the vision chip 20, and take obstacle avoidance measures according to the position information.
  • the image acquisition device 30 is used to acquire the target image of the target area, which may be, for example, a high-definition camera or a motion camera.
  • the vision chip 20 is communicatively connected to the image acquisition device 30; it can acquire the target images captured by the image acquisition device 30 and perform image processing on them to obtain the depth information of the areas corresponding to the target images, thereby obtaining the position information of the obstacles 200 around the drone 100. According to the position information of the obstacles 200, the vision chip 20 can determine obstacle avoidance measures.
  • the flight control chip controls the UAV 100 according to the obstacle avoidance measures.
  • the obstacle avoidance measures include controlling the drone to decelerate or pause, etc.
  • the vision chip 20 can also use the position information to determine the distance of a particular obstacle and perform three-dimensional reconstruction.
  • the image acquisition device 30 may include at least one monocular unit or at least one binocular unit (the binocular unit will be used as an example for description below).
  • Each binocular unit can obtain a set of target images, and each binocular unit acquires target images at a preset frame rate.
  • the vision chip 20 needs to perform image processing on the target image groups obtained by the binocular units to obtain the depth information corresponding to each target image group, and also needs to obtain the distribution of the obstacles 200 around the drone 100 according to the depth information corresponding to each target image group.
  • when the frame rate of target image acquisition is high, the vision chip 20 may be unable to keep up, which in turn blocks the processing of the target image groups from the input sources, causing image processing blocking and a large delay.
  • according to the execution time of each step of the depth map processing performed by the drone controller, when the total execution time of the steps is large, at least two threads and at least one ring queue are established, and the at least two threads respectively execute the steps of the depth map processing.
  • Each thread can obtain the processing result of other threads through at least one ring queue.
  • the drone 100 is provided with a vision chip 20 to obtain the obstacle avoidance measures of the drone 100 according to the image acquired by the image acquisition device 30.
  • the drone 100 may also use another controller to realize the functions of the vision chip 20.
  • FIG. 3 is a schematic flowchart of a depth map processing method provided by an embodiment of the present invention.
  • the method is used in a controller of the drone 100 shown in FIG. 1 or FIG. 2.
  • the controller may be the vision chip 20 of the drone 100; as shown in FIG. 3, the method includes:
  • the image acquisition device may be a binocular unit, and correcting the images collected by the image acquisition device includes performing image correction on the images and calibrating each group of images collected by the binocular unit to obtain the calibration parameters corresponding to each group of images.
  • binocular matching is performed on each group of images to obtain a disparity map corresponding to each group of images, and the depth information of the area corresponding to each group of images is obtained according to the disparity map and the calibration parameters.
  • the obstacle distribution includes, for example, the position information of each obstacle, the distance of each obstacle, a three-dimensional map of the environment around the drone, and the like.
  • the execution time of each step may be preset by the designer based on the basic understanding of the processing time of each step.
  • alternatively, the step S1, the step S2 and the step S3 may first be trial-run in the controller for a period of time, and the controller may measure the execution time of each step to obtain the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3.
  • according to the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3, at least two threads and at least one ring queue are established, and the at least two threads respectively execute the step S1, the step S2 and the step S3 to reduce the total execution time; of two threads executing adjacent steps, the processing result of the thread executing the former step is sent to the ring queue, and the thread executing the latter step takes the processing result from the ring queue and executes the latter step according to the processing result.
  • based on the obtained execution time of step S1, execution time of step S2 and execution time of step S3, it is determined whether at least two threads are used to execute step S1, step S2 and step S3 in parallel.
  • the number of threads and ring queues can be determined according to the execution time of the above steps, for example, it can be two threads, one ring queue, or three threads, two ring queues.
  • if the total execution time of step S1, step S2 and step S3 is long (e.g., greater than 1000/P, where P is the frame rate of the images captured by the image acquisition device), image blocking may occur.
  • to avoid image blocking, at least two threads can be established to execute step S1, step S2 and step S3 in parallel. If two adjacent steps are executed by different threads, the thread executing the former step can send its processing result to the ring queue, and the thread executing the latter step obtains the processing result from the ring queue.
  • a first thread, a second thread, and a first ring queue can be set, the first thread performs the two steps of step S1, step S2, and step S3, and the second thread performs the steps of step S1, step S2, and step S3 One step.
  • the second thread executes two steps of step S1, step S2, and step S3, and the first thread executes one of step S1, step S2, and step S3.
  • the first thread executes step S1 and step S2, and the second thread executes step S3.
  • the first thread sends the processing result of step S2 to the first circular queue, and the second thread obtains the processing result from the first circular queue and executes step S3.
  • the first thread may also perform step S1, the second thread may perform step S2 and step S3, and so on.
  • assume the execution time of step S1 is t1, the execution time of step S2 is t2 and the execution time of step S3 is t3; if t1+t2+t3 > 1000/P, t1+t2 < 1000/P and t3 < 1000/P,
  • the first thread executes step S1 and step S2 and the second thread executes step S3; after the two threads execute step S1, step S2 and step S3 in parallel, the total execution time is max(t1+t2, t3) < 1000/P, which reduces the total execution time and effectively avoids image blocking.
  • if one thread executes two adjacent steps and the total execution time of the two steps is still large, the two steps can also be executed separately by two threads.
  • a third thread and a second ring queue can be established.
  • the first thread executes step S1 and the third thread executes step S2; the first thread sends the processing result of step S1 to the second ring queue, and the third thread takes the processing result from the second ring queue and executes step S2.
  • the third thread may also perform step S1, and the first thread may perform step S2.
  • step S1 if one of step S1, step S2, and step S3 has a longer execution time, for example, t3>1000/P, then two or more threads may be further used to execute step S3.
  • binocular matching can be executed by a hardware acceleration channel provided in the controller.
  • the hardware acceleration channel is a device composed of hardware and interface software that can increase the speed of software operation.
  • a hardware acceleration channel may be used to increase the running speed.
  • Each frame image group continuously captured by at least two binocular units is sequentially sent to the hardware acceleration channel, and image processing is performed on each group of images by time-division multiplexing the hardware acceleration channel to obtain depth information corresponding to each group of images. That is, the first group of images is first sent to the hardware acceleration channel for processing, and after the processing is completed, the second group of images is sent..., and so on, and the hardware acceleration channel is multiplexed by polling.
  • At least two hardware acceleration channels may be used to increase the running speed.
  • the image group obtained by the high-resolution binocular unit and the image group obtained by the low-resolution binocular unit are respectively subjected to a binocular matching process with a hardware acceleration channel to further improve the operation speed.
  • the image group obtained by one binocular unit with a high resolution can also be processed with a dedicated hardware acceleration channel, while the image groups obtained by at least two binocular units with a low resolution (for example, two binocular units or three binocular units) share one hardware acceleration channel for processing.
  • the image groups obtained by at least two binocular units with low resolution share a hardware acceleration channel, and the hardware acceleration channel can be fully and reasonably used without affecting the running speed of the software when the number of hardware acceleration channels is small.
  • each target image group may be processed by time-division multiplexing the hardware acceleration channel.
  • one implementation of a drone includes two pairs of binocular units with a resolution of 720P and four pairs of binocular units with a resolution of VGA, and the controller has four hardware acceleration channels.
  • each high-resolution 720P binocular unit can use a hardware acceleration channel alone, and every two of the low-resolution VGA binocular units share one hardware acceleration channel.
  • the correspondence between the binocular unit and the hardware acceleration channel can be set in advance, so that the image group obtained by each binocular unit can be sent to the corresponding hardware channel for processing.
  • an embodiment of the present invention also provides a depth map processing device, which is used in the controller of the drone 100 shown in FIG. 1 or FIG. 2, in some of these embodiments
  • the controller may be the vision chip 20 of the drone 100.
  • the depth map processing device 500 includes:
  • the image correction module 501 is configured to perform step S1 to correct the image of the target area collected by the image collection device;
  • the depth map acquisition module 502 is configured to perform step S2 to perform binocular matching on the image to obtain a depth map of the target area;
  • the obstacle distribution obtaining module 503 is configured to perform step S3, and obtain the obstacle distribution around the drone according to the depth map;
  • the device also includes:
  • the time obtaining module 504 is configured to obtain the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3 before the step S1, the step S2 and the step S3 are performed;
  • the thread and ring queue establishment module 505 is configured to establish at least two threads and at least one ring queue according to the execution time of the step S1, the execution time of the step S2 and the execution time of the step S3,
  • the at least two threads respectively executing the step S1, the step S2 and the step S3 to reduce the total execution time;
  • of two threads executing adjacent steps, the processing result of the thread executing the former step is sent to the ring queue, and the thread executing the latter step takes the processing result from the ring queue and executes the latter step according to the processing result.
  • at least two threads and at least one ring queue are established according to the execution time of each step of the depth map processing performed by the UAV controller, and the at least two threads respectively execute the steps of the depth map processing.
  • Each thread can obtain the processing result of other threads through at least one ring queue.
  • the thread and ring queue establishment module 505 includes:
  • the judgment submodule 5051 is configured to judge whether the sum of the execution time of the step S1, the step S2 and the step S3 satisfies the first preset condition;
  • the thread and ring queue establishment sub-module 5052 is configured to establish a first thread, a second thread and a first ring queue if the sum of the execution times of the step S1, the step S2 and the step S3 meets the first preset condition, where the first thread and the second thread respectively execute the step S1, the step S2 and the step S3;
  • of the first thread and the second thread, the processing result of the thread executing the former step is sent to the first ring queue, and the thread executing the latter step takes the processing result from the first ring queue and executes the latter step according to the processing result.
  • the first preset condition is:
  • the sum of the execution time of the step S1, the step S2 and the step S3 is greater than a preset value.
  • the preset value is 1000/P, where P is the image frame rate.
  • the first thread executes two of the step S1, the step S2 and the step S3, and the second thread executes one of the step S1, the step S2 and the step S3;
  • the thread and ring queue creation submodule 5052 is specifically used for:
  • of the first thread and the third thread, the processing result of the thread executing the former step is sent to the second ring queue, and the thread executing the latter step takes the processing result from the second ring queue and executes the latter step according to the processing result.
  • the second preset condition is that the sum of execution times of the two steps performed by the first thread is greater than a preset value.
  • the preset value is 1000/P, where P is the image frame rate.
  • the controller includes a hardware acceleration channel, and the image acquisition device includes at least two sets of binocular units;
  • the depth map acquisition module 502 is specifically used to:
  • the images captured by the at least two groups of binocular units are sent to the hardware acceleration channel, and binocular matching is performed on the images captured by the at least two groups of binocular units by time-division multiplexing the hardware acceleration channel, to obtain the depth map.
  • the number of the hardware acceleration channels is at least two, and the images captured by the at least two groups of binocular units include an image group of a first resolution and an image group of a second resolution, wherein the second resolution is greater than the first resolution;
  • the depth map acquisition module 502 is specifically used to:
  • the image group with the first resolution is sent to the other one of the at least two hardware acceleration channels for binocular matching to obtain a depth map corresponding thereto.
  • the number of the hardware acceleration channels is 4, which are the first hardware acceleration channel, the second hardware acceleration channel, the third hardware acceleration channel, and the fourth hardware acceleration channel, respectively;
  • the image acquisition device includes 6 groups of binocular units; among the 6 groups, the image groups captured by 4 groups of binocular units are image groups of the first resolution, and the image groups captured by 2 groups of binocular units are image groups of the second resolution;
  • the depth map acquisition module 502 is specifically used to:
  • the above-mentioned device can execute the method provided by the embodiments of the present application, and has functional modules and beneficial effects corresponding to the execution method.
  • for technical details not described in detail in the device embodiments, refer to the methods provided in the embodiments of the present application.
  • FIG. 7 is a schematic diagram of the hardware structure of the vision chip 20 in an embodiment of the drone 100. As shown in FIG. 7, the vision chip 20 includes:
  • One or more processors 21 and a memory 22, one processor 21 is taken as an example in FIG. 7.
  • the processor 21 and the memory 22 may be connected through a bus or in other ways.
  • the connection through a bus is used as an example.
  • the memory 22, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the depth map processing method in the embodiments of the present application (for example, the image correction module 501, the depth map acquisition module 502, the obstacle distribution acquisition module 503, the time acquisition module 504 and the thread and ring queue establishment module 505 shown in FIG. 5).
  • the processor 21 executes various functional applications and data processing of the drone by running non-volatile software programs, instructions, and modules stored in the memory 22, that is, implementing the depth map processing method of the foregoing method embodiment.
  • the memory 22 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and application programs required for at least one function; the storage data area may store data created according to the use of a vision chip, and the like.
  • the memory 22 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 22 may optionally include memories remotely provided with respect to the processor 21, and these remote memories may be connected to the drone via a network. Examples of the aforementioned network include, but are not limited to, the Internet, intranet, local area network, mobile communication network, and combinations thereof.
  • the one or more modules are stored in the memory 22 and, when executed by the one or more processors 21, perform the depth map processing method in any of the above method embodiments, for example, performing method steps 101 to 105 in FIG. 3 described above, and realizing the functions of modules 501-505 in FIG. 5 and modules 501-505 and 5051-5052 in FIG. 6.
  • An embodiment of the present application provides a non-volatile computer-readable storage medium that stores computer-executable instructions, which are executed by one or more processors, for example, the processor 21 in FIG. 7,
  • enabling the one or more processors to execute the depth map processing method in any of the above method embodiments, for example, performing method steps 101 to 105 in FIG. 3 described above.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each embodiment can be implemented by means of software plus a general hardware platform, and of course, it can also be implemented by hardware.
  • a person of ordinary skill in the art may understand that all or part of the processes in the method of the foregoing embodiments may be completed by instructing relevant hardware through a computer program.
  • the program may be stored in a computer-readable storage medium. When executed, it may include the processes of the foregoing method embodiments.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

A depth map processing method for an unmanned aerial vehicle (UAV), comprising: S1, correcting an image of a target area captured by an image acquisition device (30); S2, performing binocular matching on the image to obtain a depth map of the target area; S3, obtaining the distribution of obstacles around the UAV (100) according to the depth map; before the above steps are performed, acquiring the execution time (t1, t2, t3) of each step; and establishing, according to the execution times (t1, t2, t3) of the steps, at least two threads and at least one ring queue, the at least two threads respectively executing the steps so as to reduce the total execution time. The steps of the depth map processing are respectively executed by at least two threads, and each thread can obtain the processing results of other threads through the at least one ring queue. By adding ring queues and running multiple threads in parallel, the problem of image processing blocking is solved and the delay is reduced. Also disclosed are a depth map processing device for a UAV, a UAV that performs the depth map processing method, and a non-volatile computer-readable storage medium storing computer-executable instructions that cause a UAV to perform the depth map processing method.

Description

Depth map processing method and device, and unmanned aerial vehicle

TECHNICAL FIELD

Embodiments of the present invention relate to the technical field of unmanned aerial vehicles (UAVs), and in particular to a depth map processing method and device, and a UAV.

BACKGROUND

During autonomous flight, a UAV needs to fly around obstacles, so the positions of the obstacles need to be detected so that the UAV can take obstacle avoidance measures according to the positions of the obstacles. At present, UAVs mostly use a vision system (for example, a monocular vision system or a binocular vision system) to detect obstacle positions: a camera device captures images of the area around the UAV, and the images are processed to determine the position information of surrounding obstacles. The UAV then takes obstacle avoidance measures such as detouring, decelerating or pausing according to its own speed and attitude and the position information of the obstacles, so as to avoid the obstacles.

In the course of implementing the present invention, the inventor found that the related art has at least the following problem: when the UAV performs image processing and takes obstacle avoidance measures according to the processing results, a high image acquisition frame rate is likely to cause image processing blocking, resulting in a large delay.

SUMMARY

The purpose of the embodiments of the present invention is to provide a depth map processing method and device and a UAV that can alleviate the problems of image processing blocking and large delay when a UAV uses a vision system.

In a first aspect, an embodiment of the present invention provides a depth map processing method for a controller of a UAV, the UAV further including an image acquisition device communicatively connected to the controller. The method includes:

S1, correcting an image of a target area captured by the image acquisition device;

S2, performing binocular matching on the image to obtain a depth map of the target area;

S3, obtaining the distribution of obstacles around the UAV according to the depth map;

the method further includes:

before performing step S1, step S2 and step S3, obtaining the execution time of step S1, the execution time of step S2 and the execution time of step S3;

establishing, according to the execution time of step S1, the execution time of step S2 and the execution time of step S3, at least two threads and at least one ring queue, the at least two threads respectively executing step S1, step S2 and step S3 so as to reduce the total execution time;

wherein, of two threads executing adjacent steps, the processing result of the thread executing the former step is sent to the ring queue, and the thread executing the latter step takes the processing result from the ring queue and executes the latter step according to the processing result.

In some embodiments, the establishing, according to the execution time of step S1, the execution time of step S2 and the execution time of step S3, at least two threads and at least one ring queue, the at least two threads respectively executing step S1, step S2 and step S3 so as to reduce the total execution time, includes:

judging whether the sum of the execution times of step S1, step S2 and step S3 satisfies a first preset condition;

if so, establishing a first thread, a second thread and a first ring queue, the first thread and the second thread respectively executing step S1, step S2 and step S3;

wherein, of the first thread and the second thread, the processing result of the thread executing the former step is sent to the first ring queue, and the thread executing the latter step takes the processing result from the first ring queue and executes the latter step according to the processing result.

In some embodiments, the first preset condition is:

the sum of the execution times of step S1, step S2 and step S3 is greater than a preset value.

In some embodiments, the preset value is 1000/P, where P is the image frame rate.

In some embodiments, the first thread executes two of step S1, step S2 and step S3, and the second thread executes one of step S1, step S2 and step S3;

the establishing, if the sum of the execution times of step S1, step S2 and step S3 satisfies the first preset condition, a first thread, a second thread and a first ring queue, the first thread and the second thread respectively executing step S1, step S2 and step S3, includes:

judging whether the sum of the execution times of the two steps executed by the first thread satisfies a second preset condition;

if so, establishing a third thread and a second ring queue, the first thread and the third thread respectively executing the two steps;

wherein, of the first thread and the third thread, the processing result of the thread executing the former step is sent to the second ring queue, and the thread executing the latter step takes the processing result from the second ring queue and executes the latter step according to the processing result.

In some embodiments, the second preset condition is that the sum of the execution times of the two steps executed by the first thread is greater than a preset value.

In some embodiments, the preset value is 1000/P, where P is the image frame rate.

In some embodiments, the controller includes a hardware acceleration channel, and the image acquisition device includes at least two groups of binocular units;

then the performing binocular matching on the image to obtain the depth map of the target area includes:

sending the images captured by the at least two groups of binocular units to the hardware acceleration channel, and performing binocular matching on the images captured by the at least two groups of binocular units by time-division multiplexing the hardware acceleration channel, so as to obtain the depth map.

In some embodiments, the number of hardware acceleration channels is at least two, and the images captured by the at least two groups of binocular units include an image group of a first resolution and an image group of a second resolution, wherein the second resolution is greater than the first resolution;

then the performing binocular matching on the image to obtain the depth map of the target area includes:

sending the image group of the second resolution to one of the at least two hardware acceleration channels for binocular matching, to obtain a depth map corresponding thereto;

sending the image group of the first resolution to another of the at least two hardware acceleration channels for binocular matching, to obtain a depth map corresponding thereto.

In some embodiments, the number of hardware acceleration channels is 4, namely a first hardware acceleration channel, a second hardware acceleration channel, a third hardware acceleration channel and a fourth hardware acceleration channel;

the image acquisition device includes 6 groups of binocular units; among the 6 groups of binocular units, the image groups captured by 4 groups of binocular units are image groups of the first resolution, and the image groups captured by 2 groups of binocular units are image groups of the second resolution;

then the sending the image group of the second resolution to one of the at least two hardware acceleration channels for binocular matching, to obtain a depth map corresponding thereto, includes:

sending the image groups captured by the 2 groups of binocular units to the first hardware acceleration channel and the second hardware acceleration channel, respectively;

and the sending the image group of the first resolution to another of the at least two hardware acceleration channels for binocular matching, to obtain a depth map corresponding thereto, includes:

sending the image groups captured by 2 of the 4 groups of binocular units to the third hardware acceleration channel, and sending the image groups captured by the remaining 2 of the 4 groups of binocular units to the fourth hardware acceleration channel.

In a second aspect, an embodiment of the present invention provides a depth map processing device for a controller of a UAV, the UAV further including an image acquisition device communicatively connected to the controller. The device includes:

an image correction module, configured to perform step S1 of correcting an image of a target area captured by the image acquisition device;

a depth map acquisition module, configured to perform step S2 of performing binocular matching on the image to obtain a depth map of the target area;

an obstacle distribution acquisition module, configured to perform step S3 of obtaining the distribution of obstacles around the UAV according to the depth map;

the device further includes:

a time acquisition module, configured to obtain the execution time of step S1, the execution time of step S2 and the execution time of step S3 before step S1, step S2 and step S3 are performed;

a thread and ring queue establishment module, configured to establish at least two threads and at least one ring queue according to the execution time of step S1, the execution time of step S2 and the execution time of step S3, the at least two threads respectively executing step S1, step S2 and step S3 so as to reduce the total execution time;

wherein, of two threads executing adjacent steps, the processing result of the thread executing the former step is sent to the ring queue, and the thread executing the latter step takes the processing result from the ring queue and executes the latter step according to the processing result.

In some embodiments, the thread and ring queue establishment module includes:

a judgment sub-module, configured to judge whether the sum of the execution times of step S1, step S2 and step S3 satisfies a first preset condition;

a thread and ring queue establishment sub-module, configured to establish a first thread, a second thread and a first ring queue if the sum of the execution times of step S1, step S2 and step S3 satisfies the first preset condition, the first thread and the second thread respectively executing step S1, step S2 and step S3;

wherein, of the first thread and the second thread, the processing result of the thread executing the former step is sent to the first ring queue, and the thread executing the latter step takes the processing result from the first ring queue and executes the latter step according to the processing result.

In some embodiments, the first preset condition is:

the sum of the execution times of step S1, step S2 and step S3 is greater than a preset value.

In some embodiments, the preset value is 1000/P, where P is the image frame rate.

In some embodiments, the first thread executes two of step S1, step S2 and step S3, and the second thread executes one of step S1, step S2 and step S3;

the thread and ring queue establishment sub-module is specifically configured to:

judge whether the sum of the execution times of the two steps executed by the first thread satisfies a second preset condition;

if so, establish a third thread and a second ring queue, the first thread and the third thread respectively executing the two steps;

wherein, of the first thread and the third thread, the processing result of the thread executing the former step is sent to the second ring queue, and the thread executing the latter step takes the processing result from the second ring queue and executes the latter step according to the processing result.

In some embodiments, the second preset condition is that the sum of the execution times of the two steps executed by the first thread is greater than a preset value.

In some embodiments, the preset value is 1000/P, where P is the image frame rate.

In some embodiments, the controller includes a hardware acceleration channel, and the image acquisition device includes at least two groups of binocular units;

then the depth map acquisition module is specifically configured to:

send the images captured by the at least two groups of binocular units to the hardware acceleration channel, and perform binocular matching on the images captured by the at least two groups of binocular units by time-division multiplexing the hardware acceleration channel, so as to obtain the depth map.

In some embodiments, the number of hardware acceleration channels is at least two, and the images captured by the at least two groups of binocular units include an image group of a first resolution and an image group of a second resolution, wherein the second resolution is greater than the first resolution;

then the depth map acquisition module is specifically configured to:

send the image group of the second resolution to one of the at least two hardware acceleration channels for binocular matching, to obtain a depth map corresponding thereto;

send the image group of the first resolution to another of the at least two hardware acceleration channels for binocular matching, to obtain a depth map corresponding thereto.

In some embodiments, the number of hardware acceleration channels is 4, namely a first hardware acceleration channel, a second hardware acceleration channel, a third hardware acceleration channel and a fourth hardware acceleration channel;

the image acquisition device includes 6 groups of binocular units; among the 6 groups of binocular units, the image groups captured by 4 groups of binocular units are image groups of the first resolution, and the image groups captured by 2 groups of binocular units are image groups of the second resolution;

then the depth map acquisition module is specifically configured to:

send the image groups captured by the 2 groups of binocular units to the first hardware acceleration channel and the second hardware acceleration channel, respectively;

send the image groups captured by 2 of the 4 groups of binocular units to the third hardware acceleration channel, and send the image groups captured by the remaining 2 of the 4 groups of binocular units to the fourth hardware acceleration channel.

In a third aspect, an embodiment of the present invention provides a UAV, the UAV including:

a fuselage;

an arm connected to the fuselage;

a power device provided on the arm;

an image acquisition device provided on the fuselage and configured to acquire target images of a target area of the UAV;

a vision chip provided on the fuselage, the vision chip being communicatively connected to the image acquisition device;

the vision chip including:

at least one processor, and

a memory communicatively connected to the at least one processor, the memory storing instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to execute the method described above.

In a fourth aspect, an embodiment of the present invention provides a non-volatile computer-readable storage medium storing computer-executable instructions that, when executed by a UAV, cause the UAV to perform the method described above.

According to the depth map processing method and device and the UAV of the embodiments of the present invention, at least two threads and at least one ring queue are established according to the execution time of each step of the depth map processing performed by the UAV controller, and the at least two threads respectively execute the steps of the depth map processing. Each thread can obtain the processing results of other threads through the at least one ring queue. By adding ring queues and running multiple threads in parallel, the problem of image processing blocking is solved and the delay is reduced.
BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments are illustrated by the figures in the corresponding drawings. These illustrations do not constitute a limitation of the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.

FIG. 1 is a schematic diagram of an application scenario of the depth map processing method and device according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of the hardware structure of an embodiment of the UAV of the present invention;

FIG. 3 is a schematic flowchart of an embodiment of the depth map processing method of the present invention;

FIG. 4 is a schematic diagram of the use of hardware acceleration channels in an embodiment of the depth map processing method of the present invention;

FIG. 5 is a schematic structural diagram of an embodiment of the depth map processing device of the present invention;

FIG. 6 is a schematic structural diagram of another embodiment of the depth map processing device of the present invention;

FIG. 7 is a schematic diagram of the hardware structure of the vision chip in an embodiment of the UAV of the present invention.

DETAILED DESCRIPTION

To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The depth map processing method and device and the UAV provided in the embodiments of the present invention can be applied to the application scenario shown in FIG. 1. The application scenario includes a UAV 100 and an obstacle 200. The UAV 100 may be any suitable unmanned aerial vehicle, including fixed-wing and rotary-wing unmanned aerial vehicles, such as helicopters, quadcopters, and aircraft with other numbers of rotors and/or other rotor configurations. The UAV 100 may also be another movable object, such as a manned aircraft, a model aircraft, an unmanned airship or an unmanned hot-air balloon. The obstacle 200 may be, for example, a building, a mountain, a tree, a forest, a signal tower, or another movable or immovable object (only one obstacle is shown in FIG. 1; in practical applications there may be more obstacles or none).

In some embodiments, referring to FIG. 2 (FIG. 2 only partially shows the structure of the UAV 100), the UAV 100 includes a fuselage 10, an arm connected to the fuselage 10, a power device, and a control system provided on the fuselage 10. The power device is used to provide the thrust and lift for the UAV 100 to fly. The control system is the nerve center of the UAV 100 and may include multiple functional units, such as a flight control system, a vision system, and other systems with specific functions. The vision system includes an image acquisition device 30 and a vision chip 20, among other components; the flight control system includes various sensors (for example, a gyroscope and an accelerometer) and a flight control chip.

During autonomous flight, the UAV 100 needs to identify and avoid the obstacles 200 ahead of it by itself. The UAV 100 can detect the position information of the obstacles around it through the image acquisition device 30 and the vision chip 20, and take obstacle avoidance measures according to the position information. The image acquisition device 30 is used to acquire target images of a target area and may be, for example, a high-definition camera or an action camera. The vision chip 20 is communicatively connected to the image acquisition device 30; it can acquire the target images captured by the image acquisition device 30 and perform image processing on them to obtain the depth information of the areas corresponding to the target images, thereby obtaining the position information of the obstacles 200 around the UAV 100. According to the position information of the obstacles 200, the vision chip 20 can determine obstacle avoidance measures, and the flight control chip controls the UAV 100 according to the obstacle avoidance measures. The obstacle avoidance measures include controlling the UAV to decelerate or pause, among others. The vision chip 20 can also use the position information of the obstacles 200 to determine the distance of a particular obstacle and to perform three-dimensional reconstruction.

The image acquisition device 30 may include at least one monocular unit or at least one binocular unit (a binocular unit is taken as an example below). Each binocular unit can obtain one group of target images, and each binocular unit captures target images at a preset frame rate. The vision chip 20 needs to perform image processing on the target image groups obtained by the binocular units to obtain the depth information corresponding to each target image group, and also needs to obtain the distribution of the obstacles 200 around the UAV 100 according to the depth information corresponding to each target image group. When the frame rate of target image acquisition is high, the vision chip 20 may be unable to keep up, which in turn blocks the processing of the target image groups from the input sources, causing image processing blocking and a large delay.

To solve the problems of image processing blocking and large delay, the embodiments of the present invention establish, according to the execution time of each step of the depth map processing performed by the UAV controller and when the total execution time of the steps is large, at least two threads and at least one ring queue, the at least two threads respectively executing the steps of the depth map processing. Each thread can obtain the processing results of other threads through the at least one ring queue. By adding ring queues and running multiple threads in parallel, the problem of image processing blocking is solved and the delay is reduced.

In the above embodiments, the UAV 100 is provided with the vision chip 20 to derive the obstacle avoidance measures of the UAV 100 from the images acquired by the image acquisition device 30. In other embodiments, the UAV 100 may also use another controller to realize the functions of the vision chip 20.

FIG. 3 is a schematic flowchart of the depth map processing method provided by an embodiment of the present invention. The method is used in the controller of the UAV 100 shown in FIG. 1 or FIG. 2; in some of these embodiments, the controller may be the vision chip 20 of the UAV 100. As shown in FIG. 3, the method includes:

S1, correcting an image of a target area captured by the image acquisition device.

The image acquisition device may be a binocular unit. Correcting the images captured by the image acquisition device includes performing image correction on the images and calibrating each group of images captured by the binocular unit to obtain the calibration parameters corresponding to each group of images.
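For illustration only (the patent does not provide code), the correction and calibration in step S1 correspond to standard stereo rectification. The following C++ sketch shows one way this could look using OpenCV's public API, under the assumption that the calibration parameters have already been computed; the names StereoCalib and rectifyPair are illustrative, not part of the disclosed method.

    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>

    // Hypothetical container for one binocular unit's calibration parameters.
    struct StereoCalib {
        cv::Mat K1, D1, K2, D2;  // intrinsics and distortion of left/right cameras
        cv::Mat R, T;            // rotation and translation between the two cameras
        cv::Size size;           // image size
    };

    // Rectify one image pair so that epipolar lines become horizontal, and
    // return Q, the 4x4 disparity-to-depth reprojection matrix.
    cv::Mat rectifyPair(const StereoCalib& c, const cv::Mat& left, const cv::Mat& right,
                        cv::Mat& leftRect, cv::Mat& rightRect) {
        cv::Mat R1, R2, P1, P2, Q, m1x, m1y, m2x, m2y;
        cv::stereoRectify(c.K1, c.D1, c.K2, c.D2, c.size, c.R, c.T, R1, R2, P1, P2, Q);
        cv::initUndistortRectifyMap(c.K1, c.D1, R1, P1, c.size, CV_16SC2, m1x, m1y);
        cv::initUndistortRectifyMap(c.K2, c.D2, R2, P2, c.size, CV_16SC2, m2x, m2y);
        cv::remap(left,  leftRect,  m1x, m1y, cv::INTER_LINEAR);
        cv::remap(right, rightRect, m2x, m2y, cv::INTER_LINEAR);
        return Q;
    }

After rectification, corresponding points lie on the same image row, which is what reduces the binocular matching in step S2 to a one-dimensional search along each row.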
S2, performing binocular matching on the image to obtain a depth map of the target area.

That is, binocular matching is performed on each group of images to obtain a disparity map corresponding to each group of images, and the depth information of the area corresponding to each group of images is obtained according to the disparity map and the calibration parameters.
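The relationship used in this step is the standard rectified-stereo equation: for a focal length f (in pixels) and a baseline B (in meters), a pixel with disparity d has depth Z = f·B/d. A minimal, self-contained sketch under those assumptions (not the patent's implementation) follows.

    #include <limits>
    #include <vector>

    // Convert a row-major disparity map (in pixels) into a depth map (in meters)
    // using the rectified stereo relation Z = f * B / d.
    // f: focal length in pixels; baseline: camera separation in meters.
    std::vector<float> disparityToDepth(const std::vector<float>& disparity,
                                        float f, float baseline) {
        std::vector<float> depth(disparity.size());
        for (std::size_t i = 0; i < disparity.size(); ++i) {
            depth[i] = (disparity[i] > 0.0f)
                           ? f * baseline / disparity[i]
                           : std::numeric_limits<float>::infinity();  // no match found
        }
        return depth;
    }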
S3, obtaining the distribution of obstacles around the UAV according to the depth map.

Data processing is performed according to the depth map to obtain the distribution of obstacles around the UAV. The obstacle distribution includes, for example, the position information of each obstacle, the distance of each obstacle, a three-dimensional map of the environment around the UAV, and the like.

S4, before the above steps are performed, obtaining the execution time of step S1, the execution time of step S2 and the execution time of step S3.

The execution time of each step may be preset by the designer based on a basic understanding of the processing time of each step. Alternatively, step S1, step S2 and step S3 may first be trial-run in the controller for a period of time, and the controller measures the execution time of each step to obtain the execution time of step S1, the execution time of step S2 and the execution time of step S3.
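As an illustrative sketch of such a trial run (the helper name measureMs and the trial count are assumptions; the patent only states that the controller measures the execution times), each step can be timed with std::chrono and averaged over the trial period:

    #include <chrono>
    #include <functional>

    // Run one step repeatedly during the trial period and return its
    // average execution time in milliseconds.
    double measureMs(const std::function<void()>& step, int trials = 50) {
        using clock = std::chrono::steady_clock;
        auto start = clock::now();
        for (int i = 0; i < trials; ++i) step();
        std::chrono::duration<double, std::milli> total = clock::now() - start;
        return total.count() / trials;
    }
    // Usage (names illustrative):
    //   double t1 = measureMs(runS1), t2 = measureMs(runS2), t3 = measureMs(runS3);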
S5, establishing, according to the execution time of step S1, the execution time of step S2 and the execution time of step S3, at least two threads and at least one ring queue, the at least two threads respectively executing step S1, step S2 and step S3 so as to reduce the total execution time; wherein, of two threads executing adjacent steps, the processing result of the thread executing the former step is sent to the ring queue, and the thread executing the latter step takes the processing result from the ring queue and executes the latter step according to the processing result.

Based on the obtained execution time of step S1, execution time of step S2 and execution time of step S3, it is determined whether at least two threads are used to execute step S1, step S2 and step S3 in parallel. The numbers of threads and ring queues can be determined according to the execution times of the above steps; for example, there may be two threads and one ring queue, or three threads and two ring queues.

In some embodiments, if the total execution time of step S1, step S2 and step S3 is long (for example, greater than 1000/P, where P is the frame rate of the images captured by the image acquisition device), image blocking may occur. To avoid image blocking, at least two threads can be established to execute step S1, step S2 and step S3 in parallel. If two adjacent steps are executed by different threads, the thread executing the former step can send its processing result to the ring queue, and the thread executing the latter step obtains the processing result from the ring queue.
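The scheduling decision described above can be sketched as follows; this is a simplified illustration under the stated budget of 1000/P milliseconds per frame, and the names Plan and choosePlan are assumptions rather than the patent's code.

    // Decide how to partition steps S1..S3 across threads, given measured
    // execution times t1..t3 (in ms) and frame rate P (frames per second).
    // The per-frame time budget is 1000 / P milliseconds.
    enum class Plan { SingleThread, TwoThreads, ThreeThreads };

    Plan choosePlan(double t1, double t2, double t3, double P) {
        const double budget = 1000.0 / P;
        if (t1 + t2 + t3 <= budget) return Plan::SingleThread;  // no blocking risk
        // Two threads: one runs two adjacent steps, the other runs the third.
        if (t1 + t2 <= budget && t3 <= budget) return Plan::TwoThreads;
        if (t1 <= budget && t2 + t3 <= budget) return Plan::TwoThreads;
        return Plan::ThreeThreads;  // second preset condition also met: split further
    }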
For example, a first thread, a second thread and a first ring queue may be set up, the first thread executing two of step S1, step S2 and step S3 and the second thread executing the remaining one; or the second thread executing two of step S1, step S2 and step S3 and the first thread executing the remaining one.

For example, the first thread executes step S1 and step S2, and the second thread executes step S3. The first thread sends the processing result of step S2 to the first ring queue, and the second thread takes the processing result from the first ring queue and executes step S3. In other embodiments, the first thread may also execute step S1 while the second thread executes step S2 and step S3, and so on.

Assume that the execution time of step S1 is t1, the execution time of step S2 is t2 and the execution time of step S3 is t3. If t1+t2+t3 > 1000/P, t1+t2 < 1000/P and t3 < 1000/P, the first thread executes step S1 and step S2 and the second thread executes step S3; after the two threads execute step S1, step S2 and step S3 in parallel, the total execution time max(t1+t2, t3) < 1000/P, which reduces the total execution time and effectively avoids image blocking.
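A minimal C++ sketch of this two-thread arrangement follows: a bounded ring queue carries the step S2 results from the first thread to the second thread, as described above. The RingQueue class, the DepthMap placeholder and the frame count are illustrative assumptions, not the disclosed implementation.

    #include <array>
    #include <condition_variable>
    #include <mutex>
    #include <thread>

    // A fixed-capacity ring queue: the thread executing the former step publishes
    // its result, and the thread executing the latter step takes it out.
    template <typename T, std::size_t N>
    class RingQueue {
    public:
        void push(const T& v) {
            std::unique_lock<std::mutex> lk(m_);
            notFull_.wait(lk, [&] { return count_ < N; });
            buf_[(head_ + count_) % N] = v;  // write one slot past the tail
            ++count_;
            notEmpty_.notify_one();
        }
        T pop() {
            std::unique_lock<std::mutex> lk(m_);
            notEmpty_.wait(lk, [&] { return count_ > 0; });
            T v = buf_[head_];
            head_ = (head_ + 1) % N;         // advance the read position
            --count_;
            notFull_.notify_one();
            return v;
        }
    private:
        std::array<T, N> buf_{};
        std::size_t head_ = 0, count_ = 0;
        std::mutex m_;
        std::condition_variable notEmpty_, notFull_;
    };

    struct DepthMap { /* depth data for one frame (placeholder) */ };

    int main() {
        RingQueue<DepthMap, 8> q;  // the first ring queue between the two threads

        // First thread: step S1 (correction) + step S2 (binocular matching).
        std::thread t1([&] {
            for (int frame = 0; frame < 100; ++frame) {
                DepthMap d;   // ... correct images, match, fill in the depth map ...
                q.push(d);    // hand the S2 result to the second thread
            }
        });
        // Second thread: step S3 (obstacle distribution from the depth map).
        std::thread t2([&] {
            for (int frame = 0; frame < 100; ++frame) {
                DepthMap d = q.pop();  // take the S2 result from the ring queue
                (void)d;               // ... derive the obstacle distribution ...
            }
        });
        t1.join();
        t2.join();
    }

Because the queue is bounded, a temporarily slow consumer makes the producer wait instead of letting frames pile up without limit, which is what keeps the pipeline's delay in check.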
In some embodiments, if one thread executes two adjacent steps and the total execution time of those two steps is still large (for example, greater than 1000/P), the two steps can also be executed separately by two threads in order to further avoid image blocking.

Taking the case where the first thread executes step S1 and step S2 and the second thread executes step S3 as an example, if t1+t2 > 1000/P and t3 < 1000/P, a third thread and a second ring queue can be further established: the first thread executes step S1, the third thread executes step S2, the first thread sends the processing result of step S1 to the second ring queue, and the third thread takes the processing result from the second ring queue and executes step S2. In other embodiments, the third thread may also execute step S1 while the first thread executes step S2.

In other embodiments, if the execution time of one of step S1, step S2 and step S3 is long, for example t3 > 1000/P, two or more threads may further be used to execute step S3.

In practical applications, multiple groups of binocular units are usually used for depth detection, and binocular matching of images takes a long time. In some embodiments, to increase the running speed, binocular matching can be executed by a hardware acceleration channel provided in the controller. A hardware acceleration channel is a device composed of hardware and interface software that can increase the running speed of software.

In some of these embodiments, one hardware acceleration channel can be used to increase the running speed. The frame image groups continuously captured by at least two binocular units are sequentially sent to the hardware acceleration channel, and image processing is performed on each group of images by time-division multiplexing the hardware acceleration channel to obtain the depth information corresponding to each group of images. That is, the first group of images is first sent to the hardware acceleration channel for processing; after the processing is completed, the second group of images is sent, and so on, the hardware acceleration channel being multiplexed by polling.
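The polling scheme can be sketched as follows; HardwareChannel is a hypothetical stand-in for the controller's acceleration channel, since the patent does not specify its interface.

    #include <vector>

    struct ImageGroup { /* one frame pair from one binocular unit (placeholder) */ };
    struct DepthMap   { /* binocular matching result (placeholder) */ };

    // Hypothetical wrapper around one hardware acceleration channel: it can
    // process only one image group at a time, so access must be serialized.
    struct HardwareChannel {
        // Stub for the real accelerator, which performs the binocular matching.
        DepthMap match(const ImageGroup& /*g*/) { return DepthMap{}; }
    };

    // Time-division multiplexing: poll the binocular units in a fixed order and
    // feed their latest image groups to the single channel one after another.
    std::vector<DepthMap> multiplex(HardwareChannel& ch,
                                    const std::vector<ImageGroup>& groups) {
        std::vector<DepthMap> depths;
        depths.reserve(groups.size());
        for (const ImageGroup& g : groups)  // unit 0, unit 1, ..., then repeat
            depths.push_back(ch.match(g));  // channel is busy for one group at a time
        return depths;
    }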
In other embodiments, at least two hardware acceleration channels can be used to increase the running speed. Among the image groups obtained by the binocular units, the image groups obtained by the high-resolution binocular units and the image groups obtained by the low-resolution binocular units are each processed with their own hardware acceleration channel for binocular matching, to further increase the running speed.

In still other embodiments, the image group obtained by one high-resolution binocular unit can be processed with a dedicated hardware acceleration channel, while the image groups obtained by at least two low-resolution binocular units (for example, two or three binocular units) share one hardware acceleration channel. Letting the image groups obtained by at least two low-resolution binocular units share one hardware acceleration channel makes full and reasonable use of the hardware acceleration channels when the number of channels is small, without affecting the running speed of the software. Where the image groups obtained by at least two binocular units share one hardware acceleration channel, the target image groups can be processed by time-division multiplexing that hardware acceleration channel.

As an example, one implementation of a UAV includes 2 pairs of binocular units with a resolution of 720P and 4 pairs of binocular units with a resolution of VGA, and the controller has 4 hardware acceleration channels. In a specific implementation, as shown in FIG. 4, each high-resolution 720P binocular unit can use a hardware acceleration channel alone, and every two of the low-resolution VGA binocular units share one hardware acceleration channel. The correspondence between the binocular units and the hardware acceleration channels can be set in advance, so that the image group obtained by each binocular unit is sent to the corresponding hardware channel for processing.
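A sketch of such a pre-set correspondence table follows; the unit names and the exact channel assignments are illustrative assumptions, while the pattern itself (two 720P units on dedicated channels, the four VGA units sharing channels in pairs) comes from the example above.

    #include <cstdio>

    // Fixed mapping from the 6 binocular units to the 4 hardware acceleration
    // channels: each 720P unit gets a channel alone; VGA units share in pairs.
    enum Channel { CH1, CH2, CH3, CH4 };

    struct BinocularUnit {
        const char* name;
        Channel channel;  // configured in advance, before any frames arrive
    };

    int main() {
        const BinocularUnit units[6] = {
            {"720P-unit-A", CH1}, {"720P-unit-B", CH2},  // high resolution, exclusive
            {"VGA-unit-1",  CH3}, {"VGA-unit-2",  CH3},  // low resolution, shared (TDM)
            {"VGA-unit-3",  CH4}, {"VGA-unit-4",  CH4},  // low resolution, shared (TDM)
        };
        for (const auto& u : units)
            std::printf("%s -> channel %d\n", u.name, u.channel + 1);
    }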
Correspondingly, as shown in FIG. 5, an embodiment of the present invention further provides a depth map processing device, used in the controller of the UAV 100 shown in FIG. 1 or FIG. 2; in some of these embodiments, the controller may be the vision chip 20 of the UAV 100. As shown in FIG. 5, the depth map processing device 500 includes:

an image correction module 501, configured to perform step S1 of correcting an image of a target area captured by the image acquisition device;

a depth map acquisition module 502, configured to perform step S2 of performing binocular matching on the image to obtain a depth map of the target area;

an obstacle distribution acquisition module 503, configured to perform step S3 of obtaining the distribution of obstacles around the UAV according to the depth map;

the device further includes:

a time acquisition module 504, configured to obtain the execution time of step S1, the execution time of step S2 and the execution time of step S3 before step S1, step S2 and step S3 are performed;

a thread and ring queue establishment module 505, configured to establish at least two threads and at least one ring queue according to the execution time of step S1, the execution time of step S2 and the execution time of step S3, the at least two threads respectively executing step S1, step S2 and step S3 so as to reduce the total execution time;

wherein, of two threads executing adjacent steps, the processing result of the thread executing the former step is sent to the ring queue, and the thread executing the latter step takes the processing result from the ring queue and executes the latter step according to the processing result.

In the embodiments of the present invention, at least two threads and at least one ring queue are established according to the execution time of each step of the depth map processing performed by the UAV controller, and the at least two threads respectively execute the steps of the depth map processing. Each thread can obtain the processing results of other threads through the at least one ring queue. By adding ring queues and running multiple threads in parallel, the problem of image processing blocking is solved and the delay is reduced.

In some embodiments of the depth map processing device 500, as shown in FIG. 6, the thread and ring queue establishment module 505 includes:

a judgment sub-module 5051, configured to judge whether the sum of the execution times of step S1, step S2 and step S3 satisfies a first preset condition;

a thread and ring queue establishment sub-module 5052, configured to establish a first thread, a second thread and a first ring queue if the sum of the execution times of step S1, step S2 and step S3 satisfies the first preset condition, the first thread and the second thread respectively executing step S1, step S2 and step S3;

wherein, of the first thread and the second thread, the processing result of the thread executing the former step is sent to the first ring queue, and the thread executing the latter step takes the processing result from the first ring queue and executes the latter step according to the processing result.

In some embodiments of the depth map processing device 500, the first preset condition is:

the sum of the execution times of step S1, step S2 and step S3 is greater than a preset value.

In some embodiments of the depth map processing device 500, the preset value is 1000/P, where P is the image frame rate.

In some embodiments of the depth map processing device 500, the first thread executes two of step S1, step S2 and step S3, and the second thread executes one of step S1, step S2 and step S3;

the thread and ring queue establishment sub-module 5052 is specifically configured to:

judge whether the sum of the execution times of the two steps executed by the first thread satisfies a second preset condition;

if so, establish a third thread and a second ring queue, the first thread and the third thread respectively executing the two steps;

wherein, of the first thread and the third thread, the processing result of the thread executing the former step is sent to the second ring queue, and the thread executing the latter step takes the processing result from the second ring queue and executes the latter step according to the processing result.

In some embodiments of the depth map processing device 500, the second preset condition is that the sum of the execution times of the two steps executed by the first thread is greater than a preset value.

In some embodiments of the depth map processing device 500, the preset value is 1000/P, where P is the image frame rate.

In some embodiments of the depth map processing device 500, the controller includes a hardware acceleration channel, and the image acquisition device includes at least two groups of binocular units;

then the depth map acquisition module 502 is specifically configured to:

send the images captured by the at least two groups of binocular units to the hardware acceleration channel, and perform binocular matching on the images captured by the at least two groups of binocular units by time-division multiplexing the hardware acceleration channel, so as to obtain the depth map.

In some embodiments of the depth map processing device 500, the number of hardware acceleration channels is at least two, and the images captured by the at least two groups of binocular units include an image group of a first resolution and an image group of a second resolution, wherein the second resolution is greater than the first resolution;

then the depth map acquisition module 502 is specifically configured to:

send the image group of the second resolution to one of the at least two hardware acceleration channels for binocular matching, to obtain a depth map corresponding thereto;

send the image group of the first resolution to another of the at least two hardware acceleration channels for binocular matching, to obtain a depth map corresponding thereto.

In some embodiments of the depth map processing device 500, the number of hardware acceleration channels is 4, namely a first hardware acceleration channel, a second hardware acceleration channel, a third hardware acceleration channel and a fourth hardware acceleration channel;

the image acquisition device includes 6 groups of binocular units; among the 6 groups of binocular units, the image groups captured by 4 groups of binocular units are image groups of the first resolution, and the image groups captured by 2 groups of binocular units are image groups of the second resolution;

then the depth map acquisition module 502 is specifically configured to:

send the image groups captured by the 2 groups of binocular units to the first hardware acceleration channel and the second hardware acceleration channel, respectively;

send the image groups captured by 2 of the 4 groups of binocular units to the third hardware acceleration channel, and send the image groups captured by the remaining 2 of the 4 groups of binocular units to the fourth hardware acceleration channel.

It should be noted that the above device can execute the method provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in the device embodiments, refer to the method provided by the embodiments of the present application.

FIG. 7 is a schematic diagram of the hardware structure of the vision chip 20 in an embodiment of the UAV 100. As shown in FIG. 7, the vision chip 20 includes:

one or more processors 21 and a memory 22; one processor 21 is taken as an example in FIG. 7.

The processor 21 and the memory 22 may be connected through a bus or in other ways; connection through a bus is taken as an example in FIG. 7.

The memory 22, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the depth map processing method in the embodiments of the present application (for example, the image correction module 501, the depth map acquisition module 502, the obstacle distribution acquisition module 503, the time acquisition module 504 and the thread and ring queue establishment module 505 shown in FIG. 5). By running the non-volatile software programs, instructions and modules stored in the memory 22, the processor 21 executes the various functional applications and data processing of the UAV, that is, implements the depth map processing method of the above method embodiments.

The memory 22 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the vision chip, and the like. In addition, the memory 22 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some embodiments, the memory 22 may optionally include memories remotely located with respect to the processor 21, and these remote memories may be connected to the UAV through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network and combinations thereof.

The one or more modules are stored in the memory 22 and, when executed by the one or more processors 21, perform the depth map processing method in any of the above method embodiments, for example, performing method steps 101 to 105 in FIG. 3 described above and realizing the functions of modules 501-505 in FIG. 5 and modules 501-505 and 5051-5052 in FIG. 6.

The above product can execute the method provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, refer to the method provided by the embodiments of the present application.

An embodiment of the present application provides a non-volatile computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors, for example, the processor 21 in FIG. 7, to enable the one or more processors to perform the depth map processing method in any of the above method embodiments, for example, performing method steps 101 to 105 in FIG. 3 described above and realizing the functions of modules 501-505 in FIG. 5 and modules 501-505 and 5051-5052 in FIG. 6.

The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Through the description of the above embodiments, a person of ordinary skill in the art can clearly understand that each embodiment can be implemented by means of software plus a general-purpose hardware platform, and of course also by hardware. A person of ordinary skill in the art can understand that all or part of the processes of the methods of the above embodiments can be completed by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them. Under the concept of the present invention, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and there exist many other variations of the different aspects of the present invention as described above, which are not provided in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of the technical features thereof can be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (22)

  1. 一种深度图处理方法,用于无人机的控制器,所述无人机还包括图像采集装置,所述图像采集装置与所述控制器通信连接,其特征在于,所述方法包括:
    S1、对所述图像采集装置采集的目标区域的图像进行校正;
    S2、对所述图像进行双目匹配,以获得所述目标区域的深度图;
    S3、根据所述深度图,获取所述无人机周围的障碍物分布;
    该方法还包括:
    在执行所述步骤S1、所述步骤S2和所述步骤S3之前,获取所述步骤S1的执行时间、所述步骤S2的执行时间和所述步骤S3的执行时间;
    根据所述步骤S1的执行时间、所述步骤S2的执行时间和所述步骤S3的执行时间,建立至少两个线程和至少一个环形队列,由所述至少两个线程分别执行所述步骤S1、所述步骤S2和所述步骤S3,以降低总的执行时间;
    其中,执行相邻步骤的两个线程中,执行前一步骤的线程的处理结果送至所述环形队列,执行后一步骤的线程从所述环形队列中取出所述处理结果并根据所述处理结果执行所述后一步骤。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述步骤S1的执行时间、所述步骤S2的执行时间和所述步骤S3的执行时间,建立至少两个线程和至少一个环形队列,由所述至少两个线程分别执行所述步骤S1、所述步骤S2和所述步骤S3,以降低总的执行时间,包括:
    判断所述步骤S1、所述步骤S2以及所述步骤S3的执行时间之和是否满足第一预设条件;
    若是,则建立第一线程、第二线程和第一环形队列,由所述第一线程和所述第二线程分别执行所述步骤S1、所述步骤S2和所述步骤S3;
    其中,所述第一线程和所述第二线程中,执行前一步骤的线程的处理结果送至所述第一环形队列,执行后一步骤的线程从所述第一环形队 列中取出所述处理结果并根据所述处理结果执行所述后一步骤。
  3. 根据权利要求2所述的方法,其特征在于,所述第一预设条件为:
    所述步骤S1、所述步骤S2以及所述步骤S3的执行时间之和大于预设值。
  4. 根据权利要求3所述的方法,其特征在于,所述预设值为1000/P,其中,P为图像帧率。
  5. The method according to any one of claims 2-4, wherein the first thread executes two of step S1, step S2 and step S3, and the second thread executes one of step S1, step S2 and step S3;
    the establishing, if the sum of the execution times of step S1, step S2 and step S3 satisfies the first preset condition, a first thread, a second thread and a first ring queue, step S1, step S2 and step S3 being executed by the first thread and the second thread respectively, comprises:
    determining whether the sum of the execution times of the two steps executed by the first thread satisfies a second preset condition;
    if so, establishing a third thread and a second ring queue, the two steps being executed by the first thread and the third thread respectively;
    wherein, of the first thread and the third thread, the thread executing the preceding step sends its processing result to the second ring queue, and the thread executing the succeeding step takes the processing result out of the second ring queue and executes the succeeding step according to the processing result.
  6. The method according to claim 5, wherein the second preset condition is that the sum of the execution times of the two steps executed by the first thread is greater than a preset value.
  7. The method according to claim 6, wherein the preset value is 1000/P, where P is the image frame rate.
  8. The method according to any one of claims 1-7, wherein the controller comprises a hardware acceleration channel, and the image acquisition apparatus comprises at least two binocular units;
    the performing binocular matching on the image to obtain the depth map of the target area then comprises:
    sending the images acquired by the at least two binocular units to the hardware acceleration channel, and performing binocular matching on the images acquired by the at least two binocular units by time-division multiplexing of the hardware acceleration channel, so as to obtain the depth map.
  9. The method according to claim 8, wherein there are at least two hardware acceleration channels, and the images acquired by the at least two binocular units include an image group of a first resolution and an image group of a second resolution, the second resolution being greater than the first resolution;
    the performing binocular matching on the image to obtain the depth map of the target area then comprises:
    sending the image group of the second resolution to one of the at least two hardware acceleration channels for binocular matching, so as to obtain the corresponding depth map; and
    sending the image group of the first resolution to another of the at least two hardware acceleration channels for binocular matching, so as to obtain the corresponding depth map.
  10. The method according to claim 9, wherein there are four hardware acceleration channels, namely a first hardware acceleration channel, a second hardware acceleration channel, a third hardware acceleration channel and a fourth hardware acceleration channel;
    the image acquisition apparatus comprises six binocular units, of which four binocular units acquire image groups of the first resolution and two binocular units acquire image groups of the second resolution;
    the sending the image group of the second resolution to one of the at least two hardware acceleration channels for binocular matching, so as to obtain the corresponding depth map, then comprises:
    sending the image groups acquired by the two binocular units to the first hardware acceleration channel and the second hardware acceleration channel, respectively;
    and the sending the image group of the first resolution to another of the at least two hardware acceleration channels for binocular matching, so as to obtain the corresponding depth map, comprises:
    sending the image groups acquired by two of the four binocular units to the third hardware acceleration channel, and sending the image groups acquired by the remaining two of the four binocular units to the fourth hardware acceleration channel.
  11. A depth map processing apparatus, applied to a controller of an unmanned aerial vehicle (UAV), the UAV further comprising an image acquisition apparatus communicatively connected to the controller, wherein the apparatus comprises:
    an image correction module, configured to execute step S1 of correcting an image of a target area acquired by the image acquisition apparatus;
    a depth map acquisition module, configured to execute step S2 of performing binocular matching on the image to obtain a depth map of the target area; and
    an obstacle distribution acquisition module, configured to execute step S3 of obtaining, according to the depth map, the obstacle distribution around the UAV;
    the apparatus further comprising:
    a time acquisition module, configured to obtain, before step S1, step S2 and step S3 are executed, the execution time of step S1, the execution time of step S2 and the execution time of step S3; and
    a thread and ring queue establishing module, configured to establish, according to the execution time of step S1, the execution time of step S2 and the execution time of step S3, at least two threads and at least one ring queue, step S1, step S2 and step S3 being executed by the at least two threads respectively, so as to reduce the total execution time;
    wherein, of two threads executing adjacent steps, the thread executing the preceding step sends its processing result to the ring queue, and the thread executing the succeeding step takes the processing result out of the ring queue and executes the succeeding step according to the processing result.
  12. The apparatus according to claim 11, wherein the thread and ring queue establishing module comprises:
    a determining submodule, configured to determine whether the sum of the execution times of step S1, step S2 and step S3 satisfies a first preset condition; and
    a thread and ring queue establishing submodule, configured to establish, if the sum of the execution times of step S1, step S2 and step S3 satisfies the first preset condition, a first thread, a second thread and a first ring queue, step S1, step S2 and step S3 being executed by the first thread and the second thread respectively;
    wherein, of the first thread and the second thread, the thread executing the preceding step sends its processing result to the first ring queue, and the thread executing the succeeding step takes the processing result out of the first ring queue and executes the succeeding step according to the processing result.
  13. The apparatus according to claim 12, wherein the first preset condition is:
    the sum of the execution times of step S1, step S2 and step S3 is greater than a preset value.
  14. The apparatus according to claim 13, wherein the preset value is 1000/P, where P is the image frame rate.
  15. The apparatus according to any one of claims 12-14, wherein the first thread executes two of step S1, step S2 and step S3, and the second thread executes one of step S1, step S2 and step S3;
    the thread and ring queue establishing submodule is specifically configured to:
    determine whether the sum of the execution times of the two steps executed by the first thread satisfies a second preset condition;
    if so, establish a third thread and a second ring queue, the two steps being executed by the first thread and the third thread respectively;
    wherein, of the first thread and the third thread, the thread executing the preceding step sends its processing result to the second ring queue, and the thread executing the succeeding step takes the processing result out of the second ring queue and executes the succeeding step according to the processing result.
  16. The apparatus according to claim 15, wherein the second preset condition is that the sum of the execution times of the two steps executed by the first thread is greater than a preset value.
  17. The apparatus according to claim 16, wherein the preset value is 1000/P, where P is the image frame rate.
  18. The apparatus according to any one of claims 11-17, wherein the controller comprises a hardware acceleration channel, and the image acquisition apparatus comprises at least two binocular units;
    the depth map acquisition module is then specifically configured to:
    send the images acquired by the at least two binocular units to the hardware acceleration channel, and perform binocular matching on the images acquired by the at least two binocular units by time-division multiplexing of the hardware acceleration channel, so as to obtain the depth map.
  19. The apparatus according to claim 18, wherein there are at least two hardware acceleration channels, and the images acquired by the at least two binocular units include an image group of a first resolution and an image group of a second resolution, the second resolution being greater than the first resolution;
    the depth map acquisition module is then specifically configured to:
    send the image group of the second resolution to one of the at least two hardware acceleration channels for binocular matching, so as to obtain the corresponding depth map; and
    send the image group of the first resolution to another of the at least two hardware acceleration channels for binocular matching, so as to obtain the corresponding depth map.
  20. The apparatus according to claim 19, wherein there are four hardware acceleration channels, namely a first hardware acceleration channel, a second hardware acceleration channel, a third hardware acceleration channel and a fourth hardware acceleration channel;
    the image acquisition apparatus comprises six binocular units, of which four binocular units acquire image groups of the first resolution and two binocular units acquire image groups of the second resolution;
    the depth map acquisition module is then specifically configured to:
    send the image groups acquired by the two binocular units to the first hardware acceleration channel and the second hardware acceleration channel, respectively; and
    send the image groups acquired by two of the four binocular units to the third hardware acceleration channel, and send the image groups acquired by the remaining two of the four binocular units to the fourth hardware acceleration channel.
  21. An unmanned aerial vehicle (UAV), wherein the UAV comprises:
    a fuselage;
    arms connected to the fuselage;
    power units provided on the arms;
    an image acquisition apparatus provided on the fuselage and configured to acquire a target image of a target area of the UAV; and
    a vision chip provided on the fuselage, the vision chip being communicatively connected to the image acquisition apparatus;
    the vision chip comprising:
    at least one processor, and
    a memory communicatively connected to the at least one processor, the memory storing instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-10.
  22. A non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions which, when executed by an unmanned aerial vehicle, cause the unmanned aerial vehicle to perform the method according to any one of claims 1-10.
PCT/CN2019/129562 2018-12-29 2019-12-28 Depth map processing method and apparatus, and unmanned aerial vehicle WO2020135797A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19905206.9A EP3889544B1 (en) 2018-12-29 2019-12-28 Depth image processing method and device, and unmanned aerial vehicle
US17/361,694 US20210325909A1 (en) 2018-12-29 2021-06-29 Method, apparatus and unmanned aerial vehicle for processing depth map

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811634006.9A 2018-12-29 2018-12-29 Depth map processing method and apparatus, and unmanned aerial vehicle
CN201811634006.9 2018-12-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/361,694 Continuation US20210325909A1 (en) 2018-12-29 2021-06-29 Method, apparatus and unmanned aerial vehicle for processing depth map

Publications (1)

Publication Number Publication Date
WO2020135797A1 true WO2020135797A1 (zh) 2020-07-02

Family

ID=66054507

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/129562 WO2020135797A1 (zh) Depth map processing method and apparatus, and unmanned aerial vehicle 2018-12-29 2019-12-28

Country Status (4)

Country Link
US (1) US20210325909A1 (zh)
EP (1) EP3889544B1 (zh)
CN (2) CN113776503B (zh)
WO (1) WO2020135797A1 (zh)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113776503B (zh) 2018-12-29 2024-04-12 Shenzhen Autel Intelligent Aviation Technology Co., Ltd. Depth map processing method and apparatus, and unmanned aerial vehicle
TWI804850B (zh) 2020-04-16 2023-06-11 eYs3D Microelectronics Co., Ltd. Fusion method and fusion system for multiple depth information

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011075368A1 (en) * 2009-12-14 2011-06-23 Verisign, Inc. Lockless queues
US20140337848A1 * 2013-05-08 2014-11-13 Nvidia Corporation Low overhead thread synchronization using hardware-accelerated bounded circular queues
CN106358003A * 2016-08-31 2017-01-25 Huazhong University of Science and Technology Video analysis acceleration method based on thread-level pipeline
CN106384382A * 2016-09-05 2017-02-08 Institute of Oceanographic Instrumentation, Shandong Academy of Sciences Three-dimensional reconstruction system and method based on binocular stereo vision
CN107703951A * 2017-07-27 2018-02-16 Shanghai TopXGun Robotics Co., Ltd. UAV obstacle avoidance method and system based on binocular vision
CN108733344A * 2018-05-28 2018-11-02 Shenzhen Autel Intelligent Aviation Technology Co., Ltd. Data reading and writing method, apparatus and ring queue
CN109034018A * 2018-07-12 2018-12-18 Beihang University Obstacle perception method for low-altitude small unmanned aerial vehicles based on binocular vision
CN109631853A * 2018-12-29 2019-04-16 Shenzhen Autel Intelligent Aviation Technology Co., Ltd. Depth map processing method and apparatus, and unmanned aerial vehicle

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070044103A1 (en) * 2005-07-25 2007-02-22 Mark Rosenbluth Inter-thread communication of lock protected data
US10061619B2 (en) * 2015-05-29 2018-08-28 Red Hat, Inc. Thread pool management
CN108594851A * 2015-10-22 2018-09-28 Feizhikong (Tianjin) Technology Co., Ltd. Autonomous UAV obstacle detection system and method based on binocular vision, and unmanned aerial vehicle
RU2623806C1 * 2016-06-07 2017-06-29 Joint-Stock Company Research and Production Center "Electronic Computing and Information Systems" (JSC RPC "ELVEES") Method and device for processing stereo images
US10296393B2 (en) * 2016-09-19 2019-05-21 Texas Instruments Incorporated Method for scheduling a processing device
CN107077741A * 2016-11-11 2017-08-18 SZ DJI Technology Co., Ltd. Depth map generation method and unmanned aerial vehicle based on the method
CN107636679B * 2016-12-30 2021-05-25 CloudMinds Robotics Co., Ltd. Obstacle detection method and apparatus
CN108874555A * 2018-05-23 2018-11-23 Fujian Tianquan Education Technology Co., Ltd. Method and apparatus for writing messages to message middleware


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688868A (zh) * 2021-07-21 2021-11-23 Shenzhen Anruan Technology Co., Ltd. Multi-threaded image processing method and apparatus
CN113688868B (zh) * 2021-07-21 2023-08-22 Shenzhen Anruan Technology Co., Ltd. Multi-threaded image processing method and apparatus

Also Published As

Publication number Publication date
CN113776503A (zh) 2021-12-10
EP3889544B1 (en) 2023-09-27
CN109631853A (zh) 2019-04-16
US20210325909A1 (en) 2021-10-21
EP3889544A4 (en) 2022-01-26
EP3889544A1 (en) 2021-10-06
CN113776503B (zh) 2024-04-12

Similar Documents

Publication Publication Date Title
WO2020135797A1 (zh) Depth map processing method and apparatus, and unmanned aerial vehicle
WO2020244649A1 (zh) Obstacle avoidance method and apparatus, and electronic device
EP3397554B1 (en) System and method for utilization of multiple-camera network to capture static and/or motion scenes
US10754354B2 (en) Hover control
WO2018210078A1 (zh) Distance measurement method for unmanned aerial vehicle, and unmanned aerial vehicle
CN105974932B (zh) Unmanned aerial vehicle control method
US10789722B2 (en) Processing images to obtain environmental information
WO2017185607A1 (zh) Method and apparatus for switching the flight control mode of an unmanned aerial vehicle, and unmanned aerial vehicle thereof
US20210333807A1 (en) Method and system for controlling aircraft
CN110291481B (zh) Information prompting method and control terminal
WO2019173981A1 (zh) Unmanned aerial vehicle control method and device, unmanned aerial vehicle, system, and storage medium
WO2021088684A1 (zh) Omnidirectional obstacle avoidance method and unmanned aerial vehicle
WO2020135449A1 (zh) Relay point generation method and apparatus, and unmanned aerial vehicle
CN104932523A (zh) Positioning method and apparatus for an unmanned aerial vehicle
WO2018045976A1 (zh) Flight control method and flight control apparatus for an aircraft
JP6760615B2 (ja) Mobile object piloting system, piloting signal transmission system, mobile object piloting method, program, and recording medium
WO2021027886A1 (zh) Unmanned aerial vehicle flight control method and unmanned aerial vehicle
WO2020233682A1 (zh) Autonomous orbit photographing method and apparatus, and unmanned aerial vehicle
WO2019128275A1 (zh) Photographing control method and apparatus, and aircraft
US20210009270A1 (en) Methods and system for composing and capturing images
WO2020135795A1 (zh) Image display method and apparatus, and electronic device
WO2017185651A1 (zh) Method and apparatus for switching the image transmission mode of an unmanned aerial vehicle, and unmanned aerial vehicle thereof
WO2018018514A1 (en) Target-based image exposure adjustment
US11620913B2 (en) Movable object application framework
EP3877865A1 (en) Systems and methods for image display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19905206
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2019905206
    Country of ref document: EP
    Effective date: 20210701