WO2020207411A1 - Image data processing method and apparatus, image processing chip, and aircraft - Google Patents

Image data processing method and apparatus, image processing chip, and aircraft

Info

Publication number
WO2020207411A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
image
channels
line
path
Prior art date
Application number
PCT/CN2020/083769
Other languages
English (en)
French (fr)
Inventor
李昭早
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司
Publication of WO2020207411A1
Priority to US 17/490,635 (US11949844B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/15 Processing image signals for colour aspects of image signals
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C 39/00 Aircraft not otherwise provided for
    • B64C 39/02 Aircraft not otherwise provided for characterised by special use
    • B64C 39/024 Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U 20/00 Constructional aspects of UAVs
    • B64U 20/80 Arrangement of on-board electronics, e.g. avionics systems or wiring
    • B64U 20/87 Mounting of imaging devices, e.g. mounting of gimbals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/156 Mixing image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U 2101/00 UAVs specially adapted for particular uses or applications
    • B64U 2101/30 UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 2013/0074 Stereoscopic image analysis
    • H04N 2013/0081 Depth or disparity estimation from stereoscopic image signals

Definitions

  • This application relates to the field of image processing technology, and in particular to an image data processing method, device, image processing chip, and aircraft.
  • Image data processing technology emerged, developed, and matured in the 1960s along with computer technology and VLSI (Very Large Scale Integration); it can process image data in various ways according to requirements.
  • With the development of these technologies, image data processing has become mature and is widely used in aerospace, military, biomedicine, artificial intelligence, and other fields.
  • Image data processing technology is used to process the images collected by cameras and video cameras to obtain images that meet various needs.
  • As applications develop, the performance requirements on image data processing grow higher and higher; for example, a chip may be required to process multiple channels of image data.
  • However, the number of channels of image data that a chip used to process image data, such as an image processing chip, can receive is limited, which cannot meet the requirements of multi-channel image data processing well.
  • When the number of channels of image data to be processed exceeds the number of channels the chip can receive, the chip cannot acquire all of the channels, so the multiple channels of image data cannot be processed; this greatly affects the realization of functions related to image data processing in the chip, and can even cause those functions to fail.
  • the embodiments of the present invention provide an image data processing method, device, image processing chip, and aircraft, which can better meet the requirements of multi-channel image data processing.
  • an embodiment of the present invention provides an image data processing method for processing N channels of first image data collected by N image acquisition devices, the method including:
  • K channels of image data are received, where the K channels of image data include L channels of second image data obtained by merging M channels of first image data among the N channels of first image data, and the (N-M) channels of first image data that have not been merged among the N channels of first image data;
  • the L channels of second image data in the K channels of image data are split to obtain M channels of third image data;
  • format conversion is performed on the (N-M) channels of first image data and the M channels of third image data to obtain a color image in a preset format;
  • Image processing is performed on the grayscale components in the color image in the preset format to obtain a depth image.
  • K is less than or equal to the maximum number of image data input channels that the image processing chip can support.
  • the L channels of second image data include: the first channel of second image data, the second channel of second image data, ..., the Lth channel of second image data;
  • the splitting processing of L channels of second image data in the K channels of image data includes:
  • the first channel of second image data, the second channel of second image data, ..., the Lth channel of second image data are each split line by line, wherein each channel of second image data is split into M/L channels of third image data.
  • the first channel of second image data is split into a first channel of third image data, a second channel of third image data, ..., an (M/L)th channel of third image data, and the first channel of second image data includes P lines of image data;
  • performing line-by-line split processing on the first channel of second image data includes:
  • the 1st line of image data, the 2nd line of image data, ..., the (M/L)th line of image data of the first channel of second image data are respectively disassembled into the first line of the 1st channel of third image data, the first line of the 2nd channel of third image data, ..., the first line of the (M/L)th channel of third image data; the (M/L+1)th line of image data, the (M/L+2)th line of image data, ..., the (2*M/L)th line of image data are respectively disassembled into the second line of the 1st channel of third image data, the second line of the 2nd channel of third image data, ..., the second line of the (M/L)th channel of third image data; and so on.
  • the preset format includes: YUV format.
  • the first image data and the third image data are both images in RGB format;
  • the calculation formula for the grayscale component Y in the color image in the preset format is: Y = 0.299R + 0.587G + 0.114B, where:
  • R is the intensity of the red component in the RGB format image
  • G is the intensity of the green component of the RGB format image
  • B is the intensity of the blue component of the RGB format image.
  • an embodiment of the present invention provides an image data processing device for processing N channels of first image data collected by N image acquisition devices, the device including:
  • the receiving module is configured to receive K channels of image data, where the K channels of image data include L channels of second image data obtained by merging M channels of first image data among the N channels of first image data, and the (N-M) channels of first image data that have not been merged among the N channels of first image data;
  • a split processing module configured to split the L channels of second image data in the K channels of image data to obtain M channels of third image data
  • a format conversion module configured to perform format conversion processing on the (N-M) channels of first image data and the M channels of third image data to obtain a color image in a preset format
  • the depth image acquisition module is used to perform image processing on the grayscale components in the color image in the preset format to obtain a depth image.
  • K is less than or equal to the maximum number of image data input channels that the image processing chip can support.
  • the L channels of second image data include: the first channel of second image data, the second channel of second image data, ..., the Lth channel of second image data;
  • the split processing module is specifically used for:
  • each channel of second image data is split into M/L channels of third image data.
  • the first channel of second image data is split into a first channel of third image data, a second channel of third image data, ..., an (M/L)th channel of third image data, and the first channel of second image data includes P lines of image data;
  • the splitting processing module performing line-by-line split processing on the first channel of second image data includes:
  • the 1st line of image data, the 2nd line of image data, ..., the (M/L)th line of image data of the first channel of second image data are respectively disassembled into the first line of the 1st channel of third image data, the first line of the 2nd channel of third image data, ..., the first line of the (M/L)th channel of third image data; the (M/L+1)th line of image data, the (M/L+2)th line of image data, ..., the (2*M/L)th line of image data are respectively disassembled into the second line of the 1st channel of third image data, the second line of the 2nd channel of third image data, ..., the second line of the (M/L)th channel of third image data; and so on.
  • the preset format includes: YUV format.
  • the first image data and the third image data are both images in RGB format, and the calculation formula of the gray component Y in the color image in the preset format is: Y = 0.299R + 0.587G + 0.114B, where:
  • R is the intensity of the red component of the image in the RGB format
  • G is the intensity of the green component of the image in the RGB format
  • B is the intensity of the blue component of the image in the RGB format.
  • an image processing chip including:
  • at least one processor; and
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the image data processing method described above.
  • an embodiment of the present invention provides an aircraft including a fuselage, N image acquisition devices arranged on the fuselage, and an image processing chip arranged in the fuselage; the image processing chip is connected to the N image acquisition devices, the N image acquisition devices are used to acquire N channels of first image data, and the image processing chip is the image processing chip according to claim 13.
  • the N image capture devices include: 2 front-view lenses installed parallel to each other, 2 down-view lenses installed parallel to each other, 2 rear-view lenses installed parallel to each other, 2 top-view lenses installed parallel to each other, 2 left-view lenses installed parallel to each other, and 2 right-view lenses installed parallel to each other.
  • the image processing chip receiving K channels of image data includes:
  • the aircraft further includes L merging modules, the input ends of the L merging modules are connected to M image acquisition devices for collecting M channels of first image data, and the output ends of the L merging modules Connected to the image processing chip, the L merging modules are used for merging M channels of first image data among the N channels of first image data.
  • the L merging modules include a first merging module and a second merging module; the input end of the first merging module is connected to the 2 rear-view lenses installed parallel to each other and the 2 top-view lenses installed parallel to each other, and the output end of the first merging module is connected to the image processing chip; the input end of the second merging module is connected to the 2 left-view lenses installed parallel to each other and the 2 right-view lenses installed parallel to each other, and the output end of the second merging module is connected to the image processing chip.
  • an embodiment of the present invention provides a computer program product; the computer program product includes a computer program stored on a non-volatile computer-readable storage medium, and the computer program includes program instructions that, when executed by a computer, cause the computer to execute the image data processing method described above.
  • an embodiment of the present invention provides a non-volatile computer-readable storage medium; the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are used to cause a computer to execute the image data processing method described above.
  • In the embodiments of the present invention, the K (N>K) channels of image data received include the L channels of second image data obtained by merging M channels of the N channels of first image data;
  • after the K channels of image data are received, the L channels of second image data among them are split, format-converted, and further processed, so that all N channels of first image data can be processed even though N exceeds the number of channels the image processing chip can receive.
  • FIG. 1 is a schematic diagram of an application environment of an image data processing method provided by an embodiment of the present invention, where the schematic diagram is a bottom view of an aircraft;
  • FIG. 2 is a schematic diagram of an application environment of an image data processing method provided by an embodiment of the present invention, where the schematic diagram is a top view of an aircraft;
  • FIG. 3 is a schematic flowchart of an image data processing method provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of 12 image acquisition devices provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of line-by-line merge processing provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of line-by-line split processing provided by an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of converting an image in RGB format into a color image in YUV format provided by an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of an image data processing device provided by an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of the hardware structure of an image processing chip provided by an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of an aircraft provided by an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of another aircraft provided by an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of still another aircraft provided by an embodiment of the present invention.
  • the application environment includes: aircraft 10.
  • the aircraft 10 includes a fuselage 11, a number of image acquisition devices 12 arranged in the fuselage 11, and an image processing chip (not shown) arranged in the fuselage 11.
  • the image processing chip is the main body for executing the image data processing method.
  • The connection may be a communication connection, so as to realize data interaction between the several image acquisition devices 12 and the image processing chip.
  • image collection devices 12 are used to collect image data, and through the communication connection, send the collected image data to the image processing chip, so that the image processing chip can process the received image data.
  • the aircraft 10 is mainly used to complete designated tasks by flying, such as flying to a designated location, or shooting during the flight.
  • Obstacle avoidance capability is a necessary capability for the aircraft 10 and an important index for evaluating aircraft performance.
  • To achieve obstacle avoidance, the first problem to solve is accurately measuring the distance between the aircraft 10 and an obstacle; only when the distance is detected to be within a dangerous range can the forward motion of the aircraft 10 be stopped in time to avoid an accident.
  • the obstacle distance detection methods that are widely used in the aircraft field include ultrasonic ranging, infrared or laser ranging, visual image detection, electronic maps, etc.
  • the visual image detection technology mimics the way human eyes estimate distance, and is currently a technology favored in the aircraft field.
  • Visual image detection technology is a technology that uses machines to replace human eyes for measurement and judgment.
  • visual image detection is mainly a measurement method that uses the image as a means or carrier to detect and transmit information. By extracting the characteristic signal of the image, the actual information of the measured object is finally obtained from the image.
  • Machine vision products (i.e., image acquisition devices, such as CMOS and CCD sensors) convert the captured target into image signals and transmit them to image processing equipment or chips.
  • The image processing equipment or chip obtains depth information based on pixel distribution and image information such as brightness and color, to determine the distance between the aircraft and the obstacle.
  • the characteristics of the target can also be extracted through analysis and various calculations, and then the flight of the aircraft 10 can be controlled according to the result of the discrimination, so as to realize the obstacle avoidance of the aircraft 10.
  • the aircraft 10 is often required to meet omnidirectional obstacle avoidance, which usually requires support for six directions: front, bottom, rear, left, right, and top.
  • 12 image acquisition devices may be 2 front-view lenses, 2 down-view lenses, 2 rear-view lenses, 2 top-view lenses, 2 left-view lenses, and 2 right-view lenses.
  • However, for the image processing chip itself, the number of channels of image data that it can receive is limited;
  • the number of input channels supported by even the more advanced current chips is only about 8, which is far from sufficient for the demands of omnidirectional obstacle avoidance.
  • To this end, in the embodiments of the present invention, the merging module used to implement the merging function merges M channels of first image data among the N channels of first image data to obtain L channels of second image data;
  • the image processing chip can then receive the L channels of second image data together with the (N-M) channels of first image data that have not been merged, a total of K (N>K) channels of image data. After the image processing chip receives the K channels of image data, it performs split processing, format conversion, and other processing on the L channels of second image data among them, so that even when the number of channels of image data to be processed is greater than the number of channels the image processing chip can receive, all N channels of first image data can still be processed, better meeting the requirements of multi-channel image data processing.
  • the aforementioned image acquisition device 12 may be any suitable photosensitive element or image sensor, for example, CMOS (Complementary Metal Oxide Semiconductor), CCD (Charge-coupled Device, charge coupled device), etc.
  • the several image acquisition devices 12 may be the same image acquisition device, or may be different image acquisition devices, so as to meet different requirements.
  • the two front-view lenses and the two down-view lenses adopt relatively high-resolution lenses, such as 720P lenses, while the 2 rear-view lenses, 2 top-view lenses, 2 left-view lenses, and 2 right-view lenses adopt relatively low-resolution lenses, such as VGA lenses.
  • the above-mentioned image processing chip may be any suitable chip capable of realizing the above-mentioned image data processing method, such as a microprocessor, a micro-control unit, a single-chip microcomputer, and a controller.
  • image data processing method provided by the embodiment of the present invention can be further extended to other suitable application environments, and is not limited to the application environment shown in FIG. 1.
  • the aircraft 10 in the application environment can be any of various types of aircraft, such as unmanned aerial vehicles, unmanned ships, or other movable devices.
  • this image data processing method can be used for automatic obstacle avoidance of drones.
  • A UAV is an unmanned aircraft carrying a mission payload that is operated by remote control equipment or an on-board program control device.
  • the drone can be various types of drones.
  • the drone can be a rotorcraft, for example, a multi-rotor aircraft propelled by multiple propulsion devices through the air.
  • the embodiment of the present invention is not limited to this; the drone can also be other types of drones, such as fixed-wing drones, unmanned airships, para-wing drones, flapping-wing drones, and the like.
  • the number of image acquisition devices among the several image acquisition devices 12 may be more or less, for example, 10, 11, 13, or 14; that is, the number of image capture devices in the several image capture devices 12 is not limited here.
  • the image data processing method can also be applied to other image processing equipment, such as cell monitoring equipment, vehicle monitoring equipment, robots, etc., and is not limited to the aircraft described in the embodiments of the present invention.
  • FIG. 3 is a schematic flowchart of an image data processing method provided by an embodiment of the present invention.
  • the image data processing method is used to process N channels of first image data collected by N image collection devices.
  • the image data processing method can be applied to various image processing equipment, such as aircraft, small area monitoring equipment, vehicle monitoring equipment, robots, etc.
  • the image data processing method can be executed by any suitable type of chip, such as the image processing chip of the aforementioned aircraft.
  • the aircraft can be any type of aircraft, such as unmanned aerial vehicles, unmanned ships and the like. The following uses drones as an example of aircraft.
  • the image data processing method includes:
  • the K channels of image data include L channels of second image data obtained by merging M channels of first image data in the N channels of first image data, and the (N-M) channels of first image data that are not merged among the N channels of first image data.
  • K is less than or equal to the maximum number of image data input channels that the image processing chip can support.
  • K, N, M, and L are not limited here, and can be set as required to adapt to different image data processing requirements.
  • Take the obstacle avoidance of drones and other aircraft as an example.
  • 12 image capture devices include: 2 front-view lenses, 2 down-view lenses, 2 rear-view lenses, 2 top-view lenses, 2 left-view lenses, and 2 right-view lenses .
  • 2 front-view lenses, 2 down-view lenses, 2 rear-view lenses, 2 top-view lenses, 2 left-view lenses, and 2 right-view lenses are used to collect image data in corresponding directions.
  • the number of channels of image data that it can receive is limited.
  • the number of input channels supported by even the more advanced current chips is only about 8; that is, the image processing chip can receive at most about 8 channels of image data, fewer than the 12 channels collected by the 12 image capture devices.
  • the first image data collected by the 2 rear-view lenses, 2 top-view lenses, 2 left-view lenses, and 2 right-view lenses can be merged according to needs.
  • For example, one merging module merges the first image data collected by the 2 rear-view lenses and the 2 top-view lenses to obtain 1 channel of second image data, and another merging module merges the first image data collected by the 2 left-view lenses and the 2 right-view lenses to obtain another channel of second image data.
  • the image processing chip can receive K-channel image data for subsequent image data processing.
  • K channels of image data can be input into the image processing chip through K input devices.
  • 6 channels of image data are input to the image processing chip through 6 input devices.
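As a quick sanity check on the channel counts in this example (not part of the patent text; the variable names are illustrative):

```python
# Running example from the text: N = 12 lenses; M = 8 of the channels
# (the rear, top, left, and right pairs) are merged into L = 2 channels,
# leaving N - M = 4 unmerged channels (the front and down pairs).
N, M, L = 12, 8, 2
K = (N - M) + L  # total channels the image processing chip must receive
print(K)  # 6
```

With K = 6, the chip stays within the roughly 8-channel input limit mentioned above while all 12 collected channels are still accounted for.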
  • the foregoing merging processing may be a row-by-row merging processing.
  • the following describes the row-by-row merging processing in detail with reference to FIG. 5.
  • the merge processing process is:
  • the first line of the first image data collected by one of the two rear-view lenses is merged into the first line of the second image data; the first line of the first image data collected by the other rear-view lens is merged into the second line of the second image data; the first line of the first image data collected by one of the two top-view lenses is merged into the third line of the second image data; and the first line of the first image data collected by the other top-view lens is merged into the fourth line of the second image data;
  • similarly, the second line of the first image data collected by one of the two rear-view lenses is merged into the fifth line of the second image data, the second line collected by the other rear-view lens into the sixth line, the second line collected by one of the two top-view lenses into the seventh line, and the second line collected by the other top-view lens into the eighth line of the second image data; and so on.
  • For example, if the image size of each channel of first image data collected by the above two rear-view lenses and two top-view lenses is 640*480, the image size of the one channel of second image data obtained by merging the four channels is 640*1920.
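Purely as an illustration (not from the patent), the line-by-line merge described above can be sketched with NumPy; the function name and the array layout (rows x columns, so a 640*480 frame is stored as a 480x640 array) are assumptions:

```python
import numpy as np

def merge_rows(channels):
    """Row-interleave equally sized frames into one tall frame.

    Line k of input i becomes line k * len(channels) + i of the output,
    matching the line-by-line merge described above.
    """
    n = len(channels)
    h, w = channels[0].shape
    merged = np.empty((h * n, w), dtype=channels[0].dtype)
    for i, img in enumerate(channels):
        merged[i::n] = img  # every n-th output row, starting at offset i
    return merged

# Four 480x640 frames (two rear-view, two top-view) -> one 1920x640 frame.
frames = [np.full((480, 640), i, dtype=np.uint8) for i in range(4)]
merged = merge_rows(frames)
print(merged.shape)  # (1920, 640)
```

The strided assignment `merged[i::n] = img` places each input's rows at a fixed offset, which is exactly the interleaving pattern of FIG. 5.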
  • the first image data may be raw data (rawdata), i.e., the raw output obtained when the image acquisition device converts the captured light signal into a digital signal.
  • 302 Perform split processing on L channels of second image data in the K channels of image data to obtain M channels of third image data.
  • the L channels of second image data may include: the first channel of second image data, the second channel of second image data, ..., the Lth channel of second image data.
  • the image processing chip splitting the L channels of second image data in the K channels of image data includes: performing line-by-line splitting on the first channel of second image data, the second channel of second image data, ..., the Lth channel of second image data respectively, wherein each channel of second image data is split into M/L channels of third image data.
  • For example, when M/L = 4, one channel of second image data is split into 4 channels of third image data.
  • the first channel of second image data can be split into a first channel of third image data, a second channel of third image data, ..., an (M/L)th channel of third image data;
  • the first channel of second image data includes P lines of image data.
  • P is a positive integer
  • the image processing chip performs line-by-line split processing on the first channel of second image data, including:
  • the 1st line of image data, the 2nd line of image data, ..., the (M/L)th line of image data of the first channel of second image data are respectively disassembled into the first line of the 1st channel of third image data, the first line of the 2nd channel of third image data, ..., the first line of the (M/L)th channel of third image data; the (M/L+1)th line of image data, the (M/L+2)th line of image data, ..., the (2*M/L)th line of image data are respectively disassembled into the second line of the 1st channel of third image data, the second line of the 2nd channel of third image data, ..., the second line of the (M/L)th channel of third image data; and so on.
  • This line-by-line splitting process is opposite to the above-mentioned line-by-line merging process.
  • the line-by-line splitting process will be specifically described below in conjunction with FIG. 6.
  • the 1st line of image data, the 2nd line of image data, ..., the 4th line of image data of the first channel of second image data are respectively disassembled into the first line of the 1st channel of third image data, the first line of the 2nd channel of third image data, ..., the first line of the 4th channel of third image data; the 5th line of image data, the 6th line of image data, ..., the 8th line of image data are respectively disassembled into the second line of the 1st channel of third image data, the second line of the 2nd channel of third image data, ..., the second line of the 4th channel of third image data; and the 9th line of image data, the 10th line of image data, ..., the 12th line of image data are respectively disassembled into the third line of the 1st channel of third image data, the third line of the 2nd channel of third image data, ..., the third line of the 4th channel of third image data.
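The inverse de-interleave can be sketched the same way (illustrative only; the function name and array layout are assumptions, not the patent's implementation):

```python
import numpy as np

def split_rows(merged, n):
    """Inverse of the line-by-line merge: de-interleave one tall frame
    into n frames. Row k*n + c of the merged frame is row k of channel c,
    so slicing every n-th row starting at offset c recovers channel c."""
    return [merged[c::n] for c in range(n)]

# A 1920x640 merged frame de-interleaves into four 480x640 frames.
merged = np.arange(1920 * 640, dtype=np.uint32).reshape(1920, 640)
parts = split_rows(merged, 4)
print(len(parts), parts[0].shape)  # 4 (480, 640)
```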
  • The preset format may include the YUV format.
  • A color image in the YUV format is divided into three components: the "Y" component, the "U" component and the "V" component.
  • The "Y" component represents the brightness (luminance or luma), that is, the gray value.
  • The "U" and "V" components represent the chroma (chrominance), which describes the color and saturation of the image and is used to specify the color of a pixel.
  • The "Y", "U" and "V" components of a color image in the YUV format are separated. If there is only the "Y" component and no "U" or "V" component, the image represented in this way is a black-and-white grayscale image.
  • The first image data and the third image data are both images in the RGB format; that is, the above-mentioned format conversion processing may be to convert an image in the RGB format into a color image in the YUV format.
  • The calculation formula of the grayscale component Y of the color image in the preset format, that is, the "Y" component of the color image in the YUV format, is:
  • Y=((66*R+129*G+25*B+128)/256+16)*16;
  • where R is the intensity of the red component of the RGB format image, G is the intensity of the green component, and B is the intensity of the blue component.
  • The source data of the RGB format image is S(i, j), where i is the number of rows and j is the number of columns; the RGB-to-YUV conversion is illustrated in FIG. 7.
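As a concrete illustration of the Y computation, the formula from the claims can be applied per pixel over S(i, j). This is a sketch with assumed function names; integer division stands in for the division by 256.

```python
def luma(r, g, b):
    """Grayscale (Y) component using the formula from the claims:
    Y = ((66*R + 129*G + 25*B + 128) / 256 + 16) * 16.
    The inner expression is the familiar BT.601-style integer
    approximation of luma; the trailing *16 scales the 8-bit
    result to a wider range (integer division assumed)."""
    return ((66 * r + 129 * g + 25 * b + 128) // 256 + 16) * 16

def rgb_to_gray(src):
    """Apply luma per pixel to source data S(i, j): a grid of
    (R, G, B) triples with i rows and j columns."""
    return [[luma(r, g, b) for (r, g, b) in row] for row in src]
```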
  • A depth map is a way of expressing 3D scene information.
  • The gray value of each pixel of the depth image can be used to characterize the distance from a certain point, in the scene corresponding to the collected image data, to the image collection device.
  • The image processing chip may include a depth map processing module, through which image processing can be performed on the grayscale component of the color image in the preset format to obtain a depth image.
  • The depth map processing module uses the binocular stereo vision method to obtain the depth image.
  • The binocular stereo vision method acquires two images of the same scene at the same time through two image acquisition devices (such as two rear-view lenses) separated by a certain distance; a stereo matching algorithm finds the corresponding pixels in the two images, disparity information is then calculated according to the triangulation principle, and the disparity information can be converted into a depth image that characterizes the objects in the scene.
  • The depth image is used to determine the distance information between the UAV and an obstacle, so as to determine the subsequent flight direction and flight trajectory of the UAV and avoid a collision between the UAV and the obstacle.
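The triangulation step mentioned above can be sketched as follows. This is a standard stereo-geometry relation, not code from the patent; the function and parameter names are assumptions.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulation step of the binocular stereo method: once
    stereo matching has found corresponding pixels in the two
    images, depth follows from Z = f * B / d, where f is the
    focal length in pixels, B is the baseline between the two
    lenses in meters, and d is the disparity in pixels.
    (Names and units are illustrative, not from the patent.)"""
    if disparity_px <= 0:
        return float("inf")  # no match, or point at infinity
    return focal_px * baseline_m / disparity_px
```

For example, with a 0.1 m baseline and a focal length of 500 px, a disparity of 10 px corresponds to a depth of 5 m; smaller disparities mean more distant obstacles.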
  • The K (N>K) channels of image data received include L channels of second image data obtained by merging M channels of the N channels of first image data; after the K channels of image data are received, the L channels of second image data among the K channels of image data are subjected to split processing, format conversion and other processing.
  • FIG. 8 is a schematic diagram of an image data processing device provided by an embodiment of the present invention.
  • the image data processing device 80 is used to process N channels of first image data collected by N image collection devices.
  • the image data processing device 80 can be configured in the chips of various image processing-related equipment.
  • the image processing-related equipment can be aircraft, community monitoring equipment, vehicle monitoring equipment, robots, and the like.
  • the chip may be an image processing chip, such as an image processing chip configured in the aforementioned aircraft.
  • the aircraft can be any type of aircraft, such as unmanned aerial vehicles, unmanned ships and the like. The following uses drones as an example of aircraft.
  • the image data processing device 80 includes: a receiving module 801, a split processing module 802, a format conversion module 803, and a depth image acquisition module 804.
  • the receiving module 801 is used to receive K channels of image data.
  • The K channels of image data include L channels of second image data obtained by merging M channels of first image data among the N channels of first image data, and the (N-M) channels of first image data that have not been merged, where K<N.
  • K is less than or equal to the maximum number of image data input channels that can be supported.
  • K, N, M, and L are not limited here, and can be set as required to adapt to different image data processing requirements.
  • Take the obstacle avoidance of drones and other aircraft as an example.
  • Due to the limitations of the chip itself, the number of channels of image data that can be received by the receiving module 801 is limited; the number of input channels supported by even a relatively advanced current chip is only about 8.
  • The receiving module 801 receives the K channels of image data so that the split processing module 802, the format conversion module 803 and the depth image acquisition module 804 can perform subsequent image data processing.
  • The first image data may be raw data, that is, the raw data obtained when the image acquisition device converts the captured light signal into a digital signal.
  • the split processing module 802 is configured to split the L channels of second image data in the K channels of image data to obtain M channels of third image data.
  • The L channels of second image data may include: a first path of second image data, a second path of second image data, ..., an Lth path of second image data.
  • the split processing module 802 is specifically configured to: separately perform processing on the first path of second image data, the second path of second image data, ..., the Lth path of second image data Perform line-by-line split processing to obtain M channels of third image data;
  • each channel of second image data is split into M/L channels of third image data.
  • For example, when M/L=4, one channel of second image data is split into 4 channels of third image data.
  • The first path of second image data can be split into a first path of third image data, a second path of third image data, ..., an (M/L)th path of third image data.
  • The first path of second image data includes P lines of image data, where P is a positive integer.
  • The split processing module 802 performs line-by-line split processing on the first path of second image data as follows: the 1st, 2nd, ..., (M/L)th lines of image data of the first path of second image data are respectively disassembled into the first line of the first path of third image data, the first line of the second path of third image data, ..., the first line of the (M/L)th path of third image data; the (M/L+1)th, (M/L+2)th, ..., (2*M/L)th lines of image data are respectively disassembled into the second line of the first path of third image data, the second line of the second path of third image data, ..., the second line of the (M/L)th path of third image data; and so on.
  • the format conversion module 803 is configured to perform format conversion processing on the (N-M) channels of first image data and the M channels of third image data to obtain a color image in a preset format.
  • The preset format may include the YUV format.
  • A color image in the YUV format is divided into three components: the "Y" component, the "U" component and the "V" component.
  • The "Y" component represents the brightness (luminance or luma), that is, the gray value.
  • The "U" and "V" components represent the chroma (chrominance), which describes the color and saturation of the image and is used to specify the color of a pixel.
  • The first image data and the third image data are both images in the RGB format; that is, the format conversion processing performed by the format conversion module 803 may be to convert an image in the RGB format into a color image in the YUV format.
  • The calculation formula of the grayscale component Y of the color image in the preset format, that is, the "Y" component of the color image in the YUV format, is:
  • Y=((66*R+129*G+25*B+128)/256+16)*16;
  • where R is the intensity of the red component of the RGB format image, G is the intensity of the green component, and B is the intensity of the blue component.
  • the depth image acquisition module 804 is configured to perform image processing on the gray-scale components in the color image in the preset format to obtain a depth image.
  • The gray value of each pixel of the depth image can be used to characterize the distance from a certain point, in the scene corresponding to the collected image data, to the image collection device.
  • the depth image acquisition module 804 can obtain a depth image by using a binocular stereo vision method.
  • The binocular stereo vision method acquires two images of the same scene at the same time through two image acquisition devices (such as two rear-view lenses) separated by a certain distance; a stereo matching algorithm finds the corresponding pixels in the two images, disparity information is then calculated according to the triangulation principle, and the disparity information can be converted into a depth image that characterizes the objects in the scene.
  • The depth image is used to determine the distance information between the UAV and an obstacle, so as to determine the subsequent flight direction and flight trajectory of the UAV and avoid a collision between the UAV and the obstacle.
  • The image data processing device 80 can execute the image data processing method provided by any method embodiment, and has the corresponding functional modules and beneficial effects for executing the method.
  • As shown in FIG. 9, the image processing chip 90 includes at least one processor 901 and a memory 902; one processor 901 is taken as an example in FIG. 9.
  • The processor 901 and the memory 902 may be connected through a bus or in other ways; in FIG. 9, the connection through a bus is taken as an example.
  • The memory 902 can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the image data processing method in the embodiments of the present invention (for example, the receiving module 801, the split processing module 802, the format conversion module 803, and the depth image acquisition module 804 shown in FIG. 8).
  • the processor 901 executes various functional applications and data processing of the image processing chip by running non-volatile software programs, instructions, and modules stored in the memory 902, that is, implements the image data processing method of the method embodiment.
  • the memory 902 may include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the image processing chip.
  • the memory 902 may include a high-speed random access memory, and may also include a non-volatile memory, for example, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 902 may optionally include memories remotely provided with respect to the processor 901, and these remote memories may be connected to the processor 901 via a network.
  • Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • The one or more modules are stored in the memory 902 and, when executed by the one or more processors 901, execute the image data processing method in any of the above method embodiments, for example, execute steps 301 to 304 of the method in FIG. 3 described above and implement the functions of the modules 801-804 in FIG. 8.
  • The image processing chip 90 can execute the image data processing method provided by the method embodiment, and has the corresponding functional modules and beneficial effects for executing the method.
  • An embodiment of the present invention provides a computer program product. The computer program product includes a computer program stored on a non-volatile computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, cause the computer to execute the image data processing method described above, for example, execute steps 301 to 304 of the method in FIG. 3 described above to realize the functions of the modules 801-804 in FIG. 8.
  • An embodiment of the present invention provides a non-volatile computer-readable storage medium; the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are used to cause a computer to execute the image data processing method described above, for example, execute steps 301 to 304 of the method in FIG. 3 described above to realize the functions of the modules 801-804 in FIG. 8.
  • The aircraft 100 includes: a fuselage (not shown), N image acquisition devices 110 arranged on the fuselage, and an image processing chip 120 arranged in the fuselage.
  • the fuselage may include a center frame and one or more arms connected to the center frame, and the one or more arms extend radially from the center frame.
  • the number of arms can be 2, 4, 6, etc.
  • One or more arms are used to carry the power system.
  • the image processing chip 120 is connected to the N image acquisition devices 110, and the N image acquisition devices 110 are used to acquire N channels of first image data.
  • The N image acquisition devices 110 may be arranged as shown in FIG. 1 or FIG. 2.
  • the image processing chip 120 may be the image processing chip 90 in FIG. 9.
  • The N image capture devices 110 include: 2 front-view lenses installed in parallel to each other, 2 down-view lenses installed in parallel to each other, 2 rear-view lenses installed in parallel to each other, 2 top-view lenses installed in parallel to each other, 2 left-view lenses installed in parallel to each other, and 2 right-view lenses installed in parallel to each other.
  • Due to the limitations of the image processing chip 120 itself, the number of channels of image data it can receive is limited.
  • When the number of channels (N) of the first image data to be processed is greater than the number of channels of image data that the image processing chip 120 can receive, in order to achieve the processing of the N channels of image data, some channels of the first image data are combined before the N channels of image data are input to the image processing chip 120; in this way, the image processing chip 120 can still process the N channels of first image data even when the number of channels to be processed is greater than the number of channels it can receive.
  • The image processing chip 120 receiving K channels of image data includes: receiving the unmerged 4 channels of first image data collected by the two front-view lenses installed in parallel to each other and the two down-view lenses installed in parallel to each other; receiving the 1 channel of second image data obtained by merging the data collected by the two rear-view lenses installed in parallel to each other and the two top-view lenses installed in parallel to each other; and receiving the 1 channel of second image data obtained by merging the data collected by the two left-view lenses installed in parallel to each other and the two right-view lenses installed in parallel to each other.
  • the aircraft 100 further includes L merging modules 130.
  • the input ends of the L merging modules 130 are connected to M image acquisition devices for collecting M channels of first image data, and the output ends of the L merging modules 130 are connected to the image processing chip 120, so The L merging modules 130 are configured to merge M channels of first image data among the N channels of first image data.
  • the L merging modules 130 include a first merging module 1301 and a second merging module 1302.
  • the input end of the first merging module 1301 is connected to the two rear-view lenses installed in parallel and the two top-view lenses installed in parallel, and the output end of the first merging module 1301 is connected to the The image processing chip 120 is connected.
  • the first merging module 1301 is used for merging and processing 4 channels of first image data collected by the two rear-view lenses installed in parallel and the two top-view lenses installed in parallel.
  • the input end of the second merging module 1302 is connected to the two left-view lenses installed parallel to each other and the two right-view lenses installed parallel to each other, and the output end of the second merging module 1302 is connected to the image The processing chip 120 is connected.
  • The second merging module 1302 is used for merging the 4 channels of first image data collected by the two left-view lenses installed in parallel to each other and the two right-view lenses installed in parallel to each other.
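The merging performed by these modules is the inverse of the split done later inside the chip. Under that assumption, it can be sketched in Python (names are illustrative, and each channel is modeled as a list of lines):

```python
def merge_channels(channels):
    """Line-by-line merge performed by a merging module: row r of
    each input channel is emitted in turn before moving on to
    row r + 1, producing one interleaved output channel."""
    n_rows = len(channels[0])
    assert all(len(c) == n_rows for c in channels)
    merged = []
    for r in range(n_rows):
        for channel in channels:
            merged.append(channel[r])
    return merged

# Four 3-line inputs (e.g. two left-view and two right-view
# lenses) interleave into one 12-line channel of second image data:
second = merge_channels([["a1", "a2", "a3"], ["b1", "b2", "b3"],
                         ["c1", "c2", "c3"], ["d1", "d2", "d3"]])
```

Splitting this interleaved channel line by line, as described earlier, recovers the four original channels.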
  • The two front-view lenses installed in parallel to each other collect 2 channels of first image data.
  • The image data acquisition is triggered synchronously: the two front-view lenses installed in parallel to each other each collect one frame of first image data at the same time, and the frames are sent into the image processing chip 120 through the input device Dev1 and the input device Dev2 respectively, so that 2 channels of first image data are obtained.
  • The way in which the image processing chip 120 acquires the 2 channels of first image data collected by the two down-view lenses installed in parallel to each other is similar to the above-mentioned acquisition of the 2 channels of first image data collected by the two front-view lenses installed in parallel to each other; the details are not repeated here.
  • The two rear-view lenses installed in parallel to each other and the two top-view lenses installed in parallel to each other collect 4 channels of first image data; after the 4 channels of first image data are merged by the first merging module 1301, they are sent through the input device Dev5 to the image processing chip 120 to obtain one channel of second image data.
  • The two left-view lenses installed in parallel to each other and the two right-view lenses installed in parallel to each other collect 4 channels of first image data; after the 4 channels of first image data are merged by the second merging module 1302, they are sent through the input device Dev6 to the image processing chip 120 to obtain one channel of second image data.
  • the K channels of image data acquired by the image processing chip 120 include 4 channels of first image data and 2 channels of second image data.
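The channel accounting above can be checked with a one-line helper. This is a sketch for the arithmetic only; the function name is an assumption.

```python
def input_channels(n, m, l):
    """K, the number of channels the chip actually receives:
    (N - M) unmerged channels of first image data plus L
    channels of merged second image data."""
    return (n - m) + l

# The aircraft example: N = 12 lenses, M = 8 of them merged into
# L = 2 channels, so the chip needs only K = 6 of its ~8 inputs.
k = input_channels(12, 8, 2)
```

Without merging, the chip would need 12 inputs, which exceeds the roughly 8 input channels current chips support.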
  • The aircraft 100 is a flying vehicle. In order to realize flight, the aircraft 100 also includes relevant components for realizing the flight function; for example, as shown in FIG. 12, the aircraft 100 further includes a power system, a flight control system, a gimbal and so on.
  • The flight control system is arranged in the fuselage, the power system and the gimbal are installed on the fuselage, and the N image acquisition devices 110 are respectively mounted on corresponding gimbals.
  • The flight control system may be coupled with the power system, the gimbal and the image processing chip 120 to achieve communication.
  • The power system may include an electronic speed controller (referred to as an ESC for short), one or more propellers, and one or more first motors corresponding to the one or more propellers.
  • the first motor is connected between the electronic governor and the propeller, and the first motor and the propeller are arranged on the corresponding arm.
  • the first motor is used to drive the propeller to rotate so as to provide power for the flight of the aircraft 100.
  • the power enables the aircraft 100 to achieve one or more degrees of freedom movement, such as forward and backward movement, up and down movement, and so on.
  • the aircraft 100 may rotate about one or more rotation axes.
  • The aforementioned rotation axis may include a roll axis, a yaw axis, and a pitch axis.
  • the first motor may be a DC motor or an AC motor.
  • the first motor may be a brushless motor or a brush motor.
  • The electronic speed controller (ESC) is used to receive the driving signal generated by the flight control system and provide a driving current to the first motor according to the driving signal, so as to control the rotation speed of the first motor and thereby control the flight of the aircraft 100.
  • the flight control system has the ability to monitor and control the flight and mission of the aircraft 100, and includes a set of equipment for controlling the launch and recovery of the aircraft 100.
  • the flight control system is used to control the flight of the aircraft 100.
  • the flight control system may include a sensing system and a flight controller.
  • the sensing system is used to measure the position information and status information of the aircraft 100 and various parts of the aircraft 100, for example, three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration and three-dimensional angular velocity, flying height, and so on.
  • the sensing system may include at least one of infrared sensors, acoustic wave sensors, gyroscopes, electronic compasses, inertial measurement units (IMU), vision sensors, global navigation satellite systems, and barometers.
  • the global navigation satellite system may be a global positioning system (Global Positioning System, GPS).
  • the flight controller is used to control the aircraft 100, such as controlling the flight or shooting of the aircraft 100. It is understandable that the flight controller can control the aircraft 100 according to pre-programmed program instructions, and can also control the aircraft 100 by responding to one or more control instructions from other devices.
  • the remote controller is connected to the flight controller, and the remote controller sends a control instruction to the flight controller, so that the flight controller controls the aircraft 100 through the control instruction.
  • The flight controller sends the control command to the electronic speed controller (ESC), which generates a driving signal and provides a driving current to the first motor according to the driving signal to control the speed of the first motor, thereby controlling the flight of the aircraft 100.
  • The gimbal serves as a shooting auxiliary device for carrying the N image acquisition devices 110.
  • the gimbal is provided with a second motor, and the flight control system can control the gimbal. Specifically, the flight control system adjusts the angle of the images captured by the N image acquisition devices by controlling the movement (such as rotation speed) of the second motor.
  • the second motor may be a brushless motor or a brush motor.
  • the gimbal can be located at the top of the fuselage or at the bottom of the fuselage.
  • The gimbal may be a part of the aircraft 100; it is understood that, in some other embodiments, the gimbal may be independent of the aircraft 100.
  • The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules; they can be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each embodiment can be implemented by software plus a general hardware platform, and of course, it can also be implemented by hardware.
  • A person of ordinary skill in the art can understand that all or part of the processes in the methods of the embodiments can be implemented by a computer program instructing relevant hardware. The program can be stored in a computer-readable storage medium; when the program is executed, it may include the flows of the embodiments of the methods described above.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.


Abstract

The embodiments of the present invention relate to the technical field of image processing, and disclose an image data processing method and device, an image processing chip and an aircraft. The method is used to process N channels of first image data collected by N image acquisition devices, and includes: receiving K channels of image data, the K channels of image data including L channels of second image data obtained by merging M channels of first image data among the N channels of first image data and the unmerged (N-M) channels of first image data among the N channels of first image data, where K<N; splitting the L channels of second image data among the K channels of image data to obtain M channels of third image data; performing format conversion on the (N-M) channels of first image data and the M channels of third image data to obtain a color image in a preset format; and performing image processing on the grayscale component of the color image in the preset format to obtain a depth image. This method can better meet the requirements of multi-channel image data processing.

Description

Image data processing method, device, image processing chip and aircraft
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on April 12, 2019, with application number 201910294084.7 and application title "Image data processing method, device, image processing chip and aircraft", the entire contents of which are incorporated into this application by reference.
Technical Field
This application relates to the technical field of image processing, and in particular to an image data processing method, device, image processing chip and aircraft.
Background
Image data processing is an emerging technology that arose, developed and matured in the 1960s along with the development of computer technology and VLSI (Very Large Scale Integration). It can process image data by various means according to requirements. In recent years, image data processing technology has become increasingly mature and is widely used in fields such as aerospace, military, biomedicine and artificial intelligence. For example, in the field of aerial photography by drones and other aircraft, the images collected by cameras, video cameras and other equipment are processed by image data processing technology to obtain images that meet various requirements.
With the application and popularization of image data processing technology, the performance requirements for the image data it processes are also getting higher and higher; for example, it is required to be able to process multiple channels of image data. However, due to the limitations of the chip used to process the image data, such as the image processing chip itself, the number of channels of image data that the chip can receive is limited, so the requirements of multi-channel image data processing cannot be well met. For example, when the number of channels of image data to be processed is greater than the number of channels of image data that the chip can receive, since the maximum number of channels of image data that the chip can receive is exceeded, the chip will be unable to fully acquire the multi-channel image data, and thus the processing of the multi-channel image data cannot be realized. This greatly affects the realization of the functions related to image data processing in the chip, and may even make the related functions impossible to realize.
Therefore, how to better meet the requirements of multi-channel image data processing has become an urgent problem to be solved.
Summary
The embodiments of the present invention provide an image data processing method, device, image processing chip and aircraft, which can better meet the requirements of multi-channel image data processing.
The embodiments of the present invention disclose the following technical solutions:
In a first aspect, an embodiment of the present invention provides an image data processing method for processing N channels of first image data collected by N image acquisition devices, the method including:
receiving K channels of image data, the K channels of image data including L channels of second image data obtained by merging M channels of first image data among the N channels of first image data, and the unmerged (N-M) channels of first image data among the N channels of first image data, where K<N;
splitting the L channels of second image data among the K channels of image data to obtain M channels of third image data;
performing format conversion on the (N-M) channels of first image data and the M channels of third image data to obtain a color image in a preset format; and
performing image processing on the grayscale component of the color image in the preset format to obtain a depth image.
Optionally, K is less than or equal to the maximum number of image data input channels that can be supported.
Optionally, the L channels of second image data include: a first path of second image data, a second path of second image data, ..., an Lth path of second image data;
the splitting of the L channels of second image data among the K channels of image data includes:
performing line-by-line split processing on the first path of second image data, the second path of second image data, ..., the Lth path of second image data respectively, wherein each path of second image data is split into M/L paths of third image data.
Optionally, the first path of second image data is split into a first path of third image data, a second path of third image data, ..., an (M/L)th path of third image data, and the first path of second image data includes P lines of image data;
wherein performing line-by-line split processing on the first path of second image data includes:
disassembling the 1st, 2nd, ..., (M/L)th lines of image data of the first path of second image data respectively into the first line of the first path of third image data, the first line of the second path of third image data, ..., the first line of the (M/L)th path of third image data; disassembling the (M/L+1)th, (M/L+2)th, ..., (2*M/L)th lines of image data respectively into the second line of the first path of third image data, the second line of the second path of third image data, ..., the second line of the (M/L)th path of third image data; and so on, until the (P-M/L+1)th, (P-M/L+2)th, ..., Pth lines of image data of the first path of second image data are respectively disassembled into the (P/(M/L))th line of the first path of third image data, the (P/(M/L))th line of the second path of third image data, ..., the (P/(M/L))th line of the (M/L)th path of third image data.
Optionally, the preset format includes the YUV format.
Optionally, the first image data and the third image data are both images in the RGB format, and the calculation formula of the grayscale component of the color image in the preset format is:
Y=((66*R+129*G+25*B+128)/256+16)*16;
where R is the intensity of the red component of the image in the RGB format, G is the intensity of the green component of the image in the RGB format, and B is the intensity of the blue component of the image in the RGB format.
In a second aspect, an embodiment of the present invention provides an image data processing device for processing N channels of first image data collected by N image acquisition devices, the device including:
a receiving module, configured to receive K channels of image data, the K channels of image data including L channels of second image data obtained by merging M channels of first image data among the N channels of first image data, and the unmerged (N-M) channels of first image data among the N channels of first image data;
a split processing module, configured to split the L channels of second image data among the K channels of image data to obtain M channels of third image data;
a format conversion module, configured to perform format conversion on the (N-M) channels of first image data and the M channels of third image data to obtain a color image in a preset format; and
a depth image acquisition module, configured to perform image processing on the grayscale component of the color image in the preset format to obtain a depth image.
Optionally, K is less than or equal to the maximum number of image data input channels that can be supported.
Optionally, the L channels of second image data include: a first path of second image data, a second path of second image data, ..., an Lth path of second image data;
the split processing module is specifically configured to:
perform line-by-line split processing on the first path of second image data, the second path of second image data, ..., the Lth path of second image data respectively, to obtain M channels of third image data;
wherein each path of second image data is split into M/L paths of third image data.
Optionally, the first path of second image data is split into a first path of third image data, a second path of third image data, ..., an (M/L)th path of third image data, and the first path of second image data includes P lines of image data;
wherein the split processing module performing line-by-line split processing on the first path of second image data includes:
disassembling the 1st, 2nd, ..., (M/L)th lines of image data of the first path of second image data respectively into the first line of the first path of third image data, the first line of the second path of third image data, ..., the first line of the (M/L)th path of third image data; disassembling the (M/L+1)th, (M/L+2)th, ..., (2*M/L)th lines of image data respectively into the second line of the first path of third image data, the second line of the second path of third image data, ..., the second line of the (M/L)th path of third image data; and so on, until the (P-M/L+1)th, (P-M/L+2)th, ..., Pth lines of image data of the first path of second image data are respectively disassembled into the (P/(M/L))th line of the first path of third image data, the (P/(M/L))th line of the second path of third image data, ..., the (P/(M/L))th line of the (M/L)th path of third image data.
Optionally, the preset format includes the YUV format.
Optionally, the first image data and the third image data are both images in the RGB format, and the calculation formula of the grayscale component Y of the color image in the preset format is:
Y=((66*R+129*G+25*B+128)/256+16)*16;
where R is the intensity of the red component of the image in the RGB format, G is the intensity of the green component of the image in the RGB format, and B is the intensity of the blue component of the image in the RGB format.
In a third aspect, an embodiment of the present invention provides an image processing chip, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the image data processing method described above.
In a fourth aspect, an embodiment of the present invention provides an aircraft, including a fuselage, N image acquisition devices arranged on the fuselage, and an image processing chip arranged in the fuselage, the image processing chip being connected to the N image acquisition devices, the N image acquisition devices being configured to collect N channels of first image data, and the image processing chip being the image processing chip described in claim 13.
Optionally, the N image acquisition devices include: 2 front-view lenses installed in parallel to each other, 2 down-view lenses installed in parallel to each other, 2 rear-view lenses installed in parallel to each other, 2 top-view lenses installed in parallel to each other, 2 left-view lenses installed in parallel to each other, and 2 right-view lenses installed in parallel to each other.
Optionally, the image processing chip receiving K channels of image data includes:
receiving the unmerged 4 channels of first image data collected by the 2 front-view lenses installed in parallel to each other and the 2 down-view lenses installed in parallel to each other; receiving the 1 channel of second image data obtained by merging the data collected by the 2 rear-view lenses installed in parallel to each other and the 2 top-view lenses installed in parallel to each other; and receiving the 1 channel of second image data obtained by merging the data collected by the 2 left-view lenses installed in parallel to each other and the 2 right-view lenses installed in parallel to each other.
Optionally, the aircraft further includes L merging modules, the input ends of the L merging modules being connected to the M image acquisition devices used to collect the M channels of first image data, the output ends of the L merging modules being connected to the image processing chip, and the L merging modules being configured to merge the M channels of first image data among the N channels of first image data.
Optionally, the L merging modules include a first merging module and a second merging module; the input end of the first merging module is connected to the 2 rear-view lenses installed in parallel to each other and the 2 top-view lenses installed in parallel to each other, and the output end of the first merging module is connected to the image processing chip; the input end of the second merging module is connected to the 2 left-view lenses installed in parallel to each other and the 2 right-view lenses installed in parallel to each other, and the output end of the second merging module is connected to the image processing chip.
In a fifth aspect, an embodiment of the present invention provides a computer program product; the computer program product includes a computer program stored on a non-volatile computer-readable storage medium, the computer program includes program instructions, and when the program instructions are executed by a computer, the computer is caused to execute the image data processing method described above.
In a sixth aspect, an embodiment of the present invention provides a non-volatile computer-readable storage medium; the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are used to cause a computer to execute the image data processing method described above.
In the embodiments of the present invention, when N image acquisition devices collect N channels of first image data, the K (N>K) channels of image data received include L channels of second image data obtained by merging M channels of first image data among the N channels of first image data; and after the K channels of image data are received, the L channels of second image data among the K channels of image data are subjected to split processing, format conversion and other processing. In this way, even when the number of channels of image data to be processed is greater than the number of channels of image data that the chip can receive, the processing of the N channels of first image data can still be realized, so as to better meet the requirements of multi-channel image data processing and ensure the realization of the functions related to image data processing.
Brief Description of the Drawings
One or more embodiments are exemplified by the figures in the corresponding drawings; these exemplary descriptions do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings represent similar elements, and unless otherwise stated, the figures in the drawings do not constitute a scale limitation.
FIG. 1 is a schematic diagram of an application environment of an image data processing method provided by an embodiment of the present invention, where the diagram is a bottom view of an aircraft;
FIG. 2 is a schematic diagram of an application environment of an image data processing method provided by an embodiment of the present invention, where the diagram is a top view of an aircraft;
FIG. 3 is a schematic flowchart of an image data processing method provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of 12 image acquisition devices provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of line-by-line merging processing provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of line-by-line split processing provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of converting an image in the RGB format into a color image in the YUV format provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of an image data processing device provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of the hardware structure of an image processing chip provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of an aircraft provided by an embodiment of the present invention;
FIG. 11 is a schematic diagram of another aircraft provided by an embodiment of the present invention;
FIG. 12 is a schematic diagram of another aircraft provided by an embodiment of the present invention.
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
此外,下面所描述的本发明各个实施方式中所涉及到的技术特征只要彼此之间未构成冲突就可以相互组合。
图1和图2为本发明实施例提供的图像数据处理方法的其中一种应用环境的示意图。其中,该应用环境中包括:飞行器10。该飞行器10包括机身11、设置于该机身11的若干个图像采集设备12以及设置于机身11内的图像处理 芯片(图未示)。该图像处理芯片为执行该图像数据处理方法的执行主体。
其中,若干个图像采集设备12与图像处理芯片连接,该连接可以为通信连接,以实现若干个图像采集设备12与图像处理芯片之间的数据交互。
例如,若干个图像采集设备12用于采集图像数据,并通过该通信连接,将所采集的图像数据发送给图像处理芯片,以便图像处理芯片对接收到的图像数据进行处理。
飞行器10作为一种飞行载具,主要用于通过飞行完成指定任务,如飞往指定地点的飞行任务,或者在飞行过程中进行拍摄的拍摄任务等。
而在飞行器10的飞行过程中保证飞行器10的飞行安全是飞行器10可完成指定任务的前提。在飞行器10飞行的过程中,难免会遇到障碍物,如其他飞行器、树木、电线等,为了避免飞行器10与各种障碍物发生碰撞,降低飞行器安全事故的发生几率,保证飞行过程中的安全,避障能力是飞行器10所必备的能力,也即评价飞行器性能的重要指标。
而若要实现飞行器10避障,首先要实现的是如何精确的测量飞行器10与障碍物之间的距离,只有先测量出危险范围内的距离,才可以有时间在撞向障碍物之前停止飞行器10的前进动作,进而避免事故的发生。
就如人类或其他动物在前进的过程中,只有先看见前方的障碍物,并且会大致估算出自己与障碍物之间的距离,才能决定下一步的行为方向,以避免与障碍物发生碰撞。
而目前的飞行器领域被广泛应用到的障碍物距离检测方法有超声波测距、红外或激光测距、视觉图像检测、电子地图等。其中,视觉图像检测技术更是利用了人眼如何估计视觉的原理,是目前较受飞行器领域青睐的一种技术。
视觉图像检测技术是一种用机器代替人眼来做测量和判断的技术。在距离测量的应用中,视觉图像检测主要是把图像当作检测和传递信息的手段或载体而加以利用的测量方法,通过提取图像的特征信号,最终从图像中获取被测对象的实际信息。
在用于测距的机器视觉系统中，通过机器视觉产品（即图像采集装置，如CMOS和CCD等）将被摄取目标转换成图像信号，传送给专用的图像处理设备或芯片，图像处理设备或芯片根据像素分布和亮度、颜色等图像信息，得到深度信息，以便确定飞行器与障碍物间的距离信息。此外，还可通过进行分析及各种运算来抽取目标的特征，进而根据判别的结果来控制飞行器10的飞行，以便实现飞行器10的避障。
目前,为了尽可能提高飞行器10的安全性,通常要求飞行器10可满足全向避障的需求,也即避障通常要求支持前方、下方、后方、左方、右方和上方6个方向。
并且,若是全视觉方案,为了满足全面避障的需求,则通常需要6对即12个图像采集设备,如果再加上主图像采集设备,共需要13个图像采集设备。例如,12个图像采集设备可以为2个前视镜头、2个下视镜头、2个后视镜头、2个上视镜头、2个左视镜头、2个右视镜头。
而由于图像处理设备或芯片本身的限制,其可接收的图像数据的路数有限,通常目前比较高级的芯片最多支持的输入路数也只有8路左右,这仍远远满足不了全向避障的需求。
基于此,在本发明实施例中,在全面避障的应用中,首先,若干个图像采集设备12,如N个图像采集设备12采集N路第一图像数据,并输出该N路第一图像数据;图像处理芯片接收第一图像数据前,由用于实现合并功能的合并模块对N路第一图像数据中的M路第一图像数据进行合并处理,以得到L路第二图像数据;图像处理芯片便可接收该L路第二图像数据以及N路第一图像数据中未经合并处理的(N-M)路第一图像数据,共K(N>K)路图像数据;并且,在图像处理芯片接收到K路图像数据后,对K路图像数据中的L路第二图像数据进行拆分处理、格式转换等处理,从而实现了当需要处理的图像数据的路数大于图像处理芯片可接收的图像数据的路数时,也可实现对N路第一图像数据的处理,以便更好的满足多路图像数据处理的要求,保证与图像数据处理相关的功能的实现,也即保证全面避障功能的实现,从而提高飞行器10在飞行过程的安全性,降低安全事故发生的几率。
需要说明的是,上述图像采集设备12可以为任何合适的感光元件或图像传感器,例如,CMOS(Complementary Metal Oxide Semiconductor,互补金属氧化物半导体)、CCD(Charge-coupled Device,电荷耦合元件)等。
并且，若干个图像采集设备12可以为相同的图像采集设备，也可以为不相同的图像采集设备，以便满足不同的需求。例如，以飞行器10的全面避障为例，飞行器10的前向运动和向下着陆距离的判断通常最为重要，因此，2个前视镜头及2个下视镜头采用分辨率较高的镜头，如720P的镜头，2个后视镜头、2个上视镜头、2个左视镜头、2个右视镜头采用分辨率相对较低的镜头，如VGA镜头。
上述图像处理芯片可以为任何合适的能实现上述图像数据处理方法的芯片,如微处理器、微控制单元、单片机、控制器等。
还需要说明的是,本发明实施例提供的图像数据处理方法还可以进一步的拓展到其他合适的应用环境中,而不限于图1中所示的应用环境。例如,在实际应用的过程中,对于本领域技术人员将会显而易见的是,该应用环境中的飞行器10,可以不受限制地使用各种类型的飞行器,如无人机、无人船或其他可移动装置等。
例如,以无人机为例,该图像数据处理方法可用于无人机的自动避障。无人机是由遥控设备或自备程序控制装置操纵,带任务载荷的不载人飞行器。该无人机可以为各种类型的无人机,例如,如图1、图2所示该无人机可以是旋翼飞行器(rotorcraft),例如,由多个推动装置通过空气推动的多旋翼飞行器。可以理解的是,本发明的实施例并不限于此,无人机也可以是其他类型的无人机,如固定翼无人机、无人飞艇、伞翼无人机、扑翼无人机等。
并且,在一些其他应用环境中,若干个图像采集设备12中的图像采集设备的数量可以更多或更少,例如,10个、11个、13个、14个等等,也即,若干个图像采集设备12中的图像采集设备的数量在此不予限定。
此外,在一些其他应用环境中,该图像数据处理方法还可应用于其他涉及图像处理的设备,如小区监控设备、车辆监控设备、机器人等,而不限于本发明实施例描述的飞行器中。
实施例1:
图3为本发明实施例提供的一种图像数据处理方法的流程示意图。所述图像数据处理方法用于对由N个图像采集设备所采集的N路第一图像数据进行处理。该图像数据处理方法可应用于各种涉及图像处理的设备中，如飞行器、小区监控设备、车辆监控设备、机器人等。该图像数据处理方法可由任何合适类型的芯片执行，如由上述飞行器的图像处理芯片执行。该飞行器可以为任何类型的飞行器，如无人机、无人船等。以下以无人机作为飞行器的示例。
参照图3,所述图像数据处理方法包括:
301:接收K路图像数据。
其中,K路图像数据包含对所述N路第一图像数据中的M路第一图像数据进行合并处理后所得到的L路第二图像数据以及所述N路第一图像数据中未经合并处理的(N-M)路第一图像数据,K<N。
上述K、N、M、L均为正整数，并且，K=L+N-M。另外，K小于或等于图像处理芯片所能支持的图像数据输入路数的最大值。
需要说明的是,K、N、M、L的具体值在此不予限定,可以根据需要进行设定,以便适应不同的图像数据处理需求。
例如,以无人机等飞行器的避障为例,为了实现无人机的全面避障,避障通常要求支持前方、下方、后方、左方、右方和上方6个方向,因此,通常需要6对即12个图像采集设备,也即,此时可取N=12。
例如,如图4所示,12个图像采集设备包括:2个前视镜头、2个下视镜头、2个后视镜头、2个上视镜头、2个左视镜头、2个右视镜头。2个前视镜头、2个下视镜头、2个后视镜头、2个上视镜头、2个左视镜头、2个右视镜头分别用于采集对应方向的图像数据。
而由于图像处理芯片本身的限制,其可接收的图像数据的路数有限,通常目前比较高级的芯片最多支持的输入路数只有8路左右,也即,此时,图像处理芯片所能支持的图像数据输入路数的最大值可取8,K的取值范围为K≤8,例如,可以取K=4、5、6、7等。
此外，可以根据无人机避障的实际情况，确定需要进行合并处理的图像数据路数M。例如，对于无人机的全面避障而言，无人机前向运动和向下着陆距离的判断最为重要，因此，在将图像数据输入到图像处理芯片中前，对于2个前视镜头及2个下视镜头所采集的第一图像数据不进行合并处理，而对于2个后视镜头、2个上视镜头、2个左视镜头、2个右视镜头所采集的第一图像数据进行合并处理。此时，M=8。
并且,可以根据需要,对2个后视镜头、2个上视镜头、2个左视镜头、2个右视镜头所采集的第一图像数据进行合并处理。例如,通过合并模块对2个后视镜头及2个上视镜头所采集的第一图像数据进行合并处理得到1路第二图像数据,通过合并模块对2个左视镜头及2个右视镜头所采集的第一图像数据进行合并处理得到1路第二图像数据,也即,此时,L=2。
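上述各路数之间的核算关系K=L+N-M，可用如下Python草图示意（仅为说明性核算，函数名为本示意引入的假设，非本发明的具体实现）：

```python
def channel_count(n, m, l):
    # 芯片实际接收的路数K = 未参与合并的(n - m)路第一图像数据 + 合并后得到的l路第二图像数据
    return (n - m) + l

# 文中示例：N=12路采集、其中M=8路参与合并、合并为L=2路
k = channel_count(12, 8, 2)
print(k)  # 6，小于芯片支持的8路输入上限
```

按此核算，12路第一图像数据最终只占用6路输入，满足K≤8的约束。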
经过合并处理后,图像处理芯片便可接收K路图像数据,以便进行后续的图像数据处理。其中,可以通过K个输入设备分别将K路图像数据输入到图像处理芯片中。例如,通过6个输入设备分别将6路图像数据输入到图像处理芯片中。
上述合并处理可以为逐行合并处理,下面结合图5,对逐行合并处理进行具体描述。
如图5所示,以对2个后视镜头及2个上视镜头所采集的第一图像数据进行合并处理得到1路第二图像数据为例,假设2个后视镜头及2个上视镜头所采集的第一图像数据均有Q行,其中,Q为正整数,Q的具体值在此不予限定,可以根据需要进行设定,以便适应不同的图像数据处理需求,例如Q=3,则合并处理的过程为:
2个后视镜头中的其中一个后视镜头所采集的第一图像数据的第一行图像合并到第二图像数据的第一行,2个后视镜头中的另一个后视镜头所采集的第一图像数据的第一行图像合并到第二图像数据的第二行,2个上视镜头中的其中一个上视镜头所采集的第一图像数据的第一行图像合并到第二图像数据的第三行,2个上视镜头中的另一个上视镜头所采集的第一图像数据的第一行图像合并到第二图像数据的第四行;
2个后视镜头中的其中一个后视镜头所采集的第一图像数据的第二行图像合并到第二图像数据的第五行,2个后视镜头中的另一个后视镜头所采集的第一图像数据的第二行图像合并到第二图像数据的第六行,2个上视镜头中的其中一个上视镜头所采集的第一图像数据的第二行图像合并到第二图像数据的第七行,2个上视镜头中的另一个上视镜头所采集的第一图像数据的第二行图像合并到第二图像数据的第八行…;
依此类推，直至2个后视镜头中的其中一个后视镜头所采集的第一图像数据的第Q行图像合并到第二图像数据的第(4*Q-3)行，2个后视镜头中的另一个后视镜头所采集的第一图像数据的第Q行图像合并到第二图像数据的第(4*Q-2)行，2个上视镜头中的其中一个上视镜头所采集的第一图像数据的第Q行图像合并到第二图像数据的第(4*Q-1)行，2个上视镜头中的另一个上视镜头所采集的第一图像数据的第Q行图像合并到第二图像数据的第(4*Q)行。
若上述2个后视镜头及2个上视镜头所采集的各个第一图像数据的图像大小为640*480,则对该2个后视镜头及2个上视镜头所采集的第一图像数据进行合并处理所得到1路第二图像数据的图像大小为640*1920。
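上述逐行合并过程可用如下Python草图示意（仅为说明性实现，假设每路第一图像数据以“行的列表”表示，与具体硬件合并模块无关）：

```python
def merge_rows(streams):
    """逐行合并：将多路行数相同的图像按行交错合并为一路。

    streams[s][q] 表示第 s 路第一图像数据的第 q 行。
    合并结果的行序与文中一致：先取各路的第1行，再取各路的第2行，依此类推。
    """
    q_rows = len(streams[0])
    merged = []
    for q in range(q_rows):          # 依次处理第 q 行
        for stream in streams:       # 按路的顺序取出该行
            merged.append(stream[q])
    return merged

# 4 路、每路 Q=3 行：合并后共 4*Q=12 行
four_streams = [[f"路{s}行{q}" for q in range(3)] for s in range(4)]
merged = merge_rows(four_streams)
```

例如，各路图像为640*480时，4路合并结果即为文中所述的640*1920。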
其中,第一图像数据可以为rawdata数据,rawdata数据也即未经加工的数据,也可以理解为图像采集装置将捕捉到的光源信号转化为数字信号的原始数据。
302:对所述K路图像数据中的L路第二图像数据进行拆分处理,以得到M路第三图像数据。
其中,所述L路第二图像数据可以包括:第一路第二图像数据、第二路第二图像数据、…、第L路第二图像数据。其中,第一路第二图像数据、第二路第二图像数据、…、第L路第二图像数据表示第一路第二图像数据至第L路第二图像数据。
图像处理芯片对所述K路图像数据中的L路第二图像数据进行拆分处理包括:分别对所述第一路第二图像数据、所述第二路第二图像数据、…、所述第L路第二图像数据进行逐行拆分处理,其中,每一路第二图像数据拆分为M/L路第三图像数据。
例如,假设M=8,L=2,则1路第二图像数据拆分为4路第三图像数据。
在一些实现方式中，所述第一路第二图像数据可拆分为第一路第三图像数据、第二路第三图像数据、…、第M/L路第三图像数据，即第一路第三图像数据至第M/L路第三图像数据。
并且,所述第一路第二图像数据包括P行图像数据。其中,P为正整数,P的具体值在此不予限定,可以根据需要进行设定,以便适应不同的图像数据处理需求,例如P=12。
其中,图像处理芯片对第一路第二图像数据进行逐行拆分处理包括:
将所述第一路第二图像数据的第一行图像数据、第二行图像数据、…、第M/L行图像数据分别拆解到所述第一路第三图像数据的第一行、所述第二路第三图像数据的第一行、…、所述第M/L路第三图像数据的第一行;将所述第一路第二图像数据的第(M/L+1)行图像数据、第(M/L+2)行图像数据、…、第2*M/L行图像数据分别拆解到所述第一路第三图像数据的第二行、所述第二路第三图像数据的第二行、…、所述第M/L路第三图像数据的第二行;依此类推,直至将所述第一路第二图像数据的第(P-M/L+1)行图像数据、第(P-M/L+2)行图像数据…、第P行图像数据分别拆解到所述第一路第三图像数据的第(P/(M/L))行、所述第二路第三图像数据的第(P/(M/L))行、所述第M/L路第三图像数据的第(P/(M/L))行。
该逐行拆分处理与上述逐行合并的过程相反,下面结合图6对逐行拆分处理进行具体描述。
如图6所示,以P=12、M=8、L=2为例,其逐行拆分处理的过程为:
将所述第一路第二图像数据的第一行图像数据、第二行图像数据、…、第4行图像数据分别拆解到所述第一路第三图像数据的第一行、所述第二路第三图像数据的第一行、…、所述第4路第三图像数据的第一行;将所述第一路第二图像数据的第5行图像数据、第6行图像数据、…、第8行图像数据分别拆解到所述第一路第三图像数据的第二行、所述第二路第三图像数据的第二行、…、所述第4路第三图像数据的第二行;将所述第一路第二图像数据的第9行图像数据、第10行图像数据…、第12行图像数据分别拆解到所述第一路第三图像数据的第3行、所述第二路第三图像数据的第3行、所述第4路第三图像数据的第3行。
303:将所述(N-M)路第一图像数据及所述M路第三图像数据进行格式转换处理,以得到预设格式的彩色图像。
其中，该预设格式可以包括YUV格式。YUV格式的彩色图像分为三个分量，分别为“Y”分量、“U”分量和“V”分量。其中，“Y”分量表示明亮度（Luminance或Luma），也就是灰度值；而“U”分量和“V”分量表示的则是色度（Chrominance或Chroma），作用是描述图像色彩及饱和度，用于指定像素的颜色。
YUV格式的彩色图像的“Y”分量和“U”分量、“V”分量是分离的。如果只有“Y”分量而没有“U”分量、“V”分量,那么这样表示的图像就是黑白灰度图像。
在一些实现方式中,所述第一图像数据及所述第三图像数据均为RGB格式的图像,也即,上述进行格式转换处理可以为将RGB格式的图像转换为YUV格式的彩色图像。
其中,所述预设格式的彩色图像中的灰度部分分量Y也即YUV格式的彩色图像的“Y”分量的计算公式为:
Y=((66*R+129*G+25*B+128)/256+16)*16
其中,R为RGB格式的图像中红色分量的强度,G为RGB格式的图像绿色分量的强度,B为RGB格式的图像蓝色分量的强度。
下面结合图7,对RGB格式的图像转换为YUV格式的彩色图像进行具体描述。
如图7所示，RGB格式的图像的源数据为S(i,j)，其中，i为行数，j为列数，则其RGB转YUV的转换方法如下：
1)奇数行奇数列：
G=S(i,j)
R=(S(i,j-1)+S(i,j+1))/2
B=(S(i-1,j)+S(i+1,j))/2
2)偶数行偶数列：
G=S(i,j)
R=(S(i-1,j)+S(i+1,j))/2
B=(S(i,j-1)+S(i,j+1))/2
3)奇数行偶数列：
R=S(i,j)
B=(S(i-1,j-1)+S(i-1,j+1)+S(i+1,j-1)+S(i+1,j+1))/4
V=|S(i-2,j)-S(i+2,j)|,H=|S(i,j-2)-S(i,j+2)|
若H<V，G=(S(i,j-1)+S(i,j+1))/2
若H>V，G=(S(i-1,j)+S(i+1,j))/2
若H=V，G=(S(i,j-1)+S(i,j+1)+S(i-1,j)+S(i+1,j))/4
4)偶数行奇数列：
B=S(i,j)
R=(S(i-1,j-1)+S(i-1,j+1)+S(i+1,j-1)+S(i+1,j+1))/4
V=|S(i-2,j)-S(i+2,j)|,H=|S(i,j-2)-S(i,j+2)|
若H<V，G=(S(i,j-1)+S(i,j+1))/2
若H>V，G=(S(i-1,j)+S(i+1,j))/2
若H=V，G=(S(i,j-1)+S(i,j+1)+S(i-1,j)+S(i+1,j))/4
最终，Y=((66*R+129*G+25*B+128)/256+16)*16。
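上述四种奇偶位置的插值规则可整理为如下Python草图（仅为说明性实现：行列号i、j沿用文中从1开始的约定，故数组需含占位的第0行/列；梯度V、H按相邻同色像素取差值，为边缘自适应插值的常见写法，属本示意的假设；未处理图像边界）：

```python
def _edge_directed_g(s, i, j):
    # 按垂直/水平梯度选择G分量的插值方向
    v = abs(s[i - 2][j] - s[i + 2][j])
    h = abs(s[i][j - 2] - s[i][j + 2])
    if h < v:
        return (s[i][j - 1] + s[i][j + 1]) / 2
    if h > v:
        return (s[i - 1][j] + s[i + 1][j]) / 2
    return (s[i][j - 1] + s[i][j + 1] + s[i - 1][j] + s[i + 1][j]) / 4

def demosaic_pixel(s, i, j):
    """按文中四种奇偶情形，由Bayer原始数据s求(i,j)处的R、G、B。"""
    if i % 2 == 1 and j % 2 == 1:        # 奇数行奇数列：G位置
        g = s[i][j]
        r = (s[i][j - 1] + s[i][j + 1]) / 2
        b = (s[i - 1][j] + s[i + 1][j]) / 2
    elif i % 2 == 0 and j % 2 == 0:      # 偶数行偶数列：G位置
        g = s[i][j]
        r = (s[i - 1][j] + s[i + 1][j]) / 2
        b = (s[i][j - 1] + s[i][j + 1]) / 2
    elif i % 2 == 1:                     # 奇数行偶数列：R位置
        r = s[i][j]
        b = (s[i - 1][j - 1] + s[i - 1][j + 1] + s[i + 1][j - 1] + s[i + 1][j + 1]) / 4
        g = _edge_directed_g(s, i, j)
    else:                                # 偶数行奇数列：B位置
        b = s[i][j]
        r = (s[i - 1][j - 1] + s[i - 1][j + 1] + s[i + 1][j - 1] + s[i + 1][j + 1]) / 4
        g = _edge_directed_g(s, i, j)
    return r, g, b
```

对亮度均匀的区域，任一位置插值出的R、G、B应与原始采样值一致，可据此做简单验证。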
304:将所述预设格式的彩色图像中的灰度部分分量进行图像处理,以得到深度图像。
深度图像(Depth map)为一种三维场景信息表达方式。深度图像的每个像素点的灰度值可用于表征所采集的图像数据对应的场景中某一点距离图像采集设备的远近。
图像处理芯片中可以包含有深度图处理模块,通过该深度图处理模块可以对所述预设格式的彩色图像中的灰度部分分量进行图像处理,以得到深度图像。
深度图处理模块利用双目立体视觉方法得到深度图像，该双目立体视觉方法通过两个相隔一定距离的图像采集设备（如2个后视镜头）同时获取同一场景的两幅图像，通过立体匹配算法找到两幅图像中对应的像素点，随后根据三角原理计算出视差信息，而视差信息通过转换可用于得到表征场景中物体远近的深度图像。
在无人机的避障应用中,通过该深度图像以便确定无人机与障碍物间的距离信息,以便确定无人机后续的飞行方向及飞行轨迹,以避免无人机与障碍物发生碰撞。
需要说明的是,在本发明实施例中所示步骤301-304中未详尽描述的技术细节,可参考上述图像数据处理方法的应用场景中的具体描述。
在本发明实施例中，当需要处理N个图像采集设备所采集的N路第一图像数据时，接收的K(N>K)路图像数据中包含对N路第一图像数据中的M路第一图像数据进行合并处理后所得到的L路第二图像数据，并且，在接收到K路图像数据后，对K路图像数据中的L路第二图像数据进行拆分处理、格式转换等处理，从而实现了当需要处理的图像数据的路数大于芯片可接收的图像数据的路数时，也可实现对N路第一图像数据的处理，以便更好地满足多路图像数据处理的要求，保证与图像数据处理相关的功能的实现。
实施例2:
图8为本发明实施例提供的一种图像数据处理装置示意图。该图像数据处理装置80用于对由N个图像采集设备所采集的N路第一图像数据进行处理。该图像数据处理装置80可配置于各种涉及图像处理的设备的芯片中,该涉及图像处理的设备可以为飞行器、小区监控设备、车辆监控设备、机器人等。该芯片可以为图像处理芯片,如配置于上述飞行器的图像处理芯片中。该飞行器可以为任何类型的飞行器,如无人机、无人船等。以下以无人机作为飞行器的示例。
参照图8,所述图像数据处理装置80包括:接收模块801、拆分处理模块802、格式转换模块803以及深度图像获取模块804。
其中,接收模块801用于接收K路图像数据。
所述K路图像数据包含对所述N路第一图像数据中的M路第一图像数据进行合并处理后所得到的L路第二图像数据以及所述N路第一图像数据中未经合并处理的(N-M)路第一图像数据,K<N。
上述K、N、M、L均为正整数，并且，K=L+N-M。另外，K小于或等于图像处理芯片所能支持的图像数据输入路数的最大值。
需要说明的是,K、N、M、L的具体值在此不予限定,可以根据需要进行设定,以便适应不同的图像数据处理需求。
例如,以无人机等飞行器的避障为例,为了实现无人机的全面避障,避障通常要求支持前方、下方、后方、左方、右方和上方6个方向,因此,通常需要6对即12个图像采集设备,也即,此时可取N=12。
而由于图像处理芯片本身的限制，其接收模块801可接收的图像数据的路数有限，通常目前比较高级的芯片最多支持的输入路数只有8路左右，也即，此时，接收模块801所能支持的图像数据输入路数的最大值可取8，K的取值范围为K≤8，例如，可以取K=4、5、6、7等。
N路第一图像数据中的M路第一图像数据进行合并处理后,接收模块801便可接收K路图像数据,以便拆分处理模块802、格式转换模块803以及深度图像获取模块804进行后续的图像数据处理。
其中,第一图像数据可以为rawdata数据,rawdata数据也即未经加工的数据,也可以理解为图像采集装置将捕捉到的光源信号转化为数字信号的原始数据。
拆分处理模块802用于对所述K路图像数据中的L路第二图像数据进行拆分处理,以得到M路第三图像数据。
其中,所述L路第二图像数据可以包括:第一路第二图像数据、第二路第二图像数据、…、第L路第二图像数据。其中,第一路第二图像数据、第二路第二图像数据、…、第L路第二图像数据表示第一路第二图像数据至第L路第二图像数据。
在一些实现方式中,所述拆分处理模块802具体用于:分别对所述第一路第二图像数据、所述第二路第二图像数据、…、所述第L路第二图像数据进行逐行拆分处理,以得到M路第三图像数据;
其中,每一路第二图像数据拆分为M/L路第三图像数据。
例如,假设M=8,L=2,则1路第二图像数据拆分为4路第三图像数据。
在一些实现方式中，所述第一路第二图像数据可拆分为第一路第三图像数据、第二路第三图像数据、…、第M/L路第三图像数据，即第一路第三图像数据至第M/L路第三图像数据。
并且,所述第一路第二图像数据包括P行图像数据。其中,P为正整数,P的具体值在此不予限定,可以根据需要进行设定,以便适应不同的图像数据处理需求,例如P=12。
在一些实现方式中,所述拆分处理模块802对第一路第二图像数据进行逐行拆分处理包括:
将所述第一路第二图像数据的第一行图像数据、第二行图像数据、…、第M/L行图像数据分别拆解到所述第一路第三图像数据的第一行、所述第二路第三图像数据的第一行、…、所述第M/L路第三图像数据的第一行；将所述第一路第二图像数据的第(M/L+1)行图像数据、第(M/L+2)行图像数据、…、第2*M/L行图像数据分别拆解到所述第一路第三图像数据的第二行、所述第二路第三图像数据的第二行、…、所述第M/L路第三图像数据的第二行；依此类推，直至将所述第一路第二图像数据的第(P-M/L+1)行图像数据、第(P-M/L+2)行图像数据…、第P行图像数据分别拆解到所述第一路第三图像数据的第(P/(M/L))行、所述第二路第三图像数据的第(P/(M/L))行、所述第M/L路第三图像数据的第(P/(M/L))行。
格式转换模块803用于将所述(N-M)路第一图像数据及所述M路第三图像数据进行格式转换处理,以得到预设格式的彩色图像。
其中,该预设格式可以包括YUV格式。YUV格式的彩色图像分为三个分量,分别为“Y”分量、“U”分量和“V”分量。其中,“Y”分量表示明亮度(Luminance或Luma),也就是灰度值;而“U”分量和“V”分量表示的则是色度(Chrominance或Chroma),作用是描述图像色彩及饱和度,用于指定像素的颜色。
在一些实现方式中,所述第一图像数据及所述第三图像数据均为RGB格式的图像,也即,格式转换模块803进行格式转换处理可以为将RGB格式的图像转换为YUV格式的彩色图像。
其中,所述预设格式的彩色图像中的灰度部分分量Y也即YUV格式的彩色图像的“Y”分量的计算公式为:
Y=((66*R+129*G+25*B+128)/256+16)*16
其中,R为RGB格式的图像中红色分量的强度,G为RGB格式的图像绿色分量的强度,B为RGB格式的图像蓝色分量的强度。
深度图像获取模块804用于将所述预设格式的彩色图像中的灰度部分分量进行图像处理,以得到深度图像。
深度图像的每个像素点的灰度值可用于表征所采集的图像数据对应的场景中某一点距离图像采集设备的远近。
深度图像获取模块804可以利用双目立体视觉方法得到深度图像，该双目立体视觉方法通过两个相隔一定距离的图像采集设备（如2个后视镜头）同时获取同一场景的两幅图像，通过立体匹配算法找到两幅图像中对应的像素点，随后根据三角原理计算出视差信息，而视差信息通过转换可用于得到表征场景中物体远近的深度图像。
在无人机的避障应用中,通过该深度图像以便确定无人机与障碍物间的距离信息,以便确定无人机后续的飞行方向及飞行轨迹,以避免无人机与障碍物发生碰撞。
需要说明的是,在本发明实施例中,所述图像数据处理装置80可执行任意方法实施例所提供的图像数据处理方法,具备执行方法相应的功能模块和有益效果。未在图像数据处理装置80的实施例中详尽描述的技术细节,可参见方法实施例所提供的图像数据处理方法。
实施例3:
图9是本发明实施例提供的图像处理芯片的硬件结构示意图,其中,所述图像处理芯片可为各种类型的芯片,如微处理器、微控制单元、单片机、控制器等等。如图9所示,所述图像处理芯片90包括:
一个或多个处理器901以及存储器902,图9中以一个处理器901为例。
处理器901和存储器902可以通过总线或者其他方式连接,图9中以通过总线连接为例。
存储器902作为一种非易失性计算机可读存储介质,可用于存储非易失性软件程序、非易失性计算机可执行程序以及模块,如本发明实施例中的图像数据处理方法对应的程序指令/模块(例如,附图8所示的接收模块801、拆分处理模块802、格式转换模块803以及深度图像获取模块804)。处理器901通过运行存储在存储器902中的非易失性软件程序、指令以及模块,从而执行图像处理芯片的各种功能应用以及数据处理,即实现所述方法实施例的图像数据处理方法。
存储器902可以包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需要的应用程序；存储数据区可存储根据图像处理芯片使用所创建的数据等。此外，存储器902可以包括高速随机存取存储器，还可以包括非易失性存储器，例如，至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。
在一些实施例中,存储器902可选包括相对于处理器901远程设置的存储器,这些远程存储器可以通过网络连接至处理器901。所述网络的实施例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
所述一个或者多个模块存储在所述存储器902中,当被所述一个或者多个处理器901执行时,执行所述任意方法实施例中的图像数据处理方法,例如,执行以上描述的图3中的方法步骤301至步骤304,实现图8中的模块801-804的功能。
所述图像处理芯片90可执行方法实施例所提供的图像数据处理方法,具备执行方法相应的功能模块和有益效果。未在图像处理芯片实施例中详尽描述的技术细节,可参见方法发明实施例所提供的图像数据处理方法。
本发明实施例提供了一种计算机程序产品,所述计算机程序产品包括存储在非易失性计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,当所述程序指令被计算机执行时,使所述计算机执行如上所述的图像数据处理方法。例如,执行以上描述的图3中的方法步骤301至步骤304,实现图8中的模块801-804的功能。
本发明实施例提供了一种非易失性计算机可读存储介质,所述计算机可读存储介质存储有计算机可执行指令,所述计算机可执行指令用于使计算机执行如上所述的图像数据处理方法。例如,执行以上描述的图3中的方法步骤301至步骤304,实现图8中的模块801-804的功能。
实施例4:
图10是本发明实施例提供的飞行器示意图,所述飞行器100包括:机身(图未示)、设置于所述机身上的N个图像采集设备110以及设置于所述机身内的图像处理芯片120。
其中,机身可以包括中心架以及与中心架连接的一个或多个机臂,一个或多个机臂呈辐射状从中心架延伸出。该机臂的数量可以为2个、4个、6个等等。一个或多个机臂用于承载动力系统。
其中,所述图像处理芯片120与所述N个图像采集设备110连接,所述N个图像采集设备110用于采集N路第一图像数据,该N个图像采集设备110可以为图1或图2中的若干个图像采集设备12,该图像处理芯片120可以为图9中的图像处理芯片90。
需要说明的是,在本发明实施例中对于N个图像采集设备110及图像处理芯片120未详尽描述的技术细节,可参考上述各实施例的具体描述,因此,在此处不再赘述。
在一些实现方式中,N个图像采集设备110包括:2个相互平行安装的前视镜头、2个相互平行安装的下视镜头、2个相互平行安装的后视镜头、2个相互平行安装的上视镜头、2个相互平行安装的左视镜头、2个相互平行安装的右视镜头。
由于图像处理芯片120自身的限制,其可接收的图像数据路数有限,当需要处理的第一图像数据的路数(N)大于图像处理芯片120可接收的图像数据的路数时,为了实现对N路图像数据的处理,N路图像数据在输入到图像处理芯片120之前,部分路数的第一图像数据会被合并处理,以便图像处理芯片120实现当需要处理的图像数据的路数大于图像处理芯片120可接收的图像数据的路数时,也可实现对N路第一图像数据的处理。
其中,所述图像处理芯片120接收K路图像数据包括:接收由2个相互平行安装的前视镜头、2个相互平行安装的下视镜头所采集的未经合并处理的4路第一图像数据;并接收由2个相互平行安装的后视镜头及2个相互平行安装的上视镜头所采集的进行合并处理后所得到的1路第二图像数据,以及接收2个相互平行安装的左视镜头及2个相互平行安装的右视镜头所采集的进行合并处理后所得到的1路第二图像数据。
在一些实施例中,如图11所示,所述飞行器100还包括L个合并模块130。其中,所述L个合并模块130的输入端与用于采集M路第一图像数据的M个图像采集设备连接,所述L个合并模块130的输出端与所述图像处理芯片120连接,所述L个合并模块130用于对所述N路第一图像数据中的M路第一图像数据进行合并处理。
在一些实现方式中，如图11所示，所述L个合并模块130包括第一合并模块1301及第二合并模块1302。
其中,所述第一合并模块1301的输入端与所述2个相互平行安装的后视镜头及所述2个相互平行安装的上视镜头连接,所述第一合并模块1301的输出端与所述图像处理芯片120连接。所述第一合并模块1301用于对所述2个相互平行安装的后视镜头及所述2个相互平行安装的上视镜头所采集的4路第一图像数据进行合并处理。
所述第二合并模块1302的输入端与所述2个相互平行安装的左视镜头及所述2个相互平行安装的右视镜头连接，所述第二合并模块1302的输出端与所述图像处理芯片120连接。所述第二合并模块1302用于对所述2个相互平行安装的左视镜头及所述2个相互平行安装的右视镜头所采集的4路第一图像数据进行合并处理。
假设K=6,所述图像处理芯片120获取K路图像数据的具体过程为:
1、2个相互平行安装的前视镜头采集2路第一图像数据,其中,采集图像数据是同步触发进行的,2个相互平行安装的前视镜头同时采集到一帧第一图像数据之后,可以分别通过输入设备Dev1、输入设备Dev2送入到图像处理芯片120获取得到2路第一图像数据;
2、由于图像处理芯片120获取由2个相互平行安装的下视镜头所采集2路第一图像数据与上述获取由2个相互平行安装的前视镜头所采集2路第一图像数据类似,因此,此处不再赘述;
3、2个相互平行安装的后视镜头及2个相互平行安装的上视镜头采集4路第一图像数据,该4路第一图像数据经第一合并模块1301进行合并后,由输入设备Dev5送入到图像处理芯片120获取得到1路第二图像数据;
4、2个相互平行安装的左视镜头及2个相互平行安装的右视镜头采集4路第一图像数据,该4路第一图像数据经第二合并模块1302进行合并后,由输入设备Dev6送入到图像处理芯片120再获取得到1路第二图像数据。
综上,所述图像处理芯片120获取的K路图像数据中包括4路第一图像数据及2路第二图像数据。
飞行器100作为一种飞行载具，为了实现飞行器100的飞行，其还包括用于实现飞行功能的相关部件。例如，如图12所示，该飞行器100还包括：动力系统、飞行控制系统、云台等。
其中,飞行控制系统设置于机身内,动力系统、云台均安装于机身上,N个图像采集设备110分别搭载于对应的云台上。飞行控制系统可以与动力系统、云台及图像处理芯片120进行耦合,以实现通信。
动力系统可以包括电子调速器(简称为电调)、一个或多个螺旋桨以及与一个或多个螺旋桨相对应的一个或多个第一电机。
其中,第一电机连接在电子调速器与螺旋桨之间,第一电机和螺旋桨设置在对应的机臂上。第一电机用于驱动螺旋桨旋转,从而为飞行器100的飞行提供动力,该动力使得飞行器100能够实现一个或多个自由度的运动,如前后运动、上下运动等等。在一些实施例中,飞行器100可以围绕一个或多个旋转轴旋转。例如,上述旋转轴可以包括横滚轴、平移轴和俯仰轴。
可以理解的是，第一电机可以是直流电机，也可以是交流电机。另外，第一电机可以是无刷电机，也可以是有刷电机。
电子调速器用于接收飞行控制系统产生的驱动信号,并根据驱动信号提供驱动电流给第一电机,以控制第一电机的转速,从而控制飞行器100的飞行。
飞行控制系统具有对飞行器100的飞行和任务进行监控和操纵的能力,包含对飞行器100发射和回收控制的一组设备。飞行控制系统用于实现对飞行器100的飞行的控制。飞行控制系统可以包括传感系统和飞行控制器。
传感系统用于测量飞行器100及飞行器100的各个部件的位置信息和状态信息等等,例如,三维位置、三维角度、三维速度、三维加速度和三维角速度、飞行高度等等。
其中,传感系统可以包括红外传感器、声波传感器、陀螺仪、电子罗盘、惯性测量单元(Inertial Measurement Unit,IMU)、视觉传感器、全球导航卫星系统和气压计等传感器中的至少一种。例如,全球导航卫星系统可以是全球定位系统(Global Positioning System,GPS)。
飞行控制器用于控制飞行器100,如控制飞行器100的飞行或拍摄。可以理解的是,飞行控制器可以按照预先编好的程序指令对飞行器100进行控制,也可以通过响应来自其它设备的一个或多个控制指令对飞行器100进行控制。
例如，遥控器与飞行控制器连接，遥控器将控制指令发送给飞行控制器，从而使得飞行控制器通过该控制指令控制飞行器100。例如，以控制飞行器100的飞行为例，飞行控制器将该控制指令发送给电子调速器以产生驱动信号，并根据驱动信号提供驱动电流给第一电机，以控制第一电机的转速，从而控制飞行器100的飞行。
云台作为一种拍摄辅助设备，用于搭载N个图像采集设备110。云台上设置有第二电机，飞行控制系统可以控制云台，具体的，飞行控制系统通过控制第二电机的运动（如转速），来调节N个图像采集设备拍摄的图像的角度。其中，第二电机可以是无刷电机，也可以是有刷电机。云台可以位于机身的顶部，也可以位于机身的底部。
另外,在本发明实施例中,云台可以作为飞行器100的一部分,可以理解的是,在一些其它实施例中,云台可以独立于飞行器100。
可以理解的是,上述对于飞行器100的各组成部分的命名仅是出于标识的目的,并不应理解为对本发明的实施例的限制。
需要说明的是,以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
通过以上的实施例的描述,本领域普通技术人员可以清楚地了解到各实施例可借助软件加通用硬件平台的方式来实现,当然也可以通过硬件。本领域普通技术人员可以理解实现所述实施例方法中的全部或部分流程是可以通过计算机程序指令相关的硬件来完成,所述的程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如所述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。
最后应说明的是：以上实施例仅用以说明本发明的技术方案，而非对其限制；在本发明的思路下，以上实施例或者不同实施例中的技术特征之间也可以进行组合，步骤可以以任意顺序实现，并存在如上所述的本发明的不同方面的许多其它变化，为了简明，它们没有在细节中提供；尽管参照前述实施例对本发明进行了详细的说明，本领域的普通技术人员应当理解：其依然可以对前述各实施例所记载的技术方案进行修改，或者对其中部分技术特征进行等同替换；而这些修改或者替换，并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (18)

  1. 一种图像数据处理方法,用于对由N个图像采集设备所采集的N路第一图像数据进行处理,其特征在于,所述方法包括:
    接收K路图像数据,所述K路图像数据包含对所述N路第一图像数据中的M路第一图像数据进行合并处理后所得到的L路第二图像数据以及所述N路第一图像数据中未经合并处理的(N-M)路第一图像数据,K<N;
    对所述K路图像数据中的L路第二图像数据进行拆分处理,以得到M路第三图像数据;
    将所述(N-M)路第一图像数据及所述M路第三图像数据进行格式转换处理,以得到预设格式的彩色图像;
    将所述预设格式的彩色图像中的灰度部分分量进行图像处理,以得到深度图像。
  2. 根据权利要求1所述的方法,其特征在于,K小于或等于所能支持的图像数据输入路数的最大值。
  3. 根据权利要求1或2所述的方法,其特征在于,所述L路第二图像数据包括:第一路第二图像数据、第二路第二图像数据、…、第L路第二图像数据;
    所述对所述K路图像数据中的L路第二图像数据进行拆分处理包括:
    分别对所述第一路第二图像数据、所述第二路第二图像数据、…、所述第L路第二图像数据进行逐行拆分处理,其中,每一路第二图像数据拆分为M/L路第三图像数据。
  4. 根据权利要求3所述的方法,其特征在于,所述第一路第二图像数据拆分为第一路第三图像数据、第二路第三图像数据、…、第M/L路第三图像数据,所述第一路第二图像数据包括P行图像数据;
    其中,对第一路第二图像数据进行逐行拆分处理包括:
    将所述第一路第二图像数据的第一行图像数据、第二行图像数据、…、第M/L行图像数据分别拆解到所述第一路第三图像数据的第一行、所述第二路第三图像数据的第一行、…、所述第M/L路第三图像数据的第一行;将所述第一路第二图像数据的第(M/L+1)行图像数据、第(M/L+2)行图像数据、…、第2*M/L行图像数据分别拆解到所述第一路第三图像数据的第二行、所述第二路第三图像数据的第二行、…、所述第M/L路第三图像数据的第二行;依此类推,直至将所述第一路第二图像数据的第(P-M/L+1)行图像数据、第(P-M/L+2)行图像数据…、第P行图像数据分别拆解到所述第一路第三图像数据的第(P/(M/L))行、所述第二路第三图像数据的第(P/(M/L))行、所述第M/L路第三图像数据的第(P/(M/L))行。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述预设格式包括:YUV格式。
  6. 根据权利要求5所述的方法,其特征在于,所述第一图像数据及所述第三图像数据均为RGB格式的图像,所述预设格式的彩色图像中的灰度部分分量的计算公式为:
    Y=((66*R+129*G+25*B+128)/256+16)*16
    其中,R为RGB格式的图像中红色分量的强度,G为RGB格式的图像绿色分量的强度,B为RGB格式的图像蓝色分量的强度。
  7. 一种图像数据处理装置,用于对由N个图像采集设备所采集的N路第一图像数据进行处理,其特征在于,所述装置包括:
    接收模块,用于接收K路图像数据,所述K路图像数据包含对所述N路第一图像数据中的M路第一图像数据进行合并处理后所得到的L路第二图像数据以及所述N路第一图像数据中未经合并处理的(N-M)路第一图像数据;
    拆分处理模块,用于对所述K路图像数据中的L路第二图像数据进行拆分处理,以得到M路第三图像数据;
    格式转换模块，用于将所述(N-M)路第一图像数据及所述M路第三图像数据进行格式转换处理，以得到预设格式的彩色图像；
    深度图像获取模块,用于将所述预设格式的彩色图像中的灰度部分分量进行图像处理,以得到深度图像。
  8. 根据权利要求7所述的装置,其特征在于,K小于或等于所能支持的图像数据输入路数的最大值。
  9. 根据权利要求7或8所述的装置,其特征在于,所述L路第二图像数据包括:第一路第二图像数据、第二路第二图像数据、…、第L路第二图像数据;
    所述拆分处理模块具体用于:
    分别对所述第一路第二图像数据、所述第二路第二图像数据、…、所述第L路第二图像数据进行逐行拆分处理,以得到M路第三图像数据;
    其中,每一路第二图像数据拆分为M/L路第三图像数据。
  10. 根据权利要求9所述的装置,其特征在于,所述第一路第二图像数据拆分为第一路第三图像数据、第二路第三图像数据、…、第M/L路第三图像数据,所述第一路第二图像数据包括P行图像数据;
    其中,所述拆分处理模块对第一路第二图像数据进行逐行拆分处理包括:
    将所述第一路第二图像数据的第一行图像数据、第二行图像数据、…、第M/L行图像数据分别拆解到所述第一路第三图像数据的第一行、所述第二路第三图像数据的第一行、…、所述第M/L路第三图像数据的第一行;将所述第一路第二图像数据的第(M/L+1)行图像数据、第(M/L+2)行图像数据、…、第2*M/L行图像数据分别拆解到所述第一路第三图像数据的第二行、所述第二路第三图像数据的第二行、…、所述第M/L路第三图像数据的第二行;依此类推,直至将所述第一路第二图像数据的第(P-M/L+1)行图像数据、第(P-M/L+2)行图像数据…、第P行图像数据分别拆解到所述第一路第三图像数据的第(P/(M/L))行、所述第二路第三图像数据的第(P/(M/L))行、所述第M/L路第三图像数据的第(P/(M/L))行。
  11. 根据权利要求7-10任一项所述的装置,其特征在于,所述预设格式包括:YUV格式。
  12. 根据权利要求11所述的装置,其特征在于,所述第一图像数据及所述第三图像数据均为RGB格式的图像,所述预设格式的彩色图像中的灰度部分分量Y的计算公式为:
    Y=((66*R+129*G+25*B+128)/256+16)*16
    其中，R为RGB格式的图像中红色分量的强度，G为RGB格式的图像绿色分量的强度，B为RGB格式的图像蓝色分量的强度。
  13. 一种图像处理芯片,其特征在于,包括:
    至少一个处理器;以及,
    与所述至少一个处理器通信连接的存储器;其中,
    所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行权利要求1-6的任一项所述的方法。
  14. 一种飞行器,包括机身,其特征在于,还包括:设置于所述机身上的N个图像采集设备以及设置于所述机身内的图像处理芯片,所述图像处理芯片与所述N个图像采集设备连接,所述N个图像采集设备用于采集N路第一图像数据,所述图像处理芯片为权利要求13所述的图像处理芯片。
  15. 根据权利要求14所述的飞行器,其特征在于,N个图像采集设备包括:2个相互平行安装的前视镜头、2个相互平行安装的下视镜头、2个相互平行安装的后视镜头、2个相互平行安装的上视镜头、2个相互平行安装的左视镜头、2个相互平行安装的右视镜头。
  16. 根据权利要求15所述的飞行器，其特征在于，所述图像处理芯片接收K路图像数据包括：
    接收由2个相互平行安装的前视镜头、2个相互平行安装的下视镜头所采集的未经合并处理的4路第一图像数据;并接收由2个相互平行安装的后视镜头及2个相互平行安装的上视镜头所采集的进行合并处理后所得到的1路第二图像数据,以及接收2个相互平行安装的左视镜头及2个相互平行安装的右视镜头所采集的进行合并处理后所得到的1路第二图像数据。
  17. 根据权利要求16所述的飞行器,其特征在于,所述飞行器还包括L个合并模块,所述L个合并模块的输入端与用于采集M路第一图像数据的M个图像采集设备连接,所述L个合并模块的输出端与所述图像处理芯片连接,所述L个合并模块用于对所述N路第一图像数据中的M路第一图像数据进行合并处理。
  18. 根据权利要求17所述的飞行器,其特征在于,所述L个合并模块包括第一合并模块及第二合并模块,所述第一合并模块的输入端与所述2个相互平行安装的后视镜头及所述2个相互平行安装的上视镜头连接,所述第一合并模块的输出端与所述图像处理芯片连接,所述第二合并模块的输入端与所述2个相互平行安装的左视镜头及所述2个相互平行安装的右视镜头连接,所述第二合并模块的输出端与所述图像处理芯片连接。
PCT/CN2020/083769 2019-04-12 2020-04-08 一种图像数据处理方法、装置、图像处理芯片及飞行器 WO2020207411A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/490,635 US11949844B2 (en) 2019-04-12 2021-09-30 Image data processing method and apparatus, image processing chip, and aircraft

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910294084.7A CN110009595B (zh) 2019-04-12 2019-04-12 一种图像数据处理方法、装置、图像处理芯片及飞行器
CN201910294084.7 2019-04-12

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/490,635 Continuation US11949844B2 (en) 2019-04-12 2021-09-30 Image data processing method and apparatus, image processing chip, and aircraft

Publications (1)

Publication Number Publication Date
WO2020207411A1 true WO2020207411A1 (zh) 2020-10-15

Family

ID=67171485

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/083769 WO2020207411A1 (zh) 2019-04-12 2020-04-08 一种图像数据处理方法、装置、图像处理芯片及飞行器

Country Status (3)

Country Link
US (1) US11949844B2 (zh)
CN (1) CN110009595B (zh)
WO (1) WO2020207411A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009595B (zh) * 2019-04-12 2022-07-26 深圳市道通智能航空技术股份有限公司 一种图像数据处理方法、装置、图像处理芯片及飞行器
CN110933364A (zh) * 2019-10-25 2020-03-27 深圳市道通智能航空技术有限公司 全向视觉避障实现方法、系统、装置及存储介质
US20230316740A1 (en) * 2022-03-31 2023-10-05 Wing Aviation Llc Method for Controlling an Unmanned Aerial Vehicle to Avoid Obstacles

Citations (6)

Publication number Priority date Publication date Assignee Title
CN104427218A (zh) * 2013-09-02 2015-03-18 北京计算机技术及应用研究所 超高清ccd图像多通道采集与实时传输系统及方法
WO2016149037A1 (en) * 2015-03-16 2016-09-22 Sikorsky Aircraft Corporation Flight initiation proximity warning system
CN107005657A (zh) * 2016-10-13 2017-08-01 深圳市大疆创新科技有限公司 处理数据的方法、装置、芯片和摄像头
CN109041591A (zh) * 2017-09-12 2018-12-18 深圳市大疆创新科技有限公司 图像传输方法、设备、可移动平台、监控设备及系统
CN109496328A (zh) * 2016-06-08 2019-03-19 亚马逊科技公司 用于立体图像的选择性配对成像元件
CN110009595A (zh) * 2019-04-12 2019-07-12 深圳市道通智能航空技术有限公司 一种图像数据处理方法、装置、图像处理芯片及飞行器

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
WO2010048632A1 (en) * 2008-10-24 2010-04-29 Real D Stereoscopic image format with depth information
US8836765B2 (en) * 2010-11-05 2014-09-16 Chung-Ang University Industry-Academy Cooperation Foundation Apparatus and method for generating a fully focused image by using a camera equipped with a multi-color filter aperture
TWI594616B (zh) * 2012-06-14 2017-08-01 杜比實驗室特許公司 用於立體及自動立體顯示器之深度圖傳遞格式
CN103220545B (zh) * 2013-04-28 2015-05-06 上海大学 一种立体视频实时深度估计系统硬件实现方法
CN103957398B (zh) * 2014-04-14 2016-01-06 北京视博云科技有限公司 一种立体图像的采样、编码及解码方法及装置
US10567635B2 (en) * 2014-05-15 2020-02-18 Indiana University Research And Technology Corporation Three dimensional moving pictures with a single imager and microfluidic lens
JP2018514748A (ja) * 2015-02-06 2018-06-07 ザ ユニバーシティ オブ アクロンThe University of Akron 光学撮像システムおよびその方法
CN107026959A (zh) * 2016-02-01 2017-08-08 杭州海康威视数字技术股份有限公司 一种图像采集方法及图像采集设备
CN107306347A (zh) * 2016-04-18 2017-10-31 中国科学院宁波材料技术与工程研究所 一种基于拼接式全景摄像机的实时视频流传输方法
CN107972582A (zh) * 2016-10-25 2018-05-01 北京计算机技术及应用研究所 一种全高清环车俯视显示系统
CN107424187B (zh) * 2017-04-17 2023-10-24 奥比中光科技集团股份有限公司 深度计算处理器、数据处理方法以及3d图像设备
US10716643B2 (en) * 2017-05-05 2020-07-21 OrbisMV LLC Surgical projection system and method
US10229537B2 (en) * 2017-08-02 2019-03-12 Omnivor, Inc. System and method for compressing and decompressing time-varying surface data of a 3-dimensional object using a video codec

Also Published As

Publication number Publication date
CN110009595A (zh) 2019-07-12
CN110009595B (zh) 2022-07-26
US11949844B2 (en) 2024-04-02
US20220103799A1 (en) 2022-03-31

Similar Documents

Publication Publication Date Title
US10802509B2 (en) Selective processing of sensor data
US10488500B2 (en) System and method for enhancing image resolution
US11423792B2 (en) System and method for obstacle avoidance in aerial systems
US10475209B2 (en) Camera calibration
WO2020207411A1 (zh) 一种图像数据处理方法、装置、图像处理芯片及飞行器
US10789722B2 (en) Processing images to obtain environmental information
WO2018210078A1 (zh) 无人机的距离测量方法以及无人机
US11057604B2 (en) Image processing method and device
US20180275659A1 (en) Route generation apparatus, route control system and route generation method
CN111683193A (zh) 图像处理装置
WO2018211926A1 (ja) 画像生成装置、画像生成システム、画像生成方法、及び画像生成プログラム
CN109949381B (zh) 图像处理方法、装置、图像处理芯片、摄像组件及飞行器
JP2024072827A (ja) 制御装置、撮像システム及び撮像方法
WO2021014752A1 (ja) 情報処理装置、情報処理方法、情報処理プログラム
WO2021035746A1 (zh) 图像处理方法、装置和可移动平台
WO2020107487A1 (zh) 图像处理方法和无人机
JP7501535B2 (ja) 情報処理装置、情報処理方法、情報処理プログラム
WO2023039752A1 (zh) 无人飞行器及其控制方法、系统和存储介质
JP2022040134A (ja) 推定システムおよび自動車
JP2021154857A (ja) 操縦支援装置、操縦支援方法、及びプログラム
Bhandari et al. Flight Test Results of the Collision Avoidance System for a Fixed-Wing UAS using Stereoscopic Vision

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20786767

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20786767

Country of ref document: EP

Kind code of ref document: A1