WO2022000300A1 - Image processing method, image acquisition device, unmanned aerial vehicle, unmanned aerial vehicle system, and storage medium - Google Patents

Image processing method, image acquisition device, unmanned aerial vehicle, unmanned aerial vehicle system, and storage medium Download PDF

Info

Publication number
WO2022000300A1
WO2022000300A1 (PCT/CN2020/099408; CN2020099408W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
target area
infrared
processor
Prior art date
Application number
PCT/CN2020/099408
Other languages
English (en)
French (fr)
Inventor
王黎
罗东阳
张青涛
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/099408 (WO2022000300A1)
Priority to CN202080005578.XA (CN112840374A)
Publication of WO2022000300A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64CAEROPLANES; HELICOPTERS
    • B64C27/00Rotorcraft; Rotors peculiar thereto
    • B64C27/04Helicopters
    • B64C27/08Helicopters with two or more rotors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64DEQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D47/00Equipment not otherwise provided for
    • B64D47/08Arrangements of cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U10/00Type of UAV
    • B64U10/10Rotorcrafts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00UAVs specially adapted for particular uses or applications
    • B64U2101/30UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method, an image acquisition device, an unmanned aerial vehicle, an unmanned aerial vehicle system, and a computer-readable storage medium.
  • At present, in order to monitor targets over a wide area, a fixed-focus short-focus camera is generally selected for image acquisition. Although close-range targets can be followed well and their detailed features obtained, the shooting accuracy for slightly farther targets drops significantly, so that even if the target is captured, its own characteristics cannot be restored, which impairs the monitoring effect.
  • Embodiments of the present application provide an image processing method, an image acquisition device, an unmanned aerial vehicle, an unmanned aerial vehicle system, and a computer-readable storage medium.
  • Embodiments of the present application provide an image processing method applied to an image acquisition device. The image acquisition device includes a first camera and a second camera, and the focal length of the first camera is smaller than that of the second camera. The image processing method includes: acquiring a first image captured by the first camera and identifying a target area; determining shooting parameters of the second camera according to the position of the target area in the first image; controlling the second camera to acquire a second image according to the shooting parameters; and fusing the first image and the second image to obtain a fused image.
  • An embodiment of the present application further provides an image acquisition device. The image acquisition device includes a first camera, a second camera, and a processor, and the focal length of the first camera is smaller than that of the second camera. The processor is configured to: acquire a first image captured by the first camera and identify a target area; determine shooting parameters of the second camera according to the position of the target area in the first image; control the second camera to acquire a second image according to the shooting parameters; and fuse the first image and the second image to obtain a fused image.
  • Embodiments of the present application further provide an unmanned aerial vehicle. The unmanned aerial vehicle includes a body, an image acquisition device, and a processor, the image acquisition device being disposed on the body. The image acquisition device includes a first camera and a second camera, and the focal length of the first camera is smaller than that of the second camera. The processor is configured to: acquire a first image captured by the first camera and identify a target area; determine shooting parameters of the second camera according to the position of the target area in the first image; control the second camera to acquire a second image according to the shooting parameters; and fuse the first image and the second image to obtain a fused image.
  • Embodiments of the present application further provide an unmanned aerial vehicle system. The unmanned aerial vehicle system includes an unmanned aerial vehicle, an image acquisition device, and a processor. The image acquisition device includes a first camera and a second camera, and the focal length of the first camera is smaller than that of the second camera. The processor is configured to execute the image processing method of the above embodiments.
  • Embodiments of the present application also provide a computer-readable storage medium containing computer-executable instructions.
  • When the computer-executable instructions are executed by one or more processors, they cause the processors to perform the image processing method of the above-described embodiments.
  • In the image processing method, image acquisition device, unmanned aerial vehicle, unmanned aerial vehicle system, and computer-readable storage medium of the embodiments of the present application, the first camera, with the smaller focal length, captures a first image covering a larger field of view, while the second camera, with the larger focal length, captures a second image containing the detailed features of the target. By fusing the first image and the second image into a fused image, an image over a large field of view can be acquired without easily losing the target, and the detailed features of the target can also be obtained, realizing large-field-of-view, high-precision shooting of the target. This gives a good monitoring effect on both near and far targets and can be applied in fields such as security, search and rescue, patrol inspection, and infrared monitoring of long-distance targets.
  • FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 2 is a schematic structural diagram of an unmanned aerial vehicle system according to some embodiments of the present application.
  • FIG. 3 is a schematic diagram of the principle of an image processing method according to some embodiments of the present application.
  • FIG. 4 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 5 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 6 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 7 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 8 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 9 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 10 is a schematic diagram of connection between a processor and a computer-readable storage medium according to some embodiments of the present application.
  • In the description of the present application, it should be noted that, unless otherwise expressly specified and defined, the terms "installed", "connected", and "coupled" should be understood broadly: the connection may be fixed, detachable, or integral; mechanical, electrical, or communicative; direct, or indirect through an intermediate medium; and it may be internal communication between two elements or an interactive relationship between two elements.
  • At present, in order to monitor targets over a wide area, a fixed-focus short-focus camera is generally selected for image acquisition. Although close-range targets can be followed well and their detailed features obtained, the shooting accuracy for slightly farther targets drops significantly, so that even if the target is captured, its own characteristics cannot be restored, and the monitoring effect on long-distance targets is poor.
  • If a telephoto camera is used for image acquisition in order to monitor long-distance targets, the detailed features of those targets can be obtained, but the small field of view makes the monitoring range small, the target is easily lost, and the monitoring effect on close-range targets is also poor.
  • Although a zoom camera can monitor both long-distance and close-range targets by zooming, it can only remain in either the telephoto state or the short-focus state at any given time. In the telephoto state, the monitoring range is small and the target is easily lost; in the short-focus state, the detailed features of long-distance targets are insufficiently captured, and it is difficult to obtain wide-range, high-precision images of the target.
  • an embodiment of the present application provides an image processing method, which is applied to an image acquisition device 10 .
  • the image acquisition device 10 includes a first camera 11 and a second camera 12 .
  • the focal length of the first camera 11 is smaller than the focal length of the second camera 12, and the image processing method includes:
  • 011: acquiring a first image captured by the first camera 11 and identifying a target area;
  • 012: determining shooting parameters of the second camera 12 according to the position of the target area in the first image;
  • 013: controlling the second camera 12 to acquire a second image according to the shooting parameters; and
  • 014: fusing the first image and the second image to obtain a fused image.
  • The embodiment of the present application further provides an image acquisition device 10. The image acquisition device 10 further includes a processor 13, and the processor 13 is configured to: acquire a first image captured by the first camera 11 and identify a target area; determine shooting parameters of the second camera 12 according to the position of the target area in the first image; control the second camera 12 to acquire a second image according to the shooting parameters; and fuse the first image and the second image to obtain a fused image. That is, steps 011 to 014 may be implemented by the processor 13.
  • the embodiment of the present application further provides an unmanned aerial vehicle 100 .
  • the unmanned aerial vehicle 100 includes a fuselage 20 , an image acquisition device 10 and a processor 13 , and the image acquisition device 10 is disposed on the fuselage 20 .
  • the image acquisition device 10 is arranged on the UAV 100 to cooperate with the UAV 100 to acquire and process images.
  • the image acquisition apparatus 10 can also be used as a separate device or as a component on other monitoring platforms to achieve image acquisition and processing.
  • the first camera 11 may be a short-focus camera, and its focal length may be determined according to the field of view required to be captured.
  • the first camera 11 can capture a first image with a wide field of view.
  • The first image may contain one or more targets, and the targets may be far away or nearby. For a close-range target whose detailed features can already be captured, a first image containing those detailed features can be obtained directly, and multiple first images of the target can be shot continuously to realize tracking shooting of that target. For a long-distance target, the target area where the target is located is identified in the first image, and the position of the target area in the first image is determined.
  • In one example, in monitoring scenarios with few people, such as monitoring suspicious targets in the wild or protecting major facilities, the first camera 11 may be an infrared camera (i.e., a first infrared camera), in which case the first image is a first infrared image. An infrared image can be used to characterize the temperature and morphological features of the target.
  • The processor 13 can use the pixel values and/or temperature values of the first infrared image, exploiting the pixel-value and temperature-value differences caused by the difference in thermal radiation between a human body and ordinary objects, to quickly identify the target and determine the target area. The target area can be a fixed-size area containing the target, such as a rectangular box area or a circular box area.
  • For example, based on a preset temperature-measurement algorithm, the processor 13 calculates the temperature values of different regions in the first infrared image from its pixel values, identifies the target based on the temperature difference between a human body and ordinary objects, and then determines the target area. As another example, the processor 13 may perform image feature matching according to the pixel-value distribution of the first infrared image (e.g., matching image features against image templates of a human-feature database) to identify the target and determine the target area.
  • As yet another example, the processor 13 can identify the target based on both the temperature values and the pixel values so as to determine the target area accurately: for instance, it may first use the temperature differences to determine one or more possible types of the target (such as a person, cat, dog, car, or tree), and then select the corresponding image templates for image feature matching according to the determined type(s), thereby identifying the target more quickly and accurately and reducing the probability of misjudgment.
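  • For illustration of the temperature-based identification described above, the following is a minimal sketch in Python. It assumes a 16-bit radiometric infrared frame with a simple linear gain/offset temperature model and an illustrative 30-40 °C body-surface band; the function name, radiometric constants, thresholds, and fixed box size are assumptions for illustration, not values from the patent.

```python
import cv2
import numpy as np

def find_thermal_targets(ir_image, gain=0.04, offset=-273.15,
                         t_min=30.0, t_max=40.0, box_size=64):
    """Locate human-like heat signatures in a 16-bit radiometric IR frame.

    gain/offset form an assumed linear radiometric model (counts -> degC);
    t_min/t_max is an illustrative body-surface temperature band."""
    temperature = ir_image.astype(np.float32) * gain + offset  # per-pixel degC
    mask = ((temperature >= t_min) & (temperature <= t_max)).astype(np.uint8)
    # Connected components give one candidate region per warm blob.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    targets = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < 20:
            continue  # reject noise speckles
        cx, cy = centroids[i]
        # Fixed-size rectangular target area centred on the blob, as in the text.
        x0, y0 = int(cx - box_size // 2), int(cy - box_size // 2)
        targets.append((x0, y0, box_size, box_size))
    return targets
```

  • In practice such a temperature gate would be combined with the template-matching route described above to reduce misjudgment.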
  • In another example, the first camera 11 may be a visible light camera (i.e., a first visible light camera), which can be used to acquire the morphological features of the target; the first image is then a first visible light image. The processor 13 may identify the target based on a preset image recognition algorithm, for example by extracting image features and comparing them with image models in a database, so as to determine the target area.
  • the processor 13 may determine the shooting parameters of the second camera 12 according to the position of the target area in the first image.
  • the second camera 12 can be a telephoto camera, and its focal length can be determined according to the range of the field of view captured and the maximum distance that can obtain the detailed features of the target.
  • It should be understood that the target area may also be identified in other ways, for example manually, which is not limited in the embodiments of the present application.
  • The shooting parameters include at least one of a shooting angle, a shooting focal length, and a zoom factor.
  • For example, when the second camera 12 is a fixed-focus camera, its focal length and field of view are fixed; it is then only necessary to adjust the shooting angle of the second camera 12 so that its field of view covers the target in order to acquire the second image.
  • As another example, when the second camera 12 is a zoom camera, the shooting focal length, zoom factor, and shooting angle can all be adjusted so that the field of view covers the target, thereby acquiring the second image. By adjusting the shooting focal length and zoom factor, the field of view can be made to just cover the target so that the target essentially occupies the entire second image; the detailed features of the target are then obtained with maximum precision while keeping the target complete.
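  • For illustration of the zoom adjustment just described, here is a minimal sketch of how the required zoom factor might be chosen so that the target nearly fills the telephoto frame. The margin parameter, the focal_ratio definition, and the assumption that the target's apparent size scales linearly with the focal-length ratio are illustrative, not from the patent.

```python
def zoom_to_fill(target_w_px, target_h_px, wide_w_px, wide_h_px,
                 focal_ratio, margin=0.9):
    """Zoom factor (relative to the telephoto camera's base focal length)
    that makes the target nearly fill the frame.

    target_*_px: target-area size in the wide (first) image;
    focal_ratio: base telephoto focal length / wide-camera focal length,
    so the target already appears focal_ratio times larger at 1x zoom."""
    # Fraction of the telephoto frame the target occupies at 1x zoom.
    fill_w = focal_ratio * target_w_px / wide_w_px
    fill_h = focal_ratio * target_h_px / wide_h_px
    # Zoom until the larger dimension reaches `margin` of the frame,
    # keeping a small border so the target stays complete.
    return margin / max(fill_w, fill_h)
```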
  • The second camera 12 may be a rotating camera, which can adjust its own rotation angle through its built-in driving components.
  • The second camera 12 can also be mounted on the gimbal 30 of the UAV 100, with the gimbal 30 controlling the shooting angle of the second camera 12. For example, the shooting angle includes a pitch angle and a yaw angle, enabling omnidirectional angle adjustment: the gimbal 30 includes a pitch axis and a yaw axis, the pitch angle is adjusted by rotating the pitch axis, and the yaw angle is adjusted by rotating the yaw axis. Since camera lenses are mostly circular, no roll-axis adjustment is needed, so the structure of the gimbal 30 is relatively simple.
  • As another example, a mirror assembly can be arranged in front of the second camera 12, and a motor can adjust the rotation angle of the mirror, thereby adjusting the direction of incident light so that the light reflected by the target lies exactly along that direction; the second camera 12 then receives the light reflected by the target (i.e., the field of view of the second camera 12 covers the target). It should be understood that the embodiments of the present application do not limit the manner in which the second camera acquires the second image.
  • the first image and the second image are fused to obtain a fused image.
  • the second image P2 and the part S corresponding to the target area in the first image P1 are fused, for example, the part S corresponding to the target area is replaced with the second image P2 to obtain the fusion image P3.
  • the fusion image P3 contains the detailed features of the target, which can be applied to fields such as security, search and rescue, patrol inspection, and infrared monitoring of long-distance targets.
  • It can be understood that the first image covers a large field of view containing the target, while the second image covers a small field of view containing the target. That is to say, the content of the second image corresponds to the content of the target area in the first image, so during fusion the target area in the first image can be directly replaced with the second image, which contains the target's details. The fused image then includes both the large-field-of-view content of the first image and the detailed features of the area where the target is located, thereby achieving a high-precision fused image with a large field of view containing the target.
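  • A minimal sketch of this replace-the-region fusion, assuming the fusion part has already been located (e.g., by the feature matching described later) as a pixel rectangle in the first image; fuse_images and fusion_box are illustrative names, not from the patent.

```python
import cv2

def fuse_images(first_image, second_image, fusion_box):
    """Replace the part S of the wide-angle image P1 that corresponds to the
    target area with the telephoto image P2 (cf. FIG. 3).

    fusion_box = (x, y, w, h) is assumed to have been located beforehand."""
    x, y, w, h = fusion_box
    fused = first_image.copy()
    # Bring the telephoto image to the fusion part's pixel size, then paste.
    fused[y:y + h, x:x + w] = cv2.resize(second_image, (w, h))
    return fused
```

  • Resizing here stands in for the size (resolution) conversion discussed below; either image may be scaled instead, depending on the desired output resolution.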
  • In the image processing method, image acquisition device 10, and UAV 100 of the present application, a first image with a large field of view captured by the first camera 11 (smaller focal length) is fused with a second image containing the target's detailed features captured by the second camera 12 (larger focal length) to obtain a fused image. As a result, an image over a large field of view can be acquired without easily losing the target, and the target's detailed features can also be obtained, realizing large-field-of-view, high-precision shooting of the target with a good monitoring effect on both near and far targets.
  • step 012 includes:
  • 0121 establish a coordinate system based on the first image
  • 0122 determine the location coordinates of the target area in the first image
  • 0123 Determine a shooting angle corresponding to the position coordinates based on a preset mapping table.
  • the processor 13 is further configured to establish a coordinate system based on the first image; determine position coordinates of the target area in the first image; and determine a shooting angle corresponding to the position coordinates based on a preset mapping table. That is, step 0121 , step 0122 and step 0123 may be implemented by the processor 13 .
  • Specifically, before the image acquisition device 10 leaves the factory, it must be calibrated: in addition to calibrating the extrinsic and intrinsic parameters of the first camera 11 and the second camera 12, the target area and the shooting angle must also be calibrated. It can be understood that, as the target's position in the first image changes, the shooting angle of the second camera 12 changes accordingly; that is, there is a correspondence between the target's position in the first image and the shooting angle. A mapping table between target areas and shooting angles can therefore be established by calibration before the device leaves the factory. A coordinate system is then established based on the first image to obtain the position coordinates of the target area in the first image, and the processor 13 can quickly read the corresponding shooting angle from the mapping table according to the position coordinates.
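  • A minimal sketch of the table lookup, assuming the factory calibration produced a dictionary keyed by calibration-cell index and that the cells tile the first image as a regular grid; both the grid layout and the names are illustrative assumptions about how such a table might be stored.

```python
def lookup_shooting_angle(mapping_table, position, cell_w, cell_h):
    """Read the (pitch, yaw) shooting angle for a target's position
    coordinates in the first image's coordinate system.

    mapping_table: e.g. {(row, col): (pitch_deg, yaw_deg)}, built at
    calibration time; cell_w/cell_h: calibration-cell size in pixels."""
    col = int(position[0] // cell_w)  # which calibration cell contains
    row = int(position[1] // cell_h)  # the target's (x, y) coordinates
    return mapping_table[(row, col)]
```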
  • Referring to FIG. 2 and FIG. 5, in some embodiments, before step 012, the image processing method further includes:
  • 015 control the first camera 11 to shoot a calibration plate to obtain a first calibration image, the calibration plate includes a plurality of feature regions, the first calibration image includes a calibration region corresponding to the feature regions, and the calibration plate covers the field of view of the first camera 11;
  • 016 control the optical axis of the second camera 12 to align with the center of each feature region in turn, and acquire the rotation angle corresponding to each feature region; and
  • 017 establish a mapping table based on the calibration regions corresponding to the feature regions and the rotation angles corresponding to the feature regions.
  • In some embodiments, the processor 13 is further configured to: control the first camera 11 to photograph a calibration plate to obtain a first calibration image, where the calibration plate includes a plurality of feature regions, the first calibration image includes calibration regions corresponding to the feature regions, and the calibration plate covers the field of view of the first camera 11; control the optical axis of the second camera 12 to align with the center of each feature region in turn and acquire the corresponding rotation angles; and establish a mapping table based on the calibration regions corresponding to the feature regions and the corresponding rotation angles. That is, steps 015, 016, and 017 may be implemented by the processor 13.
  • Specifically, during calibration, the processor 13 first controls the first camera 11 to photograph a calibration plate to obtain a first calibration image; the calibration plate includes a plurality of feature regions, and the first calibration image includes calibration regions corresponding to the feature regions. During this calibration shot, the calibration plate covers the field of view of the first camera 11 (for example, the field of view is exactly covered by the plate, or lies entirely within the plate so that only part of the plate is photographed), so that the first calibration image consists essentially of the calibration regions corresponding to the feature regions.
  • Then, with the calibration plate held fixed, the processor 13 controls the optical axis of the second camera 12 to align with the center of a feature region, acquires the rotation angle of the second camera 12 at that moment, and establishes a mapping between the rotation angle and the calibration region corresponding to that feature region. Repeating this calibration process for every feature region yields the mapping between calibration regions and rotation angles for all feature regions on the calibration plate, thereby building the complete mapping table.
  • The shape of the feature regions may match the shape of the field of view of the second camera 12; for example, if the field of view of the second camera 12 is rectangular, the feature regions are rectangular. When aligning the optical axis of the second camera 12 with the center of a feature region, alignment can be confirmed by checking whether the image features at the center of the second image match those at the center of the corresponding calibration region, which helps improve calibration accuracy.
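  • A minimal sketch of this calibration loop; aim_second_camera and read_rotation_angle are hypothetical hardware hooks standing in for the real gimbal and camera control API, and the cell-index keys match the lookup sketch given earlier.

```python
def build_mapping_table(feature_region_centers, aim_second_camera,
                        read_rotation_angle):
    """Factory calibration loop sketched above: aim the second camera's
    optical axis at the centre of each feature region and record the angle.

    feature_region_centers: {(row, col): (x_px, y_px)} centres of the
    feature regions as they appear in the first calibration image."""
    table = {}
    for cell_index, center in feature_region_centers.items():
        aim_second_camera(center)                  # align optical axis with centre
        table[cell_index] = read_rotation_angle()  # (pitch, yaw) for this cell
    return table
```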
  • step 0123 includes:
  • 0123a Determine the corresponding target calibration area according to the center position coordinates of the target area
  • 0123b Determine the shooting angle corresponding to the target calibration area based on the preset mapping table.
  • the processor 13 is further configured to determine the corresponding target calibration area according to the center position coordinates of the target area; and determine the shooting angle corresponding to the target calibration area based on a preset mapping table. That is, steps 0123a and 0123b may be implemented by the processor 13 .
  • Specifically, when the processor 13 reads the corresponding shooting angle from the mapping table according to the position coordinates, the shooting angle is keyed to a calibration area, so the target calibration area corresponding to the target area must be determined first. Concretely, the processor 13 first determines the center position coordinates of the target area, then takes the calibration area containing those center coordinates as the target calibration area, and finally reads the shooting angle corresponding to the target calibration area from the mapping table.
  • step 013 includes:
  • 0131 control the second camera 12 to acquire a test image according to the shooting parameters
  • 0132 calculate the degree of offset between the test image and the image of the target area; and
  • 0133 adjust the shooting angle according to the degree of offset, and control the second camera 12 to acquire the second image at the adjusted shooting angle.
  • In some embodiments, the processor 13 is further configured to: control the second camera 12 to acquire a test image according to the shooting parameters; calculate the degree of offset between the test image and the image of the target area; and adjust the shooting angle according to the degree of offset, controlling the second camera 12 to acquire the second image at the adjusted shooting angle. That is, steps 0131, 0132, and 0133 may be implemented by the processor 13.
  • Specifically, the center position of the target area and the center position of the target calibration area may differ. Therefore, the processor 13 can control the second camera 12 to acquire a test image at the looked-up shooting angle and then compare the test image with the image of the target area: for example, it determines the matching region in the first image that matches the test image, calculates the deviation between the center position coordinates of the matching region and those of the target area, and determines the offset from that deviation.
  • For example, the offset includes a yaw adjustment angle and a pitch adjustment angle: the larger the horizontal (abscissa) deviation between the center coordinates of the matching region and those of the target area, the larger the yaw angle to be adjusted; the larger the vertical (ordinate) deviation, the larger the pitch angle to be adjusted. The shooting angle can thus be further refined using image features, improving the accuracy of the acquired shooting angle.
  • In other embodiments, after obtaining the shooting angle corresponding to the target calibration area from the initial mapping table, the processor 13 can directly calculate the offset from the difference between the center position of the target area and the center position of the target calibration area, and then further adjust the shooting angle according to the offset to improve its accuracy.
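  • A minimal sketch of the image-based offset estimate, using normalized cross-correlation template matching to find the matching region. It assumes the test image has been rescaled to the first image's pixel scale and that the degrees-per-pixel factors come from the camera calibration; all names are illustrative.

```python
import cv2

def angle_offset(first_image, test_image, target_center,
                 deg_per_px_yaw, deg_per_px_pitch):
    """Estimate the residual (yaw, pitch) adjustment from the offset between
    the region matching the test shot and the target area."""
    scores = cv2.matchTemplate(first_image, test_image, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(scores)      # best-match location
    h, w = test_image.shape[:2]
    match_center = (top_left[0] + w / 2, top_left[1] + h / 2)
    dx = target_center[0] - match_center[0]        # abscissa deviation -> yaw
    dy = target_center[1] - match_center[1]        # ordinate deviation -> pitch
    return dx * deg_per_px_yaw, dy * deg_per_px_pitch
```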
  • step 013 further includes:
  • 0134 control the second camera 12 to shoot in a time-division manner according to a plurality of shooting parameters to obtain a plurality of second images, the second images corresponding one-to-one to the target areas.
  • In some embodiments, the processor 13 is further configured to control the second camera 12 to shoot in a time-division manner according to a plurality of shooting parameters to obtain a plurality of second images, the second images corresponding one-to-one to the target areas. That is, step 0134 may be implemented by the processor 13.
  • Specifically, in practical application scenarios (such as detecting suspicious targets in the wild) there may be multiple targets and hence multiple target areas. For the processor 13 to obtain the detailed features of all the targets, it needs to control the second camera 12 to shoot in a time-division manner, obtaining multiple second images, each containing the detailed features of the corresponding target.
  • During time-division shooting, the shooting parameters corresponding to each target area are first obtained from the mapping table according to the positions of the multiple target areas in the first image, and the second camera 12 is then controlled with each of the obtained shooting parameters in turn to capture the multiple second images, which correspond one-to-one to the target areas.
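  • A minimal sketch of this time-division loop; look_up_params, set_params, and capture are hypothetical hooks for the mapping-table lookup and the second camera's control API, not functions named in the patent.

```python
def capture_all_targets(target_areas, look_up_params, set_params, capture):
    """Time-division shooting: one telephoto frame per detected target area,
    in one-to-one correspondence with the target areas."""
    second_images = []
    for x, y, w, h in target_areas:
        center = (x + w / 2, y + h / 2)
        set_params(look_up_params(center))   # steer the second camera
        second_images.append(capture())      # detailed shot of this target
    return second_images
```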
  • During fusion, each second image is fused with the part of the first image corresponding to its target area to obtain the fused image.
  • The size of the target area is preset, and the extent of the scene contained in the target area may not coincide with the extent of the scene contained in the second image. Therefore, during fusion, the fusion part of each second image within the first image (i.e., the part of the first image corresponding to the target area) must be determined first. For example, the processor 13 performs feature matching between the second image and the first image, determining the fusion part by matching the boundary feature points of the second image, and then fuses the second image with the fusion part (e.g., replaces the fusion part with the second image). Each second image is fused with its corresponding fusion part in this way, yielding a fused image that contains the detailed features of multiple targets at the same time.
  • As another example, once the focal lengths of the first camera 11 and the second camera 12 are fixed, the overlapping portion of their fields of view is determined, and the processor 13 can determine the proportion of that overlapping portion (the part corresponding to the target area) within the first image (i.e., the size of the fusion part) from the ratio of the focal lengths of the first camera 11 and the second camera 12.
  • In one embodiment, the processor 13 may take the center of the target area as the center of the fusion part, using the proportion/size of the fusion part, so as to determine the fusion part accurately.
  • Of course, considering the size difference between the fusion part and the second image, a size (e.g., resolution) conversion must be performed first so that the two are essentially the same size before the fusion part is replaced with the second image; only then can the fusion part contain all the image features of the second image.
  • For example, the processor 13 may first enlarge the first image according to the size ratio between the fusion part and the second image, so that the enlarged fusion part (i.e., the part of the enlarged first image corresponding to the target area) matches the size of the second image, and then replace the enlarged fusion part with the second image to obtain the fused image. Alternatively, the processor 13 may first shrink the second image according to that size ratio so that the reduced second image matches the size of the fusion part, and then replace the fusion part with the reduced second image. As yet another alternative, the processor 13 may scale the first image to a first size and the second image to a second size according to the size ratio, such that the scaled fusion part (i.e., the part of the scaled first image corresponding to the target area) equals the second size, and then replace the scaled fusion part with the scaled second image to obtain the fused image.
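  • A minimal sketch of the first of the three scaling strategies above (enlarging the first image until the fusion part matches the second image's size, then replacing it). A uniform aspect ratio between the fusion part and the second image is assumed, and clipping at the image border is ignored for brevity.

```python
import cv2

def fuse_by_enlarging_first(first_image, second_image, fusion_box):
    """Enlarge the wide image so its fusion part matches the telephoto
    image's pixel size, then replace the fusion part.

    fusion_box = (x, y, w, h) in first-image coordinates."""
    x, y, w, h = fusion_box
    tele_h, tele_w = second_image.shape[:2]
    scale = tele_w / w                                   # enlargement factor
    enlarged = cv2.resize(first_image, None, fx=scale, fy=scale)
    x, y = int(x * scale), int(y * scale)                # box in enlarged image
    enlarged[y:y + tele_h, x:x + tele_w] = second_image  # replace fusion part
    return enlarged
```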
  • In other embodiments, there may be multiple second cameras 12, for example two, three, or four. After obtaining the multiple shooting parameters corresponding to the multiple target areas, the processor 13 assigns shooting parameters to the second cameras 12. When there are more shooting parameters than second cameras 12, each second camera 12 is assigned at least one shooting parameter, and a second camera 12 assigned multiple shooting parameters shoots in a time-division manner to obtain a second image for each of its parameters. When there are no more shooting parameters than second cameras 12, the second cameras 12 that are assigned parameters can acquire their second images simultaneously, so the detailed features of multiple targets are obtained at the same time.
  • In an embodiment that acquires multiple second images through a single time-shared second camera 12, the shooting parameters must be adjusted frequently. By contrast, in an embodiment that acquires multiple second images through multiple second cameras 12, the detailed features of multiple targets can be acquired simultaneously, targets are less easily lost, and the detailed features of multiple targets can be obtained more quickly and accurately.
  • Referring to FIG. 2 and FIG. 9, in some embodiments, the first camera 11 includes a first visible light camera and a first infrared camera, and the second camera 12 includes a second visible light camera and a second infrared camera.
  • step 011 includes:
  • 0111 Acquire a first visible light image captured by a first visible light camera and a first infrared image captured by a first infrared camera;
  • Step 013 includes:
  • 0135 controlling the second visible light camera to obtain the second visible light image according to the shooting parameters, and controlling the second infrared camera to obtain the second infrared image;
  • Before step 014, the image processing method of an embodiment of the present application further includes:
  • Step 014': determining fusion temperature information according to the first infrared image and the second infrared image;
  • Step 014 includes:
  • 0141 Fusing the fusion temperature information, the first visible light image and the second visible light image to obtain a fusion image.
  • the processor 13 is further configured to acquire the first visible light image captured by the first visible light camera and the first infrared image captured by the first infrared camera; control the second visible light camera to obtain the second visible light image according to the capturing parameters , and controlling the second infrared camera to obtain a second infrared image; determining fusion temperature information according to the first infrared image and the second infrared image; and fusing the fusion temperature information, the first visible light image and the second visible light image to obtain a fusion image. That is, step 0111, step 0135, step 014' and step 0141 may be implemented by the processor 13.
  • the first visible light camera may acquire the first visible light image
  • the first infrared camera may acquire the first infrared image
  • the processor 13 extracts image features according to a preset image recognition algorithm and compares them with the image model in the database to identify the target, so as to determine the target area of the first visible light image (hereinafter referred to as the first target area).
  • The processor 13 uses the pixel values and/or temperature values of the first infrared image, exploiting the pixel-value and temperature-value differences caused by the difference in thermal radiation between a human body and ordinary objects, to quickly identify the target and determine the target area of the first infrared image (hereinafter referred to as the second target area).
  • In this embodiment, the first target areas and the second target areas correspond one-to-one, each pair corresponding to the same target.
  • It can be understood that the shooting parameters of the second visible light camera are determined according to the position of the first target area in the first visible light image, while the shooting parameters of the second infrared camera are determined according to the position of the second target area in the first infrared image. According to the corresponding shooting parameters, the processor 13 can control the second visible light camera to acquire the second visible light image and the second infrared camera to acquire the second infrared image.
  • During fusion, because the first infrared image has a large field of view, the target occupies fewer pixels in it and less infrared information can be obtained from it; the second infrared image has a smaller field of view and can contain more infrared information about the target. Since infrared information can be used to characterize the temperature information of the target, the processor 13 can jointly determine the fusion temperature information of the target from the first infrared image and the second infrared image.
  • Specifically, in one embodiment, the processor 13 determines first temperature information from the first infrared image, determines second temperature information from the second infrared image, and determines the fusion temperature information from the first and second temperature information.
  • For example, the processor 13 calculates the first temperature information from the pixel values of the fusion part of the first infrared image (i.e., the part corresponding to the second target area) and calculates the second temperature information from the pixel values of the second infrared image.
  • The first temperature information may be the temperature corresponding to the average pixel value of the fusion part, and the second temperature information may be the temperature corresponding to the average pixel value of the second infrared image; the processor 13 then determines the fusion temperature information from the first and second temperature information.
  • For example, the fusion temperature information may take the first temperature information (or the second) as reference temperature information, which is then adjusted using the other. When the difference between the second temperature information and the first temperature information is greater than a predetermined threshold, the average of the two is taken as the fusion temperature information, or an adjustment value is determined from the difference and the reference temperature information is adjusted by that value; when the difference is less than or equal to the predetermined threshold, the reference temperature information is used directly as the fusion temperature information.
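  • A minimal sketch of the averaging variant of this rule, with the first temperature as the reference; the 2 °C threshold is an illustrative assumption, not a value from the patent.

```python
def fuse_temperature(t_first, t_second, threshold=2.0):
    """Combine the two temperature estimates as described above,
    using the first-infrared-image temperature as the reference."""
    if abs(t_second - t_first) > threshold:
        return (t_first + t_second) / 2.0  # large disagreement: average them
    return t_first                         # otherwise keep the reference value
```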
  • As another example, the processor replaces the fusion part of the first infrared image with the second infrared image and then smooths the second infrared image according to the pixel values of its edge portion and the pixel values of the portion of the first infrared image bordering it, so that the edge pixel values transition smoothly into the bordering pixel values. The fusion temperature information is then determined from the pixel values of the smoothed fusion part, for example by taking the temperature corresponding to the average pixel value of the fusion part as the fusion temperature information.
  • After determining the fusion temperature information, the processor 13 fuses the fusion temperature information, the first visible light image, and the second visible light image to obtain the fused image.
  • For example, the first visible light image and the second visible light image are first fused to obtain an intermediate image, e.g., the part of the first visible light image corresponding to the target area (i.e., the fusion part) is replaced with the second visible light image. The processor 13 then generates a temperature indication frame according to the shape of the fusion part and the fusion temperature information, the shape of the frame matching the shape of the fusion part: if the fusion part is rectangular, the temperature indication frame is a rectangular wireframe; if the fusion part is circular, it is a circular wireframe. The processor 13 adds the temperature indication frame at the position of the intermediate image corresponding to the fusion part, making the frame surround the fusion part (i.e., the frame is larger than the fusion part), so that adding it does not unduly occlude the first visible light image and serves only to mark out the identified target.
  • The temperature indication frame can also be used to display temperature: for example, as the temperature increases, the frame changes from light red to dark red, visually indicating the temperature of the target enclosed by the fusion part.
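  • A minimal sketch of drawing such a frame slightly larger than the fusion part, with the color shading from light red to dark red as the temperature rises; the color-ramp endpoints and padding are illustrative assumptions.

```python
import cv2

def draw_temperature_frame(image, fusion_box, temperature,
                           t_low=30.0, t_high=45.0, pad=4):
    """Draw the temperature indication frame around the fusion part."""
    x, y, w, h = fusion_box
    ratio = min(max((temperature - t_low) / (t_high - t_low), 0.0), 1.0)
    # BGR color: (200, 200, 255) light red at t_low -> (0, 0, 180) dark red.
    color = (int(200 * (1 - ratio)), int(200 * (1 - ratio)),
             int(255 - 75 * ratio))
    # pad makes the frame surround, rather than overlap, the fusion part.
    cv2.rectangle(image, (x - pad, y - pad), (x + w + pad, y + h + pad),
                  color, 2)
    return image
```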
  • In this way, the fused image not only contains the visible-light detail features of the target but can also intuitively display the target's current temperature information.
  • As another example, the processor 13 first identifies the target within the fusion part and determines its shape, then outlines the target's edge to determine the shape of the temperature indication frame, and displays the frame in a color corresponding to the fusion temperature information, so as to mark the current target more vividly and display the temperature information intuitively.
  • As yet another example, the processor directly displays the temperature of the fusion part in numeric form to generate the fused image; or the intermediate image is fused with the fusion temperature information but the fused image does not display the temperature initially, only showing a target's temperature when the user clicks on it and hiding it automatically after a predetermined period, so that the temperature display does not occlude the fused image.
  • It should be noted that the embodiments of the present application may also cover combinations such as fusing the first infrared image with the second infrared image, fusing the first infrared image with the second visible light image, and fusing the first visible light image with the second infrared image.
  • Embodiments of the present application that include a visible light camera can realize a multi-light fusion function, obtaining richer image information and making it more convenient for the user to analyze the image data.
  • an embodiment of the present application further provides an unmanned aerial vehicle system 1000.
  • the unmanned aerial vehicle system 1000 includes an unmanned aerial vehicle 100, an image acquisition device 10, a remote controller 200, and a processor 13.
  • The image acquisition device 10 includes a first camera 11 and a second camera 12, the focal length of the first camera 11 being smaller than that of the second camera 12, and the processor 13 is configured to execute the image processing method of any one of the above embodiments.
  • the processor 13 may be used to perform steps 011, 012, 013 and 014; or the processor 13 may be used to perform steps 0121, 0122 and 0123, and so on.
  • The processor 13 may be provided on the drone 100 and/or the remote controller 200. That is, the processor 13 may be provided on the drone 100 alone, on the remote controller 200 alone, or there may be multiple processors 13, with one provided on the drone 100 and another on the remote controller 200.
  • In this embodiment, the processor 13 is provided on the UAV 100.
  • When the processor 13 is provided on the drone 100 (or the remote controller 200), the steps of the above image processing method can be executed by the processor 13 of the drone 100 (or the remote controller 200).
  • When processors 13 are provided on both the drone 100 and the remote controller 200, the steps of the above image processing method may all be executed by the processor 13 of the drone 100, or all by the processor 13 of the remote controller 200, or partly by the processor 13 of the drone 100 and partly by the processor 13 of the remote controller 200.
  • Referring to FIG. 10, an embodiment of the present application further provides a non-volatile computer-readable storage medium 300 containing computer-executable instructions 302. When the computer-executable instructions 302 are executed by one or more processors 13, the processor 13 executes the image processing method of any one of the above embodiments. For example, referring to FIG. 1, when the computer-executable instructions 302 are executed by the processor 13, the processor 13 executes the following steps:
  • 011: acquiring a first image captured by the first camera 11 and identifying a target area;
  • 012: determining shooting parameters of the second camera 12 according to the position of the target area in the first image;
  • 013: controlling the second camera 12 to acquire a second image according to the shooting parameters; and
  • 014: fusing the first image and the second image to obtain a fused image.
  • As another example, referring to FIG. 4, when the computer-executable instructions 302 are executed by the processor 13, the processor 13 executes the following steps:
  • 0121 establish a coordinate system based on the first image
  • 0122 determine the location coordinates of the target area in the first image
  • 0123 Determine a shooting angle corresponding to the position coordinates based on a preset mapping table.
  • It can be understood that, based on a large-field-of-view image, the embodiments of the present application can improve the imaging detail of a long-distance target area and the accuracy of object recognition, realizing a local super-resolution function and simulating the observation effect of the human eye on a distant target point.
  • The embodiments of the present application can also exploit the infrared camera's extreme sensitivity to object temperature to very easily detect temperature-abnormal target objects within the field of view; combined with a telephoto infrared camera, abnormal objects can be located quickly and automatically and rich image information captured. The embodiments can further use the gimbal to image multiple long-distance target areas, increasing the number of super-resolution regions, and can cooperate with a visible light camera to realize a multi-light fusion function, obtaining richer image information and making it more convenient for users to analyze image data.
  • Any description of a process or method in a flowchart, or otherwise described herein, may be understood to represent a module, segment, or portion of code comprising one or more executable instructions for implementing specified logical functions or steps of the process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
  • The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device).
  • For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device.
  • More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wirings, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM).
  • In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method, an image acquisition device (200), an unmanned aerial vehicle (100), an unmanned aerial vehicle system (1000), and a computer-readable storage medium (300). The method includes: (011) acquiring a first image captured by a first camera and identifying a target area; (012) determining shooting parameters of a second camera (12) according to the position of the target area; (013) acquiring a second image from the second camera according to the shooting parameters; and (014) fusing the first image and the second image to obtain a fused image.

Description

Image processing method, image acquisition device, unmanned aerial vehicle, unmanned aerial vehicle system, and storage medium
Technical Field
The present application relates to the technical field of image processing, and in particular to an image processing method, an image acquisition device, an unmanned aerial vehicle, an unmanned aerial vehicle system, and a computer-readable storage medium.
Background
At present, in order to monitor targets over a wide area, a fixed-focus short-focus camera is generally selected for image acquisition. Although close-range targets can be followed well and their detailed features obtained, the shooting accuracy for slightly farther targets drops significantly, so that even if the target is captured, its own characteristics cannot be restored, which impairs the monitoring effect.
Summary
Embodiments of the present application provide an image processing method, an image acquisition device, an unmanned aerial vehicle, an unmanned aerial vehicle system, and a computer-readable storage medium.
Embodiments of the present application provide an image processing method applied to an image acquisition device. The image acquisition device includes a first camera and a second camera, and the focal length of the first camera is smaller than that of the second camera. The image processing method includes: acquiring a first image captured by the first camera and identifying a target area; determining shooting parameters of the second camera according to the position of the target area in the first image; controlling the second camera to acquire a second image according to the shooting parameters; and fusing the first image and the second image to obtain a fused image.
Embodiments of the present application further provide an image acquisition device. The image acquisition device includes a first camera, a second camera, and a processor, and the focal length of the first camera is smaller than that of the second camera. The processor is configured to: acquire a first image captured by the first camera and identify a target area; determine shooting parameters of the second camera according to the position of the target area in the first image; control the second camera to acquire a second image according to the shooting parameters; and fuse the first image and the second image to obtain a fused image.
Embodiments of the present application further provide an unmanned aerial vehicle. The unmanned aerial vehicle includes a body, an image acquisition device, and a processor, the image acquisition device being disposed on the body. The image acquisition device includes a first camera and a second camera, and the focal length of the first camera is smaller than that of the second camera. The processor is configured to: acquire a first image captured by the first camera and identify a target area; determine shooting parameters of the second camera according to the position of the target area in the first image; control the second camera to acquire a second image according to the shooting parameters; and fuse the first image and the second image to obtain a fused image.
Embodiments of the present application further provide an unmanned aerial vehicle system. The unmanned aerial vehicle system includes an unmanned aerial vehicle, an image acquisition device, and a processor. The image acquisition device includes a first camera and a second camera, and the focal length of the first camera is smaller than that of the second camera. The processor is configured to execute the image processing method of the above embodiments.
Embodiments of the present application further provide a computer-readable storage medium containing computer-executable instructions. When the computer-executable instructions are executed by one or more processors, they cause the processors to perform the image processing method of the above embodiments.
In the image processing method, image acquisition device, unmanned aerial vehicle, unmanned aerial vehicle system, and computer-readable storage medium of the embodiments of the present application, the first camera, with the smaller focal length, captures a first image covering a larger field of view, while the second camera, with the larger focal length, captures a second image containing the detailed features of the target. By fusing the first image and the second image into a fused image, an image over a large field of view can be acquired without easily losing the target, and the detailed features of the target can also be obtained, realizing large-field-of-view, high-precision shooting of the target. This gives a good monitoring effect on both near and far targets and can be applied in fields such as security, search and rescue, patrol inspection, and infrared monitoring of long-distance targets.
Additional aspects and advantages of the embodiments of the present application will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of the embodiments of the present application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of the present application.
FIG. 2 is a schematic structural diagram of an unmanned aerial vehicle system according to some embodiments of the present application.
FIG. 3 is a schematic diagram of the principle of an image processing method according to some embodiments of the present application.
FIG. 4 is a schematic flowchart of an image processing method according to some embodiments of the present application.
FIG. 5 is a schematic flowchart of an image processing method according to some embodiments of the present application.
FIG. 6 is a schematic flowchart of an image processing method according to some embodiments of the present application.
FIG. 7 is a schematic flowchart of an image processing method according to some embodiments of the present application.
FIG. 8 is a schematic flowchart of an image processing method according to some embodiments of the present application.
FIG. 9 is a schematic flowchart of an image processing method according to some embodiments of the present application.
FIG. 10 is a schematic diagram of the connection between a processor and a computer-readable storage medium according to some embodiments of the present application.
Detailed Description
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended only to explain the present application and are not to be construed as limiting it.
In the description of the present application, it should be understood that the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality of" means two or more, unless expressly and specifically defined otherwise.
In the description of the present application, it should be noted that, unless otherwise expressly specified and defined, the terms "installed", "connected", and "coupled" should be understood broadly: the connection may be fixed, detachable, or integral; mechanical, electrical, or communicative; direct, or indirect through an intermediate medium; and it may be internal communication between two elements or an interactive relationship between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present application can be understood according to the specific circumstances.
At present, in order to monitor targets over a wide area, a fixed-focus short-focus camera is generally selected for image acquisition. Although close-range targets can be followed well and their detailed features obtained, the shooting accuracy for slightly farther targets drops significantly, so that even if the target is captured, its own characteristics cannot be restored, and the monitoring effect on long-distance targets is poor.
If a telephoto camera is used for image acquisition in order to monitor long-distance targets, the detailed features of those targets can be obtained, but the small field of view makes the monitoring range small, the target is easily lost, and the monitoring effect on close-range targets is also poor.
Although a zoom camera can monitor both long-distance and close-range targets by zooming, it can only remain in either the telephoto state or the short-focus state at any given time. In the telephoto state, the monitoring range is small and the target is easily lost; in the short-focus state, the detailed features of long-distance targets are insufficiently captured, and it is difficult to obtain wide-range, high-precision images of the target.
To this end, referring to FIG. 1 and FIG. 2, an embodiment of the present application provides an image processing method applied to an image acquisition device 10. The image acquisition device 10 includes a first camera 11 and a second camera 12, where the focal length of the first camera 11 is smaller than that of the second camera 12. The image processing method includes:
011: acquiring a first image captured by the first camera 11 and identifying a target area;
012: determining shooting parameters of the second camera 12 according to the position of the target area in the first image;
013: controlling the second camera 12 to acquire a second image according to the shooting parameters; and
014: fusing the first image and the second image to obtain a fused image.
An embodiment of the present application further provides an image acquisition device 10, which further includes a processor 13. The processor 13 is configured to: acquire a first image captured by the first camera 11 and identify a target area; determine shooting parameters of the second camera 12 according to the position of the target area in the first image; control the second camera 12 to acquire a second image according to the shooting parameters; and fuse the first image and the second image to obtain a fused image. That is, steps 011 to 014 may be implemented by the processor 13.
An embodiment of the present application further provides an unmanned aerial vehicle 100, which includes a fuselage 20, an image acquisition device 10, and a processor 13, the image acquisition device 10 being disposed on the fuselage 20. The image acquisition device 10 is arranged on the UAV 100 to acquire and process images in cooperation with the UAV 100. In other embodiments, the image acquisition device 10 can also be used as a stand-alone device or as a component of another monitoring platform to acquire and process images.
Specifically, in one embodiment, the first camera 11 may be a short-focus camera whose focal length can be determined according to the field of view required to be captured. The first camera 11 can capture a first image with a large field of view. The first image may contain one or more targets, which may be far away or nearby. For a close-range target whose detailed features can already be captured, a first image containing those features can be obtained directly, and multiple first images of the target can be shot continuously to realize tracking shooting of that target. For a long-distance target, the target area where the target is located is identified in the first image, and the position of the target area in the first image is determined.
In one example, in monitoring scenarios with few people, such as monitoring suspicious targets in the wild or protecting major facilities, the first camera 11 may be an infrared camera (i.e., a first infrared camera), in which case the first image is a first infrared image; an infrared image can be used to characterize the temperature and morphological features of the target. The processor 13 can use the pixel values and/or temperature values of the first infrared image, exploiting the pixel-value and temperature-value differences caused by the difference in thermal radiation between a human body and ordinary objects, to quickly identify the target and determine the target area. The target area can be a fixed-size area containing the target, such as a rectangular box area or a circular box area.
For example, based on a preset temperature-measurement algorithm, the processor 13 calculates the temperature values of different regions in the first infrared image from its pixel values, identifies the target from the temperature difference between a human body and ordinary objects, and then determines the target area. As another example, the processor 13 may perform image feature matching according to the pixel-value distribution of the first infrared image (e.g., matching image features against image templates of a human-feature database) to identify the target and determine the target area. As yet another example, the processor 13 can identify the target based on both the temperature values and the pixel values so as to determine the target area accurately: it may use the temperature differences to determine one or more possible types of the target (such as a person, cat, dog, car, or tree) and then select the corresponding image templates for feature matching according to the determined type(s), thereby identifying the target more quickly and accurately and reducing the probability of misjudgment.
In another example, the first camera 11 may be a visible light camera (i.e., a first visible light camera), which can be used to acquire the morphological features of the target; the first image is then a first visible light image. The processor 13 may identify the target based on a preset image recognition algorithm, for example by extracting image features and comparing them with image models in a database, so as to determine the target area.
After the position of the target area in the first image is determined, the processor 13 can determine the shooting parameters of the second camera 12 according to that position. The second camera 12 may be a telephoto camera whose focal length can be determined according to the field of view to be captured and the maximum distance at which the detailed features of the target can still be obtained.
It should be understood that the target area may also be identified in other ways, for example manually, which is not limited in the embodiments of the present application.
The shooting parameters include at least one of a shooting angle, a shooting focal length, and a zoom factor. For example, when the second camera 12 is a fixed-focus camera, its focal length and field of view are fixed, and it is only necessary to adjust the shooting angle of the second camera 12 so that its field of view covers the target in order to acquire the second image. As another example, when the second camera 12 is a zoom camera, the shooting focal length, zoom factor, and shooting angle can all be adjusted so that the field of view covers the target, thereby acquiring the second image; by adjusting the shooting focal length and zoom factor, the field of view can be made to just cover the target so that the target essentially occupies the entire second image, and the detailed features of the target are obtained with maximum precision while keeping the target complete.
The second camera 12 may be a rotating camera, which can adjust its own rotation angle through its built-in driving components. The second camera 12 can also be mounted on the gimbal 30 of the UAV 100, with the gimbal 30 controlling the shooting angle of the second camera 12; for example, the shooting angle includes a pitch angle and a yaw angle, enabling omnidirectional angle adjustment. The gimbal 30 includes a pitch axis and a yaw axis: the pitch angle is adjusted by rotating the pitch axis and the yaw angle by rotating the yaw axis. Since camera lenses are mostly circular, no roll-axis adjustment is needed, so the structure of the gimbal 30 is relatively simple. As another example, a mirror assembly can be arranged in front of the second camera 12, and a motor can adjust the rotation angle of the mirror, thereby adjusting the direction of incident light so that the light reflected by the target lies exactly along that direction; the second camera 12 then receives the light reflected by the target (i.e., the field of view of the second camera 12 covers the target). It should be understood that the embodiments of the present application do not limit the manner in which the second camera acquires the second image.
After the second camera 12 is controlled according to the shooting parameters to shoot the target and acquire the second image, the first image and the second image are fused to obtain a fused image. For example, referring to FIG. 3, the second image P2 is fused with the part S of the first image P1 corresponding to the target area, e.g., the part S is replaced with the second image P2 to obtain the fused image P3. The fused image P3 thus contains the detailed features of the target and can be applied in fields such as security, search and rescue, patrol inspection, and infrared monitoring of long-distance targets.
It can be understood that the first image covers a large field of view containing the target, while the second image covers a small field of view containing the target; that is, the content of the second image corresponds to the content of the target area in the first image. During fusion, the target area in the first image can be directly replaced with the second image containing the target's detailed features, so the fused image includes both the large-field-of-view content of the first image and the detailed features of the area where the target is located, thereby achieving a high-precision fused image with a large field of view containing the target.
In the image processing method, image acquisition device 10, and UAV 100 of the present application, a first image with a large field of view captured by the first camera 11 (smaller focal length) is fused with a second image containing the target's detailed features captured by the second camera 12 (larger focal length) to obtain a fused image, so that an image over a large field of view can be acquired without easily losing the target, the target's detailed features can also be obtained, large-field-of-view and high-precision shooting of the target is realized, and a good monitoring effect is achieved on both near and far targets.
Referring to FIG. 2 and FIG. 4, in some embodiments, step 012 includes:
0121: establishing a coordinate system based on the first image;
0122: determining the position coordinates of the target area in the first image; and
0123: determining the shooting angle corresponding to the position coordinates based on a preset mapping table.
In some embodiments, the processor 13 is further configured to establish a coordinate system based on the first image, determine the position coordinates of the target area in the first image, and determine the shooting angle corresponding to the position coordinates based on a preset mapping table. That is, steps 0121, 0122, and 0123 may be implemented by the processor 13.
Specifically, before the image acquisition device 10 leaves the factory, it must be calibrated: in addition to calibrating the extrinsic and intrinsic parameters of the first camera 11 and the second camera 12, the target area and the shooting angle must also be calibrated. It can be understood that, when shooting a target, the shooting angle of the second camera 12 changes as the target's position in the first image changes; that is, there is a correspondence between the target's position in the first image and the shooting angle. A mapping table between target areas and shooting angles can therefore be established by calibration before the device leaves the factory. A coordinate system is then established based on the first image to obtain the position coordinates of the target area in the first image, and the processor 13 can quickly read the corresponding shooting angle from the mapping table according to the position coordinates.
Referring to FIG. 2 and FIG. 5, in some embodiments, before step 012, the image processing method further includes:
015: controlling the first camera 11 to photograph a calibration plate to obtain a first calibration image, where the calibration plate includes a plurality of feature regions, the first calibration image includes calibration regions corresponding to the feature regions, and the calibration plate covers the field of view of the first camera 11;
016: controlling the optical axis of the second camera 12 to align with the center of each feature region in turn, and acquiring the rotation angle corresponding to each feature region; and
017: establishing a mapping table based on the calibration regions corresponding to the feature regions and the rotation angles corresponding to the feature regions.
In some embodiments, the processor 13 is further configured to: control the first camera 11 to photograph a calibration plate to obtain a first calibration image, where the calibration plate includes a plurality of feature regions, the first calibration image includes calibration regions corresponding to the feature regions, and the calibration plate covers the field of view of the first camera 11; control the optical axis of the second camera 12 to align with the center of each feature region in turn and acquire the corresponding rotation angles; and establish a mapping table based on the calibration regions corresponding to the feature regions and the corresponding rotation angles. That is, steps 015, 016, and 017 may be implemented by the processor 13.
Specifically, during calibration, the processor 13 first controls the first camera 11 to photograph the calibration plate to obtain a first calibration image; the calibration plate includes a plurality of feature regions, and the first calibration image includes calibration regions corresponding to the feature regions. During this calibration shot, the calibration plate covers the field of view of the first camera 11 (for example, the field of view is exactly covered by the plate, or lies entirely within the plate so that only part of the plate is photographed), so that the first image consists essentially of the calibration regions corresponding to the feature regions.
Then, with the calibration plate held fixed, the processor 13 controls the optical axis of the second camera 12 to align with the center of a feature region, acquires the rotation angle of the second camera 12 at that moment, and establishes a mapping between the rotation angle and the calibration region corresponding to that feature region. Repeating this calibration process for every feature region yields the mapping between calibration regions and rotation angles for all feature regions on the plate, thereby building the complete mapping table. The shape of the feature regions may match the shape of the field of view of the second camera 12; for example, if the field of view of the second camera 12 is rectangular, the feature regions are rectangular. When aligning the optical axis of the second camera 12 with the center of a feature region, alignment can be confirmed by checking whether the image features at the center of the second image match those at the center of the corresponding calibration region, which helps improve calibration accuracy.
Referring to FIG. 2 and FIG. 6, in some embodiments, step 0123 includes:
0123a: determining the corresponding target calibration region according to the center position coordinates of the target area; and
0123b: determining the shooting angle corresponding to the target calibration region based on the preset mapping table.
In some embodiments, the processor 13 is further configured to determine the corresponding target calibration region according to the center position coordinates of the target area, and to determine the shooting angle corresponding to the target calibration region based on the preset mapping table. That is, steps 0123a and 0123b may be implemented by the processor 13.
Specifically, when the processor 13 reads the corresponding shooting angle from the mapping table according to the position coordinates, the shooting angle is keyed to a calibration region, so the target calibration region corresponding to the target area must be determined first. Concretely, the processor 13 first determines the center position coordinates of the target area, then takes the calibration region containing those center coordinates as the target calibration region, and finally reads the shooting angle corresponding to the target calibration region from the mapping table.
Referring to FIG. 2 and FIG. 7, in some embodiments, step 013 includes:
0131: controlling the second camera 12 to capture a test image according to the shooting parameters;
0132: computing the offset between the test image and the image of the target area; and
0133: adjusting the shooting angle according to the offset, and controlling the second camera 12 to acquire the second image at the adjusted shooting angle.
In some embodiments, the processor 13 is further configured to control the second camera 12 to capture a test image according to the shooting parameters, compute the offset between the test image and the image of the target area, adjust the shooting angle according to the offset, and control the second camera 12 to acquire the second image at the adjusted shooting angle. That is, steps 0131, 0132, and 0133 can be implemented by the processor 13.
Specifically, the center of the target area and the center of the target calibration region may differ. The processor 13 can therefore control the second camera 12 to capture a test image at the looked-up shooting angle and compare it with the image of the target area, for example by finding the region of the first image that matches the test image, computing the deviation between the center coordinates of that matching region and those of the target area, and determining the offset from this deviation. For example, the offset includes a yaw adjustment angle and a pitch adjustment angle: the larger the horizontal deviation between the two center coordinates, the larger the required yaw adjustment; the larger the vertical deviation, the larger the required pitch adjustment. The shooting angle is thus further refined using image features, improving its accuracy.
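As a hedged illustration, if a linear pixel-to-degree factor is assumed for each axis (the embodiments only state that larger coordinate deviations require larger angle adjustments), the offset computation could be sketched as:

```python
def compute_offset(match_center, target_center, deg_per_px_x, deg_per_px_y):
    """Convert the pixel deviation between centres into angle adjustments.

    match_center, target_center : (x, y) centres in first-image coordinates
    deg_per_px_* : assumed linear pixel-to-degree factors from calibration
    """
    dx = target_center[0] - match_center[0]
    dy = target_center[1] - match_center[1]
    yaw_adjust = dx * deg_per_px_x     # horizontal deviation -> yaw change
    pitch_adjust = dy * deg_per_px_y   # vertical deviation -> pitch change
    return yaw_adjust, pitch_adjust
```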
In other embodiments, after obtaining the shooting angle corresponding to the target calibration region from the initial mapping table, the processor 13 may directly compute the offset from the difference between the center of the target area and the center of the target calibration region, and then further adjust the shooting angle according to this offset, improving its accuracy.
Referring to FIG. 2 and FIG. 8, in some embodiments, step 013 further includes:
0134: controlling the second camera 12 to shoot in a time-division manner according to a plurality of shooting parameters to obtain a plurality of second images, the second images corresponding one-to-one with the target areas.
In some embodiments, the processor 13 is further configured to control the second camera 12 to shoot in a time-division manner according to a plurality of shooting parameters to obtain a plurality of second images, the second images corresponding one-to-one with the target areas. That is, step 0134 can be implemented by the processor 13.
Specifically, in practical application scenarios (such as detecting suspicious targets in the wild) there may be multiple targets and hence multiple target areas. To capture the detailed features of all targets, the processor 13 must control the second camera 12 to shoot in a time-division manner, obtaining multiple second images, each containing the detailed features of its corresponding target. In time-division shooting, the shooting parameters of each target area are first obtained from the mapping table according to the positions of the multiple target areas in the first image; the second camera 12 is then controlled to shoot the second images in sequence, one per set of shooting parameters, with the second images corresponding one-to-one with the target areas.
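A sketch of this time-division loop, reusing lookup_angle from the sketch above and assuming set_angle and capture as the camera-control interfaces, might be:

```python
def capture_all_targets(target_boxes, table, set_angle, capture):
    """Time-division capture: one second image per target area.

    target_boxes : target areas identified in the first image
    table        : calibration mapping table
    set_angle    : assumed function steering the camera to (pitch, yaw)
    capture      : assumed function returning one second-camera frame
    """
    shots = []
    for box in target_boxes:
        set_angle(lookup_angle(table, box))  # aim at the next target
        shots.append((box, capture()))       # pair each shot with its area
    return shots
```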
During fusion, each second image is fused with the portion of the first image corresponding to its target area to obtain the fused image. The size of the target area is preset, and the extent of the scene covered by the target area may not match the extent covered by the second image; before fusion, the fusion portion of each second image within the first image (i.e., the portion of the first image corresponding to the target area) must therefore be determined. For example, the processor 13 matches features between the second image and the first image, determining the fusion portion by matching the boundary feature points of the second image, and then fuses the second image with the fusion portion (e.g., replaces the fusion portion with the second image); doing this for each second image and its fusion portion yields a fused image that contains the detailed features of all targets at once. As another example, once the focal lengths of the first camera 11 and the second camera 12 are fixed, the overlap of their fields of view is fixed as well, and the processor 13 can determine the proportion of the first image occupied by this overlap (the portion corresponding to the target area), i.e., the size of the fusion portion, from the ratio of the two focal lengths. In one embodiment, the processor 13 can then locate the fusion portion precisely from its proportion/size, taking the center of the target area as the center of the fusion portion.
Of course, given the size difference between the fusion portion and the second image, a size (e.g., resolution) conversion is needed first, making the fusion portion and the second image essentially the same size before the fusion portion is replaced with the second image; only then can the fusion portion carry all the image features of the second image. For example, the processor 13 may, according to the size ratio between the fusion portion and the second image, first enlarge the first image so that the enlarged fusion portion (i.e., the portion of the enlarged first image corresponding to the target area) matches the size of the second image, and then replace the enlarged fusion portion with the second image to obtain the fused image. Alternatively, the processor 13 may first shrink the second image so that it matches the size of the fusion portion (i.e., the portion of the first image corresponding to the target area), and then replace the fusion portion with the shrunken second image. As yet another alternative, the processor 13 may scale the first image to a first size and the second image to a second size such that the scaled fusion portion (i.e., the portion of the scaled first image corresponding to the target area) equals the second size, and then replace the scaled fusion portion with the second image to obtain the fused image.
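Of the three scaling strategies just described, the enlarge-the-first-image variant might be sketched as follows, again assuming OpenCV for resizing and matching aspect ratios between the two cameras:

```python
import cv2  # assumed resize backend

def fuse_enlarge_first(first_img, second_img, target_box):
    """Enlarge the wide-view image until its fusion portion matches the
    telephoto image, then replace that portion (assumes both cameras
    share an aspect ratio, so one scale factor suffices)."""
    x, y, w, h = target_box
    scale = second_img.shape[1] / w                  # width ratio
    big = cv2.resize(first_img, None, fx=scale, fy=scale)
    bx, by = int(x * scale), int(y * scale)          # enlarged fusion portion
    ph, pw = second_img.shape[:2]
    big[by:by + ph, bx:bx + pw] = second_img         # replace with detail
    return big
```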
In other embodiments, there may be multiple second cameras 12, e.g., two, three, or four. After obtaining the shooting parameters corresponding to the multiple target areas, the processor 13 assigns the shooting parameters to the second cameras 12. When there are more sets of shooting parameters than second cameras 12, each second camera 12 is assigned at least one set, and a second camera 12 assigned multiple sets shoots in a time-division manner to obtain the second image for each of its sets; when the number of sets is less than or equal to the number of second cameras 12, the second cameras 12 that are assigned parameters can acquire their second images simultaneously, capturing the detailed features of multiple targets at the same time. Compared with the embodiment in which a single second camera 12 shoots in a time-division manner and must adjust its shooting parameters frequently, the embodiment with multiple second cameras 12 captures the detailed features of multiple targets simultaneously, is less likely to lose a target, and obtains those detailed features faster and more accurately.
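As one possible (assumed) assignment policy — the embodiments only require that each second camera receive at least one set when parameters outnumber cameras — a round-robin split could be sketched as:

```python
def assign_parameters(param_sets, num_cameras):
    """Distribute shooting-parameter sets over the available second cameras.

    Returns one queue per camera; a camera with more than one queued set
    shoots in a time-division manner, otherwise cameras shoot in parallel.
    """
    queues = [[] for _ in range(num_cameras)]
    for i, params in enumerate(param_sets):
        queues[i % num_cameras].append(params)  # simple round-robin split
    return queues
```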
Referring to FIG. 2 and FIG. 9, in some embodiments, the first camera 11 includes a first visible-light camera and a first infrared camera, and the second camera 12 includes a second visible-light camera and a second infrared camera. Step 011 includes:
0111: obtaining a first visible-light image captured by the first visible-light camera and a first infrared image captured by the first infrared camera;
Step 013 includes:
0135: controlling, according to the shooting parameters, the second visible-light camera to acquire a second visible-light image and the second infrared camera to acquire a second infrared image;
Before step 014, the image processing method of one embodiment of the present application further includes:
Step 014': determining fused temperature information according to the first infrared image and the second infrared image;
Step 014 includes:
0141: fusing the fused temperature information, the first visible-light image, and the second visible-light image to obtain the fused image.
In some embodiments, the processor 13 is further configured to: obtain a first visible-light image captured by the first visible-light camera and a first infrared image captured by the first infrared camera; control, according to the shooting parameters, the second visible-light camera to acquire a second visible-light image and the second infrared camera to acquire a second infrared image; determine fused temperature information according to the first infrared image and the second infrared image; and fuse the fused temperature information, the first visible-light image, and the second visible-light image to obtain the fused image. That is, steps 0111, 0135, 014', and 0141 can be implemented by the processor 13.
Specifically, the first visible-light camera acquires the first visible-light image and the first infrared camera acquires the first infrared image. Based on a preset image recognition algorithm, e.g., extracting image features and comparing them against image models in a database to identify the target, the processor 13 determines the target area of the first visible-light image (hereinafter, the first target area). Using the pixel values and/or temperature values of the first infrared image, and exploiting the differences in pixel value and temperature caused by the different thermal radiation of human bodies and ordinary objects, the processor 13 quickly identifies the target to determine the target area of the first infrared image (hereinafter, the second target area). In this embodiment, the first target areas and the second target areas correspond one-to-one, each pair corresponding to the same target.
It can be understood that the shooting parameters of the second visible-light camera are determined from the position of the first target area in the first visible-light image, while the shooting parameters of the second infrared camera are determined from the position of the second target area in the first infrared image; with the corresponding shooting parameters, the processor 13 controls the second visible-light camera to acquire the second visible-light image and the second infrared camera to acquire the second infrared image.
During fusion, because the first infrared image has a wide field of view, the target occupies fewer of its pixels and it carries less infrared information, whereas the second infrared image has a narrow field of view and can contain more infrared information about the target. Since infrared information characterizes the target's temperature, the processor 13 can determine the target's fused temperature information jointly from the first infrared image and the second infrared image.
Specifically, in one embodiment, the processor 13 determines first temperature information according to the first infrared image, determines second temperature information according to the second infrared image, and determines the fused temperature information according to the first and second temperature information.
For example, the processor 13 computes the first temperature information from the pixel values of the fusion portion of the first infrared image (i.e., the portion corresponding to the second target area) and the second temperature information from the pixel values of the second infrared image.
The first temperature information may be the temperature corresponding to the mean pixel value of the fusion portion, and the second temperature information the temperature corresponding to the mean pixel value of the second infrared image; the processor 13 then determines the fused temperature information from the first and second temperature information.
For example, the fused temperature information may take the first temperature information (or the second) as baseline temperature information and adjust it using the second temperature information (or the first): when the difference between the second and first temperature information exceeds a predetermined threshold, the average of the two is taken as the fused temperature information, or an adjustment value is derived from the difference and applied to the baseline temperature information; when the difference is less than or equal to the predetermined threshold, the first temperature information (or the second) is used directly as the fused temperature information.
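This baseline-plus-threshold rule can be sketched directly; the 2 °C threshold below is an assumed value standing in for the predetermined threshold:

```python
def fuse_temperature(t1, t2, threshold=2.0):
    """Combine wide-view (t1) and telephoto (t2) temperature readings.

    threshold : assumed value of the predetermined difference, in deg C
    """
    if abs(t2 - t1) > threshold:
        return (t1 + t2) / 2.0   # readings disagree: average them
    return t1                    # readings agree: keep the baseline
```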
As another example, the processor replaces the fusion portion of the first infrared image with the second infrared image and then smooths the second infrared image according to the pixel values of its edge portion and of the adjoining portion of the first infrared image, so that the edge pixel values transition smoothly into the adjoining ones; the fused temperature information is then determined from the pixel values of the smoothed fusion portion, e.g., taking the temperature corresponding to the mean pixel value of the fusion portion as the fused temperature information.
After determining the fused temperature information, the processor 13 fuses the fused temperature information, the first visible-light image, and the second visible-light image to obtain the fused image.
In one example, during fusion the first visible-light image and the second visible-light image are first fused into an intermediate image, e.g., by replacing the portion of the first visible-light image corresponding to the target area (i.e., the fusion portion) with the second visible-light image. The processor 13 then generates a temperature indication circle from the shape of the fusion portion and the fused temperature information; the shape of the circle matches that of the fusion portion, e.g., a rectangular wireframe for a rectangular fusion portion or a circular wireframe for a circular one. The processor 13 adds the temperature indication circle at the position of the fusion portion in the intermediate image so that it surrounds the fusion portion (i.e., the circle is larger than the fusion portion); the added circle does not unduly occlude the first visible-light image and serves only to mark out the target of interest. The temperature indication circle can also indicate the temperature level, e.g., changing from light red to deep red as the temperature rises, vividly indicating the temperature of the target within the circled fusion portion. The fused image thus contains not only the target's visible-light detailed features but also a direct, intuitive display of the target's current temperature information. In other embodiments, the processor 13 first identifies the target within the fusion portion and determines its shape, then traces the target's outline to define the shape of the temperature indication circle and displays the circle in the color corresponding to the fused temperature information, marking out the current target even more vividly while displaying the temperature information intuitively.
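A sketch of such an overlay, assuming OpenCV drawing, a rectangular fusion portion, and an invented light-to-deep-red colour mapping over an assumed 20-45 °C range, might be:

```python
import cv2  # assumed drawing backend

def draw_temperature_circle(image, fusion_box, fused_temp,
                            t_lo=20.0, t_hi=45.0, margin=8):
    """Draw a rectangular temperature indication circle around the fusion
    portion, shifting from light red to deep red as temperature rises.

    t_lo, t_hi : assumed temperature range mapped onto the red shades
    margin     : keeps the frame surrounding, not covering, the portion
    """
    x, y, w, h = fusion_box
    ratio = max(0.0, min(1.0, (fused_temp - t_lo) / (t_hi - t_lo)))
    fade = int(200 * (1.0 - ratio))          # 200 -> light red, 0 -> deep red
    cv2.rectangle(image,
                  (x - margin, y - margin), (x + w + margin, y + h + margin),
                  color=(fade, fade, 255),   # BGR: pinkish to saturated red
                  thickness=2)
    return image
```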
In another example, after fusing the first and second visible-light images into the intermediate image, the processor directly displays the temperature of the fusion portion in numeric form to generate the fused image; alternatively, the intermediate image is fused with the fused temperature information to generate a fused image that initially displays no temperature, showing a target's temperature only when the user taps that target and hiding it automatically after a predetermined duration, so that the temperature display does not occlude the fused image.
It should be understood that, as actually needed, embodiments of the present application may also include other combinations, such as fusing the first infrared image with the second infrared image, the first infrared image with the second visible-light image, or the first visible-light image with the second infrared image. The embodiments of the present application that include visible-light cameras can realize a multi-light fusion function, obtaining richer image information and making it easier for users to analyze the image data.
Referring again to FIG. 2, an embodiment of the present application further provides a UAV system 1000 including a UAV 100, an image acquisition apparatus 10, a remote controller 200, and a processor 13; the image acquisition apparatus 10 includes a first camera 11 and a second camera 12, the focal length of the first camera 11 being smaller than that of the second camera 12, and the processor 13 is configured to execute the image processing method of any of the above embodiments. For example, the processor 13 may be configured to execute steps 011, 012, 013, and 014, or steps 0121, 0122, and 0123, and so on.
As shown in FIG. 2, in some embodiments the processor 13 may be provided on the UAV 100 and/or the remote controller 200. For example, the processor 13 may be provided on the UAV 100 or on the remote controller 200; or there may be multiple processors 13 (e.g., two), one provided on the UAV 100 and the other on the remote controller 200. In this embodiment, the processor 13 is provided on the UAV 100.
When the processor 13 is provided on the UAV 100 (or the remote controller 200), all steps of the above image processing method can be executed by the processor 13 of the UAV 100 (or of the remote controller 200). When there are multiple processors 13, provided on both the UAV 100 and the remote controller 200, the steps of the above image processing method may all be executed by the processor 13 of the UAV 100 or by that of the remote controller 200, or some may be executed by the processor 13 of the UAV 100 and the rest by that of the remote controller 200.
Referring to FIG. 10, an embodiment of the present application further provides a non-volatile computer-readable storage medium 300 containing computer-executable instructions 302 that, when executed by one or more processors 13, cause the processors 13 to execute the image processing method of any of the above embodiments. For example, in conjunction with FIG. 1, when the computer-executable instructions 302 are executed by the processor 13, the processor 13 performs the following steps:
011: obtaining a first image captured by the first camera 11 and identifying a target area;
012: determining shooting parameters of the second camera 12 according to the position of the target area in the first image;
013: controlling the second camera 12 to acquire a second image according to the shooting parameters; and
014: fusing the first image and the second image to obtain a fused image.
As another example, in conjunction with FIG. 4, when the computer-executable instructions 302 are executed by the processor 13, the processor 13 performs the following steps:
0121: establishing a coordinate system based on the first image;
0122: determining the position coordinates of the target area in the first image; and
0123: determining, based on a preset mapping table, the shooting angle corresponding to the position coordinates.
On the basis of a wide field-of-view image, the embodiments of the present application can improve the imaging detail of distant target areas and the accuracy of object recognition, realizing a local super-resolution function that simulates the human eye's observation of distant target points while discarding long-range detail in non-target areas, which reduces the size of image and video files. The embodiments can also exploit the infrared camera's extreme sensitivity to object temperature to detect objects with abnormal temperature within the field of view very easily and, with the telephoto infrared camera, quickly and automatically locate the abnormal object and capture rich image information. The embodiments can further use the gimbal to image multiple distant target areas, increasing the number of super-resolution regions, and can work with visible-light cameras to realize multi-light fusion, obtaining richer image information and making it easier for users to analyze the image data.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative embodiment", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in a flowchart, or otherwise described herein, may for example be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present application have been shown and described above, it should be understood that these embodiments are illustrative and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (60)

  1. An image processing method, applied to an image acquisition apparatus, the image acquisition apparatus comprising a first camera and a second camera, a focal length of the first camera being smaller than a focal length of the second camera, the image processing method comprising:
    obtaining a first image captured by the first camera and identifying a target area;
    determining shooting parameters of the second camera according to a position of the target area in the first image;
    controlling the second camera to acquire a second image according to the shooting parameters; and
    fusing the first image and the second image to obtain a fused image.
  2. The image processing method according to claim 1, wherein the first camera comprises a first infrared camera, and the obtaining a first image captured by the first camera and identifying a target area comprises:
    obtaining a first infrared image captured by the first infrared camera; and
    identifying the target area in the first infrared image based on pixel values and/or temperature values of the first infrared image.
  3. The image processing method according to claim 1, wherein the first camera comprises a first visible-light camera, and the obtaining a first image captured by the first camera and identifying a target area comprises:
    obtaining a first visible-light image captured by the first visible-light camera; and
    identifying the target area in the first visible-light image based on a preset image recognition algorithm.
  4. The image processing method according to any one of claims 1-3, wherein the shooting parameters comprise at least one of a shooting angle, a shooting focal length, and a zoom factor.
  5. The image processing method according to claim 4, wherein the shooting parameters comprise a shooting angle, and the determining shooting parameters of the second camera according to a position of the target area in the first image comprises:
    establishing a coordinate system based on the first image;
    determining position coordinates of the target area in the first image; and
    determining, based on a preset mapping table, the shooting angle corresponding to the position coordinates.
  6. The image processing method according to claim 5, further comprising, before the step of determining shooting parameters of the second camera according to the position of the target area in the first image:
    controlling the first camera to photograph a calibration board to obtain a first calibration image, the calibration board comprising a plurality of feature regions, the first calibration image comprising calibration regions corresponding to the feature regions, and the calibration board covering a field of view of the first camera;
    controlling an optical axis of the second camera to align with the centers of the feature regions in turn, obtaining rotation angles corresponding to the feature regions; and
    building the mapping table based on the calibration regions corresponding to the feature regions and the rotation angles corresponding to the feature regions.
  7. The image processing method according to claim 6, wherein the determining, based on a preset mapping table, the shooting angle corresponding to the position coordinates comprises:
    determining a corresponding target calibration region according to center position coordinates of the target area; and
    determining, based on the preset mapping table, the shooting angle corresponding to the target calibration region.
  8. The image processing method according to any one of claims 5-7, wherein the shooting angle comprises a pitch angle and a yaw angle.
  9. The image processing method according to claim 1, wherein the second camera is a rotating camera.
  10. The image processing method according to claim 4, wherein the second camera is provided on a gimbal, and the gimbal is configured to control the shooting angle of the second camera.
  11. The image processing method according to claim 4, wherein the controlling the second camera to acquire a second image according to the shooting parameters comprises:
    controlling the second camera to capture a test image according to the shooting parameters;
    computing an offset between the test image and an image of the target area; and
    adjusting the shooting angle according to the offset, and controlling the second camera to acquire the second image at the adjusted shooting angle.
  12. The image processing method according to claim 1, wherein there are a plurality of target areas, the shooting parameters correspond one-to-one with the target areas, and the controlling the second camera to acquire a second image according to the shooting parameters comprises:
    controlling the second camera to shoot in a time-division manner according to the plurality of shooting parameters to obtain a plurality of second images, the second images corresponding one-to-one with the target areas.
  13. The image processing method according to claim 1, wherein there are a plurality of target areas, the shooting parameters correspond one-to-one with the target areas, there are a plurality of second cameras, and the controlling the second camera to acquire a second image according to the shooting parameters comprises:
    assigning the shooting parameters to the second cameras; and
    controlling the second cameras to which the shooting parameters are assigned to acquire the second images according to the corresponding shooting parameters, the second images corresponding one-to-one with the target areas.
  14. The image processing method according to claim 1, wherein the fusing the first image and the second image to obtain a fused image comprises:
    scaling the first image to a first size and scaling the second image to a second size, such that a size of a portion of the scaled first image corresponding to the target area is the same as the second size; and
    replacing the portion of the scaled first image corresponding to the target area with the second image.
  15. The image processing method according to claim 1, wherein the fusing the first image and the second image to obtain a fused image comprises:
    enlarging the first image such that a size of a portion of the enlarged first image corresponding to the target area equals a size of the second image; and
    replacing the portion of the enlarged first image corresponding to the target area with the second image.
  16. The image processing method according to claim 1, wherein the fusing the first image and the second image to obtain a fused image comprises:
    shrinking the second image such that a size of the shrunken second image equals a size of a portion of the first image corresponding to the target area; and
    replacing the portion of the first image corresponding to the target area with the shrunken second image.
  17. The image processing method according to claim 1, further comprising:
    determining a proportion of the portion corresponding to the target area in the first image according to a focal-length ratio of the first camera and the second camera.
  18. The image processing method according to claim 1, wherein the first camera comprises a first visible-light camera and a first infrared camera, the second camera comprises a second visible-light camera and a second infrared camera, and the obtaining a first image captured by the first camera and identifying a target area comprises:
    obtaining a first visible-light image captured by the first visible-light camera and a first infrared image captured by the first infrared camera;
    the controlling the second camera to acquire a second image according to the shooting parameters comprises:
    controlling, according to the shooting parameters, the second visible-light camera to acquire a second visible-light image and the second infrared camera to acquire a second infrared image;
    the image processing method further comprises:
    determining fused temperature information according to the first infrared image and the second infrared image;
    and the fusing the first image and the second image to obtain a fused image comprises:
    fusing the fused temperature information, the first visible-light image, and the second visible-light image to obtain the fused image.
  19. The image processing method according to claim 18, wherein the determining fused temperature information according to the first infrared image and the second infrared image comprises:
    determining first temperature information according to the first infrared image;
    determining second temperature information according to the second infrared image; and
    determining the fused temperature information according to the first temperature information and the second temperature information.
  20. An image acquisition apparatus, comprising a first camera, a second camera, and a processor, a focal length of the first camera being smaller than a focal length of the second camera, the processor being configured to:
    obtain a first image captured by the first camera and identify a target area;
    determine shooting parameters of the second camera according to a position of the target area in the first image;
    control the second camera to acquire a second image according to the shooting parameters; and
    fuse the first image and the second image to obtain a fused image.
  21. The image acquisition apparatus according to claim 20, wherein the first camera comprises a first infrared camera, and the processor is further configured to:
    obtain a first infrared image captured by the first infrared camera; and
    identify the target area in the first infrared image based on pixel values and/or temperature values of the first infrared image.
  22. The image acquisition apparatus according to claim 20, wherein the first camera comprises a first visible-light camera, and the processor is further configured to:
    obtain a first visible-light image captured by the first visible-light camera; and
    identify the target area in the first visible-light image based on a preset image recognition algorithm.
  23. The image acquisition apparatus according to any one of claims 20-22, wherein the shooting parameters comprise at least one of a shooting angle, a shooting focal length, and a zoom factor.
  24. The image acquisition apparatus according to claim 23, wherein the processor is further configured to:
    establish a coordinate system based on the first image;
    determine position coordinates of the target area in the first image; and
    determine, based on a preset mapping table, the shooting angle corresponding to the position coordinates.
  25. The image acquisition apparatus according to claim 24, wherein the processor is further configured to:
    control the first camera to photograph a calibration board to obtain a first calibration image, the calibration board comprising a plurality of feature regions, the first calibration image comprising calibration regions corresponding to the feature regions, and the calibration board covering a field of view of the first camera;
    control an optical axis of the second camera to align with the centers of the feature regions in turn, obtaining rotation angles corresponding to the feature regions; and
    build the mapping table based on the calibration regions corresponding to the feature regions and the rotation angles corresponding to the feature regions.
  26. The image acquisition apparatus according to claim 25, wherein the processor is further configured to:
    determine a corresponding target calibration region according to center position coordinates of the target area; and
    determine, based on the preset mapping table, the shooting angle corresponding to the target calibration region.
  27. The image acquisition apparatus according to any one of claims 24-26, wherein the shooting angle comprises a pitch angle and a yaw angle.
  28. The image acquisition apparatus according to claim 20, wherein the second camera is a rotating camera.
  29. The image acquisition apparatus according to claim 23, wherein the second camera is provided on a gimbal, and the gimbal is configured to control the shooting angle of the second camera.
  30. The image acquisition apparatus according to claim 23, wherein the processor is further configured to:
    control the second camera to capture a test image according to the shooting parameters;
    compute an offset between the test image and an image of the target area; and
    adjust the shooting angle according to the offset, and control the second camera to acquire the second image at the adjusted shooting angle.
  31. The image acquisition apparatus according to claim 20, wherein there are a plurality of target areas, the shooting parameters correspond one-to-one with the target areas, and the processor is further configured to:
    control the second camera to shoot in a time-division manner according to the plurality of shooting parameters to obtain a plurality of second images, the second images corresponding one-to-one with the target areas.
  32. The image acquisition apparatus according to claim 20, wherein there are a plurality of target areas, the shooting parameters correspond one-to-one with the target areas, there are a plurality of second cameras, and the processor is further configured to:
    assign the shooting parameters to the second cameras; and
    control the second cameras to which the shooting parameters are assigned to acquire the second images according to the corresponding shooting parameters, the second images corresponding one-to-one with the target areas.
  33. The image acquisition apparatus according to claim 20, wherein the processor is further configured to:
    scale the first image to a first size and the second image to a second size, such that a size of a portion of the scaled first image corresponding to the target area is the same as the second size; and
    replace the portion of the scaled first image corresponding to the target area with the second image.
  34. The image acquisition apparatus according to claim 20, wherein the processor is further configured to:
    enlarge the first image such that a size of a portion of the enlarged first image corresponding to the target area equals a size of the second image; and
    replace the portion of the enlarged first image corresponding to the target area with the second image.
  35. The image acquisition apparatus according to claim 20, wherein the processor is further configured to:
    shrink the second image such that a size of the shrunken second image equals a size of a portion of the first image corresponding to the target area; and
    replace the portion of the first image corresponding to the target area with the shrunken second image.
  36. The image acquisition apparatus according to claim 20, wherein the processor is further configured to determine a proportion of the portion corresponding to the target area in the first image according to a focal-length ratio of the first camera and the second camera.
  37. The image acquisition apparatus according to claim 20, wherein the first camera comprises a first visible-light camera and a first infrared camera, the second camera comprises a second visible-light camera and a second infrared camera, and the processor is further configured to: obtain a first visible-light image captured by the first visible-light camera and a first infrared image captured by the first infrared camera; control, according to the shooting parameters, the second visible-light camera to acquire a second visible-light image and the second infrared camera to acquire a second infrared image; determine fused temperature information according to the first infrared image and the second infrared image; and fuse the fused temperature information, the first visible-light image, and the second visible-light image to obtain the fused image.
  38. The image acquisition apparatus according to claim 37, wherein the processor is further configured to:
    determine first temperature information according to the first infrared image;
    determine second temperature information according to the second infrared image; and
    determine the fused temperature information according to the first temperature information and the second temperature information.
  39. An unmanned aerial vehicle, comprising a body, an image acquisition apparatus, and a processor, the image acquisition apparatus being provided on the body, the image acquisition apparatus comprising a first camera and a second camera, a focal length of the first camera being smaller than a focal length of the second camera, the processor being configured to:
    obtain a first image captured by the first camera and identify a target area;
    determine shooting parameters of the second camera according to a position of the target area in the first image;
    control the second camera to acquire a second image according to the shooting parameters; and
    fuse the first image and the second image to obtain a fused image.
  40. The unmanned aerial vehicle according to claim 39, wherein the first camera comprises a first infrared camera, and the processor is further configured to:
    obtain a first infrared image captured by the first infrared camera; and
    identify the target area in the first infrared image based on pixel values and/or temperature values of the first infrared image.
  41. The unmanned aerial vehicle according to claim 39, wherein the first camera comprises a first visible-light camera, and the processor is further configured to:
    obtain a first visible-light image captured by the first visible-light camera; and
    identify the target area in the first visible-light image based on a preset image recognition algorithm.
  42. The unmanned aerial vehicle according to any one of claims 39-41, wherein the shooting parameters comprise at least one of a shooting angle, a shooting focal length, and a zoom factor.
  43. The unmanned aerial vehicle according to claim 42, wherein the processor is further configured to:
    establish a coordinate system based on the first image;
    determine position coordinates of the target area in the first image; and
    determine, based on a preset mapping table, the shooting angle corresponding to the position coordinates.
  44. The unmanned aerial vehicle according to claim 43, wherein the processor is further configured to:
    control the first camera to photograph a calibration board to obtain a first calibration image, the calibration board comprising a plurality of feature regions, the first calibration image comprising calibration regions corresponding to the feature regions, and the calibration board covering a field of view of the first camera;
    control an optical axis of the second camera to align with the centers of the feature regions in turn, obtaining rotation angles corresponding to the feature regions; and
    build the mapping table based on the calibration regions corresponding to the feature regions and the rotation angles corresponding to the feature regions.
  45. The unmanned aerial vehicle according to claim 44, wherein the processor is further configured to:
    determine a corresponding target calibration region according to center position coordinates of the target area; and
    determine, based on the preset mapping table, the shooting angle corresponding to the target calibration region.
  46. The unmanned aerial vehicle according to any one of claims 43-45, wherein the shooting angle comprises a pitch angle and a yaw angle.
  47. The unmanned aerial vehicle according to claim 39, wherein the second camera is a rotating camera.
  48. The unmanned aerial vehicle according to claim 42, further comprising a gimbal, wherein the second camera is provided on the gimbal, and the gimbal is configured to control the shooting angle of the second camera.
  49. The unmanned aerial vehicle according to claim 42, wherein the processor is further configured to:
    control the second camera to capture a test image according to the shooting parameters;
    compute an offset between the test image and an image of the target area; and
    adjust the shooting angle according to the offset, and control the second camera to acquire the second image at the adjusted shooting angle.
  50. The unmanned aerial vehicle according to claim 39, wherein there are a plurality of target areas, the shooting parameters correspond one-to-one with the target areas, and the processor is further configured to:
    control the second camera to shoot in a time-division manner according to the plurality of shooting parameters to obtain a plurality of second images, the second images corresponding one-to-one with the target areas.
  51. The unmanned aerial vehicle according to claim 39, wherein there are a plurality of target areas, the shooting parameters correspond one-to-one with the target areas, there are a plurality of second cameras, and the processor is further configured to:
    assign the shooting parameters to the second cameras; and
    control the second cameras to which the shooting parameters are assigned to acquire the second images according to the corresponding shooting parameters, the second images corresponding one-to-one with the target areas.
  52. The unmanned aerial vehicle according to claim 39, wherein the processor is further configured to:
    scale the first image to a first size and the second image to a second size, such that a size of a portion of the scaled first image corresponding to the target area is the same as the second size; and
    replace the portion of the scaled first image corresponding to the target area with the second image.
  53. The unmanned aerial vehicle according to claim 39, wherein the processor is further configured to:
    enlarge the first image such that a size of a portion of the enlarged first image corresponding to the target area equals a size of the second image; and
    replace the portion of the enlarged first image corresponding to the target area with the second image.
  54. The unmanned aerial vehicle according to claim 39, wherein the processor is further configured to:
    shrink the second image such that a size of the shrunken second image equals a size of a portion of the first image corresponding to the target area; and
    replace the portion of the first image corresponding to the target area with the shrunken second image.
  55. The unmanned aerial vehicle according to claim 39, wherein the processor is further configured to determine a proportion of the portion corresponding to the target area in the first image according to a focal-length ratio of the first camera and the second camera.
  56. The unmanned aerial vehicle according to claim 39, wherein the first camera comprises a first visible-light camera and a first infrared camera, the second camera comprises a second visible-light camera and a second infrared camera, and the processor is further configured to: obtain a first visible-light image captured by the first visible-light camera and a first infrared image captured by the first infrared camera; control, according to the shooting parameters, the second visible-light camera to acquire a second visible-light image and the second infrared camera to acquire a second infrared image; determine fused temperature information according to the first infrared image and the second infrared image; and fuse the fused temperature information, the first visible-light image, and the second visible-light image to obtain the fused image.
  57. The unmanned aerial vehicle according to claim 56, wherein the processor is further configured to:
    determine first temperature information according to the first infrared image;
    determine second temperature information according to the second infrared image; and
    determine the fused temperature information according to the first temperature information and the second temperature information.
  58. An unmanned aerial vehicle system, comprising an unmanned aerial vehicle, an image acquisition apparatus, and a processor, the image acquisition apparatus comprising a first camera and a second camera, a focal length of the first camera being smaller than a focal length of the second camera, the processor being configured to execute the image processing method according to any one of claims 1-19.
  59. The unmanned aerial vehicle system according to claim 58, further comprising a remote controller, the processor being provided on the unmanned aerial vehicle and/or the remote controller.
  60. A computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to execute the image processing method according to any one of claims 1 to 19.
PCT/CN2020/099408 2020-06-30 2020-06-30 Image processing method, image acquisition apparatus, unmanned aerial vehicle, unmanned aerial vehicle system, and storage medium WO2022000300A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/099408 WO2022000300A1 (zh) 2020-06-30 2020-06-30 Image processing method, image acquisition apparatus, unmanned aerial vehicle, unmanned aerial vehicle system, and storage medium
CN202080005578.XA CN112840374A (zh) 2020-06-30 2020-06-30 Image processing method, image acquisition apparatus, unmanned aerial vehicle, unmanned aerial vehicle system, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/099408 WO2022000300A1 (zh) 2020-06-30 2020-06-30 Image processing method, image acquisition apparatus, unmanned aerial vehicle, unmanned aerial vehicle system, and storage medium

Publications (1)

Publication Number Publication Date
WO2022000300A1 true WO2022000300A1 (zh) 2022-01-06

Family

ID=75926591

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099408 WO2022000300A1 (zh) 2020-06-30 2020-06-30 Image processing method, image acquisition apparatus, unmanned aerial vehicle, unmanned aerial vehicle system, and storage medium

Country Status (2)

Country Link
CN (1) CN112840374A (zh)
WO (1) WO2022000300A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177884A (zh) * 2021-05-27 2021-07-27 江苏北方湖光光电有限公司 Working method of a night-vision combined variable-magnification image fusion system
CN113409405A (zh) * 2021-07-19 2021-09-17 北京百度网讯科技有限公司 Method, apparatus, device, and storage medium for evaluating a camera calibration position
CN113688824B (zh) * 2021-09-10 2024-02-27 福建汇川物联网技术科技股份有限公司 Information collection method and apparatus for construction nodes, and storage medium
CN115002373A (zh) * 2022-04-27 2022-09-02 西安应用光学研究所 Composite display method for large and small fields of view for an airborne infrared search and navigation pod

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090046157A1 (en) * 2007-08-13 2009-02-19 Andrew Cilia Combined wide-angle/zoom camera for license plate identification
CN111345029A (zh) * 2019-05-30 2020-06-26 深圳市大疆创新科技有限公司 Target tracking method and apparatus, movable platform, and storage medium
CN110719444A (zh) * 2019-11-07 2020-01-21 中国人民解放军国防科技大学 Omnidirectional monitoring and intelligent camera method and system based on multi-sensor fusion
CN111147755A (zh) * 2020-01-02 2020-05-12 普联技术有限公司 Zoom processing method and apparatus for dual cameras, and terminal device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627252A (zh) * 2022-02-10 2022-06-14 北京大学深圳研究生院 Unmanned aerial vehicle for obtaining surface temperature distribution and method for obtaining a surface temperature distribution map
CN114627252B (zh) 2022-02-10 2024-05-14 北京大学深圳研究生院 Unmanned aerial vehicle for obtaining surface temperature distribution and method for obtaining a surface temperature distribution map
CN114827466A (zh) * 2022-04-20 2022-07-29 武汉三江中电科技有限责任公司 Human-eye-imitating device image acquisition apparatus and image acquisition method
CN114827466B (zh) 2022-04-20 2023-07-04 武汉三江中电科技有限责任公司 Human-eye-imitating device image acquisition apparatus and image acquisition method
CN114967756A (zh) 2022-07-07 2022-08-30 华能盐城大丰新能源发电有限责任公司 Auxiliary landing method, system, apparatus, and storage medium for offshore wind turbine inspection unmanned aerial vehicles
CN114967756B (zh) 2022-07-07 2024-05-24 华能盐城大丰新能源发电有限责任公司 Auxiliary landing method, system, apparatus, and storage medium for offshore wind turbine inspection unmanned aerial vehicles
CN117649613A (zh) * 2024-01-30 2024-03-05 之江实验室 Optical remote sensing image optimization method and apparatus, storage medium, and electronic device
CN117649613B (zh) 2024-01-30 2024-04-26 之江实验室 Optical remote sensing image optimization method and apparatus, storage medium, and electronic device

Also Published As

Publication number Publication date
CN112840374A (zh) 2021-05-25

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20942888

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20942888

Country of ref document: EP

Kind code of ref document: A1