CN112907704B - Image fusion method, computer equipment and device - Google Patents


Info

Publication number
CN112907704B
CN112907704B
Authority
CN
China
Prior art keywords
image
fusion
fusion weight
pixel offset
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110158628.4A
Other languages
Chinese (zh)
Other versions
CN112907704A (en)
Inventor
瞿二平
况璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110158628.4A priority Critical patent/CN112907704B/en
Publication of CN112907704A publication Critical patent/CN112907704A/en
Application granted granted Critical
Publication of CN112907704B publication Critical patent/CN112907704B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image fusion method comprising the following steps: acquiring the pixel offset difference of a target object in a first image and a second image; determining a fusion weight of the first image based on the pixel offset difference; and fusing the first image and the second image into one image based on the fusion weight. By this method, the ghosting problem in image fusion processing can be eliminated.

Description

Image fusion method, computer equipment and device
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image fusion method, a computer device, and an apparatus.
Background
A dual-spectrum camera is equipped with mutually independent infrared and visible-light imaging assemblies. The infrared imaging assembly and the visible-light imaging assembly acquire image data separately, and a processor then performs fusion calculation on the infrared image and the visible-light image to obtain a fused image. By the optical perspective geometry of the lenses, a certain parallax exists between the content imaged by two imaging assemblies that do not share the same optical center.
The inventors of the present application have found, in the course of long-term development, that the degree of parallax varies with distance, eventually producing visually unacceptable flaws in the fused image, such as ghosting and misalignment breaks in continuous lines.
Disclosure of Invention
The main technical problem solved by the present application is to provide an image fusion method, a computer device and an apparatus that can eliminate the ghosting problem in image fusion processing.
In order to solve the above technical problem, one technical solution adopted by the present application is as follows: an image fusion method is provided, including: acquiring the pixel offset difference of a target object in a first image and a second image; determining a fusion weight of the first image based on the pixel offset difference; and fusing the first image and the second image into one image based on the fusion weight.
In order to solve the above technical problem, another technical solution adopted by the present application is as follows: a computer device is provided, comprising a processor and an image pickup apparatus coupled to each other, the image pickup apparatus being configured to capture images, and the processor being configured to execute instructions to implement the image fusion method described above.
In order to solve the above technical problem, another technical solution adopted by the present application is as follows: an apparatus with a storage function is provided, storing program data that can be read by a computer and executed by a processor to implement the image fusion method described above.
The beneficial effects of the present application are: unlike the prior art, the fusion weight of one image is adjusted according to the pixel offset difference of the target object between the two images. When the pixel offset difference is large, the fusion weight of that image is reduced so that the fused image contains essentially only the image data of the other image, thereby avoiding ghosting of the target object. Therefore, even when the distance between the target object and the image pickup device changes, so that the pixel offset difference of the target object between the two images changes, the target object in the fused image remains clear and no ghosting occurs.
Drawings
FIG. 1 is a schematic illustration of imaging an object at the same distance from different locations;
FIG. 2 is a schematic illustration of imaging objects at different distances;
FIG. 3 is a flow chart of an image fusion method according to an embodiment of the present application;
FIG. 4 is a flow chart of an image fusion method according to another embodiment of the present application;
FIG. 5 is a schematic diagram of an image fusion system according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a device with a storage function according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and effects of the present application clearer and more specific, the present application will be further described in detail below with reference to the accompanying drawings and examples.
According to the image fusion method disclosed in the present application, the fusion weights of two images are determined according to the pixel offset difference of a target object between the two images, and fusion processing is performed based on these fusion weights to obtain a clear image of the target object. The embodiments of the present application can be applied to image fusion processing in any image pickup apparatus that includes at least two image pickup modules, such as a dual-spectrum image pickup apparatus, an image pickup apparatus with two visible-light image pickup modules, or an image pickup apparatus with multiple image pickup modules. The application scenarios described in the embodiments of the present application are intended to describe the technical solutions of the embodiments more clearly and do not constitute a limitation on the technical solutions provided in the embodiments. Those skilled in the art will appreciate that the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems in other application scenarios without creative effort.
The dual-spectrum camera is initialized and calibrated before use, and subsequent image fusion is performed using the calibrated parameters. The calibration process adjusts a number of fusion parameters, for example an offset of the target object in at least one of the images, so that the fused image of an object at the standard distance is free of ghosting. Referring to FIG. 1 and FIG. 2, FIG. 1 is a schematic illustration of imaging objects at the same distance from different locations; FIG. 2 is a schematic illustration of imaging objects at different distances. In FIG. 1, object a and object b are at the same vertical distance from the two camera assemblies 110 and 120. The offset vector on the first sensing element 112 between the line connecting object a to the first lens 111 and the line connecting object b to the first lens 111 is L1. The offset vector on the second sensing element 122 between the line connecting object a to the second lens 121 and the line connecting object b to the second lens 121 is L2. L1 and L2 are equal in length and direction, which can be understood as follows: moving from the position of object a to the position of object b, the distance and direction of the change of the imaging position on the first sensing element 112 are the same as the distance and direction of the change of the imaging position on the second sensing element 122. That is, for objects at the same distance but at different positions within the effective viewing-angle range, the pixel offsets in the images captured by the two camera assemblies are the same. However, when target objects are at different distances from the camera, the pixel offset difference of the target object between the two images will differ, so the target object produces ghosting in the fused image. As shown in FIG. 2, object c and object d are at different vertical distances from the two camera assemblies 110 and 120. The offset vector on the first sensing element 112 between the line connecting object c to the first lens 111 and the line connecting object d to the first lens 111 is L3. The offset vector on the second sensing element 122 between the line connecting object c to the second lens 121 and the line connecting object d to the second lens 121 is L4. L3 and L4 differ in length and direction, which can be understood as follows: moving from the position of object c to the position of object d, the distance and direction of the change of the imaging position on the first sensing element 112 differ from the distance and direction of the change of the imaging position on the second sensing element 122. This also illustrates that when the target object is at different distances from the camera, the pixel offset difference of the target object between the two images will differ.
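This distance dependence can be made explicit with the standard two-camera disparity relation under a pinhole model (the same relation that formula (1) below expresses in pixel units; the derivation is added here only for clarity and is not part of the original text). For an object at distance D from two assemblies separated by baseline sLen and sharing focal length f:

$$ \text{offset on sensor} = \frac{sLen \cdot f}{D}, \qquad pClip = \frac{sLen \cdot f}{D \cdot pixel} $$

Because pClip falls as D grows, a single calibrated offset can only cancel the parallax at one standard distance; objects nearer or farther than that distance retain a residual pixel offset difference and therefore ghost.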
In order to solve the above problems, the present application discloses an image fusion method, which is described in detail below. Referring to FIG. 3, FIG. 3 is a flow chart of an image fusion method according to an embodiment of the present application. It should be noted that, provided substantially the same results are obtained, this embodiment is not limited to the flow sequence shown in FIG. 3. As shown in FIG. 3, the method includes:
step 310: a pixel offset difference of the target object in the first image and the second image is acquired.
In an embodiment, the first image and the second image may respectively refer to an image captured by the visible-light imaging assembly and an image captured by the infrared imaging assembly at the same moment. An image captured by the visible-light imaging assembly comprises Y-component image data, which records the brightness of the image, and U-component and V-component image data, which record the color of the image. The infrared imaging assembly can only capture black-and-white images. In the fusion processing, only the Y-component image data of the image captured by the visible-light imaging assembly needs to be fused with the image captured by the infrared imaging assembly. Thus, further, the first image may refer to the Y-component image data of the image captured by the visible-light imaging assembly. In another embodiment, the first image and the second image may also be images captured by two visible-light imaging assemblies.
The target object may refer to any movable object such as an automobile, a person, an animal, etc. Embodiments of the present application may be applied to a variety of target object recognition or monitoring systems, including but not limited to: a vehicle recognition system, a portrait recognition system or an animal recognition system, etc.
The pixel offset difference may refer to an offset difference of position coordinates of the target object on the first image and the second image.
Step 330: based on the pixel offset differences, a fusion weight for the first image is determined.
The fusion weight of the first image may take a plurality of different values. The fusion weight of the first image is determined from the pixel offset difference of the target object between the first image and the second image. Specifically, when the pixel offset difference is small, the fusion weight of the first image may be a large value (e.g., 1); when the pixel offset difference is large, the fusion weight of the first image may be a small value (e.g., 0).
The fusion weight of the second image may be a fixed value, for example 1. The infrared imaging assembly can still capture clear images in a low-light environment, so during fusion the fusion weight of the image data captured by the infrared imaging assembly is set to a fixed value, which ensures that the fused image displays the target object clearly.
Step 350: the first image and the second image are fused into one image based on the fusion weight.
In an embodiment, the first image may be preprocessed according to the fusion weight, and the first image and the second image are then fused according to a preset fusion processing method to obtain the fused image. The fusion processing method may be any image fusion processing method, including but not limited to a co-located-pixel image fusion method, a logic filter method, a mathematical morphology method, an image algebra method, a high-pass filtering method, or a pyramid decomposition method.
In the embodiment of the present application, the fusion weight of one image is adjusted according to the pixel offset difference of the target object between the two images. When the pixel offset difference is large, the fusion weight of that image is reduced so that the fused image contains essentially only the image data of the other image, thereby avoiding ghosting of the target object. Therefore, even when the distance between the target object and the image pickup device changes, so that the pixel offset difference of the target object between the two images changes, the target object in the fused image remains clear and no ghosting occurs.
Referring to FIG. 4, FIG. 4 is a flow chart of an image fusion method according to another embodiment of the present application. It should be noted that, provided substantially the same results are obtained, this embodiment is not limited to the flow sequence shown in FIG. 4. As shown in FIG. 4, the method includes:
step 410: an image is acquired.
A group of images captured by an image pickup apparatus comprising at least two image pickup modules is acquired. The group of images captured at the same moment comprises at least two images, namely a first image and a second image. In one embodiment, the first image is the Y-component image of a visible-light image and the second image is an infrared image.
Step 420: a ratio of the data value of the target object at the first image to the data value at the second image is determined.
The data value of an image refers to a value capable of characterizing the pixel values of the target object in that image. For example, it may be the maximum of the pixel values of the target object, the average of the pixel values of the target object, or the sum of the pixel values of the target object.
When comparing the data value of the first image with the data value of the second image, the same type of data value is used for both images.
Step 430: and judging whether the ratio is larger than a third threshold value.
In one embodiment, if so, step 440 is performed; if not, step 450 is performed directly.
The third threshold may be a preset value, or may be a value determined according to the actual situation. The third threshold is a relatively small value, for example 0.1. When the ratio of the data value of the first image to the data value of the second image is less than or equal to the third threshold, the data value of the first image can be considered very small relative to the data value of the second image. Thus, even if the fusion weight of the first image is set to a high value, the fused image exhibits no ghosting. Therefore, when the ratio of the data value of the first image to the data value of the second image is less than or equal to the third threshold, the step of determining the fusion weight of the first image may be performed directly.
Further, when the data value of the first image is sufficiently small, the fusion weight of the first image may be set to 1. In this case, the image data of the fused image is enriched, and no ghosting occurs.
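As a minimal sketch of the check in steps 420 and 430, the snippet below takes the data value to be the mean pixel value inside the target object's rectangular box (the description also allows the maximum or the sum); the function names, the box convention and the example threshold of 0.1 are illustrative assumptions, not taken verbatim from the patent.

```python
import numpy as np

THIRD_THRESHOLD = 0.1  # example value mentioned in the description


def region_mean(image: np.ndarray, box) -> float:
    """Data value of the target object: here, the mean pixel value in its box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return float(image[y0:y1, x0:x1].mean())


def first_image_is_negligible(first: np.ndarray, second: np.ndarray,
                              box_first, box_second) -> bool:
    """Step 430: True when the ratio of the first image's data value to the second's is at or
    below the third threshold, so the offset check (step 440) can be skipped and the first
    image's fusion weight set directly (e.g. to the fourth fusion weight, 1)."""
    ratio = region_mean(first, box_first) / max(region_mean(second, box_second), 1e-6)
    return ratio <= THIRD_THRESHOLD
```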
Step 440: a pixel offset difference of the target object in the first image and the second image is acquired.
In one embodiment, the method of obtaining the pixel offset difference includes: determining the distance of the target object based on the positions of the target object in the first image and the second image respectively; acquiring the focal length of the first image or the second image; and determining the pixel offset difference based on the ratio of the focal length of the first image or the second image to the distance of the target object. Specifically, the target object in the first image and the second image is first identified, and the coordinate position of the rectangular box of the target object is determined. The first image and the second image may be analyzed to obtain attributes of a plurality of objects in the images, and the target object is determined according to the attributes. The attributes of an object may include the sex of a person, the height of a person, the expression of a person, a license plate or vehicle model, etc. The distance D between the target object and the image pickup device is then determined from the position of the target object in the first image and its position in the second image. Further, the focal length f of the first image or the second image needs to be acquired. Since the focal lengths f of the first image and the second image are the same, the focal length f of either image may be used when calculating the pixel offset difference. The pixel offset difference is calculated according to formula (1):
pClip=sLen*f/(D*pixel) (1)
where pClip represents the pixel offset difference, sLen represents the distance between the two camera assemblies, f represents the focal length, and pixel represents the pixel size in the camera assemblies.
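A minimal sketch of formula (1); the function and parameter names (compute_pixel_offset, baseline_m, and so on) and the numbers in the usage line are illustrative assumptions, not values from the patent.

```python
def compute_pixel_offset(baseline_m: float, focal_m: float,
                         distance_m: float, pixel_size_m: float) -> float:
    """Formula (1): pClip = sLen * f / (D * pixel).

    baseline_m   -- sLen, distance between the two camera assemblies
    focal_m      -- f, focal length shared by the two assemblies
    distance_m   -- D, distance of the target object from the image pickup device
    pixel_size_m -- physical size of one sensor pixel (all lengths in the same unit)
    """
    if distance_m <= 0:
        raise ValueError("target distance must be positive")
    return baseline_m * focal_m / (distance_m * pixel_size_m)


# Example: 5 cm baseline, 8 mm focal length, 3.45 um pixels, object 4 m away -> ~29 px.
p_clip = compute_pixel_offset(0.05, 0.008, 4.0, 3.45e-6)
```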
Step 450: a fusion weight of the first image is determined.
The fusion weight of the first image can take a plurality of different values and may be determined according to different conditions. The fusion weight of the second image may be a fixed value, for example 1.
In an embodiment, the fusion weight of the first image is set to the fourth fusion weight when the ratio of the data value of the first image to the data value of the second image is less than or equal to the third threshold. The fourth fusion weight may be a larger value, for example, may be 1.
In an embodiment, when the pixel offset difference is greater than the first threshold and less than the second threshold, the fusion weight of the first image is the first fusion weight. That is, when the pixel offset difference lies in the interval between the first threshold and the second threshold, the fusion weight of the first image may be the first fusion weight. The first threshold and the second threshold may be preset, or may be set according to the specific situation of the images. The first threshold and the second threshold should satisfy the following: when the pixel offset difference is smaller than the first threshold, the offset of the target object between the two images is hard to observe with the naked eye, so ghosting is unlikely to form; when the pixel offset difference is greater than the second threshold, the difference in the target object between the two images is too large and ghosting becomes obvious. For example, the first threshold may be 1; when the pixel offset difference is less than 1, the offset of the target object between the two images is not easily observed. For another example, the second threshold may be 5; when the pixel offset difference is greater than 5, the offset of the target object between the two images is quite noticeable.
In one embodiment, the value of the first fusion weight decreases as the pixel offset difference increases. The first fusion weight may be a value determined from the pixel offset difference, decreasing as the pixel offset difference increases. When the pixel offset difference lies in the interval between the first threshold and the second threshold, the fusion weight of the first image can thus be adjusted so that the fused image does not exhibit ghosting.
Further: the first fusion weight is the ratio of the difference between the second threshold and the pixel offset difference to the difference between the second threshold and the first threshold. The relationship between the first fusion weight and the pixel offset difference is as shown in formula (2):
ratio=(b-pClip)/(b-a) (2)
where ratio is the first fusion weight, a is the first threshold, b is the second threshold, and pClip is the pixel offset difference. Wherein a < pClip < b.
For example, with the first threshold being 1 and the second threshold being 5, the relationship between the first fusion weight and the pixel offset difference is as shown in equation (3):
ratio=(5-pClip)/4 (3)
In an embodiment, when the pixel offset difference is less than or equal to the first threshold, the fusion weight of the first image is the second fusion weight; and when the pixel offset difference is greater than or equal to the second threshold, the fusion weight of the first image is the third fusion weight.
When the pixel offset difference is less than or equal to the first threshold, the offset between the first image and the second image is insignificant, so relatively close fusion weights may be used for the first image and the second image. When the pixel offset difference is greater than or equal to the second threshold, the offset between the two images is too large, and the fusion weight of one of the images needs to be reduced to avoid ghosting.
In one embodiment, the second fusion weight is greater than the first fusion weight; the third fusion weight is less than the first fusion weight. Specifically, the second fusion weight may be a value of 1 or close to 1. The third fusion weight may be 0 or a value near 0. The fusion weight of the second image may be a fixed value of 1 or close to 1.
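Putting formula (2) and the three threshold cases together, the fusion weight of the first image can be sketched as a single piecewise function (the function name is illustrative; the default thresholds 1 and 5 and the weight values 1.0 and 0.0 are the examples given in the description):

```python
def first_image_weight(p_clip: float,
                       first_threshold: float = 1.0,
                       second_threshold: float = 5.0) -> float:
    """Fusion weight of the first image as a function of the pixel offset difference pClip.

    pClip <= first threshold  -> second fusion weight (offset barely visible), here 1.0
    pClip >= second threshold -> third fusion weight (offset too large), here 0.0
    otherwise                 -> first fusion weight, formula (2): (b - pClip) / (b - a),
                                 which decreases as pClip increases
    """
    if p_clip <= first_threshold:
        return 1.0
    if p_clip >= second_threshold:
        return 0.0
    return (second_threshold - p_clip) / (second_threshold - first_threshold)
```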
Step 460: the first image and the second image are fused into one image based on the fusion weight.
In one embodiment, the data value of the first image is multiplied by the fusion weight to obtain weighted data of the first image; and carrying out fusion processing on the weighted data of the first image and the data of the second image to obtain an image.
Any suitable image fusion processing method may be used in the fusion processing of the first image and the second image, and this is not limited here. For example, a co-located-pixel fusion algorithm may be used. Taking a single pixel at position p as an example, if the pixel data at position p in the first image is vis and the pixel data at position p in the second image is nir, the fused pixel data at position p is fusion = max(ratio × vis, nir). Alternatively, a different fusion formula may be adopted, in which the fused Y component of the pixel at position p is fusion_H = ratio × vis × alpha + nir × (1 - alpha), where alpha is a preset fusion parameter.
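A hedged sketch of step 460 for the two co-located-pixel rules quoted above; vis, nir, ratio and alpha follow the notation in the text, while the function name and the 8-bit clipping are illustrative assumptions.

```python
import numpy as np


def fuse(vis_y: np.ndarray, nir: np.ndarray, ratio: float, alpha=None) -> np.ndarray:
    """Fuse the Y component of the visible image (first image) with the infrared image.

    alpha is None  -> fusion   = max(ratio * vis, nir)
    alpha is given -> fusion_H = ratio * vis * alpha + nir * (1 - alpha)
    """
    weighted_vis = ratio * vis_y.astype(np.float32)    # weighted data of the first image
    if alpha is None:
        fused = np.maximum(weighted_vis, nir.astype(np.float32))
    else:
        fused = weighted_vis * alpha + nir.astype(np.float32) * (1.0 - alpha)
    return np.clip(fused, 0, 255).astype(np.uint8)      # assumes 8-bit image data
```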
In the embodiment of the present application, the fusion weight of the first image is set to a variable value: when the pixel offset difference between the first image and the second image is large, a low fusion weight is used; when the pixel offset difference between the first image and the second image is small, a higher fusion weight is used. In this way, as much image data as possible is used for image fusion while the ghosting problem after fusion processing is effectively avoided.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of an image fusion system according to an embodiment of the present application. In this embodiment, the image fusion system includes an acquisition module 510, a determination module 520, and a fusion module 530. It should be noted that the system of this embodiment may perform the steps of the above method; for details, refer to the above method description, which is not repeated here.
The acquiring module 510 is configured to acquire a pixel offset difference of the target object in the first image and the second image. The obtaining module 510 is further configured to determine a distance of the target object based on positions of the target object in the first image and the second image, respectively; acquiring a focal length of the first image or the second image; the pixel offset difference is determined based on a ratio of a focal length of the first image or the second image to a distance of the target object.
The determining module 520 is configured to determine a fusion weight of the first image based on the pixel offset difference. The determining module 520 may be further configured to determine that the fusion weight of the first image is the first fusion weight when the pixel offset difference is greater than the first threshold and less than the second threshold; when the pixel offset difference is smaller than or equal to a first threshold value, the fusion weight of the first image is a second fusion weight; and when the pixel offset difference is greater than or equal to the second threshold value, the fusion weight of the first image is the third fusion weight. Wherein the value of the first fusion weight decreases as the pixel offset difference increases. The first fusion weight is the ratio of the difference value between the second threshold value and the pixel offset difference to the difference value between the second threshold value and the first threshold value. Wherein the second fusion weight is greater than the first fusion weight; the third fusion weight is less than the first fusion weight.
The fusion module 530 is configured to fuse the first image and the second image into one image based on the fusion weight. The fusion module 530 is further configured to multiply the data value of the first image with the fusion weight to obtain weighted data of the first image; and carrying out fusion processing on the weighted data of the first image and the data of the second image to obtain an image.
The image fusion system further comprises a judging module (not shown) for determining the ratio of the data value of the target object in the first image to the data value in the second image and judging whether the ratio is greater than a third threshold; if so, the pixel offset difference of the target object in the first image and the second image is acquired.
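Reusing the helper functions from the sketches above, these modules might be composed as follows (the class name, the process method and the constructor parameters are illustrative assumptions, not part of the patent):

```python
class ImageFusionSystem:
    """Ties together the judging, acquiring, determining and fusing steps described above."""

    def __init__(self, baseline_m, focal_m, pixel_size_m,
                 first_threshold=1.0, second_threshold=5.0, third_threshold=0.1):
        self.baseline_m = baseline_m
        self.focal_m = focal_m
        self.pixel_size_m = pixel_size_m
        self.first_threshold = first_threshold
        self.second_threshold = second_threshold
        self.third_threshold = third_threshold

    def process(self, vis_y, nir, box_vis, box_nir, target_distance_m):
        # Judging module: if the first image's data value is negligible relative to the
        # second image's, use a high weight directly (fourth fusion weight).
        if region_mean(vis_y, box_vis) <= self.third_threshold * region_mean(nir, box_nir):
            ratio = 1.0
        else:
            # Acquiring module: pixel offset difference per formula (1).
            p_clip = compute_pixel_offset(self.baseline_m, self.focal_m,
                                          target_distance_m, self.pixel_size_m)
            # Determining module: fusion weight per the threshold rules and formula (2).
            ratio = first_image_weight(p_clip, self.first_threshold, self.second_threshold)
        # Fusion module: co-located-pixel fusion of the weighted first image with the second.
        return fuse(vis_y, nir, ratio)
```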
Referring to fig. 6, fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application. In this embodiment, the computer device 600 includes a processor 610 and an imaging apparatus 620 coupled to each other.
The processor 610 may also be referred to as a CPU (Central Processing Unit). The processor 610 may be an integrated circuit chip with signal processing capabilities. The processor 610 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The image pickup device 620 may be an image pickup device having at least two image pickup modules. The image pickup device 620 is used to take an image.
Computer device 600 may further include memory (not shown) for storing instructions and data needed for the operation of processor 610.
The processor 610 is configured to execute instructions to implement the image fusion method provided by any of the embodiments and any non-conflicting combinations described above.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of a device with a storage function according to an embodiment of the present application. The device 700 with a storage function of this embodiment stores program data that can be read by a computer, and the program data can be executed by a processor to implement the image fusion method provided by any embodiment of the present application or any non-conflicting combination thereof. The program data may be stored in the above device with a storage function in the form of a software product, so that a computer device (which may be a personal computer, a server, a network device, etc.) or a processor executes all or part of the steps of the methods of the embodiments of the present application. The aforementioned device 700 with a storage function includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code, or a terminal device such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing describes only embodiments of the present application and does not thereby limit the patent scope of the present application; all equivalent structures or equivalent process transformations made using the contents of the specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the patent protection scope of the present application.

Claims (9)

1. An image fusion method, comprising:
acquiring a pixel offset difference of a target object in a first image and a second image, wherein the pixel offset difference refers to the offset difference of the position coordinates of the target object on the first image and the second image;
determining a fusion weight of the first image based on the pixel offset difference;
based on the fusion weight, fusing the first image and the second image into one image;
the step of obtaining the pixel offset difference of the target object in the first image and the second image comprises the following steps: determining a distance of the target object based on the positions of the target object in the first image and the second image respectively; acquiring a focal length of the first image or the second image; the pixel offset difference is determined based on a ratio of a focal length of the first image or the second image to a distance of the target object.
2. The image fusion method of claim 1, wherein the determining the fusion weight of the first image based on the pixel offset difference comprises:
when the pixel offset difference is larger than a first threshold value and smaller than a second threshold value, the fusion weight of the first image is a first fusion weight;
when the pixel offset difference is smaller than or equal to a first threshold value, the fusion weight of the first image is a second fusion weight;
and when the pixel offset difference is greater than or equal to a second threshold value, the fusion weight of the first image is a third fusion weight.
3. The image fusion method of claim 2, wherein the value of the first fusion weight decreases as the pixel offset difference increases.
4. The image fusion method of claim 2, wherein the first fusion weight is a ratio of a difference of the second threshold and the pixel offset difference to a difference of the second threshold and the first threshold.
5. The method of image fusion according to claim 2, wherein,
the second fusion weight is greater than the first fusion weight;
the third fusion weight is less than the first fusion weight.
6. The image fusion method of claim 1, wherein the acquiring the pixel offset difference of the target object in the first image and the second image further comprises:
determining a ratio of data values of the target object at the first image to data values at the second image,
judging whether the ratio is larger than a third threshold value;
if so, acquiring the pixel offset difference of the target object in the first image and the second image.
7. The image fusion method according to claim 1, wherein fusing the first image and the second image into one image based on the fusion weight, comprises:
multiplying the data value of the first image with the fusion weight to obtain weighted data of the first image;
and carrying out fusion processing on the weighted data of the first image and the data of the second image to obtain an image.
8. A computer device comprising a processor and an imaging means coupled to each other, the imaging means for capturing images, the processor for executing instructions to implement the image fusion method of any of claims 1-7.
9. An apparatus having a storage function, characterized in that program data is stored, which can be read by a computer, and which program data can be executed by a processor to realize the image fusion method according to any one of claims 1 to 7.
CN202110158628.4A 2021-02-04 2021-02-04 Image fusion method, computer equipment and device Active CN112907704B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110158628.4A CN112907704B (en) 2021-02-04 2021-02-04 Image fusion method, computer equipment and device


Publications (2)

Publication Number Publication Date
CN112907704A (en) 2021-06-04
CN112907704B (en) 2024-04-12

Family

ID=76122578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110158628.4A Active CN112907704B (en) 2021-02-04 2021-02-04 Image fusion method, computer equipment and device

Country Status (1)

Country Link
CN (1) CN112907704B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013065543A1 (en) * 2011-10-31 2013-05-10 富士フイルム株式会社 Disparity adjustment device and method, photography device, and play display device
TWI537875B (en) * 2015-04-08 2016-06-11 大同大學 Image fusion method and image processing apparatus
US10313584B2 (en) * 2017-01-04 2019-06-04 Texas Instruments Incorporated Rear-stitched view panorama for rear-view visualization

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USH1914H (en) * 1996-07-08 2000-11-07 The United States Of America As Represented By The Secretary Of The Army Method and system for mitigation of image distortion due to optical turbulence
CN102685369A (en) * 2012-04-23 2012-09-19 Tcl集团股份有限公司 Method for eliminating left and right eye image ghosting, ghosting eliminator and 3D player
JP2014158119A (en) * 2013-02-15 2014-08-28 Canon Inc Surveillance camera system
CN104574332A (en) * 2014-12-26 2015-04-29 北京航天控制仪器研究所 Image fusion method for airborne optoelectronic pod
CN107534735B (en) * 2016-03-09 2019-05-03 华为技术有限公司 Image processing method, device and the terminal of terminal
CN107507160A (en) * 2017-08-22 2017-12-22 努比亚技术有限公司 A kind of image interfusion method, terminal and computer-readable recording medium
CN108053386A (en) * 2017-11-27 2018-05-18 北京理工大学 For the method and device of image co-registration
CN109493282A (en) * 2018-11-21 2019-03-19 清华大学深圳研究生院 A kind of stereo-picture joining method for eliminating movement ghost image
CN112204608A (en) * 2019-08-27 2021-01-08 深圳市大疆创新科技有限公司 Image processing method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Decision Fusion of D-InSAR and Pixel Offset Tracking for Coal Mining Deformation Monitoring; Depin Qu; Remote Sensing; 2018-07-04; Vol. 10, No. 7; full text *
An automatic stitching method for UAV images based on feature points; Lu Heng; Li Yongshu; He Jing; Chen Qiang; Ren Zhiming; Geography and Geo-Information Science; 2010-09-30 (05); full text *
Research on high dynamic range image fusion algorithms for dynamic targets; Du Lin; Sun Huayan; Wang Shuai; Gao Yuxuan; Qi Yingying; Acta Optica Sinica; 2017-04-30; 37(04); full text *

Also Published As

Publication number Publication date
CN112907704A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
US8000559B2 (en) Method of correcting image distortion and apparatus for processing image using the method
US7324701B2 (en) Image noise reduction
EP2359604B1 (en) Modifying color and panchromatic channel cfa image
CN109712192B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
US9581436B2 (en) Image processing device, image capturing apparatus, and image processing method
CN110660090B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN108717530B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN109685853B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN104380342A (en) Image processing apparatus, imaging apparatus, and image processing method
CN110519585B (en) Imaging calibration method and device applied to image acquisition equipment
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109559353B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN111345029A (en) Target tracking method and device, movable platform and storage medium
US20140184853A1 (en) Image processing apparatus, image processing method, and image processing program
JP7378219B2 (en) Imaging device, image processing device, control method, and program
CN109559352B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
WO2014006514A2 (en) Image processing in a multi-channel camera
CN113744256A (en) Depth map hole filling method and device, server and readable storage medium
CN110490196A (en) Subject detection method and apparatus, electronic equipment, computer readable storage medium
CN109584311B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN113159229B (en) Image fusion method, electronic equipment and related products
CN111866369B (en) Image processing method and device
CN111161211B (en) Image detection method and device
CN112907704B (en) Image fusion method, computer equipment and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant