CN115409754B - Multi-exposure image fusion method and system based on image area validity - Google Patents

Multi-exposure image fusion method and system based on image area validity

Info

Publication number
CN115409754B
CN115409754B (application CN202211360730.3A)
Authority
CN
China
Prior art keywords
raw
image
ratio
map
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211360730.3A
Other languages
Chinese (zh)
Other versions
CN115409754A (en)
Inventor
林景洲
郭奇锋
张齐宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shenzhi Future Intelligence Co ltd
Original Assignee
Shenzhen Shenzhi Future Intelligence Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shenzhi Future Intelligence Co ltd filed Critical Shenzhen Shenzhi Future Intelligence Co ltd
Priority to CN202211360730.3A priority Critical patent/CN115409754B/en
Publication of CN115409754A publication Critical patent/CN115409754A/en
Application granted granted Critical
Publication of CN115409754B publication Critical patent/CN115409754B/en
Priority to PCT/CN2023/137047 priority patent/WO2024094222A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4023 Decimation- or insertion-based scaling, e.g. pixel or line decimation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-exposure image fusion method and system based on image area validity, relating to the technical field of image processing. First, a plurality of raw images with different exposures are obtained and normalized; the current noise level is calculated from the normalized raw images with different exposures; a validity map corresponding to each raw image is obtained based on the current noise level; a corresponding weight map is calculated based on each validity map; and each raw image is fused according to its corresponding weight map to obtain a fused image. When performing multi-exposure image fusion, the method requires little computation, offers good real-time performance and a better fusion effect, and reduces the influence of the image signal processing pipeline.

Description

Multi-exposure image fusion method and system based on image area validity
Technical Field
The invention relates to the technical field of image processing, in particular to a multi-exposure image fusion method and system based on image area validity.
Background
With the development of computer and multimedia technologies, various multimedia applications have created a broad demand for high-quality images. High-quality images provide rich information and a realistic visual experience. However, during image acquisition, because of factors such as the acquisition equipment, the acquisition environment, and noise, the image presented on the display terminal is often of low quality. How to reconstruct a high-quality image from a low-quality one has therefore long been a challenge in the field of image processing.
Multi-exposure image fusion is a technique for fusing a plurality of images with different exposures into one image. It can improve imaging quality and avoid the loss of detail in highlight or dark areas caused by a single exposure. Existing multi-exposure image fusion methods mainly include Exposure Fusion (Mertens TMO), image fusion based on guided filtering, and the like, but they still have the following defects:
Exposure Fusion (Mertens TMO): the three channels of the image must each be processed with a Laplacian operator, which consumes a large amount of computation, and color cast easily occurs.
Image fusion based on guided filtering: the color image undergoes multiple filtering operations; the process is complex, consumes a large amount of computation, and halo artifacts easily appear.
Therefore, how to overcome the above defects of the prior art and solve the problems of heavy computation and poor fusion quality in multi-exposure image fusion is an urgent problem for those skilled in the art.
Disclosure of Invention
In view of this, the invention provides a multi-exposure image fusion method and system based on image area validity; the method is simple, fast, and adapts well to different scenes.
In order to achieve the above purpose, the invention provides the following technical scheme:
a multi-exposure image fusion method based on image area validity comprises the following steps:
step 1, obtaining a plurality of raw images with different exposures, and normalizing, wherein the subsequent image processing is based on the normalized raw images;
step 2, calculating the current noise level based on a plurality of raw images with different exposures;
step 3, obtaining an effectiveness map corresponding to each raw map based on the current noise level;
step 4, calculating a corresponding weight map based on the effectiveness map corresponding to each raw map;
and 5, fusing each raw image according to the corresponding weight image to obtain a fused image.
Optionally, in step 1, the plurality of raw images with different exposures are sorted by decreasing exposure time to form a set A = {raw0, raw1, ..., rawi, ..., raw(n-1)}, where the exposure-time ratio between adjacent frames is ratio, n represents the number of raw images in set A, and n ≥ 2.
Optionally, in step 2, the method for calculating the current noise level includes:
calculating according to the scene-consistency principle, wherein the noise mean noise_u = mean(rawj - ratio*raw(j+1)) and the noise variance noise_std = std(rawj - ratio*raw(j+1)), wherein rawj and raw(j+1) are any two adjacent raw images in set A, and 0 ≤ j ≤ n-2.
The method utilizes the multi-frame exposure characteristic to evaluate the noise level, and is simpler than other noise evaluation methods.
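For illustration only, the noise-level estimate amounts to taking the mean and standard deviation of the difference between a frame and its ratio-scaled shorter neighbour; the minimal numpy sketch below assumes this reading, and the function and variable names are assumptions that do not appear in the disclosure.

```python
import numpy as np

def estimate_noise(raw_j, raw_j1, ratio):
    """Estimate the noise level from two adjacent normalized exposures.

    raw_j  : the longer-exposure frame rawj
    raw_j1 : the adjacent shorter-exposure frame raw(j+1)
    ratio  : exposure-time ratio between the two frames
    Under the scene-consistency assumption the scene content cancels in the
    difference, leaving only noise.
    """
    diff = raw_j - ratio * raw_j1
    noise_u = float(np.mean(diff))    # noise mean, noise_u
    noise_std = float(np.std(diff))   # noise spread, noise_std
    return noise_u, noise_std
```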
Optionally, in step 3, the method for obtaining the validity map corresponding to each raw image includes:
for the ith raw image rawi, calculating the validity of each point, for every frame except the shortest frame, according to the exposure-time ratio:
e_q=(ratio-1)*noise_u;
d_q=sqrt(ratio**2+1)*noise_std;
Si=rawi-ratio*raw(i+1);
Pi=(Si-(e_q-d_q))/max(delta,2*d_q), delta=1e-6;
Img_valid_i=(1-Pi);
the validity of the shortest frame is obtained by subtracting the validity of the second-shortest frame from 1.
Because the noise level is taken into account, the generated exposure-validity maps are more accurate, which makes the method better suited to fusing high-noise night scenes and gives it strong scene adaptability.
The invention evaluates validity in the raw domain to obtain the validity map; because raw data are more linear, the gradual change of image brightness is reflected more accurately, and the influence of gamma on brightness values during image signal processing is reduced.
Optionally, in step 4, for the validity map Img_valid_i corresponding to the ith raw image, the method for calculating the corresponding weight map includes:
step 4.1, performing Gaussian blur on the validity map Img_valid_i using the Laplacian pyramid method;
step 4.2, reducing the blurred validity map Img_valid_i to 1/2 of its original size;
step 4.3, repeating step 4.2 several times to form an image sequence from large to small, giving a Gaussian pyramid Img_valid_i_pyramid in which each upper layer is larger than the layer below it, and then, starting from the bottom layer of the pyramid and working up to the layer just below the top, performing the following operations:
A. enlarging the image with bilinear interpolation so that its size matches the layer above;
B. combining it with the image of the layer above to synthesize a new image;
C. replacing the original upper-layer image with the new image;
the resulting top-level image becomes the weight map w_i, which improves the smoothness of the fused image.
The method determines weights from validity, which is more accurate and requires less computation than determining weights directly from brightness-map values.
Optionally, in step 5, each raw image is fused according to the corresponding weight map to obtain a fused image, where the fusion formula is as follows:
Raw_hdr = raw0*w_0 + raw1*w_1*ratio + ... + rawi*w_i*ratio^i + ... + raw(n-1)*w_(n-1)*ratio^(n-1), where w_i represents the weight map corresponding to the ith raw image.
The invention performs image fusion in the raw domain, which retains a larger proportion of image information than fusing jpg images.
A multi-exposure image fusion system based on image region validity, comprising:
the exposure image acquisition module is used for acquiring a plurality of raw images with different exposures;
the noise level calculation module is used for calculating the current noise level based on the normalized raw images with different exposures;
the effectiveness map acquisition module is used for acquiring an effectiveness map corresponding to each raw map based on the current noise level;
the weight map acquisition module is used for calculating a corresponding weight map based on the effectiveness map corresponding to each raw map;
and the fusion module is used for fusing each raw image according to the corresponding weight image.
According to the above technical scheme, the invention discloses a multi-exposure image fusion method and system based on image area validity. Compared with the prior art, they have the following beneficial effects:
Speed: because the amount of computation is greatly reduced, the method can be deployed quickly on terminal devices and achieves good real-time performance.
Quality: the unprocessed original image data are fused in the raw domain, which matches later image-signal-processing tuning better and ensures the final image quality.
Applicability: the invention is suitable for most sensors, is highly portable, can be extended to other tasks, and can be widely applied to photography- and video-related systems, terminal devices, and the like.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a block diagram of the system of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a multi-exposure image fusion method based on image area validity, which is shown in figure 1 and comprises the following steps:
Step 1, obtaining a plurality of raw images with different exposures, normalizing them, and sorting them by decreasing exposure time to form a set A = {raw0, raw1, ..., raw(n-1)}, wherein the exposure-time ratio between adjacent frames is ratio, n represents the number of raw images in set A, and n ≥ 2.
Step 2, calculating the current noise level based on the normalized raw images with different exposures:
Calculating according to the scene-consistency principle, the noise mean noise_u = mean(rawj - ratio*raw(j+1)), and the noise variance noise_std = std(rawj - ratio*raw(j+1)), wherein rawj and raw(j+1) are any two adjacent raw images in set A, and 0 ≤ j ≤ n-2.
And 3, acquiring an effectiveness map corresponding to each raw map based on the current noise level.
Specifically, for the ith raw image rawi, the validity of each point is calculated, for every frame except the shortest frame (the raw image with the shortest exposure time), according to the exposure-time ratio:
e_q=(ratio-1)*noise_u;
d_q=sqrt(ratio**2+1)*noise_std;
Si=rawi-ratio*raw(i+1);
Pi=(Si-(e_q-d_q))/max(delta,2*d_q), delta=1e-6;
Img_valid_i=(1-Pi);
The validity of the shortest frame is obtained by subtracting the validity of the second-shortest frame from 1.
In other embodiments, the effectiveness may also be calculated by:
Si=rawi-ratio*raw(i+1);
e_q=(ratio-1)*noise_u;
Img_valid_i=(Si-e_q)*ratio;
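For illustration, the first validity formulation above (the one using Pi) maps directly onto elementwise array operations. The following is a minimal numpy sketch under the same notation; the function name and argument names are assumptions.

```python
import numpy as np

def validity_map(raw_i, raw_i1, ratio, noise_u, noise_std, delta=1e-6):
    """Per-pixel validity Img_valid_i for frame rawi (any frame except the shortest).

    raw_i  : normalized raw frame rawi
    raw_i1 : the next shorter-exposure frame raw(i+1)
    """
    e_q = (ratio - 1) * noise_u                    # expected offset of the difference
    d_q = np.sqrt(ratio ** 2 + 1) * noise_std      # propagated noise spread
    s_i = raw_i - ratio * raw_i1
    p_i = (s_i - (e_q - d_q)) / max(delta, 2 * d_q)
    return 1.0 - p_i

# The shortest frame's validity is one minus the validity of the second-shortest frame,
# e.g. img_valid_last = 1.0 - img_valid_second_last.
```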
and 4, calculating a corresponding weight map based on the effectiveness map corresponding to each raw map.
Specifically, for the validity map Img_valid_i corresponding to the ith raw image, the corresponding weight map is calculated as follows:
step 4.1, perform Gaussian blur on the validity map Img_valid_i using the Laplacian pyramid method;
step 4.2, reduce the blurred validity map Img_valid_i to 1/2 of its original size;
step 4.3, repeat step 4.2 several times to form an image sequence from large to small, with the top layer at the original size and each lower layer reduced by a fixed proportion (for example, if the first layer is 20x10 in width and height, the next layer is 10x5); this gives a Gaussian pyramid Img_valid_i_pyramid. Then, starting from the bottom layer of the pyramid and working up to the layer just below the top, perform the following operations:
A. enlarge the image with bilinear interpolation so that its size matches the layer above;
B. combine it with the image of the layer above to synthesize a new image;
C. replace the original upper-layer image with the new image;
the resulting top-level image becomes the weight map w_i.
In other embodiments, the Laplacian pyramid can be replaced by other smoothing methods.
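As one concrete reading of steps 4.1 to 4.3, the sketch below uses OpenCV; the kernel size, the number of pyramid levels, and the simple averaging used for the "synthesize a new image" step are assumptions, since the disclosure only requires Gaussian blurring, halving, bilinear enlargement, and synthesis with the layer above.

```python
import cv2
import numpy as np

def weight_map(img_valid_i, levels=4, sigma=1.0):
    """Turn a validity map into a smoothed weight map w_i (steps 4.1-4.3)."""
    pyramid = [img_valid_i.astype(np.float32)]
    for _ in range(levels):
        blurred = cv2.GaussianBlur(pyramid[-1], (5, 5), sigma)            # step 4.1: Gaussian blur
        h, w = blurred.shape[:2]
        pyramid.append(cv2.resize(blurred, (w // 2, h // 2),
                                  interpolation=cv2.INTER_AREA))          # step 4.2: shrink to 1/2
    current = pyramid[-1]                                                 # bottom (smallest) layer
    for upper in reversed(pyramid[:-1]):
        h, w = upper.shape[:2]
        up = cv2.resize(current, (w, h), interpolation=cv2.INTER_LINEAR)  # step A: bilinear enlarge
        current = 0.5 * (up + upper)                                      # steps B-C: merge and replace
    return current                                                        # top-level image = weight map w_i
```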
And 5, fusing each raw image according to the corresponding weight graph to obtain a fused image, wherein the fusion formula is as follows:
Raw_hdr = raw0*w_0 + raw1*w_1*ratio + ... + rawi*w_i*ratio^i + ... + raw(n-1)*w_(n-1)*ratio^(n-1), where w_i represents the weight map corresponding to the ith raw image.
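The fusion formula is a per-pixel weighted sum; a minimal numpy sketch follows, with the function name as an assumption. For the ratio = 8, n = 3 example described later, it reduces to raw0*w_0 + raw1*w_1*8 + raw2*w_2*64.

```python
import numpy as np

def fuse(raws, weights, ratio):
    """Weighted fusion of normalized raw frames into one HDR raw frame.

    raws    : [raw0, raw1, ..., raw(n-1)], sorted from longest to shortest exposure
    weights : matching weight maps [w_0, ..., w_(n-1)]
    ratio   : exposure-time ratio between adjacent frames
    """
    raw_hdr = np.zeros_like(raws[0], dtype=np.float64)
    for i, (raw_i, w_i) in enumerate(zip(raws, weights)):
        raw_hdr += raw_i * w_i * (ratio ** i)   # shorter exposures are scaled up by ratio**i
    return raw_hdr
```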
The embodiment of the invention also discloses a multi-exposure image fusion system based on image area validity, corresponding to the method embodiment above. Referring to fig. 2, the system comprises:
the exposure image acquisition module is used for acquiring a plurality of raw images with different exposures;
the noise level calculation module is used for calculating the current noise level based on the normalized raw images with different exposures;
the effectiveness map acquisition module is used for acquiring an effectiveness map corresponding to each raw map based on the current noise level;
the weight map acquisition module is used for calculating a corresponding weight map based on the effectiveness map corresponding to each raw map;
and the fusion module is used for fusing each raw image according to the corresponding weight image.
The following describes the invention with a specific example in which ratio = 8 and n = 3.
1. Acquiring multiple exposure images
The exposure-time ratio between adjacent exposures is fixed at 8, i.e. ratio = 8. Three images with different exposures are obtained; denote the long exposure raw0, the medium exposure raw1, and the short exposure raw2. The raw images are normalized to the [0, 1] interval, and the subsequent steps operate on the normalized raw images.
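A minimal numpy sketch of this acquisition-and-normalization step follows; the 12-bit depth, the function name, and the variable names are assumptions for illustration and are not specified by the disclosure.

```python
import numpy as np

def normalize_and_sort(raws, exposure_times, bit_depth=12):
    """Normalize raw frames to [0, 1] and sort them by decreasing exposure time.

    raws           : list of integer raw arrays of the same scene
    exposure_times : matching list of exposure times
    bit_depth      : sensor bit depth (12-bit assumed here for illustration)
    """
    max_val = float(2 ** bit_depth - 1)
    order = np.argsort(exposure_times)[::-1]            # longest exposure first -> raw0
    sorted_raws = [raws[k].astype(np.float64) / max_val for k in order]
    sorted_times = [exposure_times[k] for k in order]
    ratio = sorted_times[0] / sorted_times[1]            # fixed ratio between adjacent exposures
    return sorted_raws, ratio
```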
2. Noise level calculation
From the obtained multi-exposure images, and under the scene-consistency assumption, the current noise level is estimated:
noise mean: noise_u = mean(raw0 - 8*raw1);
noise variance: noise_std = std(raw0 - 8*raw1).
3. Validity calculation
According to the noise level, the validity of each point is calculated, for every frame except the shortest one, from two frames with different exposures according to the exposure-ratio relation:
e_q=(ratio-1)*noise_u;
d_q=sqrt(ratio**2+1)*noise_std;
S0=raw0-8*raw1;
P0=(S0-(e_q-d_q))/max(delta,2*d_q), delta=1e-6;
Img_valid_0=(1-P0);
The validity of the shortest frame is obtained by subtracting the validity of the second-shortest frame from 1.
4. Weight calculation
Using the Laplacian pyramid method, first apply Gaussian blur to the validity map Img_valid_i, then reduce the image to 1/2 of its original size. Repeating this scale transformation several times forms an image sequence from large to small, i.e. a Gaussian pyramid Img_valid_i_pyramid in which each upper layer is larger than the layer below it. Then, starting from the bottom layer of the pyramid and working up to the layer just below the top, perform the following operations:
A. enlarge the image with bilinear interpolation so that its size matches the layer above;
B. combine it with the image of the layer above to synthesize a new image;
C. replace the original upper-layer image with the new image;
the resulting top-level image becomes the weight map w_i.
5. Fusion
The original raw images are combined into the final HDR raw image according to their weights:
Raw_hdr = raw0*w_0 + raw1*w_1*8 + raw2*w_2*64.
the embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The system device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (4)

1. A multi-exposure image fusion method based on image area validity, characterized by comprising the following steps:
step 1, obtaining a plurality of raw images with different exposures, sorting them by decreasing exposure time to form a set A = {raw0, raw1, ..., rawi, ..., raw(n-1)}, wherein the exposure-time ratio is ratio, n represents the number of raw images in set A, rawi represents the ith raw image, and n ≥ 2, and normalizing them;
step 2, calculating the current noise level based on the normalized raw images with different exposures, wherein the method comprises:
calculating according to the scene-consistency principle, wherein the noise mean noise_u = mean(rawj - ratio*raw(j+1)) and the noise variance noise_std = std(rawj - ratio*raw(j+1)), wherein rawj and raw(j+1) are any two adjacent raw images in set A, and 0 ≤ j ≤ n-2;
step 3, obtaining a validity map corresponding to each raw image based on the current noise level, wherein the method comprises:
for the ith raw image rawi, calculating the validity of each point, for every frame except the shortest frame, according to the exposure-time ratio:
e_q=(ratio-1)*noise_u;
d_q=sqrt(ratio**2+1)*noise_std;
Si=rawi-ratio*raw(i+1);
Pi=(Si-(e_q-d_q))/max(delta,2*d_q), delta=1e-6;
Img_valid_i=1-Pi;
the validity of the shortest frame is obtained by subtracting the validity of the second-shortest frame from 1;
step 4, calculating a corresponding weight map based on the validity map corresponding to each raw image;
and step 5, fusing each raw image according to the corresponding weight map.
2. The multi-exposure image fusion method based on image area validity as claimed in claim 1, wherein in step 4, for the validity map Img_valid_i corresponding to the ith raw image, the corresponding weight map is calculated as follows:
step 4.1, performing Gaussian blur on the validity map Img_valid_i using the Laplacian pyramid method;
step 4.2, reducing the blurred validity map Img_valid_i to 1/2 of its original size;
step 4.3, repeating step 4.2 several times to form an image sequence from large to small, giving a Gaussian pyramid Img_valid_i_pyramid in which each upper layer is larger than the layer below it, and then, starting from the bottom layer of the pyramid and working up to the layer just below the top, performing the following operations:
A. enlarging the image with bilinear interpolation so that its size matches the layer above;
B. combining it with the image of the layer above to synthesize a new image;
C. replacing the original upper-layer image with the new image;
the resulting top-level image becomes the weight map w_i.
3. The multi-exposure image fusion method based on image area validity according to claim 1, wherein in the step 5, each raw image is fused according to the corresponding weight map to obtain a fusion image, and the fusion formula is as follows:
Raw_hdr = raw0*w_0 + raw1*w_1*ratio + ... + rawi*w_i*ratio^i + ... + raw(n-1)*w_(n-1)*ratio^(n-1), wherein w_i represents the weight map corresponding to the ith raw image.
4. A multi-exposure image fusion system based on image region validity, comprising:
an exposure image acquisition module, configured to acquire a plurality of raw images with different exposures, sort them by decreasing exposure time to form a set A = {raw0, raw1, ..., rawi, ..., raw(n-1)}, wherein the exposure-time ratio is ratio, n denotes the number of raw images in set A, and n ≥ 2, and normalize them;
a noise level calculation module, configured to calculate the current noise level based on the normalized raw images with different exposures, specifically:
calculating according to the scene-consistency principle, wherein the noise mean noise_u = mean(rawj - ratio*raw(j+1)) and the noise variance noise_std = std(rawj - ratio*raw(j+1)), wherein rawj and raw(j+1) are any two adjacent raw images in set A, and 0 ≤ j ≤ n-2;
a validity map acquisition module, configured to obtain, based on the current noise level, a validity map corresponding to each raw image, specifically:
for the ith raw image rawi, calculating the validity of each point, for every frame except the shortest frame, according to the exposure-time ratio:
e_q=(ratio-1)*noise_u;
d_q=sqrt(ratio**2+1)*noise_std;
Si=rawi-ratio*raw(i+1);
Pi=(Si-(e_q-d_q))/max(delta,2*d_q), delta=1e-6;
Img_valid_i=1-Pi;
the validity of the shortest frame is obtained by subtracting the validity of the second-shortest frame from 1;
a weight map acquisition module, configured to calculate a corresponding weight map based on the validity map corresponding to each raw image; and
a fusion module, configured to fuse each raw image according to the corresponding weight map.
CN202211360730.3A 2022-11-02 2022-11-02 Multi-exposure image fusion method and system based on image area validity Active CN115409754B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211360730.3A CN115409754B (en) 2022-11-02 2022-11-02 Multi-exposure image fusion method and system based on image area validity
PCT/CN2023/137047 WO2024094222A1 (en) 2022-11-02 2023-12-07 Multi-exposure image fusion method and system based on image region validity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211360730.3A CN115409754B (en) 2022-11-02 2022-11-02 Multi-exposure image fusion method and system based on image area validity

Publications (2)

Publication Number Publication Date
CN115409754A CN115409754A (en) 2022-11-29
CN115409754B true CN115409754B (en) 2023-03-24

Family

ID=84169237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211360730.3A Active CN115409754B (en) 2022-11-02 2022-11-02 Multi-exposure image fusion method and system based on image area validity

Country Status (2)

Country Link
CN (1) CN115409754B (en)
WO (1) WO2024094222A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022021999A1 (en) * 2020-07-27 2022-02-03 虹软科技股份有限公司 Image processing method and image processing apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108419062B (en) * 2017-02-10 2020-10-02 杭州海康威视数字技术股份有限公司 Image fusion apparatus and image fusion method
CN107451986B (en) * 2017-08-10 2020-08-14 南京信息职业技术学院 Single infrared image enhancement method based on fusion technology
CN110728648B (en) * 2019-10-25 2022-07-19 北京迈格威科技有限公司 Image fusion method and device, electronic equipment and readable storage medium
US11430094B2 (en) * 2020-07-20 2022-08-30 Samsung Electronics Co., Ltd. Guided multi-exposure image fusion
CN115147304A (en) * 2022-07-01 2022-10-04 西安欧珀通信科技有限公司 Image fusion method and device, electronic equipment, storage medium and product

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022021999A1 (en) * 2020-07-27 2022-02-03 虹软科技股份有限公司 Image processing method and image processing apparatus

Also Published As

Publication number Publication date
CN115409754A (en) 2022-11-29
WO2024094222A1 (en) 2024-05-10

Similar Documents

Publication Publication Date Title
CN110163237B (en) Model training and image processing method, device, medium and electronic equipment
JP4118059B2 (en) Method and apparatus for digital video processing
TWI399703B (en) Forward and backward resizing method
US11748894B2 (en) Video stabilization method and apparatus and non-transitory computer-readable medium
CN109558901B (en) Semantic segmentation training method and device, electronic equipment and storage medium
CN104144298A (en) Wide dynamic image synthesis method
TWI482468B (en) Device, method and computer readable storage medium thereof for detecting object
WO2017113917A1 (en) Imaging method, imaging apparatus, and terminal
CN113067979A (en) Imaging method, device, equipment and storage medium based on bionic pulse camera
CN110992395B (en) Image training sample generation method and device and motion tracking method and device
US20230074180A1 (en) Method and apparatus for generating super night scene image, and electronic device and storage medium
CN112837245A (en) Dynamic scene deblurring method based on multi-mode fusion
CN110958363B (en) Image processing method and device, computer readable medium and electronic device
CN103124325A (en) Image processing device, image processing method, and recording medium
CN111932453A (en) High-resolution image generation method and high-speed camera integrated with same
CN115409754B (en) Multi-exposure image fusion method and system based on image area validity
CN106375675A (en) Aerial camera multi-exposure image fusion method
CN114650361B (en) Shooting mode determining method, shooting mode determining device, electronic equipment and storage medium
CN113781321A (en) Information compensation method, device, equipment and storage medium for image highlight area
CN112419161B (en) Image processing method and device, storage medium and electronic equipment
Zhang et al. DarkVision: a benchmark for low-light image/video perception
CN108961169A (en) Monitor grasp shoot method and device
CN115760663A (en) Method for synthesizing high dynamic range image from low dynamic range image based on multi-frame multi-exposure
CN108573478A (en) Median filter method and device
CN103841312B (en) Object detecting apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant