CN117974469A - Depth of field synthesis method and device based on multiple fusion strategies, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117974469A
CN117974469A
Authority
CN
China
Prior art keywords
image sequence
image
determining
target image
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311772202.3A
Other languages
Chinese (zh)
Inventor
王大伟
周逸铭
黄康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Cztek Co ltd
Original Assignee
Shenzhen Cztek Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Cztek Co ltd filed Critical Shenzhen Cztek Co ltd
Priority to CN202311772202.3A priority Critical patent/CN117974469A/en
Publication of CN117974469A publication Critical patent/CN117974469A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The application provides a depth-of-field synthesis method and device based on multiple fusion strategies, an electronic device, and a storage medium, and relates to the technical field of computers. The method comprises the following steps: acquiring a plurality of color images of different focusing planes under the same coordinates; obtaining a gray image corresponding to each color image, wherein each color image corresponds to one gray image; calculating the image sequence defocus index of every pixel point in the gray images; determining a target image sequence based on the image sequence defocus index of each pixel point, wherein each pixel point corresponds to a target image sequence; and fusing the target image sequences of all the pixel points. The application solves the problems of insufficient image contrast and low processing efficiency of depth-of-field synthesis in the related art.

Description

Depth of field synthesis method and device based on multiple fusion strategies, electronic equipment and storage medium
Technical Field
The application relates to the technical field of computers, in particular to a depth of field synthesis method and device based on various fusion strategies, electronic equipment and a storage medium.
Background
Depth of field refers to the range before and after the camera's focal point within which imaging is relatively sharp. Depth-of-field synthesis analyzes the image sequence of a non-planar object collected while the camera lens is continuously refocused, extracts the relatively clear, in-focus region of each frame in the sequence, and composes a new full-depth-of-field image in which every region is clear.
Existing depth-of-field synthesis algorithms are mainly divided into spatial-domain-based and transform-domain-based methods. Spatial-domain depth-of-field synthesis achieves fusion by analyzing the spatial characteristics of the images, but is prone to blocking artifacts and lower contrast; transform-domain depth-of-field synthesis yields higher image quality but suffers from complex computation and high time consumption.
From the above, the conventional depth-of-field synthesis method has problems of insufficient image contrast and low processing efficiency.
Disclosure of Invention
The application provides a depth of field synthesis method, a device, electronic equipment and a storage medium based on various fusion strategies, which can solve the problems of insufficient image contrast and low processing efficiency in the related technology. The technical scheme is as follows:
A depth-of-field synthesis method based on multiple fusion strategies comprises the following steps: acquiring a plurality of color images of different focusing planes under the same coordinates; obtaining gray images corresponding to the color images, wherein each color image corresponds to one gray image; calculating the image sequence defocus index of every pixel point in the gray images; determining a target image sequence based on the image sequence defocus index of each pixel point, wherein each pixel point corresponds to a target image sequence; and fusing the target image sequences of all the pixel points. The process of determining the target image sequence for each pixel point is as follows: if the image sequence defocus index matches the clear set, the clearest image sequence is determined as the target image sequence; if the image sequence defocus index matches the intermediate set, a plurality of image sequences of the pixel point are acquired and weighted-averaged to determine the target image sequence; if the image sequence defocus index matches the fuzzy set, an equal-replacement pixel point is acquired and its image sequence is determined as the target image sequence.
According to one aspect of the present application, a depth-of-field synthesis apparatus based on multiple fusion strategies comprises: a color image acquisition module for acquiring color images of different focusing planes under the same coordinates; a gray image acquisition module for obtaining gray images corresponding to the color images, wherein each color image corresponds to one gray image; an index calculation module for calculating the image sequence defocus index of every pixel point in the gray images; a target image sequence determination module for determining a target image sequence based on the image sequence defocus index of each pixel point, wherein each pixel point corresponds to a target image sequence; and a fusion module for fusing the target image sequences of all the pixel points. The target image sequence for each pixel point is determined as follows: a first sequence determination module determines the clearest image sequence as the target image sequence if the image sequence defocus index matches the clear set; a second sequence determination module acquires a plurality of image sequences of the pixel point and performs a weighted average to determine the target image sequence if the image sequence defocus index matches the intermediate set; and a third sequence determination module acquires an equal-replacement pixel point and determines its image sequence as the target image sequence if the image sequence defocus index matches the fuzzy set.
In an exemplary embodiment, the apparatus further includes a focus metric determining module, configured to determine a focus metric corresponding to the pixel point; and the gamma correction module is used for carrying out gamma correction on the focusing measurement and determining a corresponding image sequence defocus index.
In an exemplary embodiment, the apparatus further comprises a selection module for selecting two critical values a and b, wherein a is greater than b; a first matching module for matching the clear set when the image sequence defocus index is greater than a; a second matching module for matching the intermediate set when the image sequence defocus index is greater than b and less than or equal to a; and a third matching module for matching the fuzzy set when the image sequence defocus index is less than or equal to b.
In an exemplary embodiment, the apparatus further includes a first focus metric acquisition module, configured to acquire focus metrics corresponding to all pixels; the first target sequence determination module is used for determining the largest focusing metric as a target image sequence based on all focusing metrics.
In an exemplary embodiment, the apparatus further includes a second focus metric acquisition module, configured to acquire the focus metrics corresponding to all the pixel points; a focus metric number determination module, configured to determine N specified focus metrics based on all the focus metrics, wherein the N specified focus metrics are the largest among all the focus metrics and N is greater than or equal to 3; and a second target sequence determination module, configured to perform a weighted average on the N specified focus metrics to determine the target image sequence.
In an exemplary embodiment, the apparatus further includes a third defocus index acquisition module, configured to acquire the peripheral image sequence defocus indexes of the peripheral pixel points corresponding to a pixel point to be replaced; a peripheral pixel point set selection module, configured to select the set of peripheral pixel points whose peripheral image sequence defocus index is greater than a; and an equal-replacement pixel point determination module, configured to determine the equal-replacement pixel point as the member of the peripheral pixel point set nearest to the pixel point to be replaced.
According to one aspect of the application, an electronic device comprises at least one processor and at least one memory, wherein the memory has computer readable instructions stored thereon; the computer readable instructions are executed by one or more of the processors to cause an electronic device to implement a depth of field composition method based on a plurality of fusion policies as described above.
According to one aspect of the application, a storage medium has stored thereon computer readable instructions that are executed by one or more processors to implement the depth of view synthesis method based on a variety of fusion policies as described above.
According to one aspect of the application, a computer program product includes computer readable instructions stored in a storage medium, one or more processors of an electronic device reading the computer readable instructions from the storage medium, loading and executing the computer readable instructions, causing the electronic device to implement a depth of view synthesis method based on a plurality of fusion policies as described above.
The technical scheme provided by the application has the following beneficial effects: the defocus degree of the image sequence determines the image fusion rule used for the corresponding image pixel points, which reduces the calculation amount of image fusion, improves fusion efficiency, and mitigates the reduction in contrast of the depth-of-field synthesized image; in addition, the local weighted average mode improves the quality of the depth-of-field synthesized image.
In the above technical solution, the image sequence defocus index of every pixel point in the gray images can be calculated based on the gray images corresponding to the color images, where the image sequence defocus index of the same pixel point differs from image to image. A target image sequence can be determined based on the image sequence defocus index of each pixel point, and the target image sequences of all pixel points are then fused: if the image sequence defocus index matches the clear set, the clearest image sequence is determined as the target image sequence; if it matches the intermediate set, a plurality of image sequences of the pixel point are acquired and weighted-averaged to determine the target image sequence; and if it matches the fuzzy set, an equal-replacement pixel point is acquired and its image sequence is determined as the target image sequence. In the scheme of the application, the target image sequence can thus be determined in a different manner according to the image sequence defocus index of each pixel point, so that relatively clear, well-matched image sequences are selected for fusion. This reduces the calculation amount of image fusion, improves fusion efficiency, and mitigates the reduction in contrast of the depth-of-field synthesized image; in addition, the local weighted average mode improves the quality of the depth-of-field synthesized image, effectively solving the problems of insufficient image contrast and low processing efficiency in the related art.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below. It is evident that the drawings in the following description are only some embodiments of the application and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment in accordance with the present application;
FIG. 2 is a flow chart illustrating a depth of view synthesis method based on multiple fusion strategies, according to an example embodiment;
FIG. 3 is a flowchart illustrating steps performed by S131 through S133 in a depth of view synthesis method based on a plurality of fusion strategies, according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating steps performed by S1311 through S1312 in a depth of view synthesis method based on multiple fusion strategies, according to an exemplary embodiment;
FIG. 5 is a flowchart showing steps performed by S1321 through S1323 in a depth of view synthesis method based on multiple fusion strategies, according to an exemplary embodiment;
FIG. 6 is a flowchart illustrating steps performed by S1331 through S1333 in a depth of field composition method based on multiple fusion strategies, according to an exemplary embodiment;
FIG. 7 is a block diagram illustrating a depth of view synthesizing apparatus based on a plurality of fusion strategies, according to an exemplary embodiment;
FIG. 8 is a block diagram illustrating another depth of view synthesizing apparatus based on multiple fusion strategies, according to an exemplary embodiment;
fig. 9 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any element and all combinations of one or more of the associated listed items.
The related art has the defects of insufficient image contrast and low processing efficiency.
Therefore, the present application provides a depth-of-field synthesis method based on multiple fusion strategies, which can effectively improve the accuracy of depth-of-field synthesis. Correspondingly, the method is applicable to a depth-of-field synthesis device based on multiple fusion strategies, and that device can be deployed in an electronic device.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment involved in a depth of field synthesis method based on multiple fusion strategies. The implementation environment comprises an acquisition end and a server.
Specifically, the capturing end provides an image capturing function, and may be electronic devices such as a desktop computer, a notebook computer, a tablet computer, a smart phone, and the like, which are not limited herein.
The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms. The server is an electronic device providing background services; for example, in the present implementation environment, the server provides an image fusion service for the acquisition end.
The server establishes communication connection with the acquisition end in advance in a wired or wireless mode and the like, and realizes linkage with the acquisition end through the communication connection. Through interaction between the acquisition end and the server, the acquisition end can upload the acquired color images of different focusing planes under the same coordinates to the server so as to facilitate the server to provide an image fusion service, specifically, based on the gray images corresponding to the received color images, the image sequence defocus indexes of all pixel points in the gray images are calculated, and then image fusion is carried out based on the image sequence defocus indexes, so that the problems of insufficient image contrast and low processing efficiency are solved.
The embodiment of the application provides a depth of field synthesis method based on various fusion strategies, which is suitable for electronic equipment, wherein the electronic equipment can be a server in an implementation environment shown in fig. 1.
In the following method embodiments, for convenience of description, the execution subject of each step of the method is described as an electronic device, but this configuration is not particularly limited.
The following is an embodiment of the device of the present application, which may be used to execute the depth of field synthesis method based on various fusion strategies according to the present application. For details not disclosed in the embodiment of the apparatus of the present application, please refer to an embodiment of a method for synthesizing a depth of field based on a plurality of fusion strategies according to the present application.
Referring to fig. 2, a depth of field synthesis method based on multiple fusion strategies includes:
S100, acquiring a plurality of color images of different focusing planes under the same coordinate.
S110, obtaining gray images corresponding to the color images;
Each color image corresponds to one gray image, and the gray images differ from one another; a gray image is composed of a plurality of pixels, and the conversion from a color image to a gray image is given by Gray = 0.299 × R + 0.587 × G + 0.114 × B, where R, G, and B are the values of the R, G, and B channels of the color image.
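The grayscale conversion above can be sketched as follows; the function name and array layout (H × W × 3, RGB order) are illustrative assumptions.

```python
import numpy as np

def to_gray(color_image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale using the weights
    Gray = 0.299*R + 0.587*G + 0.114*B given in the text."""
    r = color_image[..., 0].astype(np.float64)
    g = color_image[..., 1].astype(np.float64)
    b = color_image[..., 2].astype(np.float64)
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Each color image in the sequence is passed through this conversion once before any focus metric is computed.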
S120, calculating image sequence defocus indexes of all pixel points in the gray image;
The image sequence defocus indexes corresponding to the pixel points in the image are different, and in the process of calculating the image sequence defocus indexes, the following steps are further performed:
s121, determining focusing measurement corresponding to the pixel point;
In one example, the focus metric adopts the gray variance method; the focus metric of the kth image at pixel coordinates (x, y) is:
F_k(x, y) = Σ_{(i,j)∈Ω(x,y)} (I_k(i, j) − μ)²,
where Ω(x, y) is the r × r neighboring pixel region centered on the point (x, y), I_k(i, j) is the gray value at (i, j) in the kth image, and μ is the average gray value of the region.
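A minimal sketch of this gray-variance focus measure follows; the border handling (clipping the window at image edges) and the default window size r = 3 are assumptions for illustration, not details stated in the source.

```python
import numpy as np

def focus_measure(gray: np.ndarray, x: int, y: int, r: int = 3) -> float:
    """Gray-variance focus measure at pixel (x, y): the sum of squared
    deviations from the mean gray value over the r x r neighborhood
    Omega(x, y). Windows are clipped at the image border (assumption)."""
    h, w = gray.shape
    half = r // 2
    win = gray[max(0, x - half):min(h, x + half + 1),
               max(0, y - half):min(w, y + half + 1)].astype(np.float64)
    mu = win.mean()  # average gray value of the region
    return float(((win - mu) ** 2).sum())
```

A flat region returns 0, while textured regions return larger values, matching the statement that a larger focus metric means clearer texture.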
S122, gamma correction is carried out on the focusing measurement to determine a corresponding image sequence defocus index;
Adding gamma correction increases the weight of sharply focused image sequences during image fusion. The gamma correction is:
ω_k(x, y) = (F_k(x, y) / Ω)^γ,
where Ω = Σ_k F_k and γ > 1.
Thus, the focus metric for the image pixel coordinates (x, y) is:
f_{x,y} = (ω_1(x, y), ω_2(x, y), …, ω_K(x, y)),
where ω_k(x, y) represents the corrected focus metric of the kth image in the image sequence, and K represents the total number of images.
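The corrected weights can be sketched as below. The exact form ω_k = (F_k / Ω)^γ is a reconstruction from the stated normalization Ω = Σ_k F_k and the condition γ > 1; the choice γ = 2.0 and the uniform fallback for an all-zero window are illustrative assumptions.

```python
import numpy as np

def corrected_weights(F: np.ndarray, gamma: float = 2.0) -> np.ndarray:
    """Given the focus measures F = (F_1, ..., F_K) of one pixel across a
    K-image sequence, normalize by Omega = sum_k F_k and apply gamma
    correction omega_k = (F_k / Omega) ** gamma with gamma > 1, so that
    sharply focused frames gain weight relative to defocused ones."""
    omega = F.sum()
    if omega == 0:
        # all-flat neighborhood: fall back to uniform weights (assumption)
        return np.full(F.shape, 1.0 / len(F), dtype=np.float64)
    return (F / omega) ** gamma
```

Because γ > 1, the ratio between a sharp frame's weight and a blurred frame's weight grows beyond the ratio of their raw focus measures.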
In the embodiment of the present application, the image focusing characteristics are represented by the image sequence defocus index H(x, y), which is defined in terms of the corrected focus metrics ω_k(x, y) of the pixel point across the image sequence.
S130, determining a target image sequence based on an image sequence defocus index of the pixel points for fusion;
In the embodiment of the application, image regions with clear texture and little noise influence correspond to the clear set, image regions with clear texture but strong noise influence correspond to the intermediate set, and image regions with little texture information or strong noise influence correspond to the fuzzy set.
In the embodiment of the application, parameters a and b are selected for set matching, where a is greater than b. Referring to fig. 3, note that the larger the focus metric, the clearer the texture in the image;
S131, when the image sequence defocus index is greater than a, the index matches the clear set, and the clearest image sequence is determined as the target image sequence.
S132, when the image sequence defocus index is greater than b and less than or equal to a, the index matches the intermediate set; a plurality of image sequences of the pixel point are acquired and weighted-averaged to determine the target image sequence.
S133, when the image sequence defocus index is less than or equal to b, the index matches the fuzzy set; the equal-replacement pixel point is acquired and its image sequence is determined as the target image sequence.
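The three-way classification of S131 through S133 can be sketched directly from the two critical values; the string labels are illustrative names for the three sets.

```python
def match_set(H: float, a: float, b: float) -> str:
    """Classify a pixel's image-sequence defocus index H against the two
    critical values a > b:
      H > a      -> 'clear'        (take the sharpest frame, S131)
      b < H <= a -> 'intermediate' (weighted average of several frames, S132)
      H <= b     -> 'fuzzy'        (substitute a nearby clear pixel, S133)"""
    assert a > b, "the critical values must satisfy a > b"
    if H > a:
        return "clear"
    if H > b:
        return "intermediate"
    return "fuzzy"
```

Each pixel point is classified once, and the matching rule then selects how its target image sequence is built.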
Based on differences in the texture and noise level of image regions, different rules can be selected to determine the regions to be fused. The specific form of the image fusion rule is determined case by case according to the defocus degree of the image sequence, where the parameter a = 0.5 × max H(x, y), and H(x, y) denotes the image sequence defocus index at image coordinates (x, y). Provided that a point p(i, j) exists at the position nearest to (x, y) in Euclidean distance with H(i, j) > a, the image sequence of p(i, j) is taken for the pixel point (x, y).
In particular, when b < H(x, y) ≤ a and m equals 1 or the total number of sequences K, the range of the fused images is narrowed to two images.
The procedure for specifically determining the target image sequence is as follows.
In determining the sharpest image sequence as the target image sequence, referring to fig. 4, the method further includes:
S1311, acquiring all focusing metrics corresponding to pixel points;
As mentioned above, each pixel point has a plurality of focus metrics, one per image in the sequence, and these are retrieved to determine the applicable one.
S1312, determining the maximum focus metric based on all focus metrics, and taking the image corresponding to the maximum focus metric as the target image sequence.
By the above procedure, the sharpest image sequence can be determined.
In the process of obtaining a plurality of image sequences of pixel points and performing weighted average to determine a target image sequence, referring to fig. 5, the method further includes:
s1321, acquiring all focusing metrics corresponding to the pixel points.
S1322 determining N specified focus metrics based on all focus metrics;
The N specified focus metrics are the largest among all focus metrics, with N equal to or greater than 3. For example, in the embodiment of the application N is 3; in that case, the three largest focus metrics are selected from all focus metrics of the corresponding pixel point.
S1323, performing a weighted average on the N specified focus metrics to determine the target image sequence; fusing the target image sequence obtained from the weighted-average focus metrics improves the quality of the depth-of-field synthesized image.
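Steps S1321 through S1323 can be sketched as follows. The source does not spell out what is averaged, so averaging the frames' gray values at the pixel, weighted by their focus metrics, is an assumption.

```python
import numpy as np

def intermediate_fusion(weights: np.ndarray, pixel_values: np.ndarray,
                        n: int = 3) -> float:
    """For a pixel in the intermediate set, keep the N largest focus
    metrics (N >= 3; N = 3 in the embodiment) and fuse the corresponding
    frames' gray values at this pixel by a weighted average."""
    top = np.argsort(weights)[-n:]   # indices of the N largest metrics
    w = weights[top]
    return float((w * pixel_values[top]).sum() / w.sum())
```

Frames with small focus metrics are excluded entirely, which both limits computation and keeps blurred frames from diluting the fused value.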
In the process of acquiring the equivalent pixel points and determining the image sequence of the equivalent pixel points as the target image sequence, referring to fig. 6, the method further includes:
S1331, acquiring a peripheral image sequence defocus index of a peripheral pixel point corresponding to a pixel point to be replaced;
The pixel points to be replaced are pixel points whose texture is blurred; if their blurred image sequences were fused, the quality of the fused image would be affected. The peripheral pixel points are the pixel points distributed around the pixel point to be replaced.
S1332, selecting a peripheral pixel point set with a peripheral image sequence defocus index > a;
wherein a plurality of image regions with clear texture exist around the pixel point to be replaced.
S1333, determining the equal-replacement pixel point based on the shortest distance from the peripheral pixel point set to the pixel point to be replaced;
After the condition that the corresponding image sequence defocus index is greater than a is met, the distance between each qualifying pixel point and the pixel point to be replaced is determined; since a shorter distance implies greater similarity, the pixel point with the shortest distance is selected as the equal-replacement pixel point, through which the target image sequence can be determined.
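Steps S1331 through S1333 can be sketched as below; the brute-force search over the whole index map (rather than a local neighborhood) and the `None` return when no clear pixel exists are simplifying assumptions.

```python
import numpy as np

def find_substitute(H: np.ndarray, x: int, y: int, a: float):
    """For a fuzzy pixel (x, y), return the coordinates of the pixel
    nearest in Euclidean distance whose defocus index exceeds a, or
    None if no such pixel exists."""
    candidates = np.argwhere(H > a)          # the peripheral pixel point set
    if candidates.size == 0:
        return None
    d2 = ((candidates - np.array([x, y])) ** 2).sum(axis=1)
    i, j = candidates[d2.argmin()]           # shortest distance wins
    return int(i), int(j)
```

The fuzzy pixel then adopts the target image sequence of the returned equal-replacement pixel point.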
And S140, fusing the target image sequences of all the pixel points.
It should be noted that the above solution performs image fusion on gray images. Compared with existing fusion methods, it reduces the calculation amount in the fusion process and thus improves fusion efficiency; in addition, the local weighted average method is adopted as the fusion rule, which reduces the loss of contrast in the fused image and ensures the quality of the image fusion.
For color images, the corresponding depth-of-field fusion image is calculated for each color channel separately using the above formulas, and the three fused channel images are finally merged:
ψ(x, y) = merge([ψ_R(x, y), ψ_G(x, y), ψ_B(x, y)]),
where R_k is the R channel of the kth color image, G_k is the G channel of the kth color image, and B_k is the B channel of the kth color image.
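The per-channel fusion and final merge can be sketched as follows; here a precomputed per-pixel weight array (derived from the gray images, shape K × H × W) is assumed, standing in for whichever of the three fusion rules applies at each pixel.

```python
import numpy as np

def fuse_color(images: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Apply per-pixel fusion weights (K x H x W), derived from the gray
    images, to each channel of the K color images (K x H x W x 3) and
    stack the three fused channels: psi = merge([psi_R, psi_G, psi_B])."""
    w = weights / weights.sum(axis=0, keepdims=True)  # normalize over frames
    channels = [np.einsum('khw,khw->hw', w, images[..., c]) for c in range(3)]
    return np.stack(channels, axis=-1)
```

Because the same weights are reused for R, G, and B, the color balance of the fused image is preserved across channels.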
The embodiment of the application provides a depth of field synthesis device based on various fusion strategies, and referring to fig. 7, the depth of field synthesis device comprises: a color image acquisition module 201, a gray image acquisition module 202, an index calculation module 203, a target image sequence determination module 204, a fusion module 205, a first sequence determination module 206, a second sequence determination module 207, and a third sequence determination module 208;
The color image acquisition module 201 is configured to acquire color images of different focusing planes under the same coordinates;
A gray image obtaining module 202, configured to obtain gray images based on color images, where each color image corresponds to a gray image;
An index calculation module 203, configured to calculate the image sequence defocus index of every pixel point in a gray image, where the image sequence defocus indexes corresponding to different pixel points differ;
The target image sequence determining module 204 is configured to determine a target image sequence based on an image sequence defocus index of pixel points, where each pixel point corresponds to the target image sequence;
the fusion module 205 is configured to fuse the target image sequences of all the pixel points;
wherein, referring to fig. 8, the process of determining the target image sequence for each pixel point is as follows:
a first sequence determining module 206, configured to determine the clearest image sequence as the target image sequence if the image sequence defocus index matches the clear set;
a second sequence determining module 207, configured to obtain a plurality of image sequences of pixels and perform weighted average to determine a target image sequence if the image sequences defocus index matches the intermediate set;
the third sequence determining module 208 is configured to obtain the equivalent pixel point and determine the image sequence of the equivalent pixel point as the target image sequence if the image sequence defocus index matches the blur set.
In an exemplary embodiment, the apparatus further comprises: a focus metric determination module 300 and a gamma correction module 310;
The focusing metric determining module 300 is configured to determine a focusing metric corresponding to the pixel point;
a gamma correction module 310 is configured to gamma correct the focus metric and determine a corresponding image sequence defocus index.
In an exemplary embodiment, the apparatus further comprises: a selected module 400, a first matching module 410, a second matching module 420, a third matching module 430;
Wherein, the selecting module 400 is configured to select two critical values a, b, where a is greater than b;
A first matching module 410 for matching the sharp set when a < the image sequence defocus index;
a second matching module 420 for matching the intermediate set when b < the defocus index of the image sequence is less than or equal to a;
The third matching module 430 is configured to match the blur set when the image sequence defocus index is less than or equal to b.
In an exemplary embodiment, the apparatus further comprises: a first focus metric acquisition module 500 and a first target sequence determination module 510;
the first focus metric obtaining module 500 is configured to obtain focus metrics corresponding to all the pixels;
the first target sequence determination module 510 is configured to determine a maximum focus metric as a target image sequence based on all focus metrics.
In an exemplary embodiment, the apparatus further comprises: a second focus metric acquisition module 600, a focus metric quantity determination module 610, and a second target sequence determination module 620;
the second focus metric acquisition module 600 is configured to acquire all focus metrics corresponding to the pixel point;
a focus metric quantity determination module 610, configured to determine N specified focus metrics based on all the focus metrics, wherein the N specified focus metrics are the largest among all the focus metrics and N ≥ 3;
the second target sequence determination module 620 is configured to perform a weighted average of the N specified focus metrics to determine the target image sequence.
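The intermediate-set fusion described above can be sketched as follows. Using the focus metrics themselves as the weights is an assumption; the patent only specifies a weighted average over the N largest metrics with N ≥ 3.

```python
import numpy as np

def fuse_intermediate(values, metrics, n=3):
    # For intermediate-set pixels: take the N largest focus metrics
    # (N >= 3 per the claim) and fuse the corresponding pixel values by
    # a weighted average. Using the metrics themselves as the weights
    # is an assumption; the patent only specifies a weighted average.
    metrics = np.asarray(metrics, dtype=float)
    values = np.asarray(values, dtype=float)
    top = np.argsort(metrics)[-n:]             # indices of the N largest metrics
    weights = metrics[top] / metrics[top].sum()
    return float(np.dot(weights, values[top]))

# Four images: the three sharpest (metrics 0.2, 0.3, 0.4) dominate the result
fused = fuse_intermediate([10, 20, 30, 40], [0.1, 0.2, 0.3, 0.4], n=3)
```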
In an exemplary embodiment, the apparatus further comprises: a third defocus index acquisition module 700, a surrounding pixel point set selection module 710, and an equivalent pixel point determination module 720;
the third defocus index acquisition module 700 is configured to acquire the surrounding image sequence defocus indexes of the surrounding pixel points corresponding to the pixel point to be replaced;
a surrounding pixel point set selection module 710, configured to select a surrounding pixel point set whose surrounding image sequence defocus index is greater than a;
the equivalent pixel point determination module 720 is configured to determine the equivalent pixel point as the pixel point in the surrounding pixel point set nearest to the pixel point to be replaced.
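A sketch of the equivalent-pixel selection: among surrounding pixels whose defocus index exceeds a, the nearest one to the pixel to be replaced is chosen. Interpreting "nearest" as Euclidean distance over the whole index map is an assumption, and the threshold a = 0.7 is illustrative.

```python
import numpy as np

def equivalent_pixel(p, defocus_map, a=0.7):
    # For a blur-set pixel p = (row, col): among pixels whose defocus
    # index exceeds a, pick the one nearest to p (Euclidean distance is
    # an assumption; the threshold a = 0.7 is illustrative). Returns
    # None if no pixel qualifies.
    rows, cols = np.nonzero(defocus_map > a)
    if rows.size == 0:
        return None
    d2 = (rows - p[0]) ** 2 + (cols - p[1]) ** 2
    k = int(np.argmin(d2))
    return (int(rows[k]), int(cols[k]))

dmap = np.array([[0.1, 0.9, 0.1],
                 [0.1, 0.2, 0.1],
                 [0.1, 0.1, 0.8]])
# (0, 1) and (2, 2) both exceed a; (0, 1) is closer to the blur pixel (1, 1)
```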
It should be noted that the division of the functional modules in the depth of field synthesis apparatus based on multiple fusion strategies provided in the foregoing embodiment is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the depth of field synthesis apparatus provided in the above embodiment and the embodiment of the depth of field synthesis method based on multiple fusion strategies belong to the same concept; the specific manner in which each module performs its operations has been described in detail in the method embodiment and is not repeated here.
Referring to fig. 9, in an embodiment of the present application, an electronic device 4000 is provided. The electronic device 4000 may be a desktop computer, a notebook computer, a server, or the like.
In fig. 9, the electronic device 4000 includes at least one processor 4001 and at least one memory 4003.
Data interaction between the processor 4001 and the memory 4003 may be achieved through at least one communication bus 4002. The communication bus 4002 may include a path for transferring data between the processor 4001 and the memory 4003. The communication bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and can be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 9, but this does not mean that there is only one bus or one type of bus.
Optionally, the electronic device 4000 may further comprise a transceiver 4004, the transceiver 4004 may be used for data interaction between the electronic device and other electronic devices, such as transmission of data and/or reception of data, etc. It should be noted that, in practical applications, the transceiver 4004 is not limited to one, and the structure of the electronic device 4000 is not limited to the embodiment of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that implements computing functionality, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The memory 4003 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program instructions or code in the form of instructions or data structures and that can be accessed by the electronic device 4000.
The memory 4003 has computer readable instructions stored thereon, and the processor 4001 can read the computer readable instructions stored in the memory 4003 through the communication bus 4002.
The computer readable instructions are executed by the one or more processors 4001 to implement the depth of field synthesis method based on multiple fusion strategies in the embodiments described above.
Furthermore, in an embodiment of the present application, a storage medium is provided, on which computer readable instructions are stored; the computer readable instructions are executed by one or more processors to implement the depth of field synthesis method based on multiple fusion strategies as described above.
In an embodiment of the present application, a computer program product is provided. The computer program product includes computer readable instructions stored in a storage medium; one or more processors of an electronic device read the computer readable instructions from the storage medium, and load and execute them, so that the electronic device implements the depth of field synthesis method based on multiple fusion strategies as described above.
Compared with the related art, in the scheme of the present application, during image fusion the target image sequence can be determined in different ways according to the different image sequence defocus indexes of the pixel points. In all three ways of determining the target image sequence, the image sequences with clear texture are selected as the target image sequences for fusion, and the determination can be made with relatively simple calculations. This reduces the computational load of image fusion and thus improves fusion efficiency; in addition, fusing image sequences with clear texture mitigates the reduction in contrast of the depth of field composite image. Moreover, the local weighted average improves the quality of the depth of field composite image, so the problems of insufficient image contrast and low processing efficiency in the related art are effectively addressed.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make modifications and adaptations without departing from the principles of the present application, and such modifications and adaptations are intended to fall within the scope of the present application.

Claims (10)

1. A depth of field synthesis method based on a plurality of fusion strategies is characterized by comprising the following steps:
acquiring a plurality of color images of different focusing planes under the same coordinate;
acquiring gray images corresponding to the color images, wherein each color image corresponds to one gray image;
Calculating image sequence defocus indexes of all pixel points in the gray image;
determining, based on the image sequence defocus index of the pixel points, a target image sequence for fusion, wherein each pixel point corresponds to a target image sequence;
Fusing the target image sequences of all the pixel points;
the process of determining the target image sequence by each pixel point is as follows:
if the image sequence defocus index matches the clear set, determining the clearest image sequence as the target image sequence;
if the image sequence defocus index matches the middle set, acquiring a plurality of image sequences of the pixel point and performing a weighted average to determine the target image sequence;
if the image sequence defocus index matches the fuzzy set, acquiring an equivalent pixel point and determining the image sequence of the equivalent pixel point as the target image sequence.
2. The method of claim 1, wherein in the calculating of the image sequence defocus index for all pixels in the gray image, the method further comprises:
Determining a focusing metric corresponding to the pixel point;
and gamma correction is carried out on the focusing measurement to determine a corresponding image sequence defocus index.
3. The method of claim 2, wherein in matching the set of image sequence defocus indices, the method further comprises:
Selecting two critical values a and b, wherein a is greater than b;
when the image sequence defocus index is greater than a, matching the clear set;
when the image sequence defocus index is greater than b and less than or equal to a, matching the middle set;
and when the image sequence defocus index is less than or equal to b, matching the fuzzy set.
4. A method according to claim 3, wherein in said determining the clearest image sequence as the target image sequence, the method further comprises:
Acquiring all focusing metrics corresponding to the pixel points;
determining, based on all the focus metrics, the image sequence with the largest focus metric as the target image sequence.
5. A method according to claim 3, wherein in the step of obtaining a plurality of image sequences at pixels for weighted averaging to determine a target image sequence, the method further comprises:
Acquiring all focusing metrics corresponding to the pixel points;
Determining N appointed focus metrics based on all focus metrics, wherein the N appointed focus metrics are the largest focus metrics in all focus metrics, and N is more than or equal to 3;
the N specified focus metrics are weighted averaged to determine a target image sequence.
6. The method of claim 3, wherein in the process of acquiring the equivalent pixel and determining the image sequence of the equivalent pixel as the target image sequence, the method further comprises:
acquiring the surrounding image sequence defocus indexes of the surrounding pixel points corresponding to the pixel point to be replaced;
selecting a surrounding pixel point set whose surrounding image sequence defocus index is greater than a;
and determining the equivalent pixel point as the pixel point in the surrounding pixel point set nearest to the pixel point to be replaced.
7. A depth of field synthesis device based on a plurality of fusion strategies, comprising:
the color image acquisition module is used for acquiring color images of different focusing planes under the same coordinates;
The gray image acquisition module is used for acquiring gray images corresponding to color images, wherein each color image corresponds to one gray image;
The index calculation module is used for calculating image sequence defocus indexes of all pixel points in the gray image;
the target image sequence determining module is used for determining a target image sequence based on the image sequence defocus index of the pixel points, wherein each pixel point corresponds to the target image sequence;
The fusion module is used for fusing the target image sequences of all the pixel points;
the process of determining the target image sequence by each pixel point is as follows:
the first sequence determining module is used for determining the clearest image sequence as a target image sequence if the defocusing index of the image sequence is matched with the clear set;
the second sequence determining module is configured to, if the image sequence defocus index matches the middle set, acquire a plurality of image sequences of the pixel point and perform a weighted average to determine the target image sequence;
and the third sequence determining module is configured to, if the image sequence defocus index matches the fuzzy set, acquire an equivalent pixel point and determine the image sequence of the equivalent pixel point as the target image sequence.
8. The apparatus of claim 7, wherein the apparatus further comprises:
The focusing measurement determining module is used for determining the focusing measurement corresponding to the pixel point;
and the gamma correction module is used for carrying out gamma correction on the focusing measurement and determining a corresponding image sequence defocus index.
9. An electronic device, comprising: at least one processor, and at least one memory, wherein,
The memory has computer readable instructions stored thereon;
The computer readable instructions are executable by one or more of the processors to cause the electronic device to implement the depth of field synthesis method based on multiple fusion strategies as recited in any one of claims 1 to 6.
10. A storage medium having stored thereon computer readable instructions that are executed by one or more processors to implement the depth of field synthesis method based on multiple fusion strategies of any one of claims 1 to 6.
CN202311772202.3A 2023-12-21 2023-12-21 Depth of field synthesis method and device based on multiple fusion strategies, electronic equipment and storage medium Pending CN117974469A (en)


Publications (1)

Publication Number Publication Date
CN117974469A 2024-05-03


