CN116456191A - Image generation method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN116456191A
Authority
CN
China
Prior art keywords
focusing
image
preset number
images
sub-image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310374384.2A
Other languages
Chinese (zh)
Inventor
高蔓蔓
Current Assignee
Shanghai Wingtech Electronic Technology Co Ltd
Original Assignee
Shanghai Wingtech Electronic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Wingtech Electronic Technology Co Ltd filed Critical Shanghai Wingtech Electronic Technology Co Ltd
Priority to CN202310374384.2A priority Critical patent/CN116456191A/en
Publication of CN116456191A publication Critical patent/CN116456191A/en
Pending legal-status Critical Current


Classifications

    • H (Electricity) › H04 (Electric communication technique) › H04N (Pictorial communication, e.g. television) › H04N23/00 (Cameras or camera modules comprising electronic image sensors; control thereof) › H04N23/60 (Control of cameras or camera modules) › H04N23/67 (Focus control based on electronic image sensor signals)
    • H › H04 › H04N › H04N23/00 › H04N23/60 › H04N23/63 (Control of cameras or camera modules by using electronic viewfinders)
    • H › H04 › H04N › H04N5/00 (Details of television systems) › H04N5/222 (Studio circuitry; studio devices; studio equipment) › H04N5/262 (Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects) › H04N5/265 (Mixing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to the field of image processing and provides an image generation method, apparatus, device, and computer-readable storage medium. The method includes: determining a preset number of focusing points in the picture within the viewfinder range; focusing on each of the preset number of focusing points in a fused focusing mode to obtain a preset number of original images corresponding one-to-one to the focusing points; extracting, from each original image, the image blocks whose sharpness is higher than a preset sharpness threshold to obtain a plurality of sub-images; and synthesizing the plurality of sub-images into one image to obtain the image generation result for the picture within the viewfinder range. By determining multiple focusing points within the viewfinder range, focusing on each point separately to obtain multiple original images, and combining the sharp part of each original image into a single complete image of optimal sharpness, the method effectively improves the overall sharpness of the generated image.

Description

Image generation method, device, equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to an image generation method, an image generation apparatus, an image generation device, and a computer-readable storage medium.
Background
With the advancement of technology, current photographing devices, such as digital cameras and mobile phones with camera functions, can capture an image after focusing on the center of the shooting scene, or on the area where a target object is located, through autofocus (AF).
Depth of field is one of the important parameters of the image sensor in a photographing device: when the distance between the device and the target object falls within the depth of field of the image sensor, imaging after focusing on the target object is sharpest. However, a shooting scene usually also contains objects that do not lie in the same plane as the target object. In an image captured after focusing on the target object, the region of the target object is therefore sharp while the rest of the image is blurred, which degrades image quality.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image generation method, apparatus, device, and computer-readable storage medium that can improve image quality.
An embodiment of the present application provides an image generation method, including the following steps:
determining a preset number of focusing points in the picture within the viewfinder range;
focusing on each of the preset number of focusing points in a fused focusing mode to obtain a preset number of original images corresponding one-to-one to the preset number of focusing points;
extracting, from each original image, image blocks whose sharpness is higher than a preset sharpness threshold to obtain a plurality of sub-images;
and synthesizing the plurality of sub-images into one image to obtain the image generation result for the picture within the viewfinder range.
In one embodiment, determining the preset number of focusing points in the picture within the viewfinder range includes:
dividing the picture within the viewfinder range evenly into a preset number of viewfinder regions, and determining the center point of each viewfinder region as a focusing point; or,
determining a preset number of target objects contained in the picture within the viewfinder range, and determining the center point of each target object as a focusing point.
In one embodiment, focusing on each of the preset number of focusing points in a fused focusing mode to obtain a preset number of original images corresponding one-to-one to the preset number of focusing points includes:
focusing on each of the preset number of focusing points in a first focusing mode to acquire a first focused image and the confidence of the first focused image, where the first focusing mode is either phase focusing or contrast focusing;
if the confidence of the first focused image is lower than a preset confidence threshold, focusing in a second focusing mode to obtain a second focused image, where the second focusing mode is whichever of phase focusing and contrast focusing is not the first focusing mode;
and taking the second focused image as the original image corresponding to the focusing point.
In one embodiment, after focusing on each of the preset number of focusing points in the first focusing mode and obtaining the first focused image and its confidence, the method further includes:
if the confidence of the first focused image is not lower than the preset confidence threshold, taking the first focused image as the original image corresponding to the focusing point.
In one embodiment, before extracting the image blocks whose sharpness is higher than the preset sharpness threshold from each original image to obtain the plurality of sub-images, the method further includes:
for each original image, determining the sharpness of each image block in the original image based on the gradient values of the pixels in the original image.
In one embodiment, determining, for each original image, the sharpness of each image block based on the gradient values of the pixels includes:
calculating the gradient value of each pixel in the original image;
and, for each image block in the original image, calculating the mean of the gradient values of the pixels contained in the block to obtain the sharpness of the block.
In one embodiment, synthesizing the plurality of sub-images into one image to obtain the image generation result for the picture within the viewfinder range includes:
determining the position information of each sub-image in its corresponding original image;
and synthesizing the plurality of sub-images into one image according to the position information of each sub-image to obtain the image generation result for the picture within the viewfinder range.
An embodiment of the present application provides an image generation apparatus, including:
a first determining module, configured to determine a preset number of focusing points in the picture within the viewfinder range;
a focusing module, configured to focus on each of the preset number of focusing points in a fused focusing mode to obtain a preset number of original images corresponding one-to-one to the preset number of focusing points;
an extraction module, configured to extract, from each original image, image blocks whose sharpness is higher than a preset sharpness threshold to obtain a plurality of sub-images;
and a synthesis module, configured to synthesize the plurality of sub-images into one image to obtain the image generation result for the picture within the viewfinder range.
The embodiment of the application provides an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the image generation method provided by any embodiment of the application when executing the computer program.
Embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image generation method provided by any embodiment of the present application.
According to the image generation method, apparatus, device, and computer-readable storage medium described above, a plurality of focusing points are determined within the viewfinder range, each focusing point is focused on to acquire its corresponding original image, and the sharp portions of the original images are then segmented, extracted, and synthesized into a complete image of optimal sharpness, effectively improving the overall sharpness of the image.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flowchart of an image generation method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an image generation interface provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of an image generation method according to another embodiment of the present disclosure;
FIG. 4 is a flowchart of an image generation method according to another embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an image generation apparatus provided by an embodiment of the present disclosure;
FIG. 6 is an internal structural diagram of a computer device in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The embodiments of the present disclosure provide an image generating method, which is described below with reference to specific embodiments.
In one embodiment, as shown in fig. 1, an image generation method is provided. The embodiment is described as applied to a terminal; it is understood that the method may also be applied to a server, or to a system comprising a terminal and a server and implemented through interaction between them. The terminal may be any image acquisition device with image capture and processing functions, such as a smartphone, a handheld computer, a tablet computer, a wearable device with a display screen, a desktop computer, a notebook computer, an all-in-one machine, or a smart home device. It can be appreciated that the image generation method provided by the embodiments of the present disclosure may also be applied in other scenarios.
The following describes the image generation method shown in fig. 1, and the method includes the following specific steps:
s101, determining a preset number of focusing points in a picture in a view frame range.
The terminal captures images of the shooting scene through its camera and, before shooting, displays the current preview image, that is, the picture within the viewfinder range, on its display screen. Based on a preset focusing-point selection rule, or in response to a user's selection operation on the picture within the viewfinder range, the terminal determines a preset number of focusing points in that picture.
Fig. 2 is a schematic diagram of an image generation interface according to an embodiment of the disclosure. As shown in fig. 2, three focusing points are determined in the picture within the viewfinder range and indicated by circles; they are located at the upper-left, upper-right, and center positions of the picture, respectively. It is to be understood that the number and locations of focusing points in this embodiment are merely illustrative and do not limit the present disclosure.
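Step S101, in its grid-based variant (dividing the viewfinder picture into equal regions and taking each region's center as a focusing point, as in the nine-grid example later in the text), can be sketched in a few lines. The function name, grid size, and frame resolution below are illustrative assumptions, not part of the patent.

```python
def grid_focus_points(width, height, rows=3, cols=3):
    """Divide the viewfinder picture into rows x cols equal regions and
    return the center of each region as a focusing point (x, y)."""
    points = []
    cell_w, cell_h = width / cols, height / rows
    for r in range(rows):
        for c in range(cols):
            points.append((int((c + 0.5) * cell_w), int((r + 0.5) * cell_h)))
    return points

# Nine focusing points over a hypothetical 1920x1080 preview frame.
points = grid_focus_points(1920, 1080)
print(len(points), points[0], points[4])  # 9 (320, 180) (960, 540)
```

The target-object variant would replace the grid centers with the centers of detected objects; only the point-selection rule changes, not the rest of the pipeline.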
S102, focusing each focusing point in the preset number of focusing points in a fusion focusing mode to obtain a preset number of original images corresponding to the preset number of focusing points one by one.
Focusing refers to adjusting the focal distance of an image acquisition device so as to obtain a sharp picture. Autofocus is generally used: based on the principle of light reflection from the object, the reflected light is received by the sensor in the camera module (CCM), and after processing, an electric focusing mechanism is driven to perform focusing.
Phase focusing, also called phase-difference focusing, reserves a number of shielded pixels on the photosensitive element specifically for phase detection, and determines the focusing offset from quantities such as the spacing of these pixels and its change, thereby achieving accurate focusing. Specifically, a few shielded pixels (shield pixels) are inserted at regular positions on the photosensitive element (image sensor) of the image acquisition device, each covering half of an ordinary photosensitive pixel, so as to mimic the left-eye and right-eye imaging of human vision. The phase difference at the current position of the voice coil motor (VCM) is sensed, and from this phase difference the direction and distance the VCM must move to obtain a sharp image are derived. When the phase difference is 0, the captured image is sharpest. The VCM adjusts the lens position to present a sharp image.
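The phase-detection idea above can be illustrated with a toy 1-D model: the shielded "left-eye" and "right-eye" pixels produce two signals, and the shift that best aligns them plays the role of the phase difference (0 means in focus). This is a simplified sketch for intuition, not the sensor's actual algorithm; the signal values and search range are made up.

```python
def phase_shift(left, right, max_shift=3):
    """Return the integer shift s minimizing the mean squared error
    between left[i] and right[i + s]; a shift of 0 indicates focus."""
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + s]) for i in range(len(left))
                 if 0 <= i + s < len(right)]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# The "right-eye" signal is the "left-eye" signal displaced by 2 pixels,
# so the estimated phase difference is 2; the VCM would move to cancel it.
left = [0, 0, 1, 3, 5, 3, 1, 0, 0, 0]
right = [0, 0, 0, 0, 1, 3, 5, 3, 1, 0]
print(phase_shift(left, right))  # 2
```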
Contrast focusing is implemented by the photosensitive element and the image processor. While the lens is driven, images are captured in real time by the photosensitive element and passed to the image processor, which computes a figure of merit (namely the image contrast); the image with the greatest contrast is then selected by comparison.
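The contrast-focusing loop described above amounts to a sweep: capture a frame at each candidate lens position, score its contrast, and keep the position with the highest score. The sketch below uses a sum of squared neighbor differences as the contrast figure of merit and a simulated capture function; both are illustrative assumptions, not the patent's specific metric.

```python
def image_contrast(image):
    """Sum of squared differences between horizontally adjacent pixels;
    higher values mean a sharper (higher-contrast) image."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image for i in range(len(row) - 1))

def contrast_autofocus(capture_at, positions):
    """Drive the lens through the candidate positions and return the one
    whose captured image has the greatest contrast."""
    return max(positions, key=lambda pos: image_contrast(capture_at(pos)))

# Simulated camera: contrast falls off as the lens leaves position 6.
def capture_at(pos):
    amplitude = max(0, 10 - 2 * abs(pos - 6))
    return [[amplitude * (x % 2) for x in range(8)] for _ in range(4)]

print(contrast_autofocus(capture_at, range(11)))  # 6
```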
The embodiments of the present disclosure perform focusing in a fused focusing mode; for example, the sharper image may be selected as the original image from among the images acquired by several different focusing modes.
The terminal focuses on each focusing point separately and captures the corresponding original image, finally obtaining one original image for each focusing point; that is, the number of original images finally obtained equals the number of focusing points.
S103, extracting, from each original image, image blocks whose sharpness is higher than a preset sharpness threshold to obtain a plurality of sub-images.
Depth of field is the range of distances in front of and behind the subject over which the image acquisition device can produce a sharp image. When the device focuses, only one plane is actually in true focus. Therefore, among the images obtained by focusing on a given focusing point, only the portion near that point is sharp.
Accordingly, among the plurality of original images captured by focusing on different focusing points, in each original image only the region near its focusing point, together with the regions of objects lying in the same plane as the subject at that point, is sharp.
Specifically, the terminal extracts the sharp image blocks from each original image to obtain a plurality of sub-images. The number of sub-images obtained by extraction from the preset number of original images is not fixed; it is generally greater than or equal to the preset number.
More concretely, the terminal processes each original image, extracts its feature information (sharpness), compares it with the required feature, and filters out the parts that satisfy it, that is, the parts of each original image whose sharpness is higher than the preset sharpness threshold, obtaining a plurality of sub-images.
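Step S103's filtering can be sketched as follows: each original image is scored block by block, and every block whose sharpness exceeds the preset threshold becomes a sub-image. The data layout (a dict from block position to a (sharpness, pixels) pair) and the threshold value are illustrative assumptions.

```python
def select_sub_images(originals, threshold):
    """Gather, across all original images, the blocks whose sharpness is
    higher than the preset sharpness threshold."""
    return [(pos, pixels)
            for image in originals
            for pos, (sharpness, pixels) in image.items()
            if sharpness > threshold]

# Two original images focused on different points: each is sharp in a
# different block, so one sub-image is taken from each.
originals = [
    {(0, 0): (0.9, "sharp-left"), (4, 0): (0.2, "blurry-right")},
    {(0, 0): (0.3, "blurry-left"), (4, 0): (0.8, "sharp-right")},
]
print(select_sub_images(originals, threshold=0.5))
```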
S104, synthesizing the plurality of sub-images into one image to obtain the image generation result for the picture within the viewfinder range.
From the multiple independent sub-images obtained, the terminal synthesizes these elements into one image, namely the image generation result for the picture within the viewfinder range, which is finally presented to the user.
Specifically, the sub-images are synthesized into a complete image of optimal sharpness using an interpolation algorithm.
According to the embodiments of the present disclosure, a plurality of focusing points are determined within the viewfinder range, each focusing point is focused on separately to obtain a plurality of original images, and the sharp part of each original image is then segmented, extracted, and synthesized into a complete image of optimal sharpness, effectively improving the overall sharpness of the image.
In addition, in the embodiments of the present disclosure, each focusing point is focused on with a combination of two focusing modes, and the image with the better focusing effect is selected from the two, further ensuring the sharpness of the captured image.
On the basis of the above embodiment, determining the preset number of focusing points in the picture within the viewfinder range includes: dividing the picture within the viewfinder range evenly into a preset number of viewfinder regions and determining the center point of each region as a focusing point; or determining a preset number of target objects contained in the picture within the viewfinder range and determining the center point of each target object as a focusing point.
The terminal can determine the focusing points in the picture within the viewfinder range according to the actual needs of the user.
In some embodiments, the terminal divides the picture within the viewfinder range evenly into a preset number of viewfinder regions, for example nine rectangular regions in a nine-square-grid fashion, and determines the geometric center of each region as a focusing point.
In other embodiments, the terminal determines the four corners and the center of the picture within the viewfinder range as focusing points, quickly focuses on these five points, and captures five original images.
In other embodiments, the terminal performs target detection on the preview picture within the viewfinder range, determines a plurality of target objects in the picture, such as several face regions or several buildings, and takes the center of each target object as a focusing point.
In other embodiments, the terminal may also identify and extract large same-color regions in the preview picture within the viewfinder range and use the center of each such region as a focusing point.
It will be appreciated that the preset number of focusing points may also be determined in the picture in other ways, which are not enumerated here.
According to the embodiments of the present disclosure, focusing points are selected in different ways for different picture conditions, further improving the flexibility of the image generation method.
Fig. 3 is a flowchart of an image generating method according to another embodiment of the present disclosure, as shown in fig. 3, the method includes the following steps:
S301, determining a preset number of focusing points in the picture within the viewfinder range.
Specifically, the implementation process and principle of S301 are identical to those of S101 and are not repeated here.
S302, focusing on each of the preset number of focusing points in a first focusing mode to obtain a first focused image and the confidence of the first focused image, where the first focusing mode is either phase focusing or contrast focusing.
After determining the preset number of focusing points within the viewfinder range, the terminal focuses on each focusing point and captures the corresponding original image. Specifically, each focusing point is focused on in a first focusing mode, where the first focusing mode is either phase focusing or contrast focusing.
In some embodiments, phase focusing is used as the first focusing mode: the terminal quickly focuses on each focusing point by phase focusing to obtain the corresponding first focused image and its confidence. The confidence characterizes the focusing effect of the first focused image: the higher the confidence, the better the focusing effect and the sharper the image; the lower the confidence, the worse the focusing effect and the more blurred the image.
In other embodiments, contrast focusing is used as the first focusing mode: the terminal quickly focuses on each focusing point by contrast focusing to obtain the corresponding first focused image and its confidence.
S303, judging whether the confidence of the first focused image is lower than the preset confidence threshold. If so, S304 is executed; if not, S306 is executed.
S304, focusing in a second focusing mode to obtain a second focused image, where the second focusing mode is whichever of phase focusing and contrast focusing is not the first focusing mode.
When the confidence of the first focused image is lower than the preset confidence threshold, the first focused image obtained in the first focusing mode has a poor focusing effect and poor sharpness; the terminal therefore switches to the second focusing mode to focus on the focusing point again and obtain the original image corresponding to that point.
Specifically, in some embodiments, when phase focusing is the first focusing mode and the first focused image acquired by phase focusing has a confidence lower than the preset threshold, the terminal switches to contrast focusing and focuses on the same point again to obtain a second focused image.
In other embodiments, when contrast focusing is the first focusing mode and the first focused image acquired by contrast focusing has a confidence lower than the preset threshold, the terminal switches to phase focusing and focuses on the same point again to obtain a second focused image.
S305, taking the second focused image as the original image corresponding to the focusing point.
S306, taking the first focused image as the original image corresponding to the focusing point.
When the confidence of the first focused image is not lower than the preset confidence threshold, the first focused image is considered to have a good focusing effect and to be sufficiently sharp, and it is stored directly as the original image corresponding to the focusing point.
It can be understood that the terminal may first focus on every focusing point in the first focusing mode to acquire the preset number of first focused images and their confidences, and then refocus, in the second focusing mode, on those focusing points whose first focused images have a confidence lower than the preset threshold. Alternatively, for each focusing point in turn, the terminal may focus in the first focusing mode to acquire the first focused image and its confidence, refocus in the second focusing mode to acquire a second focused image if the confidence is lower than the preset threshold, and only then proceed to the next focusing point.
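The per-point fallback of S302 to S306 (the second of the two orderings just described) can be sketched as below. The stub focusing functions and the confidence threshold of 0.8 are illustrative assumptions standing in for real phase and contrast focusing.

```python
def fused_focus(points, focus_first, focus_second, conf_threshold=0.8):
    """For each focusing point, focus in the first mode; if the returned
    confidence is below the threshold, refocus the same point in the
    second mode and keep that image as the point's original image."""
    originals = []
    for point in points:
        image, confidence = focus_first(point)
        if confidence < conf_threshold:
            image = focus_second(point)
        originals.append(image)
    return originals

# Stubs: phase focus "fails" (low confidence) only at point B, so only
# that point falls back to contrast focus.
def phase_focus(point):
    return ("phase@" + point, 0.3 if point == "B" else 0.95)

def contrast_focus(point):
    return "contrast@" + point

print(fused_focus(["A", "B", "C"], phase_focus, contrast_focus))
```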
S307, extracting, from each original image, image blocks whose sharpness is higher than a preset sharpness threshold to obtain a plurality of sub-images.
S308, synthesizing the plurality of sub-images into one image to obtain the image generation result for the picture within the viewfinder range.
Specifically, the implementation processes and principles of S307 to S308 and S103 to S104 are identical, and will not be described here again.
According to the embodiments of the present disclosure, each focusing point is focused on by combining two different focusing modes, which ensures as far as possible that the original image acquired for each focusing point is of high quality, and hence that the image generation result synthesized from the original images is of high quality, further improving the effect of the image generation method.
Fig. 4 is a flowchart of an image generating method according to another embodiment of the present disclosure, as shown in fig. 4, the method includes the following steps:
S401, determining a preset number of focusing points in the picture within the viewfinder range.
S402, focusing on each of the preset number of focusing points in a fused focusing mode to obtain a preset number of original images corresponding one-to-one to the preset number of focusing points.
Specifically, the implementation processes and principles of S401 to S402 and S101 to S102 are identical, and are not described here again.
S403, determining, for each original image, the sharpness of each image block in the image based on the gradient values of the pixels in the image.
Specifically, the gradient value of each pixel in the original image is calculated; then, for each image block in the original image, the mean of the gradient values of the pixels contained in the block is calculated to obtain the sharpness of the block.
The gradient is a parameter that reflects change. The gradient of a pixel characterizes how much the pixel differs from its neighboring pixels; in general, the higher the gradient values, the more likely the pixels are to form a sharper image.
Specifically, the terminal divides an original image into a plurality of image blocks and calculates, for each block, the mean of the gradient values of the pixels it contains, obtaining the sharpness of the block.
In some embodiments, the original image can be divided directly according to the viewfinder-region division described in the above embodiments, yielding a plurality of image blocks; alternatively, the original image may be divided into a plurality of rectangular image blocks of the same size, and so on, which the present disclosure does not limit.
In some embodiments, calculating the gradient value of each pixel in the original image includes: computing the square of the gray-level difference between two adjacent pixels as the gradient value of the target pixel; or extracting the gradients of the target pixel in the horizontal and vertical directions with the Sobel operator and taking the variance of the convolutions of this edge-detection operator in the two directions as the gradient value of the target pixel; or computing the variance between the gray value of the target pixel and the mean gray value of the whole image as the gradient value of the target pixel; and so on.
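The first gradient option above (squared differences to adjacent pixels, averaged per block) can be sketched as follows. The block coordinates and the toy grayscale data are illustrative assumptions.

```python
def block_sharpness(gray, x0, y0, w, h):
    """Mean gradient value over one image block, where a pixel's gradient
    is the squared difference to its right and lower neighbors."""
    total, count = 0.0, 0
    for y in range(y0, y0 + h - 1):
        for x in range(x0, x0 + w - 1):
            gx = gray[y][x + 1] - gray[y][x]
            gy = gray[y + 1][x] - gray[y][x]
            total += gx * gx + gy * gy
            count += 1
    return total / count

# Left half: checkerboard (sharp edges); right half: uniform (no detail).
gray = [[(255 if (x + y) % 2 else 0) if x < 4 else 128
         for x in range(8)] for y in range(8)]
print(block_sharpness(gray, 0, 0, 4, 4) > block_sharpness(gray, 4, 0, 4, 4))  # True
```

In the method's S404, blocks like the left one would pass the sharpness threshold and become sub-images, while blocks like the right one would be discarded.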
S404, extracting, from each original image, image blocks whose sharpness is higher than a preset sharpness threshold to obtain a plurality of sub-images.
S405, synthesizing the plurality of sub-images into one image to obtain the image generation result for the picture within the viewfinder range.
Specifically, the implementation processes and principles of S404 to S405 and S103 to S104 are identical, and will not be described here again.
According to the embodiments of the present disclosure, image sharpness is detected based on the gradient values of the pixels, yielding a plurality of sharp sub-images from different regions; the sub-images are then synthesized into the final image generation result, improving its quality.
On the basis of the above embodiment, the synthesizing the plurality of sub-images into one image to obtain an image generation result of the frame within the view-finder frame range includes: determining the position information of each sub-image in an original image corresponding to the sub-image; and according to the position information of each sub-image, synthesizing the plurality of sub-images into one image to obtain an image generation result of the picture in the view-finding frame range.
Each sub-image is obtained by dividing and extracting the corresponding original image, and the view finding range corresponding to each original image is the same, so that the images presented by the sub-images at the same position in different original images are the same. Thus, in the final image generation result, each sub-image should still be located at the same position as in the original image, i.e. the position of the sub-image in the final image generation result can be determined based on its position in its corresponding original image.
Specifically, the terminal determines the position information of each sub-image within its corresponding original image, and then stitches and synthesizes the plurality of sub-images according to that position information; for example, an interpolation algorithm may be used to stitch and synthesize the plurality of sub-images.
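Because all originals cover the same viewfinder range, composition reduces to pasting each sub-image back at the position it held in its original image. A minimal sketch (blending at seams, e.g. by interpolation, is omitted):

```python
import numpy as np

def compose(sub_images, shape):
    """Synthesize sub-images into one image by position (S405).
    `sub_images` maps (top, left) -> 2-D grayscale block; the coordinates
    are those of the block in its original image, which, since every
    original shares the viewfinder range, are also its coordinates in
    the result."""
    canvas = np.zeros(shape, dtype=float)
    for (top, left), block in sub_images.items():
        h, w = block.shape
        canvas[top:top + h, left:left + w] = block
    return canvas
```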
According to the embodiment of the disclosure, the plurality of sub-images are synthesized according to the position information of the sub-images in the original image, so that the finally synthesized image generation result can be ensured to accord with the expected view finding range of the user, and the accuracy of the image generation method is ensured.
It should be understood that, although the steps in the flowcharts of fig. 1, 3, and 4 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1, 3, and 4 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages are not necessarily executed sequentially, and may be performed in turn or alternately with at least a part of the other steps, or of the sub-steps or stages of the other steps.
In one embodiment, as shown in fig. 5, there is provided an image generating apparatus 50 including: a first determining module 51, a focusing module 52, an extracting module 53, and a synthesizing module 54; wherein the first determining module 51 is configured to determine a preset number of focus points in the picture within the viewfinder range; the focusing module 52 is configured to focus each of the preset number of focus points in a fusion focusing manner, so as to obtain a preset number of original images corresponding one-to-one to the preset number of focus points; the extracting module 53 is configured to extract the image blocks in each original image whose sharpness is higher than a preset sharpness threshold, so as to obtain a plurality of sub-images; and the synthesizing module 54 is configured to synthesize the plurality of sub-images into one image to obtain an image generation result of the picture within the viewfinder range.
Optionally, the first determining module 51 is configured to divide the picture within the viewfinder range evenly into a preset number of viewfinder areas and determine the center point of each viewfinder area as a focus point; or to determine a preset number of target objects contained in the picture within the viewfinder range and determine the center point of each target object as a focus point.
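The first of these two strategies, an even grid of viewfinder areas with a focus point at the center of each, might be sketched as follows; expressing the "preset number" as a rows-by-cols grid is an assumed parameterization:

```python
def grid_focus_points(width, height, rows, cols):
    """Divide a width x height viewfinder evenly into rows*cols viewfinder
    areas and return the (x, y) center point of each area as a focus point."""
    cell_w = width / cols
    cell_h = height / rows
    return [((c + 0.5) * cell_w, (r + 0.5) * cell_h)
            for r in range(rows) for c in range(cols)]
```

The object-detection variant would instead return the center of each detected target object's bounding box.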
Optionally, the focusing module 52 includes a first acquiring unit 521, a second acquiring unit 522, and a first determining unit 523; the first obtaining unit 521 is configured to focus, for each of the preset number of focus points, by a first focusing mode, to obtain a first focusing image and a confidence level of the first focusing image, where the first focusing mode is any one of phase focusing and contrast focusing; the second obtaining unit 522 is configured to perform focusing by a second focusing mode if the confidence level of the first focusing image is lower than a preset confidence level threshold, and obtain a second focusing image, where the second focusing mode is one of phase focusing and contrast focusing other than the first focusing mode; the first determining unit 523 is configured to take the second focused image as an original image corresponding to the focusing point.
Optionally, the first determining unit 523 is further configured to, if the confidence level of the first focused image is not lower than a preset confidence level threshold, take the first focused image as the original image corresponding to the focusing point.
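The confidence-based fallback implemented by units 521-523 can be sketched as follows; the `focus_with` callable, the mode names, and the 0.8 threshold are hypothetical stand-ins for the camera's phase-focusing and contrast-focusing drivers:

```python
def fused_focus(focus_point, focus_with, confidence_threshold=0.8,
                first_mode="phase", second_mode="contrast"):
    """Fusion focusing for one focus point: focus in the first mode and,
    if the resulting image's confidence falls below the threshold, refocus
    in the second mode and use that image as the original image instead.
    `focus_with` is a hypothetical callable (mode, point) -> (image, confidence)."""
    image, confidence = focus_with(first_mode, focus_point)
    if confidence < confidence_threshold:
        # First focusing image is unreliable: fall back to the other mode.
        image, _ = focus_with(second_mode, focus_point)
    return image
```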
Optionally, the image generating apparatus 50 further comprises a second determining module 55, configured to determine, for each original image, sharpness of each image block in the original image based on the gradient value of each pixel point in the original image.
Optionally, the second determining module 55 includes a calculating unit 551, a third obtaining unit 552; the calculating unit 551 is configured to calculate a gradient value of each pixel point in the original image; the third obtaining unit 552 is configured to calculate, for each image block in the original image, an average value of gradient values of pixel points included in the image block, and obtain sharpness of the image block.
Optionally, the synthesizing module 54 includes a second determining unit 541, a synthesizing unit 542; the second determining unit 541 is configured to determine, for each sub-image, location information of the sub-image in an original image corresponding to the sub-image; the synthesizing unit 542 is configured to synthesize the plurality of sub-images into one image according to the position information of each sub-image, and obtain an image generation result of the frame within the viewfinder range.
The image generating apparatus of the embodiment shown in fig. 5 can be used to execute the technical solutions of the method embodiments above: a plurality of focus points are determined within the viewfinder range, each focus point is focused separately to obtain a plurality of original images, and the sharp part of each original image is then segmented, extracted, and synthesized into one complete image of the best sharpness, effectively improving the overall sharpness of the image.
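Putting the modules together, a minimal end-to-end sketch of this flow might look like the following; `capture_at`, the block size, the threshold, and the use of `np.gradient` as the gradient measure are all illustrative assumptions:

```python
import numpy as np

def generate_image(capture_at, focus_points, block=8, threshold=20.0):
    """Sketch of the full flow: focus on each point to get one original
    image per point, score each image block by its mean gradient value,
    keep the blocks above the sharpness threshold as sub-images, and paste
    them back at their original positions. `capture_at` is a hypothetical
    callable focus_point -> 2-D grayscale array."""
    originals = [capture_at(p) for p in focus_points]
    h, w = originals[0].shape
    result = np.zeros((h, w))
    for img in originals:
        gy, gx = np.gradient(img.astype(float))  # per-axis finite differences
        grad = np.hypot(gx, gy)
        for top in range(0, h, block):
            for left in range(0, w, block):
                sl = (slice(top, top + block), slice(left, left + block))
                if grad[sl].mean() > threshold:
                    result[sl] = img[sl]         # sharp block becomes a sub-image
    return result
```

With two originals that are each sharp in a different half of the frame, the result takes its left half from one capture and its right half from the other.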
For specific limitations of the image generating apparatus, reference may be made to the limitations of the image generating method above, which are not repeated here. Each module in the above image generating apparatus may be implemented wholly or partly by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, a processor in the electronic device, or may be stored in software form in a memory of the electronic device, so that the processor can invoke them and execute the operations corresponding to the above modules.
In one embodiment, an electronic device is provided, which may be a server, and the internal structure of which may be as shown in fig. 6. The electronic device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the electronic device is for storing image data. The network interface of the electronic device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image generation method.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the electronic device to which the present application is applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the image generating apparatus provided herein may be implemented in the form of a computer program that is executable on an electronic device as shown in fig. 6. The memory of the electronic device may store therein various program modules constituting the image generating apparatus, such as the first determination module 51, the focusing module 52, the extraction module 53, and the synthesizing module 54 shown in fig. 5. The computer program constituted by the respective program modules causes the processor to execute the steps in the image generation method of the respective embodiments of the present application described in the present specification.
For example, the electronic device shown in fig. 6 may determine a preset number of focus points in the picture within the viewfinder range through the first determining module 51 of the image generating apparatus shown in fig. 5. The electronic device may focus each of the preset number of focus points in a fusion focusing manner through the focusing module 52, so as to obtain a preset number of original images corresponding one-to-one to the preset number of focus points. The electronic device may extract, through the extracting module 53, the image blocks in each original image whose sharpness is higher than a preset sharpness threshold, so as to obtain a plurality of sub-images. The electronic device may synthesize the plurality of sub-images into one image through the synthesizing module 54, so as to obtain an image generation result of the picture within the viewfinder range.
In one embodiment, an electronic device is provided comprising a memory storing a computer program and a processor that when executing the computer program performs the steps of: determining a preset number of focusing points in a picture in the range of the view finding frame; focusing each focusing point in the preset number of focusing points in a fusion focusing mode to obtain a preset number of original images corresponding to the preset number of focusing points one by one; extracting image blocks with definition higher than a preset definition threshold value from each original image to obtain a plurality of sub-images; and synthesizing the plurality of sub-images into one image to obtain an image generation result of the picture in the view-finding frame range.
In one embodiment, the processor when executing the computer program further performs the steps of: dividing the picture in the view-finding frame range evenly into a preset number of view-finding areas, and determining the central point of each view-finding area as a focusing point; or determining a preset number of target objects contained in the picture within the view-finding frame range, and determining the center point of each target object as a focusing point.
In one embodiment, the processor when executing the computer program further performs the steps of: focusing each focusing point in the preset number of focusing points in a first focusing mode to acquire a first focusing image and the confidence coefficient of the first focusing image, wherein the first focusing mode is any one of phase focusing and contrast focusing; if the confidence coefficient of the first focusing image is lower than a preset confidence coefficient threshold value, focusing is carried out in a second focusing mode to obtain a second focusing image, wherein the second focusing mode is one of phase focusing and contrast focusing except the first focusing mode; and taking the second focusing image as an original image corresponding to the focusing point.
In one embodiment, the processor when executing the computer program further performs the steps of: and if the confidence coefficient of the first focusing image is not lower than a preset confidence coefficient threshold value, taking the first focusing image as an original image corresponding to the focusing point.
In one embodiment, the processor when executing the computer program further performs the steps of: for each original image, determining the definition of each image block in the original image based on the gradient value of each pixel point in the original image.
In one embodiment, the processor when executing the computer program further performs the steps of: calculating a gradient value of each pixel point in the original image; and calculating the average value of gradient values of pixel points contained in the image blocks aiming at each image block in the original image to obtain the definition of the image blocks.
In one embodiment, the processor when executing the computer program further performs the steps of: determining the position information of each sub-image in an original image corresponding to the sub-image; and according to the position information of each sub-image, synthesizing the plurality of sub-images into one image to obtain an image generation result of the picture in the view-finding frame range.
According to the embodiment of the disclosure, the plurality of focusing points are determined in the view finding range, each focusing point is focused respectively to obtain the plurality of original images, and the clear part in each original image is further segmented, extracted and synthesized into the complete image with the best definition, so that the overall definition of the image is effectively improved.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of: determining a preset number of focusing points in a picture in the range of the view finding frame; focusing each focusing point in the preset number of focusing points in a fusion focusing mode to obtain a preset number of original images corresponding to the preset number of focusing points one by one; extracting image blocks with definition higher than a preset definition threshold value from each original image to obtain a plurality of sub-images; and synthesizing the plurality of sub-images into one image to obtain an image generation result of the picture in the view-finding frame range.
In one embodiment, the computer program when executed by the processor further performs the steps of: dividing the picture in the view-finding frame range evenly into a preset number of view-finding areas, and determining the central point of each view-finding area as a focusing point; or determining a preset number of target objects contained in the picture within the view-finding frame range, and determining the center point of each target object as a focusing point.
In one embodiment, the computer program when executed by the processor further performs the steps of: focusing each focusing point in the preset number of focusing points in a first focusing mode to acquire a first focusing image and the confidence coefficient of the first focusing image, wherein the first focusing mode is any one of phase focusing and contrast focusing; if the confidence coefficient of the first focusing image is lower than a preset confidence coefficient threshold value, focusing is carried out in a second focusing mode to obtain a second focusing image, wherein the second focusing mode is one of phase focusing and contrast focusing except the first focusing mode; and taking the second focusing image as an original image corresponding to the focusing point.
In one embodiment, the computer program when executed by the processor further performs the steps of: and if the confidence coefficient of the first focusing image is not lower than a preset confidence coefficient threshold value, taking the first focusing image as an original image corresponding to the focusing point.
In one embodiment, the computer program when executed by the processor further performs the steps of: for each original image, determining the definition of each image block in the original image based on the gradient value of each pixel point in the original image.
In one embodiment, the computer program when executed by the processor further performs the steps of: calculating a gradient value of each pixel point in the original image; and calculating the average value of gradient values of pixel points contained in the image blocks aiming at each image block in the original image to obtain the definition of the image blocks.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the position information of each sub-image in an original image corresponding to the sub-image; and according to the position information of each sub-image, synthesizing the plurality of sub-images into one image to obtain an image generation result of the picture in the view-finding frame range.
According to the embodiment of the disclosure, the plurality of focusing points are determined in the view finding range, each focusing point is focused respectively to obtain the plurality of original images, and the clear part in each original image is further segmented, extracted and synthesized into the complete image with the best definition, so that the overall definition of the image is effectively improved.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, or the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (Static Random Access Memory, SRAM), dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (10)

1. An image generation method, the method comprising:
determining a preset number of focusing points in a picture in the range of the view finding frame;
focusing each focusing point in the preset number of focusing points in a fusion focusing mode to obtain a preset number of original images corresponding to the preset number of focusing points one by one;
extracting image blocks with definition higher than a preset definition threshold value from each original image to obtain a plurality of sub-images;
and synthesizing the plurality of sub-images into one image to obtain an image generation result of the picture in the view-finding frame range.
2. The method of claim 1, wherein the determining a preset number of focusing points in the picture in the range of the view finding frame comprises:
dividing the picture in the view-finding frame range evenly into a preset number of view-finding areas, and determining the central point of each view-finding area as a focusing point; or,
and determining a preset number of target objects contained in the picture in the view frame range, and determining the center point of each target object as a focusing point.
3. The method according to claim 1, wherein for each of the preset number of focus points, focusing is performed by a fused focusing manner, and obtaining a preset number of original images corresponding to the preset number of focus points one to one includes:
focusing each focusing point in the preset number of focusing points in a first focusing mode to acquire a first focusing image and the confidence coefficient of the first focusing image, wherein the first focusing mode is any one of phase focusing and contrast focusing;
if the confidence coefficient of the first focusing image is lower than a preset confidence coefficient threshold value, focusing is carried out in a second focusing mode to obtain a second focusing image, wherein the second focusing mode is one of phase focusing and contrast focusing except the first focusing mode;
and taking the second focusing image as an original image corresponding to the focusing point.
4. The method according to claim 3, wherein after the focusing each focusing point in the preset number of focusing points in the first focusing mode to acquire the first focusing image and the confidence coefficient of the first focusing image, the method further comprises:
and if the confidence coefficient of the first focusing image is not lower than a preset confidence coefficient threshold value, taking the first focusing image as an original image corresponding to the focusing point.
5. The method according to claim 1, wherein before extracting image blocks with a sharpness higher than a preset sharpness threshold in each original image to obtain a plurality of sub-images, the method further comprises:
for each original image, determining the definition of each image block in the original image based on the gradient value of each pixel point in the original image.
6. The method of claim 5, wherein determining, for each original image, sharpness of each image block in the original image based on the gradient value of each pixel in the original image comprises:
calculating a gradient value of each pixel point in the original image;
and calculating the average value of gradient values of pixel points contained in the image blocks aiming at each image block in the original image to obtain the definition of the image blocks.
7. The method according to claim 1, wherein the synthesizing the plurality of sub-images into one image to obtain an image generation result of the frame within the view-finder frame range includes:
determining the position information of each sub-image in an original image corresponding to the sub-image;
and according to the position information of each sub-image, synthesizing the plurality of sub-images into one image to obtain an image generation result of the picture in the view-finding frame range.
8. An image generating apparatus, comprising:
a first determining module, configured to determine a preset number of focal points in a frame within a range of a viewfinder;
the focusing module is used for focusing each focusing point in the preset number of focusing points in a fusion focusing mode to obtain a preset number of original images corresponding to the preset number of focusing points one by one;
the extraction module is used for extracting image blocks with the definition higher than a preset definition threshold value in each original image to obtain a plurality of sub-images;
and the synthesis module is used for synthesizing the plurality of sub-images into one image to obtain an image generation result of the picture in the view-finding frame range.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202310374384.2A 2023-04-06 2023-04-06 Image generation method, device, equipment and computer readable storage medium Pending CN116456191A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310374384.2A CN116456191A (en) 2023-04-06 2023-04-06 Image generation method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310374384.2A CN116456191A (en) 2023-04-06 2023-04-06 Image generation method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116456191A true CN116456191A (en) 2023-07-18

Family

ID=87129659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310374384.2A Pending CN116456191A (en) 2023-04-06 2023-04-06 Image generation method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116456191A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630220A (en) * 2023-07-25 2023-08-22 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium
CN116630220B (en) * 2023-07-25 2023-11-21 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium

Similar Documents

Publication Publication Date Title
KR102278776B1 (en) Image processing method, apparatus, and apparatus
CN110536057B (en) Image processing method and device, electronic equipment and computer readable storage medium
US20170111582A1 (en) Wide-Area Image Acquiring Method and Apparatus
CN106899781B (en) Image processing method and electronic equipment
JP5592006B2 (en) 3D image processing
KR102229811B1 (en) Filming method and terminal for terminal
US8860816B2 (en) Scene enhancements in off-center peripheral regions for nonlinear lens geometries
CN113129241B (en) Image processing method and device, computer readable medium and electronic equipment
KR20110078175A (en) Method and apparatus for generating of image data
JP6308748B2 (en) Image processing apparatus, imaging apparatus, and image processing method
CN107749944A (en) A kind of image pickup method and device
CN110213498B (en) Image generation method and device, electronic equipment and computer readable storage medium
CN110650288B (en) Focusing control method and device, electronic equipment and computer readable storage medium
CN112087571A (en) Image acquisition method and device, electronic equipment and computer readable storage medium
WO2019037038A1 (en) Image processing method and device, and server
CN112019734B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN110177212B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112261292B (en) Image acquisition method, terminal, chip and storage medium
JP2009047497A (en) Stereoscopic imaging device, control method of stereoscopic imaging device, and program
CN112991245A (en) Double-shot blurring processing method and device, electronic equipment and readable storage medium
CN110278366B (en) Panoramic image blurring method, terminal and computer readable storage medium
US20130083169A1 (en) Image capturing apparatus, image processing apparatus, image processing method and program
CN116456191A (en) Image generation method, device, equipment and computer readable storage medium
JP2009047498A (en) Stereoscopic imaging device, control method of stereoscopic imaging device, and program
CN108810326B (en) Photographing method and device and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination