CN116916151A - Shooting method, electronic device and storage medium

Shooting method, electronic device and storage medium

Info

Publication number: CN116916151A
Application number: CN202311172977.7A
Authority: CN (China)
Prior art keywords: image, preview, instruction, shooting, electronic device
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN116916151B
Inventors: 张帅勇, 丁大钧, 邵涛, 朱聪超
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority to CN202311172977.7A
Publication of CN116916151A and, upon grant, of CN116916151B

Classifications

    • H04N23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters (under H04N23/00 Cameras or camera modules comprising electronic image sensors; control thereof; H04N23/60 Control of cameras or camera modules; H04N23/63 Control by using electronic viewfinders; H04N23/631 GUIs for controlling image capture or setting capture parameters)
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/84: Camera processing pipelines; components thereof for processing colour signals (under H04N23/80 Camera processing pipelines)
    • H04N23/951: Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio (under H04N23/95)

Abstract

The present application relates to the technical field of intelligent terminals, and in particular to a shooting method, an electronic device, and a computer-readable storage medium. The method includes: displaying a shooting preview screen of a first application; detecting a first shooting instruction from a user and capturing an instruction image corresponding to the first shooting instruction; fusing the instruction image with at least one stored preview image to obtain a fused image; selecting, from among the stored preview images, the instruction image, and the fused image, a first image that meets a quality condition, and obtaining a target image corresponding to the first shooting instruction based on the first image; and displaying the target image. In this way, the image quality of the resulting image is no lower than that of the preview images, which improves the user's shooting experience.

Description

Shooting method, electronic device and storage medium
Technical Field
The present application relates to the technical field of intelligent terminals, and in particular to a shooting method, an electronic device, and a computer-readable storage medium.
Background
While a user is taking a photo, the electronic device typically presents the user with a continuous sequence of preview frames (e.g., a preview video stream) so that the user can capture a satisfactory image. During this process, the user can watch the preview image change in the preview area of the camera application, and after the user clicks the shooting control, the captured target image is obtained.
It should be appreciated that the target image is typically determined by taking the single frame captured by the camera of the electronic device at the moment the user clicks the shooting control as a reference image and combining it with at least one preview frame captured before the click. For example, the electronic device may fuse the sharper of three preview frames captured before the user clicks the shooting control with the reference image, so as to obtain a target image sharper than the preview images.
However, in some scenes the image quality of the target image may be lower than that of the preview images, which degrades the user's shooting experience. For example, when hand shake at the moment the user clicks the shooting control prevents the camera from focusing, the reference image is blurred, and the resulting target image may remain unclear even after fusion with the preview frames. For another example, when the user shoots an object moving at high speed, or when the electronic device itself moves at high speed, the position of that object relative to the electronic device differs across the consecutive preview frames captured before the click. Fusing these preview frames with the reference image then leaves afterimages (ghosting) in the target image where the shot object overlaps itself.
Disclosure of Invention
Some embodiments of the present application provide a shooting method. The application is described in terms of several aspects; the implementations and beneficial effects of these aspects may be referenced to one another.
In a first aspect, the present application proposes a shooting method applied to an electronic device, the method comprising: displaying a shooting preview screen of a first application; detecting a first shooting instruction from a user, and capturing an instruction image corresponding to the first shooting instruction; fusing the instruction image with at least one stored preview image to obtain a fused image; selecting, from among the stored preview images, the instruction image, and the fused image, a first image that meets a quality condition, and obtaining a target image corresponding to the first shooting instruction based on the first image; and displaying the target image.
That is, the shooting preview screen displays a continuous sequence of preview frames; a first shooting instruction from the user (e.g., a click on the shooting control) is detected, and a corresponding instruction image (i.e., a reference image) is captured. The reference image is then fused with at least one stored preview image to obtain a fused image, a first image that meets the quality condition is selected from among the stored preview images, the reference image, and the fused image, a target image (i.e., a result image) corresponding to the first shooting instruction is obtained based on the first image, and the result image is displayed. In this way, the image quality of the result image is no lower than that of the preview images, which improves the user's shooting experience.
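As a minimal sketch of this selection flow (the function names, the scoring heuristic, and the data layout below are illustrative assumptions, not part of the patent), the logic might look like this in Python:

```python
import numpy as np

def quality_score(img: np.ndarray) -> float:
    # Illustrative no-reference score: mean gradient magnitude,
    # a rough proxy for sharpness (higher = sharper).
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

def pick_first_image(previews, instruction_img, fused_img):
    # Score every candidate frame and keep the best one, so the
    # result image is never worse than the best stored preview.
    candidates = list(previews) + [instruction_img, fused_img]
    scores = [quality_score(c) for c in candidates]
    return candidates[int(np.argmax(scores))]
```

Any image quality metric could stand in for quality_score here; the point is only that the preview frames, the reference frame, and the fused frame compete on equal terms.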
In one possible implementation of the first aspect, obtaining, based on the first image, the target image corresponding to the first shooting instruction includes: in a case where the first image is a stored preview image, performing first image enhancement processing on the stored preview image to obtain the target image, wherein the first image enhancement processing includes at least one of: sharpening processing, contrast enhancement processing, and super-resolution algorithm processing.
That is, if the first image is a stored preview image, at least one of sharpening, contrast enhancement, and super-resolution processing can be applied to it, improving the image quality of the preview image and thus the user's shooting experience.
In a possible implementation of the first aspect, performing the first image enhancement processing on the stored preview image to obtain the target image includes: acquiring a current focal-length magnification value corresponding to the first application; in a case where the current focal-length magnification value is less than or equal to a preset magnification threshold, performing sharpening and/or contrast enhancement processing on the stored preview image to obtain the target image; and in a case where the current focal-length magnification value is greater than the preset magnification threshold, performing image enhancement processing on the stored preview image using a super-resolution algorithm to obtain the target image.
That is, the degree of image loss is determined by the current focal-length magnification. When the magnification is large, the captured image loses more detail, so the stored preview image needs to be enhanced by a super-resolution algorithm that increases its resolution, yielding a high-quality target image. When the magnification is small, the captured image loses less detail, and sharpening and/or contrast enhancement of the stored preview image is sufficient to obtain the target image.
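A minimal sketch of this dispatch (the threshold value and all function names are hypothetical assumptions; the enhancement operators themselves are stubbed out here and discussed with Fig. 6 later):

```python
import numpy as np

def enhance_stored_preview(img, zoom_ratio, zoom_threshold,
                           sharpen, boost_contrast, super_resolve):
    # Dispatch on the current focal-length magnification: heavy
    # detail loss at high zoom calls for super-resolution, while
    # light sharpening plus contrast enhancement suffices otherwise.
    if zoom_ratio > zoom_threshold:
        return super_resolve(img)
    return boost_contrast(sharpen(img))

# Trivial stand-ins so the sketch runs end to end.
identity = lambda x: x
enhanced = enhance_stored_preview(np.zeros((4, 4)), zoom_ratio=2.0,
                                  zoom_threshold=10.0, sharpen=identity,
                                  boost_contrast=identity,
                                  super_resolve=identity)
```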
In one possible implementation of the first aspect, obtaining, based on the first image, the target image corresponding to the first shooting instruction includes: in a case where the first image is the instruction image, performing second image enhancement processing on the instruction image based on the fused image to obtain the target image.
That is, when the first image is the instruction image (i.e., the reference image), the reference image is enhanced based on the fused image, for example based on the degree of difference between the fused image and the reference image.
In a possible implementation of the first aspect, performing the second image enhancement processing on the instruction image based on the fused image to obtain the target image includes: determining a degree of difference between the content of a first image area in the instruction image and the content of a second image area, corresponding to the first image area, in the fused image; in a case where the degree of difference is greater than or equal to a first threshold, or less than a second threshold, taking the content of the first image area as the content of a third image area, corresponding to the first image area, in the target image; and in a case where the degree of difference is less than the first threshold and greater than or equal to the second threshold, fusing the content of the first image area with the content of the second image area to obtain the content of the third image area, corresponding to the first image area, in the target image, wherein the first threshold is greater than or equal to the second threshold.
That is, the processing is based on the degree of difference (e.g., the difference in pixel values) between the content of a first image area in the instruction image (i.e., the reference image), such as its pixel values, and the content of the corresponding second image area in the fused image. If the degree of difference is greater than or equal to the first threshold, or less than the second threshold, the content of the first image area of the reference image replaces the content of the second image area of the fused image as the corresponding content in the target image (i.e., the result image). If the degree of difference is less than the first threshold and greater than or equal to the second threshold, the content of the first image area and the content of the second image area are fused to obtain the content of the third image area in the result image; for example, the content of the first image area and the content of the second image area may each be weighted 0.5 and blended into the content of the third image area.
In one possible implementation of the first aspect, obtaining, based on the first image, the target image corresponding to the first shooting instruction includes: in a case where the first image is the fused image, taking the fused image as the target image.
That is, when the highest-quality image is the fused image, the fused image is used directly as the target image (i.e., the result image).
In a possible implementation of the first aspect, selecting the first image that meets the quality condition from among the stored preview images, the instruction image, and the fused image includes: determining, by an image quality assessment technique, an image quality score value for each of the stored preview images, the instruction image, and the fused image; and taking the image with the largest image quality score value among them as the first image.
That is, an image quality score value for each of the stored preview images, the instruction image, and the fused image can be determined by an image quality assessment technique, and the image with the highest image quality is determined to be the first image based on these score values.
In a possible implementation of the first aspect, the stored preview images include preview images captured during a first duration before the first shooting instruction is detected, and the stored at least one preview image used for fusion includes at least some of the preview images captured during a second duration before the first shooting instruction is detected, wherein the first duration is longer than the second duration.
That is, the capture window of the preview images used for selecting the highest-scoring image can be longer than the window of the preview images merged into the fused image, providing a sufficient number of candidates so that the image quality of the result image is raised as far as possible.
In a possible implementation of the first aspect, the stored at least one preview image further includes at least one preview image captured within a third duration after the first shooting instruction is detected.
That is, the at least one preview image captured within the third duration after the shooting instruction can be captured with the shooting parameters of the current shooting mode, which facilitates obtaining a fused image that better conforms to that shooting mode.
In a second aspect, the present application also provides an electronic device, including: one or more processors; one or more memories; the one or more memories store one or more instructions that, when executed by the one or more processors, cause the electronic device to perform any of the possible photographing methods of the first aspect described above.
In a third aspect, the present application also provides a computer readable storage medium having stored thereon instructions that, when executed on a computer, cause the computer to perform any one of the possible photographing methods of the first aspect described above.
In a fourth aspect, embodiments of the present application disclose a computer program product comprising instructions which, when run on a computer, cause the computer to perform any of the possible shooting methods of the first aspect described above.
The advantages of the second to fourth aspects may refer to the related descriptions of the first aspect and its possible implementations above, and are not repeated here.
Drawings
Fig. 1A shows a preview schematic of a shooting scene;
Fig. 1B shows a shooting schematic in a shooting scene;
Fig. 1C shows a schematic diagram of synthesizing a target image in a shooting scene;
Fig. 2A shows a schematic view of a scene of shooting an object moving at high speed;
Fig. 2B shows a schematic view of a target image of an object moving at high speed;
Fig. 2C shows a schematic diagram of synthesizing a target image in a scene of shooting an object moving at high speed;
Fig. 3 shows a schematic flowchart of a shooting method according to an embodiment of the present application;
Fig. 4 shows a schematic diagram of a shooting method according to an embodiment of the present application;
Fig. 5A shows a flowchart of another shooting method according to an embodiment of the present application;
Fig. 5B shows a schematic diagram of the selection principle of a preview image according to an embodiment of the present application;
Fig. 6 shows a schematic diagram of an implementation of image enhancement according to an embodiment of the present application;
Fig. 7 shows a schematic diagram of a hardware structure of the electronic device 100 according to an embodiment of the present application;
Fig. 8 shows a software architecture block diagram of an electronic device 100 according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings and specific embodiments.
Illustrative embodiments of the application include, but are not limited to, shooting methods, electronic devices, and computer-readable storage media.
It will be appreciated that the electronic devices to which the present application applies may be cell phones, tablet computers, desktop computers, laptops, handheld computers, netbooks, augmented reality (AR) or virtual reality (VR) devices, smart televisions, smart watches, and the like, as well as electronic devices having one or more processors embedded or coupled therein, without limitation.
As described above, in some scenes the image quality of the target image may be lower than that of the preview image, for example when hand shake at the moment the user clicks the shooting control prevents the camera of the electronic device from focusing, or when the shot scene contains an object moving at high speed relative to the camera of the electronic device.
A shooting scene in which a camera cannot focus due to camera shake when a user clicks a shooting control is described in detail below with reference to fig. 1A to 1C.
Fig. 1A shows a preview schematic of a shooting scene. Fig. 1B shows a shooting diagram in a shooting scene. Fig. 1C shows a schematic diagram of a synthetic target image in a photographed scene.
Referring to fig. 1A, a camera application installed in a mobile phone 00 may provide a user with a photographing interface 101 including various photographing controls to meet the photographing needs of the user. The photographing interface 101 includes at least a photographing control 011 for photographing, a zoom control 012 for controlling a focus magnification, and a preview area 013 for providing a preview image.
When the user does not click the shooting control 011, the preview area 013 provides the user with a continuous sequence of preview frames, such as the preview image 013a shown in fig. 1A. Referring to fig. 1B, when the mobile phone 00 detects the user's click operation 10 on the shooting control 011, it performs shooting and captures the reference image 013b. Because the user has set the zoom control 012 to 50 times the standard focal length, the preview area of the mobile phone 00 displays the subject at 50 times magnification. At this magnification, a small displacement of the mobile phone 00 causes a large relative displacement of the subject in the preview area 013, so a slight hand shake by the user blurs the subject in the reference image 013b. The reference image is the frame captured by the camera of the mobile phone 00 when the user clicks the shooting control 011, and may be used to generate the target image.
Referring to fig. 1C, to obtain the target image, at least one preview frame captured before the shooting control 011 is clicked is fused with the reference image 013b; for example, several preview frames captured before the click are fused with the reference image 013b to obtain the fused target image X. Since the reference image 013b is blurred, the fused target image X is blurred as well.
In some embodiments, at least one preview frame captured before the shooting control 011 is clicked, the reference image 013b, and at least one preview frame captured within a period of time after the click may be fused together into the target image X. Here, the preview frames captured after the click may use shooting parameters different from those of the frames captured before the click, so as to implement different shooting modes. For example, in a high dynamic range imaging (HDR) shooting mode, preview frames are captured with the shooting parameters of that mode, so the exposure parameter value of at least one preview frame captured within a period of time after the click can be larger than that of the preview frames captured before the click, allowing a target image X conforming to the current shooting mode to be fused.
A shooting scene in which a user shoots a high-speed moving object is described in detail below with reference to fig. 2A to 2C.
Fig. 2A shows a schematic view of a scene of capturing an object moving at a high speed. Fig. 2B shows a schematic view of a target image of an object moving at a high speed. Fig. 2C shows a schematic diagram of a synthetic target image in a high-speed moving object shooting scene.
Referring to fig. 2A, a camera application installed in a mobile phone 00 may provide a user with a photographing interface 102 including various photographing controls to meet the user's photographing needs. The photographing interface 102 includes at least a photographing control 021 for photographing, a zoom control 022 for controlling focal-length magnification, and a preview area 023 for providing preview images.
When the user does not click the shooting control 021, the preview area 023 may provide the user with a continuous sequence of preview frames, for example, frames of a boat continuously rocking up and down on the waves. Further, referring to fig. 2A, when the mobile phone 00 detects the user's click operation 20 on the shooting control 021, a reference image 023a can be captured. However, because the rocking boat keeps shifting position within the preview area across the preview frames, the target image obtained by fusing at least one preview frame with the reference image 023a contains an afterimage. For example, referring to fig. 2B, the target image 023b contains an afterimage formed by the superimposed edges of the boat's bow.
Referring to fig. 2C, it can be appreciated that each of the preview frames can be sharp, as can the reference image 023a. However, since the position of the bow differs among the preview frames in the preview area 023, and also differs between the preview frames and the sharp reference image 023a, the target image obtained by fusing at least one preview frame with the reference image contains bow afterimages at different positions, forming the ghost P and degrading the image quality of the target image 023b.
In some embodiments, the target image 023b may be obtained by fusing together at least one preview frame captured before the shooting control 021 is clicked, the reference image 023a, and at least one preview frame captured within a period of time after the click.
In both of the shooting scenes shown in figs. 1A to 2C, the image quality of the target image is lower than that of the preview images presented in the preview area, degrading the user experience.
In view of this, embodiments of the present application provide a shooting method in which the electronic device stores the preview images captured during the most recent first duration while the preview stream is being captured. After the electronic device detects the user's shooting operation, it captures one reference frame and fuses it with at least one of the stored preview images to obtain a fused image. The electronic device then takes, as the result image, whichever of the fused image, the reference image, and the stored preview images has the highest image quality. In this way, the image quality of the result image is no lower than that of the preview images, improving the user's shooting experience.
For example, in the scene where hand shake at the moment the user clicks the shooting control prevents the camera of the electronic device from focusing, the stored preview images have higher image quality than the reference image captured at the click and the fused image; the preview image with the highest quality is presented to the user as the result image, avoiding presenting the blurred reference or fused image.
For another example, in the aforementioned scene of shooting an object moving at high speed relative to the camera of the electronic device, the stored preview images have lower image quality than the reference image captured at the click, and the reference image has higher image quality than the fused image; the reference image can therefore be presented to the user as the result image, avoiding presenting the ghosted fused image.
It should be understood that the image quality above may be measured by the degree of distortion of the image: the higher the image quality, the smaller the degree of distortion.
In some embodiments, the electronic device may capture and store multiple preview frames of the preview stream, thereby holding multiple pre-shot preview frames plus the single reference frame captured at the shot. A fused image is then obtained by fusing the reference image with at least one preview frame. The image with the highest quality among the preview frames and the reference image can then be determined, and whichever of that image and the fused image has the higher quality is presented to the user as the shot result image. This improves the efficiency of selecting the highest-quality result image, allowing the camera application to display the result promptly.
In some embodiments, at least one preview frame captured before the shooting control is clicked, the reference image, and at least one preview frame captured within a period of time after the click may be fused together into the fused image.
In some embodiments, evaluating the quality of an image may involve computing a quality score value for the image to be evaluated by means of an image quality assessment technique. The quality score value reflects the degree of distortion of the image: the smaller the distortion, the larger the score, so a higher quality score value means higher image quality. For example, the electronic device 100 may analyze the characteristics of an image based on image quality assessment, evaluate its quality (i.e., its degree of distortion), and characterize that quality with the quality score value.
The following describes a detailed flow of an implementation of a photographing method according to an embodiment of the present application with reference to fig. 3. It can be understood that the execution subject of each step in the flowchart shown in fig. 3 may be the electronic device 100, and the description of the execution subject of a single step is omitted.
S301, displaying a shooting interface comprising a preview area, and collecting and displaying a plurality of preview images.
It will be appreciated that, upon detecting the user's launch operation on the camera application, the electronic device 100 may display a shooting interface including a preview area, and capture and display a continuous sequence of preview frames (i.e., multiple preview images) within the preview area. During this process, multiple preview images may be captured and stored within the electronic device 100.
In some embodiments, the electronic device 100 may store the preview frames of the preview stream from the most recent first duration; for example, the preview frames corresponding to the first duration may be stored in a memory of the electronic device 100. Then, when the electronic device 100 detects that the user has clicked the shooting control, the preview frames from the first duration preceding the capture of the reference image can be read from the memory. This keeps the number of preview frames sufficient, making it easy to select a preview frame with higher image quality.
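One natural way to keep only the most recent first duration of frames is a fixed-size ring buffer; the sketch below is an illustrative assumption (the frame rate, window length, and class name are not specified by the patent):

```python
from collections import deque

class PreviewBuffer:
    """Holds only the preview frames from the most recent
    first-duration window (sizes here are assumed values)."""

    def __init__(self, fps: int = 30, first_duration_s: float = 1.0):
        self.frames = deque(maxlen=int(fps * first_duration_s))

    def push(self, frame) -> None:
        # The oldest frame is dropped automatically once the
        # window is full, so memory use stays bounded.
        self.frames.append(frame)

    def snapshot(self) -> list:
        # Called when the shooting control is clicked: returns the
        # stored previews from the first duration before the shot.
        return list(self.frames)
```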
S302, detecting a shooting operation of the user, and capturing a reference image.
The shooting operation above may be, for example, a click, long-press, or drag operation on the shooting control by the user. When the electronic device 100 detects the user's shooting operation, one frame may be captured by the camera and stored in the electronic device 100 as the reference image.
It should be appreciated that in other embodiments the electronic device 100 may capture the reference image in other situations as well; for example, the electronic device 100 may initiate the capture of the reference image (i.e., shoot the reference image) according to predefined rules. In some embodiments, the electronic device 100 may capture the reference image upon receiving voice input expressing the user's intent to shoot, upon detecting a preset gesture operation for initiating shooting (e.g., detecting the user tapping the screen), or upon detecting that the preview image contains a predefined target object (e.g., detecting a specific facial expression or pose of a person).
It will be appreciated that the reference image serves as the fusion basis of the fused image: it characterizes information such as the position and angle, within the preview area, of the object the user wishes to photograph.
S303, determining a fused image from the reference image and at least one preview frame.
It will be appreciated that the reference image and the preview images each contain the object the user wishes to photograph, so the electronic device 100 may fuse the reference image with at least one preview frame to obtain a fused image containing that object. It will also be appreciated that the fusion baseline of the fused image is the reference image, so when the reference image is blurred, the fused image may be blurred as well.
In some embodiments, the reference image and the at least one preview frame may be fused pixel by pixel, for example by a spatial-domain algorithm or a transform-domain algorithm, to form the fused image. Spatial-domain algorithms include the logical filtering method, the gray-weighted-average method, and the contrast modulation method; transform-domain algorithms include, but are not limited to, pyramid-decomposition fusion and the wavelet transform. It should be noted that the embodiments of the present application do not specifically limit the fusion manner.
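As an illustration of the gray-weighted-average method named above (the weight split is an assumed choice, not a value from the patent), a multi-frame fusion can be as simple as:

```python
import numpy as np

def gray_weighted_average_fusion(reference, previews, ref_weight=0.5):
    # Spatial-domain fusion: the reference frame keeps ref_weight of
    # the total weight; the remainder is split evenly across the
    # preview frames. All frames must share the same shape.
    fused = ref_weight * reference.astype(np.float64)
    w = (1.0 - ref_weight) / len(previews)
    for p in previews:
        fused += w * p.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Because every output pixel is a weighted mix of the aligned input pixels, any frame in which the subject has moved contributes a shifted copy, which is exactly the ghosting discussed for fig. 2C.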
In some embodiments, the preview images used for fusion include at least one preview frame captured before the reference image and/or at least one preview frame captured within a period of time after the reference image is captured.
S304, taking the image with the largest image quality score among the preview images, the reference image, and the fused image as the result image.
For example, the electronic device 100 may run an image quality assessment algorithm over each of the preview images, the reference image, and the fused image to obtain a quality score value for each, and determine the image with the highest image quality as the result image based on those scores.
It will be appreciated that the quality score value reflects the degree of distortion of the image: the smaller the distortion, the larger the score, so a larger quality score value means higher image quality. The electronic device 100 may therefore present the image with the largest quality score value, i.e., the highest-quality image, to the user as the result image. In this way, the image quality of the shot result is no lower than that of the preview images, improving the user's shooting experience.
It should be understood that image quality assessment (IQA) is one of the basic technologies in image processing; it analyzes characteristics of an image and then evaluates its quality (i.e., its degree of distortion). Image quality assessment can be classified into three types: full-reference (FR), reduced-reference (RR), and no-reference (NR).
Full-reference (FR) image quality assessment compares the image to be evaluated against an ideal image chosen as the reference, analyzing the degree of distortion of the evaluated image to obtain its quality assessment. Common objective FR methods fall into three families: pixel statistics (e.g., measuring quality from a statistical angle by computing the difference between the gray values of corresponding pixels of the evaluated and reference images), information theory (e.g., measuring quality by computing the mutual information between the evaluated and reference images), and structural information (e.g., measuring quality by constructing the structural similarity between the reference and evaluated images from the correlation between image pixels).
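The pixel-statistics family can be made concrete with the classic peak signal-to-noise ratio; this is a standard formulation, offered here only as an example of an FR metric of the kind described:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    # Mean squared difference of corresponding gray values,
    # mapped to decibels; larger values mean less distortion.
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```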
Reduced-reference (RR) image quality assessment uses partial feature information of the ideal image as the reference and compares the image to be evaluated against it, thereby obtaining the image quality assessment result.
No-reference (NR) image quality assessment, also known as blind image quality assessment, is generally based on image statistics such as the mean (the average of the image's pixel values), the standard deviation (the dispersion of the pixel gray values relative to the mean), the average gradient (reflecting detail and texture contrast), and the entropy (the average information content of the image). It can be understood that a no-reference method generally makes certain assumptions about the characteristics of an ideal image, builds a corresponding mathematical model from those assumptions, and finally obtains the quality assessment of the evaluated image by computing how the image behaves under that model.
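The statistics just listed are easy to compute; how a particular NR model combines them into a single score is model-specific, so the sketch below only gathers the raw quantities for a single-channel image:

```python
import numpy as np

def no_reference_statistics(img: np.ndarray) -> dict:
    # Blind (no-reference) statistics of the kinds listed above.
    f = img.astype(np.float64)
    gy, gx = np.gradient(f)
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return {
        "mean": float(f.mean()),                            # brightness
        "std": float(f.std()),                              # gray-level spread
        "avg_gradient": float(np.mean(np.hypot(gx, gy))),   # detail/sharpness
        "entropy": float(-(hist * np.log2(hist)).sum()),    # information content
    }
```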
In some embodiments, referring to fig. 4, the three preview frames captured before the reference image may be fused with the reference image to obtain the fused image. An image with the highest quality is then selected from among the preview frames of the first duration before the shot, the fused image, and the single reference frame, and that image is taken as the user's shot result. Here, the first duration is longer than the three-frame span used to build the fused image, so the number of candidate preview frames is sufficient and a higher-quality preview frame is easy to select.
In other embodiments, the three preview frames captured before the reference image, the reference image itself, and several preview frames captured after the reference image may be fused to obtain the fused image.
It can be understood that, through steps S301 to S304 of this embodiment, whichever of the fused image, the reference image, and the stored preview images has the highest image quality is taken as the shooting result, so that the image quality of the shooting result is no lower than that of the preview images, improving the user's shooting experience.
The following describes in detail a specific implementation procedure of a shooting method in other embodiments of the present application with reference to figs. 5A and 5B.
Fig. 5A shows a flowchart of another photographing method according to an embodiment of the present application. Fig. 5B shows a schematic diagram of selection principle of a preview image according to an embodiment of the present application.
It will be appreciated that the embodiment illustrated in fig. 5A first selects the image with the highest quality from among the multiple preview images and the single reference image, and then, depending on which it is, enters one of two judging modes in which that image is further compared with the fused image. If the result image is one of the preview images, image enhancement processing is applied to it, improving the image quality of the preview image and the user's shooting experience. If the result image is the fused image, no further processing is required. If the result image is the reference image, the fused image and the reference image undergo adaptive fusion processing, i.e., pixels of the fused image whose quality score exceeds a preset threshold (the sharp parts of the image) are fused with the reference image to obtain the result image, improving the image quality of the reference image and the user's shooting experience.
It can be understood that the execution subject of each step in the flowchart shown in fig. 5A may be the electronic device 100, and the description of the execution subject of a single step is omitted.
S501, performing quality evaluation on multiple preview images and a single reference image.
For example, after detecting the user's shooting operation, the electronic device 100 may capture the single reference frame at the shot, obtain the stored preview images from the first duration before the shot, perform quality evaluation on the single reference frame and those preview images, and determine the image quality score value of each image, to facilitate the subsequent quality comparison with the fused image.
In some embodiments, the first duration may be 1 second or 2 seconds.
S502, judging whether the image with the highest quality is a preview image. If the image with the highest quality is a preview image, the first mode is entered and S503 is executed to compare that preview image with the fused image. If the image with the highest quality is the reference image, the second mode is entered and step S507 is executed to further compare the reference image with the fused image.
It will be appreciated that for the specific implementation of determining the image quality of the reference image and the preview images in step S502 (e.g., determining the score value of each image), reference may be made to step S304 above, which is not repeated here.
In some embodiments, referring to fig. 5B, when the single reference frame Fn is captured at the shot, the n-1 preview frames F1 to F(n-1) from the first duration before the shot are obtained, and quality evaluation is performed on the reference frame Fn and the preview frames F1 to F(n-1) to obtain a quality score value for each image. The image with the largest quality score value is taken as the selected sharp image Fx, and the further quality comparison is performed between this sharp image Fx and the fused image.
S503, performing quality evaluation on the fused image and the selected preview image.
It can be appreciated that for the specific implementation of determining the score values of the fused image and the highest-quality preview image in step S503, reference may be made to step S304 above, which is not repeated here.
It should be appreciated that images with larger quality score values are images with higher image quality.
S504, judging whether the highest-quality preview image has higher image quality than the fused image. If so, steps S505 and S506 are performed to take the enhanced preview image as the result image. If the fused image has higher image quality than the highest-quality preview image, S511 is executed and the fused image is taken as the result image.
It will be appreciated that the higher-quality image may be determined by comparing the quality score value of the highest-quality preview image with that of the fused image.
S505, performing image enhancement on the preview image with the highest quality.
It will be appreciated that when the camera's focal length is the standard focal length, the preview image may contain more image information, for example more objects. When the focal length is long, such as the 50 times standard focal length illustrated in fig. 1A above, the preview image loses more information (e.g., the subject is imaged blurrily). Therefore, the electronic device 100 may perform image enhancement on the highest-quality preview image to obtain an enhanced preview image, improving its quality for use as the result image.
In some embodiments, the electronic device 100 may implement image enhancement processing of the preview image using a sharpening algorithm, a contrast enhancement algorithm, or a super resolution algorithm.
S506, taking the enhanced preview image as the result image.
It is understood that the electronic device 100 may display the enhanced preview image to the user through the camera application, providing it to the user as the shot result image.
S507, performing quality evaluation on the fused image and the reference image.
It can be appreciated that for the specific implementation of determining the score values of the fused image and the reference image in step S507, reference may be made to step S304 above, which is not repeated here.
S508, judging whether the reference image has higher image quality than the fused image. If the reference image has higher quality than the fused image, steps S509 to S510 are executed to adaptively fuse the reference image with the fused image, improving the image quality of the reference image, and the fused reference image is provided to the user as the result image. Otherwise, if the fused image has higher quality than the reference image, S511 is executed and the fused image is taken as the result image.
It will be appreciated that the higher-quality image may be determined by comparing the quality score value of the reference image with that of the fused image.
S509, performing adaptive fusion on the reference image and the fused image to obtain a fused reference image.
It can be understood that the adaptive fusion merges the reference image with the sharper parts of the fused image, so that the sharper parts enhance the reference image, improving its quality for use as the result image.
In some embodiments, the electronic device 100 may divide the reference image into multiple image areas of a×a pixels each, and divide the fused image into image areas of the same size. The reference image and the fused image are then registered, and quality evaluation is performed on the image areas at corresponding positions in the two images. When the quality score values of a pair of corresponding image areas in the reference image and the fused image both exceed a preset score threshold, the two areas are determined to be sharp areas and are fused, yielding the fused reference image; this improves the image quality of the reference image and hence the user's shooting experience.
In other embodiments, the reference image and the fused image may be differenced, i.e., a subtraction is performed between corresponding pixels of the two images, to detect their difference information. The pixel-value difference between the reference image and the fused image can then be obtained for each image area, to determine the image enhancement processing manner for each area.
For example, assume that the pixel-value difference between a certain image area H of the reference image and the corresponding image area H' of the fused image is Q.
If the pixel-value difference Q is greater than a first difference threshold Q1, the image area H of the reference image differs too much from the image area H' of the fused image; the image area H of the reference image may then replace the corresponding image area H' of the fused image to obtain that image area of the enhanced result image.
If the pixel-value difference Q is less than the first difference threshold Q1 and greater than a second difference threshold Q2 (where 0 < Q2 < Q1), the image area H of the reference image and the image area H' of the fused image are somewhat similar yet still differ; the two areas may then be fused into the corresponding image area of the result image. For example, the pixels in the two corresponding areas may be weighted and fused into a new image area; in some embodiments, during this weighted fusion, the pixel weight of the reference image and the pixel weight of the fused image may both be set to 0.5.
If the pixel-value difference Q is less than the second difference threshold Q2, the image area H of the reference image is already very similar to the image area H' of the fused image, and the image area H of the reference image may be taken as the corresponding image area of the result image.
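Putting the three cases together, a block-wise sketch of this adaptive fusion might look as follows (the block size a and the thresholds Q1 and Q2 are illustrative values; the images are assumed to be single-channel and already registered):

```python
import numpy as np

def adaptive_fuse(reference, fused, a=16, q1=40.0, q2=10.0):
    # Block-wise merge of the registered reference and fused images,
    # following the Q > Q1 / Q2 <= Q < Q1 / Q < Q2 cases above.
    ref = reference.astype(np.float64)
    fus = fused.astype(np.float64)
    out = np.empty_like(ref)
    h, w = ref.shape[:2]
    for y in range(0, h, a):
        for x in range(0, w, a):
            rb = ref[y:y+a, x:x+a]
            fb = fus[y:y+a, x:x+a]
            q = np.abs(rb - fb).mean()  # per-block pixel-value difference
            if q > q1 or q < q2:
                # Too different, or already near-identical:
                # keep the reference block.
                out[y:y+a, x:x+a] = rb
            else:
                # Partially similar: equal-weight blend (0.5 / 0.5).
                out[y:y+a, x:x+a] = 0.5 * rb + 0.5 * fb
    return np.clip(out, 0, 255).astype(np.uint8)
```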
S510, taking the reference image after the adaptive fusion processing as the result image.
It is understood that the specific implementation process of the step S510 may refer to the step S506 above, and will not be described herein.
In some embodiments, in a case where the image quality of the reference image is higher than that of the fused image, the electronic device 100 may skip steps S509 and S510 and directly provide the reference image to the user as the result image.
S511, taking the fused image as the result image.
It is understood that the specific implementation process of the step S511 may refer to the step S506 above, and will not be described herein.
The implementation of step S505 is described in detail below with reference to fig. 6.
Fig. 6 shows a schematic diagram of an implementation of image enhancement according to an embodiment of the present application.
It will be appreciated that the image quality of the preview image varies with the focal length. As the focal length grows, the subject occupies a larger portion of the preview area, and magnifying the subject degrades the image quality of the preview image (for example, blurring it or increasing noise). Thus, in some embodiments, different image enhancement schemes may be selected according to the focal length.
It can be understood that the execution subject of each step in the flowchart shown in fig. 6 may be the electronic device 100, and the description of the execution subject of a single step is omitted.
S601, judging whether the current magnification is greater than a preset magnification threshold. If the current magnification is greater than the preset magnification threshold, step S603 is executed and a super-resolution algorithm is used for image enhancement. If the current magnification is less than or equal to the preset magnification threshold, step S602 is executed and sharpening or contrast enhancement is used for image enhancement.
It can be appreciated that the current magnification is the current focal-length magnification of the camera of the electronic device 100, i.e., its multiple of the standard focal length.
S602, performing sharpening and/or contrast enhancement on the highest-quality preview image to realize image enhancement.
It is understood that contrast measures the range of brightness levels between the brightest whites and darkest blacks of an image's light and dark regions: a larger difference range means higher contrast, a smaller range lower contrast. Sharpening compensates the outlines of the image, strengthening its edges and gray-level jumps to make it crisper. The electronic device 100 may therefore enhance the image by sharpening and/or raising contrast, improving the user's shooting experience.
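Two textbook operators of exactly these kinds are an unsharp mask and a linear contrast stretch; the kernel, strength, and normalization below are illustrative choices, not values from the patent:

```python
import numpy as np

def unsharp_mask(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    # Sharpening: add back the difference between the image and a
    # blurred copy, boosting edges and gray-level jumps.
    f = img.astype(np.float64)
    blur = (f + np.roll(f, 1, 0) + np.roll(f, -1, 0)
              + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 5.0
    return np.clip(f + amount * (f - blur), 0, 255).astype(np.uint8)

def stretch_contrast(img: np.ndarray) -> np.ndarray:
    # Contrast enhancement: linearly widen the brightest-to-darkest
    # range to span the full 0..255 scale.
    f = img.astype(np.float64)
    lo, hi = f.min(), f.max()
    return ((f - lo) / max(hi - lo, 1e-6) * 255.0).astype(np.uint8)
```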
S603, enhancing the preview image with the highest image quality through a super-resolution algorithm.
It can be appreciated that a super-resolution (SR) algorithm can restore image details and other information from the known image information using optics and related knowledge; that is, it can be used to raise the resolution of the highest-quality preview image and thereby enhance its image quality.
Super-resolution algorithms include interpolation-based methods (such as nearest-neighbor, bilinear, and bicubic interpolation), reconstruction-based methods (such as frequency-domain or spatial-domain image reconstruction), and shallow-learning methods (such as machine learning, example learning, and sparse coding). In addition, super-resolution algorithms may also include deep learning approaches such as convolutional neural networks (CNN), residual networks (ResNet), and generative adversarial networks (GAN). Through these means, the electronic device 100 can raise the resolution of the highest-quality preview image and enhance its quality, ensuring the image quality of the result image and improving the user's shooting experience.
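As a sketch of the simplest, interpolation-based family above (a real product would use a reconstruction- or learning-based model; this shows only the bilinear case for a single-channel image):

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int = 2) -> np.ndarray:
    # Interpolation-based super-resolution in its simplest form:
    # resample the image onto a grid `factor` times denser.
    h, w = img.shape[:2]
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    f = img.astype(np.float64)
    top = f[np.ix_(y0, x0)] * (1 - wx) + f[np.ix_(y0, x1)] * wx
    bot = f[np.ix_(y1, x0)] * (1 - wx) + f[np.ix_(y1, x1)] * wx
    return (top * (1 - wy) + bot * wy).astype(img.dtype)
```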
The following describes the software and hardware structure of an electronic device 100 in the embodiment of the present application in detail with reference to the related drawings.
The following describes in detail the hardware structure of an electronic device according to an embodiment of the present application with reference to fig. 7.
Fig. 7 shows a schematic hardware structure of the electronic device 100 according to an embodiment of the present application.
As shown in fig. 7, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components, without limitation.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, and the like. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
In some embodiments, the controller generates an operation control signal according to the instruction operation code and the timing signal of the processor 110, and completes the control of instruction fetching and instruction execution to execute the shooting method provided by the embodiment of the application.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others. The USB interface 130 may be used to connect a charger to charge the electronic device 100, or to transfer data between the electronic device 100 and a peripheral device.
In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the acceleration sensor 180E, charger, flash, camera 193, etc., respectively, through different I2C bus interfaces.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can fetch them directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
The charge management module 140 is configured to receive a charge input from a charger. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The modem processor may include a modulator and a demodulator.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (IR), etc., applied to the electronic device 100.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when taking a photo or video, the shutter is opened and light is transmitted through the lens to the camera's photosensitive element; the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP, which processes it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin color of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (complementary metal-oxide-semiconductor, CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
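As a minimal sketch of the format conversion step just described, the following converts a 3-channel YUV frame to RGB in software. On the device this conversion is performed by the DSP/ISP hardware; the frame size and the choice of OpenCV are assumptions for illustration only.

```python
# Sketch only: the YUV -> RGB color-space mapping described above, done in
# software with OpenCV rather than on the device's DSP. Frame size is assumed.
import cv2
import numpy as np

yuv = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder 3-channel YUV frame
rgb = cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)     # standard YUV -> RGB conversion
print(rgb.shape)  # (480, 640, 3)
```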
Video codecs are used to compress or decompress digital video.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The data storage area may store data created during use of the electronic device 100 (e.g., captured preview images, a reference image, a fused image, etc.). In addition, the internal memory 121 may include a high-speed random access memory, a nonvolatile memory, and the like. The processor 110 performs the various functional applications and the photographing method of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the application processor, and the like.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194.
There are various types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates made of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A.
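The capacitive principle can be made concrete with the parallel-plate relation C = εA/d: pressure compresses the gap d, which increases the capacitance C. The numbers below are invented purely to illustrate the magnitude of the effect.

```python
# Hedged illustration of parallel-plate capacitive pressure sensing:
# C = eps_r * eps_0 * A / d, so a smaller plate gap d gives a larger C.
# All dimensions and the permittivity value are invented for illustration.
EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, eps_r=3.0):
    return eps_r * EPS_0 * area_m2 / gap_m

c_rest = capacitance(area_m2=1e-4, gap_m=100e-6)    # plates at rest
c_pressed = capacitance(area_m2=1e-4, gap_m=90e-6)  # gap compressed by a touch
print(f"relative capacitance change: {(c_pressed - c_rest) / c_rest:.1%}")  # ~11.1%
```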
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip cover using the magnetic sensor 180D.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It can also be used to identify the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and the like.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector.
The ambient light sensor 180L is used to sense ambient light level.
The fingerprint sensor 180H is used to collect a fingerprint.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
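A minimal sketch of such a temperature processing strategy follows; the threshold and the clock-scaling policy are assumed values chosen for illustration, since the embodiment does not prescribe specific numbers.

```python
# Sketch only: reduce the allowed processor clock once the reported
# temperature exceeds a threshold. Threshold and scaling are assumed values.
THERMAL_THRESHOLD_C = 45.0  # assumed threshold, degrees Celsius

def allowed_clock_ghz(temp_c, max_clock_ghz=3.0):
    """Return the permitted clock for the temperature reported by the sensor."""
    if temp_c <= THERMAL_THRESHOLD_C:
        return max_clock_ghz                          # normal operation
    overshoot = temp_c - THERMAL_THRESHOLD_C          # degrees past the threshold
    return max(0.5, max_clock_ghz - 0.2 * overshoot)  # throttle, with a floor

print(allowed_clock_ghz(43.0))  # 3.0 -> no throttling
print(allowed_clock_ghz(50.0))  # 2.0 -> performance reduced
```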
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194; together they form a touch screen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display 194.
Fig. 8 shows a software architecture block diagram of an electronic device 100 according to an embodiment of the application.
It is appreciated that the software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiments of the present application, an Android system with a layered architecture is taken as an example to illustrate the software structure of the electronic device 100.
Illustratively, the layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, an application layer, an application framework layer, the Android runtime (Android runtime) and system libraries, and a kernel layer.
It is to be understood that the components included in the electronic device 100 shown in fig. 8 do not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components.
As shown in fig. 8, the application layer may include a series of application packages. The application package may include camera, gallery, calendar, talk, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The camera may be used to provide a photographing function, capturing the images that the user wishes to photograph. For example, multiple frames of preview images (e.g., the preview images within a first duration before a reference image is captured), a single frame of reference image (i.e., the image captured when the electronic device 100 detects a user's shooting operation), and a fused image based on the multi-frame preview images and the single-frame reference image may be captured and stored. The image with the highest image quality among the multi-frame preview images, the single-frame reference image, and the fused image can then be presented to the user as the shot result image, as sketched below. Therefore, the image quality of the result image is never lower than that of any single preview image, which improves the user's shooting experience.
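The selection logic can be sketched as follows. The Laplacian-variance sharpness score used here is an assumed stand-in for the image quality evaluation, which the present application does not tie to a specific metric; the function names are illustrative.

```python
# Conceptual sketch of the result-image selection described above: score the
# buffered preview frames, the reference frame, and the fused frame, and keep
# the best. Laplacian variance is an assumed quality proxy, not the patent's.
import cv2

def quality_score(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()  # higher = sharper

def select_result(preview_frames, reference, fused):
    candidates = list(preview_frames) + [reference, fused]
    return max(candidates, key=quality_score)  # result never worse than any input
```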
The application framework layer provides an application programming interface (application programming interface, API) and a programming framework for the applications of the application layer, including various components and services to support developers' Android development.
The application framework layer includes a number of predefined functions. The application framework layer may include a view system, a window manager, a resource manager, a content provider, a notification manager, a camera service, a multimedia management module, and the like.
The window manager is used to manage window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a WeChat™ application icon may include a view for displaying text and a view for displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100, such as the management of call status (including connected, hung up, and the like).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which automatically disappear after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications appearing on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks.
The camera service is used for calling a camera driver (including a front camera and/or a rear camera) in response to a request of an application.
The multimedia management module is configured to process the image based on the configuration of the camera service, and a specific process will be described in detail in the following embodiments.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio video encoding formats, such as: MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between the hardware and the software layers described above. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver. The hardware may include a camera, a display screen, a microphone, a processor, a memory, and the like.
In an embodiment of the present application, a display in hardware may display an application interface that includes a preview area at the time of shooting. A camera in hardware may be used to capture the image.
The workflow of the electronic device 100 software and hardware is illustrated below in connection with a scenario where a user is using the electronic device 100.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. When the touch operation is a tap and the control corresponding to the tap is, for example, the shooting control used for shooting in the interface of the camera application, the application can call the interface of the application framework layer, and a camera process is then started through the camera driver of the kernel layer. Further, the embodiments illustrated above in figs. 3, 5A and/or 6 are executed; for brevity, they are not described in detail herein.
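The dispatch step of this workflow can be sketched conceptually as follows. This is not Android framework code: every name, coordinate, and control bound below is an invented placeholder.

```python
# Conceptual sketch (not Android framework code): the kernel wraps a touch
# into a raw input event with coordinates and a timestamp; the framework maps
# the event to the control whose bounds contain it and fires its action.
from dataclasses import dataclass

@dataclass
class RawInputEvent:
    x: float
    y: float
    timestamp_ms: int

def dispatch(event, controls):
    """Return the action result of the control hit by the touch, if any."""
    for name, (x0, y0, x1, y1, action) in controls.items():
        if x0 <= event.x <= x1 and y0 <= event.y <= y1:
            return action()
    return None

# Invented layout: a shooting control near the bottom of a portrait screen.
controls = {"shoot": (400, 2000, 680, 2280, lambda: "start camera process")}
print(dispatch(RawInputEvent(x=540, y=2140, timestamp_ms=1694500000), controls))
```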
Illustratively, when the camera application is launched, it invokes the media service, causing the media service to create a corresponding instance. When the application starts, a shooting instance is created in the application framework layer through the interface with the application framework layer, so as to start the camera process. For example, the camera service may invoke the camera process in response to a user's start operation request, which instructs the electronic device 100 to run the camera application. The multimedia management module may, in response to a user's operation request on a target control such as a shooting control, invoke the camera to collect at least one frame of preview image (for example, multiple continuous frames of preview images within a first duration), where the operation request on the target control indicates enabling the shooting function of the target application. Further, the embodiments illustrated above in figs. 3, 5A and/or 6 are executed; for brevity, they are not described in detail herein.
It should be noted that "instance" in the embodiments of the present application may also be understood as a program code or a process code running in a process, for performing corresponding processing on received data (e.g., an image stream). It should be noted that, in the description of the embodiments of the present application, the application that can call the camera process is described as an example, and in other embodiments, the application may also be other applications with shooting functions, for example, shopping applications with image recognition functions, etc., and the present application is not limited thereto.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one example implementation or technique disclosed in accordance with embodiments of the application. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
The disclosure of the embodiments of the present application also relates to an apparatus for performing the operations herein. The apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each of which may be coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may employ architectures with multiple processors for increased computing power.
Additionally, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the disclosed subject matter. Accordingly, the present disclosure of embodiments is intended to be illustrative, but not limiting, of the scope of the concepts discussed herein.

Claims (11)

1. A photographing method applied to an electronic device, the method comprising:
displaying a shooting preview picture of a first application;
detecting a first shooting instruction of a user, and collecting an instruction image corresponding to the first shooting instruction;
fusing the instruction image and at least one stored preview image to obtain a fused image;
selecting a first image satisfying a quality condition from the stored preview image, the instruction image, and the fused image, and obtaining a target image corresponding to the first shooting instruction based on the first image;
and displaying the target image.
2. The method according to claim 1, wherein obtaining the target image corresponding to the first shooting instruction based on the first image comprises:
corresponding to the first image being the stored preview image, performing first image enhancement processing on the stored preview image to obtain the target image, wherein the first image enhancement processing comprises at least any one of the following: sharpening processing, contrast enhancement processing, and super-resolution algorithm processing.
3. The method according to claim 2, wherein performing the first image enhancement processing on the stored preview image to obtain the target image comprises:
acquiring a current focal length magnification value corresponding to the first application;
corresponding to the current focal length magnification value being less than or equal to a preset magnification threshold, performing sharpening and/or contrast enhancement processing on the stored preview image to obtain the target image;
and corresponding to the current focal length magnification value being greater than the preset magnification threshold, performing image enhancement processing on the stored preview image through a super-resolution algorithm to obtain the target image.
4. The method according to claim 1, wherein obtaining the target image corresponding to the first shooting instruction based on the first image comprises:
corresponding to the first image being the instruction image, performing second image enhancement processing on the instruction image based on the fused image to obtain the target image.
5. The method according to claim 4, wherein performing the second image enhancement processing on the instruction image based on the fused image to obtain the target image comprises:
determining a degree of difference between the content of a first image area in the instruction image and the content of a second image area, corresponding to the first image area, in the fused image;
corresponding to the degree of difference being greater than or equal to a first threshold or less than a second threshold, taking the content of the first image area as the content of a third image area corresponding to the first image area in the target image;
and corresponding to the degree of difference being less than the first threshold and greater than or equal to the second threshold, fusing the content of the first image area and the content of the second image area to obtain the content of the third image area corresponding to the first image area in the target image, wherein the first threshold is greater than or equal to the second threshold.
6. The method according to claim 1, wherein obtaining the target image corresponding to the first shooting instruction based on the first image comprises:
corresponding to the first image being the fused image, taking the fused image as the target image.
7. The method according to claim 1, wherein selecting the first image satisfying the quality condition from the stored preview image, the instruction image, and the fused image comprises:
determining, through an image quality evaluation technique, an image quality score value corresponding to each of the stored preview image, the instruction image, and the fused image;
and taking the image with the largest image quality score value among the stored preview image, the instruction image, and the fused image as the first image.
8. The method according to claim 7, wherein the stored preview image comprises preview images collected within a first duration before the first shooting instruction is detected, and the stored at least one preview image comprises at least a portion of the preview images collected within a second duration before the first shooting instruction is detected, wherein the first duration is longer than the second duration.
9. The method according to claim 8, wherein the stored at least one preview image further comprises at least one preview image collected within a third duration after the first shooting instruction is detected.
10. An electronic device, comprising: one or more processors; one or more memories; the one or more memories store one or more instructions that, when executed by the one or more processors, cause the electronic device to perform the shooting method of any one of claims 1 to 9.
11. A computer-readable storage medium, characterized in that the storage medium has stored thereon instructions that, when executed on a computer, cause the computer to perform the shooting method of any one of claims 1 to 9.
CN202311172977.7A 2023-09-12 2023-09-12 Shooting method, electronic device and storage medium Active CN116916151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311172977.7A CN116916151B (en) 2023-09-12 2023-09-12 Shooting method, electronic device and storage medium


Publications (2)

Publication Number Publication Date
CN116916151A true CN116916151A (en) 2023-10-20
CN116916151B CN116916151B (en) 2023-12-08

Family

ID=88367203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311172977.7A Active CN116916151B (en) 2023-09-12 2023-09-12 Shooting method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN116916151B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113395440A (en) * 2020-03-13 2021-09-14 华为技术有限公司 Image processing method and electronic equipment
CN112019739A (en) * 2020-08-03 2020-12-01 RealMe重庆移动通信有限公司 Shooting control method and device, electronic equipment and storage medium
CN113810622A (en) * 2021-08-12 2021-12-17 荣耀终端有限公司 Image processing method and device
CN115499579A (en) * 2022-08-08 2022-12-20 荣耀终端有限公司 Processing method and device based on zero-second delay ZSL

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117729445A (en) * 2024-02-07 2024-03-19 荣耀终端有限公司 Image processing method, electronic device and computer readable storage medium

Also Published As

Publication number Publication date
CN116916151B (en) 2023-12-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant