CN111479059B - Photographing processing method and device, electronic equipment and storage medium - Google Patents

Photographing processing method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN111479059B
CN111479059B CN202010293627.6A
Authority
CN
China
Prior art keywords
image
camera module
acquired
photographing
preprocessed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010293627.6A
Other languages
Chinese (zh)
Other versions
CN111479059A (en
Inventor
李逸超 (Li Yichao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010293627.6A priority Critical patent/CN111479059B/en
Publication of CN111479059A publication Critical patent/CN111479059A/en
Application granted granted Critical
Publication of CN111479059B publication Critical patent/CN111479059B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Abstract

The present application provides a photographing processing method and device, an electronic device, and a storage medium, and belongs to the technical field of imaging. The method includes: during preview, monitoring whether the distance between the current position of the camera module and its position at the previous moment is greater than a first threshold; if not, storing the image frame currently acquired by the camera module; judging whether the number of currently stored image frames is greater than a second threshold; if so, synthesizing the stored image frames to generate a preprocessed image; when a photographing instruction is acquired, judging whether the preprocessed image contains the current target region of interest; and if so, cropping a target image corresponding to the target region of interest from the preprocessed image. With this photographing processing method, the target image is cropped directly from the preprocessed image when the photographing instruction is acquired, which improves image quality, increases photographing speed, and improves the user experience.

Description

Photographing processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of imaging technologies, and in particular, to a photographing processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of science and technology, electronic devices with built-in cameras, such as smartphones and tablet computers, have become increasingly widespread. As the processing capability of electronic devices has grown and camera technology has advanced, built-in cameras have become more capable and the quality of captured images has steadily improved. Taking photos with smartphones, tablet computers, and similar devices has become part of everyday life.
In the related art, it is common that, after a photographing command is acquired (i.e., after the user presses the shutter button), image signal processing (ISP), multi-frame synthesis, color processing, high-dynamic-range (HDR) processing, and the like are performed on the RAW image data acquired by the image sensor to generate a target image. However, this photographing approach has low processing efficiency and takes a long time, which degrades the user experience.
Disclosure of Invention
The photographing processing method and device, electronic device, and storage medium of the present application are intended to solve the problem in the related art that ISP, multi-frame synthesis, beautification, HDR, and similar processing are performed on the RAW data acquired by the image sensor only after the photographing command is obtained, so that generating the target image is inefficient and time-consuming and the user experience suffers.
An embodiment of one aspect of the present application provides a photographing processing method, including: during preview, monitoring whether the distance between the current position of the camera module and its position at the previous moment is greater than a first threshold; if not, storing the image frame currently acquired by the camera module; judging whether the number of currently stored image frames is less than a second threshold; if not, synthesizing the stored image frames to generate a preprocessed image; when a photographing instruction is acquired, judging whether the preprocessed image contains the current target region of interest; and if so, cropping a target image corresponding to the target region of interest from the preprocessed image.
A photographing processing device provided by an embodiment of another aspect of the present application includes: a first monitoring module, configured to monitor, during preview, whether the distance between the current position of the camera module and its position at the previous moment is greater than a first threshold; a storage module, configured to store the image frame currently acquired by the camera module if the distance is not greater than the first threshold; a first judgment module, configured to judge whether the number of currently stored image frames is less than a second threshold; a synthesis module, configured to synthesize the stored image frames to generate a preprocessed image if the number is not less than the second threshold; a second judgment module, configured to judge, when a photographing instruction is obtained, whether the preprocessed image contains the current target region of interest; and a cropping module, configured to crop a target image corresponding to the target region of interest from the preprocessed image if it does.
An embodiment of another aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the program, the processor implements the photographing processing method described above.
In another aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the photographing processing method as described above.
In another aspect of the present application, a computer program is provided, which is executed by a processor to implement the photographing processing method according to the embodiment of the present application.
According to the photographing processing method and device, electronic device, computer-readable storage medium, and computer program provided by the embodiments of the present application, during preview, when the distance between the current position of the camera module and its position at the previous moment is monitored to be less than or equal to a first threshold, the image frame currently acquired by the camera module is stored; when the number of currently stored image frames is greater than or equal to a second threshold, the stored image frames are synthesized to generate a preprocessed image; then, when a photographing instruction is obtained and the preprocessed image contains the current target region of interest, a target image corresponding to the target region of interest is cropped from the preprocessed image. Because the image frames acquired during preview are synthesized into a preprocessed image before the photographing instruction is acquired, the target image can be cropped directly from the preprocessed image once the instruction arrives, which improves image quality, increases photographing speed, and improves the user experience.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a photographing processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image frame acquired by a camera module in a steady state during a preview process;
fig. 3 is a schematic flowchart of another photographing processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a photographing processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the like or similar elements throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
In view of the problem in the related art that ISP, multi-frame synthesis, beautification, HDR, and similar processing are performed on the RAW data acquired by the image sensor only after the photographing command is obtained, so that generating the target image is inefficient and time-consuming and the user experience suffers, an embodiment of the present application provides a photographing processing method.
According to the photographing processing method provided by the embodiment of the present application, during preview, when the distance between the current position of the camera module and its position at the previous moment is monitored to be less than or equal to a first threshold, the image frame currently acquired by the camera module is stored; when the number of currently stored image frames is greater than or equal to a second threshold, the stored image frames are synthesized to generate a preprocessed image; then, when a photographing instruction is obtained and the preprocessed image contains the current target region of interest, a target image corresponding to the target region of interest is cropped from the preprocessed image. Because the image frames acquired during preview are synthesized into a preprocessed image before the photographing instruction is acquired, the target image can be cropped directly from the preprocessed image once the instruction arrives, which improves image quality, increases photographing speed, and improves the user experience.
The photographing processing method, apparatus, electronic device, storage medium, and computer program provided by the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a photographing processing method according to an embodiment of the present application.
As shown in fig. 1, the photographing processing method includes the following steps:
step 101, in the previewing process, monitoring whether the distance between the current position and the position at the previous moment of the camera module is greater than a first threshold value.
It should be noted that performing ISP, multi-frame synthesis, beautification, HDR, and similar processing on the RAW data acquired by the image sensor only after the photographing command is obtained is inefficient and time-consuming. Because a user usually aims at the subject for a period of time before pressing the shutter button, the image content acquired by the camera module after the photographing instruction is obtained is usually similar to the content acquired during preview. The acquired image frames can therefore be processed in advance during preview to increase photographing speed.
In the embodiment of the present application, whether the user is currently aiming and holding the device steady can be judged, during preview, from the distance between the current position of the camera module and its position at the previous moment. Specifically, the position of the camera module can be sampled at a preset frequency; when the distance between the current position and the position at the previous moment is small, the user can be considered to be aiming steadily.
It should be noted that, in actual use, the preset frequency may be preset according to actual needs, and this is not limited in the embodiment of the present application. For example, the preset frequency may be a frequency at which the image is acquired by the camera module in the preview process.
As a possible implementation, if the device containing the camera module includes sensors that can detect its motion state, such as a gyroscope or a Hall sensor, the output values of one or more such sensors may be recorded at the preset frequency, and attitude information, such as the direction and height of the camera module at each moment, determined from the values recorded at each moment, so as to obtain the distance between the current position of the camera module and its position at the previous moment.
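The sensor-based distance check above can be sketched as follows; this is an illustrative simplification (the function name, the (x, y, z) coordinate representation, and the sample values are assumptions, since the patent does not specify how pose samples are encoded):

```python
import math

def displacement(prev_pos, curr_pos):
    """Euclidean distance between two recorded (x, y, z) position samples."""
    return math.sqrt(sum((c - p) ** 2 for p, c in zip(prev_pos, curr_pos)))

# Two pose samples recorded at the preset frequency (illustrative values,
# e.g. positions integrated from gyroscope/accelerometer output).
moved = displacement((0.0, 0.0, 0.0), (0.003, 0.004, 0.0))
```

The resulting distance would then be compared against the first threshold, as in step 101 below.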
As a possible implementation, if the device containing the camera module has no sensor capable of detecting its motion state, such as a gyroscope or a Hall sensor, the distance between the current position of the camera module and its position at the previous moment may instead be determined from the difference between the image content of the image frame currently acquired during preview and that of the image frame acquired at the previous moment. Optionally, this difference may be measured by parameters such as the brightness difference between the two frames, the difference in the objects they contain, and the inter-frame difference.
If the image frames acquired during preview include a moving object, the image content may also change when the position of that object changes. Therefore, when the distance between the current position of the camera module and its position at the previous moment is determined from the inter-frame difference, it can first be judged whether the images acquired during preview contain a reference object with a fixed position; whether the camera module has moved between the two moments, and by how much, can then be determined from the difference in the position of that fixed reference object between the currently acquired frame and the frame acquired at the previous moment.
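The sensor-free variant above can be sketched as a mean-absolute-difference check between consecutive preview frames. This is a minimal illustration, not the metric prescribed by the patent; the frame data, the threshold value, and all names are assumptions:

```python
def mean_abs_diff(prev_frame, curr_frame):
    """Mean absolute per-pixel brightness difference between two frames."""
    total = sum(abs(a - b)
                for row_p, row_c in zip(prev_frame, curr_frame)
                for a, b in zip(row_p, row_c))
    return total / (len(prev_frame) * len(prev_frame[0]))

def is_stable(prev_frame, curr_frame, first_threshold=5.0):
    """Treat the camera module as stable when the inter-frame change is small."""
    return mean_abs_diff(prev_frame, curr_frame) <= first_threshold

frame_a = [[100, 102], [101, 99]]
frame_b = [[101, 102], [100, 99]]   # nearly identical content
frame_c = [[30, 200], [180, 20]]    # very different content
stable = is_stable(frame_a, frame_b)
moving = is_stable(frame_a, frame_c)
```

As the paragraph above notes, a production implementation would also account for moving objects, for instance by tracking a fixed-position reference object rather than the raw frame difference.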
In the embodiment of the application, after the distance between the current position of the camera module and the position at the previous moment is determined, whether the distance between the current position of the camera module and the position at the previous moment is greater than a first threshold value or not can be judged.
The manner of determining the distance between the current position and the position at the previous time of the camera module may include, but is not limited to, the above-mentioned cases. During practical use, a suitable mode can be selected according to actual needs and specific application scenarios to determine the distance between the current position and the position at the previous moment of the camera module, and a specific value of the first threshold is set according to the selected mode, which is not limited in the embodiment of the application.
And 102, if not, storing the image frame currently acquired by the camera module.
In this embodiment of the application, if it is determined that the distance between the current position of the camera module and the position at the previous time is less than or equal to the first threshold, that is, the camera module is currently in a relatively stable state, it may be determined that the current user is performing the aiming and stabilizing operations, so that the image frame currently acquired by the camera module may be stored for generating the preprocessed image.
Step 103, judging whether the number of the image frames stored currently is less than a second threshold value.
In the embodiment of the present application, if too few images are used when synthesizing image frames, the quality of the synthesized image will be low; the number of image frames used in each synthesis can therefore be preset to ensure the quality of the synthesized image. Thus, after the image frame currently acquired by the camera module is stored, it can first be judged whether the number of currently stored image frames is less than the second threshold, to determine whether the stored image frames can be synthesized.
Specifically, if the number of currently stored image frames is less than the second threshold, it can be determined that too few frames have been stored, so no synthesis is needed yet; the operation of monitoring whether the distance between the position of the camera module and its position at the previous moment is greater than the first threshold is then performed again, until either the distance is greater than the first threshold or the number of stored image frames reaches the second threshold, at which point the stored image frames are synthesized.
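The store-and-check loop of steps 101 through 104 can be sketched as follows (a simplified sketch: the threshold values, the names, and the decision to report readiness rather than synthesize in place are all illustrative assumptions):

```python
FIRST_THRESHOLD = 5.0    # maximum movement treated as "stable" (assumed units)
SECOND_THRESHOLD = 10    # frames required before synthesis may run

def handle_preview_frame(stored_frames, frame, distance_moved):
    """Store the frame when the camera is stable; report whether enough
    frames have accumulated for the synthesis step."""
    if distance_moved > FIRST_THRESHOLD:
        return False                 # camera moved: skip storing, keep monitoring
    stored_frames.append(frame)
    return len(stored_frames) >= SECOND_THRESHOLD

stored = []
ready = [handle_preview_frame(stored, i, 0.0) for i in range(10)]
```

Here `ready` stays `False` for the first nine stable frames and becomes `True` on the tenth, at which point the stored frames would be handed to the synthesis step.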
Further, the second threshold value may be determined according to the hardware configuration of the device in which the camera module is located. That is, in a possible implementation form of the embodiment of the present application, before the step 103, the method may further include:
and determining a second threshold value according to the size of the image frame collected by the camera module and the storage area corresponding to the camera module.
The storage area corresponding to the camera module is a storage area used for storing image frames acquired by the camera module in the previewing process in the equipment where the camera module is located. It should be noted that the size of the storage area corresponding to the camera module can be determined according to the hardware configuration of the device in which the camera module is located.
As a possible implementation, the second threshold may be determined according to the size of the image frames acquired by the camera module and the storage area corresponding to the camera module. For example, if the size of each image frame acquired by the camera module is 1 MB, the capacity of the corresponding storage area is 100 MB, and at most 10 synthesis operations are allowed during preview, then 10 image frames can be synthesized each time, and the second threshold can be determined to be 10.
As a possible implementation, a control allowing the user to set the number of times of the composition processing operation and/or the second threshold value by himself/herself may also be provided in the preview interface. The number of times of the combining processing operation and the adjustment range allowed by the second threshold value can be set according to the capacity of the storage area corresponding to the camera module and the size of the image frame collected by the camera module.
Optionally, if the preview interface provides only a control allowing the user to set the second threshold, the adjustable range of the second threshold is less than or equal to the ratio of the capacity of the storage area corresponding to the camera module to the size of an acquired image frame, and the number of synthesis operations is determined from the user-set second threshold, the capacity of the storage area, and the frame size. For example, if the frame size is 1 MB, the storage area capacity is 100 MB, and the adjustable range of the second threshold is 1 to 100, then with a user-set second threshold of 20 the number of synthesis operations is 100/20 = 5.
Optionally, if the preview interface provides only a control allowing the user to set the number of synthesis operations, the adjustable range of that number is less than the ratio of the capacity of the storage area corresponding to the camera module to the size of an acquired image frame, and the second threshold is determined from the user-set number of operations, the capacity of the storage area, and the frame size. For example, if the frame size is 1 MB, the storage area capacity is 100 MB, and the adjustable range of the number of synthesis operations is 1 to 100, then with a user-set count of 10 the second threshold is 100/10 = 10.
Optionally, if the preview interface provides controls allowing the user to set both the number of synthesis operations and the second threshold, the adjustable range of each is less than or equal to the ratio of the capacity of the storage area corresponding to the camera module to the size of an acquired image frame, and the product of the number of synthesis operations, the second threshold, and the frame size must be less than or equal to the capacity of the storage area. For example, if the frame size is 1 MB and the storage area capacity is 100 MB, then with a user-set second threshold of 5 the number of synthesis operations can be chosen in the range 1 to 20.
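The arithmetic in the three configurations above reduces to the constraint that the number of synthesis operations, times the second threshold, times the frame size must not exceed the storage capacity. A sketch (the function names are illustrative):

```python
def ops_from_threshold(capacity_mb, frame_mb, second_threshold):
    """User fixes the second threshold; derive the number of synthesis ops."""
    return capacity_mb // (second_threshold * frame_mb)

def threshold_from_ops(capacity_mb, frame_mb, num_ops):
    """User fixes the number of synthesis ops; derive the second threshold."""
    return capacity_mb // (num_ops * frame_mb)

ops = ops_from_threshold(100, 1, 20)        # 100/20 = 5 operations
threshold = threshold_from_ops(100, 1, 10)  # 100/10 = 10 frames per synthesis
```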
As a possible implementation manner, the number of image frames used in each synthesis process, that is, the second threshold, may also be determined according to the synthesis processing algorithm used, and then the number of times that the synthesis processing operation is allowed to be performed in the preview process is determined according to the size of the image frame acquired by the camera module and the size of the storage area corresponding to the camera module.
And step 104, if not, synthesizing the stored image frames to generate a preprocessed image.
In the embodiment of the present application, if it is determined that the number of currently stored image frames is not less than the second threshold, that is, the number of currently stored image frames is the second threshold, a preset multi-frame image synthesis algorithm may be used to perform synthesis processing on the stored image frames to generate a high-quality preprocessed image.
As a possible implementation manner, if other preprocessed images are generated before the preprocessed image is generated, the currently generated preprocessed image and the previously generated preprocessed image may be synthesized and spliced to update the generated preprocessed image, so that a large-format image including the region of interest corresponding to all the preprocessed images may be generated. It should be noted that the region of interest of the updated preprocessed image includes the regions of interest of all the preprocessed images generated before.
For example, if the second threshold is 10 and the number of currently stored image frames is determined to be 10, the 10 stored frames may be synthesized to generate preprocessed image 1. As another example, if, after preprocessed image 1 is generated, another 10 image frames are stored before any photographing instruction is obtained, the newly stored 10 frames may be synthesized to generate temporary image 2, and preprocessed image 1 and temporary image 2 may then be synthesized and stitched to generate preprocessed image 2.
As shown in fig. 2, it is assumed that the scene shown in fig. 2 is a region visible to human eyes, and a region in each block in fig. 2 is an image content of an image frame acquired by the camera module at each image acquisition time in the preview process. The first group of image frames 110, the second group of image frames 120, and the third group of image frames 130 are three groups of image frames, the number of which reaches a second threshold value, acquired when the camera module is in a stable state during the preview process. After the camera module collects the first group of image frames 110, the stored first group of image frames 110 can be synthesized to generate a preprocessed image 1; if the photographing instruction is not obtained after the pre-processed image 1 is generated and the second group of image frames 120 is obtained, the second group of image frames 120 can be synthesized to generate a temporary image 2, and the pre-processed image 1 and the temporary image 2 are synthesized and spliced to generate a pre-processed image 2; if the photographing instruction is not obtained after the generation of the preprocessed image 2 and the third group of image frames 130 is obtained, the third group of image frames 130 may be synthesized to generate the temporary image 3, and the preprocessed image 2 and the temporary image 3 are synthesized and spliced to generate the preprocessed image 3, so that the preprocessed image 3 includes image contents in all the blocks shown in fig. 2.
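The incremental update illustrated by fig. 2 can be sketched as follows. This is an assumption-heavy simplification: synthesis is stood in for by per-pixel averaging, and stitching assumes adjacent, non-overlapping fields of view so rows can simply be concatenated side by side, whereas a real implementation would register and blend overlapping content:

```python
def synthesize(frames):
    """Average a group of equally sized frames (stand-in for multi-frame synthesis)."""
    n = len(frames)
    return [[sum(f[r][c] for f in frames) / n
             for c in range(len(frames[0][0]))]
            for r in range(len(frames[0]))]

def stitch(preprocessed, temporary):
    """Concatenate rows side by side, widening the preprocessed image."""
    return [p_row + t_row for p_row, t_row in zip(preprocessed, temporary)]

group1 = [[[10, 10]], [[20, 20]]]        # first group of 1x2 frames
group2 = [[[30, 30]], [[50, 50]]]        # second group, captured later
pre1 = synthesize(group1)                # preprocessed image 1
pre2 = stitch(pre1, synthesize(group2))  # preprocessed image 2
```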
Further, after the pre-processed image is generated, the stored image frame may also be deleted. That is, in a possible implementation form of the embodiment of the present application, after the step 104, the method may further include:
deleting the stored image frames.
In the embodiment of the application, after determining that the number of the currently stored image frames is not less than the second threshold, and performing synthesis processing on the stored image frames to generate the preprocessed image, the stored image frames may be deleted to save storage space.
For example, as shown in fig. 2, after the second group of image frames 120 are subjected to synthesis processing to generate a temporary image 2, and the preprocessed image 1 and the temporary image 2 are subjected to synthesis and splicing processing to generate the preprocessed image 2, the preprocessed image 1 may be deleted; after the third group of image frames 130 is subjected to the combining process to generate the temporary image 3, and the preprocessed image 2 is combined and spliced with the temporary image 3 to generate the preprocessed image 3, the preprocessed image 2 may be deleted.
And 105, judging whether the preprocessed image contains the current target region of interest or not when the photographing instruction is acquired.
The current target region of interest refers to image content included in image frames acquired by the camera module when the photographing instruction is acquired.
In the embodiment of the present application, the image frames acquired during preview are synthesized in advance to generate a high-quality, large-format preprocessed image. Therefore, when the photographing instruction is acquired, it can first be judged whether the preprocessed image contains the current target region of interest.
And step 106, if the target image corresponding to the target interesting area is included, cutting the target image from the preprocessed image.
In the embodiment of the present application, the preprocessed image is a high-quality image obtained by multi-frame synthesis. Therefore, once the preprocessed image is determined to contain the current target region of interest, the region corresponding to the target region of interest can be cropped directly from the preprocessed image as the target image. This ensures the image quality of the target image while avoiding complex image-processing operations after the photographing instruction is obtained, thereby increasing photographing speed.
For example, as shown in fig. 2, after the large-format preprocessed image 3 is generated from the first group of image frames 110, the second group of image frames 120, and the third group of image frames 130, and the photographing instruction is obtained, the region corresponding to the target region of interest can be directly cut out from the large-format preprocessed image 3 as the target image.
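The direct crop in step 106 can be sketched as plain rectangle containment plus slicing. The `(x, y, w, h)` rectangle convention and the helper names are assumptions for illustration; the patent does not specify how the region of interest is represented.

```python
# Hypothetical sketch of step 106: if the region of interest lies entirely
# inside the preprocessed image, the target image is obtained by slicing alone.
def contains(image, roi):
    """True if roi=(x, y, w, h) lies entirely inside the row-major image."""
    x, y, w, h = roi
    return x >= 0 and y >= 0 and y + h <= len(image) and x + w <= len(image[0])

def crop_roi(image, roi):
    """Crop roi=(x, y, w, h) out of a 2-D row-major image."""
    x, y, w, h = roi
    return [row[x:x + w] for row in image[y:y + h]]

pre = [[r * 10 + c for c in range(6)] for r in range(4)]  # toy 6x4 preprocessed image
roi = (1, 1, 3, 2)
if contains(pre, roi):
    target = crop_roi(pre, roi)  # no further processing needed after the shutter
```

Because no synthesis runs after the shutter press, the cost at capture time is just this crop, which is what yields the photographing-speed improvement claimed above.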
Optionally, in order to meet diversified user requirements, a plurality of images can be cropped out and displayed, so that the user can select a suitable one as the target image according to their own needs.
Optionally, the generated preprocessed image may also be directly displayed, so that a user autonomously selects a suitable region in the preprocessed image as the target image, and thus, when an operation instruction of the user on the preprocessed image is received, the region selected by the user in the preprocessed image may be determined and cropped to generate the target image.
According to the photographing processing method provided by the embodiment of the application, in the previewing process, when it is monitored that the distance between the current position of the camera module and its position at the previous moment is smaller than or equal to a first threshold, the image frames currently acquired by the camera module are stored. When the number of currently stored image frames is greater than or equal to a second threshold, the stored image frames are synthesized to generate a preprocessed image. Then, when a photographing instruction is acquired and the preprocessed image contains the current target region of interest, the target image corresponding to the target region of interest is cropped from the preprocessed image. Therefore, before the photographing instruction is acquired, the image frames acquired during previewing are synthesized to generate the preprocessed image, so that the target image can be cropped directly from the preprocessed image when the photographing instruction is acquired, which improves the image quality, increases the photographing speed, and improves the user experience.
In a possible implementation form of the present application, if the camera module is still in a stable state after the target image is generated, the image currently acquired by the camera module may be continuously utilized to perform optimization processing on the target image, so as to further improve the image quality.
The photographing processing method provided in the embodiment of the present application is further described below with reference to fig. 3.
Fig. 3 is a schematic flowchart of another photographing processing method according to an embodiment of the present application.
As shown in fig. 3, the photographing processing method includes the following steps:
Step 201, in the previewing process, monitoring whether the distance between the current position of the camera module and its position at the previous moment is greater than a first threshold; if so, executing step 205; otherwise, executing step 202.
Step 202, storing the image frame currently collected by the camera module.
Step 203, judging whether the number of currently stored image frames is smaller than a second threshold; if so, returning to step 201; otherwise, executing step 204.
Step 204, synthesizing the stored image frames to generate a preprocessed image.
The detailed implementation process and principle of the steps 201-204 can refer to the detailed description of the above embodiments, and are not described herein again.
Step 205, updating the generated preprocessed image according to the currently stored image frame.
In the embodiment of the application, if it is monitored that the distance between the current position of the camera module and its position at the previous moment is greater than the first threshold, it can be determined that the position of the camera module has changed greatly since the previous moment, that is, the camera module has changed from a stable state to a moving state, so the currently stored image frames can be synthesized to generate a temporary image. Specifically, if a preprocessed image has been generated before the current moment, the temporary image can be synthesized and spliced with the generated preprocessed image to update the preprocessed image; if no preprocessed image has been generated before the current moment, the temporary image can be directly determined as the preprocessed image.
As a possible implementation, in order to avoid generating a poor-quality preprocessed image from too few stored image frames, when it is monitored that the distance between the current position of the camera module and its position at the previous moment is greater than the first threshold, it can first be judged whether the number of currently stored image frames is smaller than the second threshold. If the number is smaller than the second threshold, the stored image frames are discarded to ensure the quality of the generated preprocessed image; if the number is not less than the second threshold, the stored image frames are synthesized to generate a preprocessed image, or used to update the already generated preprocessed image.
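The preview-time control flow of steps 201-205, including the discard-when-too-few rule just described, can be sketched as a small loop. Everything concrete here is an assumption for illustration: positions are modeled as scalars, frames as numbers, and per-group averaging stands in for the real synthesis.

```python
def preview_loop(samples, first_threshold, second_threshold):
    """samples: iterable of (position, frame) pairs observed during preview.

    Returns the preprocessed image, modeled here as a list of per-group
    averages, or None if no group ever reached second_threshold frames.
    """
    stored, preprocessed, prev_pos = [], None, None
    for pos, frame in samples:
        moved = prev_pos is not None and abs(pos - prev_pos) > first_threshold
        prev_pos = pos
        if moved:
            # Step 205: the module left the stable state. Synthesize what we
            # have if there are enough frames; otherwise discard them.
            if len(stored) >= second_threshold:
                preprocessed = (preprocessed or []) + [sum(stored) / len(stored)]
            stored = []
        else:
            # Step 202: still stable, keep buffering frames.
            stored.append(frame)
            if len(stored) >= second_threshold:
                # Steps 203-204: enough frames, synthesize and start a new group.
                preprocessed = (preprocessed or []) + [sum(stored) / len(stored)]
                stored = []
    return preprocessed
```

For example, with `first_threshold=1.0` and `second_threshold=2`, four nearly stationary samples produce two synthesized groups, while a large jump in position causes an undersized buffer to be discarded.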
Step 206, when the photographing instruction is obtained, judging whether the preprocessed image contains the current target region of interest, if so, executing step 207; otherwise, step 208 is performed.
Step 207, a target image corresponding to the target region of interest is cropped from the preprocessed image.
The detailed implementation process and principle of the steps 206-207 may refer to the detailed description of the above embodiments, and are not described herein again.
Step 208, processing the image currently acquired by the camera module to generate a target image.
The image currently acquired by the camera module refers to an image acquired by the camera module when the photographing instruction is acquired.
In the embodiment of the present application, if the preprocessed image does not contain the target region of interest, the target image corresponding to the target region of interest cannot be cropped directly from the preprocessed image. Specifically, if the preprocessed image does not overlap the target region of interest at all, the image currently acquired by the camera module can be processed to generate the target image; if the preprocessed image contains only a partial region of the target region of interest, the overlapping region of the preprocessed image and the target region of interest can be cropped out and used to optimize the image frame currently acquired by the camera module, thereby generating the target image.
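The three-way decision above (full containment, partial overlap, no overlap) reduces to a rectangle intersection test. The `(x, y, w, h)` convention, the helper names, and the string branch labels are assumptions used purely to make the branching explicit; they are not part of the disclosed method.

```python
def intersect(a, b):
    """Intersection of rects (x, y, w, h); None if they do not overlap."""
    x = max(a[0], b[0])
    y = max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    return (x, y, x2 - x, y2 - y) if x2 > x and y2 > y else None

def resolve_capture(pre_rect, roi):
    """Decide which branch of steps 206-208 a shutter press takes."""
    overlap = intersect(pre_rect, roi)
    if overlap == roi:
        return "crop_from_preprocessed"      # step 207: ROI fully contained
    if overlap is not None:
        return "blend_overlap_with_current"  # step 208, partial-overlap case
    return "process_current_frame"           # step 208, no overlap at all
```

For instance, with a 10x10 preprocessed image at the origin, an interior ROI is served from the preprocessed image, a corner-straddling ROI triggers the blend path, and a fully outside ROI falls back to processing the current frame.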
Step 209, monitoring whether the distance between the current position of the camera module and the position of the camera module when the photographing instruction is obtained is greater than a first threshold value, if so, executing step 211; otherwise, step 210 is performed.
Step 210, optimizing the target image by using the image currently acquired by the camera module.
In the embodiment of the application, after the photographing instruction is acquired and the target image is generated, whether the camera module is still in a stable state can be continuously monitored; if it is, the camera module can continue to collect images after the target image is generated. Specifically, if it is monitored that the distance between the current position of the camera module and its position when the photographing instruction was acquired is greater than the first threshold, it can be determined that the camera module is currently in a moving state, and the photographing processing flow can be ended. If the distance is smaller than or equal to the first threshold, it can be determined that the camera module is still in a stable state, and the image currently acquired by the camera module can be synthesized with the target image to optimize the target image, thereby further improving the image quality.
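The post-capture refinement of steps 209-210 can be sketched as a loop that blends freshly acquired frames into the target image until the module moves away from the shutter position. The scalar positions, the frame/position pairing, and the running-average blend are illustrative assumptions standing in for the real synthesis operation.

```python
def refine_after_capture(target, frames, shutter_pos, positions, first_threshold):
    """Step 209/210 sketch: keep blending newly acquired frames into the
    target image while the module stays within first_threshold of the
    position it had when the photographing instruction was acquired.
    """
    for pos, frame in zip(positions, frames):
        if abs(pos - shutter_pos) > first_threshold:
            break  # step 211: module moved away, stop optimizing and store
        # Running average stands in for the real frame-synthesis operation.
        target = [(t + f) / 2 for t, f in zip(target, frame)]
    return target
```

In this toy run, the first post-shutter frame is blended in, and the second is rejected because the module has drifted past the threshold.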
Step 211, storing the target image.
In the embodiment of the application, after the photographing instruction is acquired and the target image is generated, if it is determined that the distance between the current position of the camera module and its position when the photographing instruction was acquired is greater than the first threshold, that is, the camera module is no longer in a stable state, the generated target image can be stored, or displayed on the display screen of the device where the camera module is located for the user to browse. If the camera module is still in a stable state after the photographing instruction is acquired and the target image is generated, the target image optimized in step 210 can be stored, or the optimized target image can be displayed on the display screen of the device where the camera module is located.
According to the photographing processing method provided by the embodiment of the application, in the previewing process, when it is monitored that the distance between the current position of the camera module and its position at the previous moment is smaller than or equal to a first threshold, the image frames currently acquired by the camera module are stored. When the number of currently stored image frames is greater than or equal to a second threshold, the stored image frames are synthesized to generate a preprocessed image. Then, when a photographing instruction is acquired and the preprocessed image contains the current target region of interest, the target image corresponding to the target region of interest is cropped from the preprocessed image; when the preprocessed image does not contain the current target region of interest, the image currently acquired by the camera module is processed to generate the target image. Further, when the distance between the current position of the camera module and its position when the photographing instruction was acquired is smaller than or equal to the first threshold, the target image is optimized by using the image currently acquired by the camera module. Therefore, the target image is cropped directly from the preprocessed image when the photographing instruction is acquired, and when the camera module is still in a stable state after the photographing instruction is acquired, the target image is further optimized, which increases the photographing speed, further improves the image quality, and improves the user experience.
In order to implement the above embodiment, the present application further provides a photographing processing apparatus.
Fig. 4 is a schematic structural diagram of a photographing processing apparatus according to an embodiment of the present application.
As shown in fig. 4, the photographing processing apparatus 30 includes:
the first monitoring module 31 is configured to monitor whether a distance between a current position and a position at a previous time of the camera module is greater than a first threshold in the previewing process;
the storage module 32 is configured to store the image frame currently acquired by the camera module if the distance is not greater than the first threshold;
a first judging module 33, configured to judge whether the number of currently stored image frames is smaller than a second threshold;
a synthesizing module 34, configured to perform synthesis processing on the stored image frames to generate a preprocessed image if the number of currently stored image frames is not less than the second threshold;
the second judging module 35 is configured to judge whether the preprocessed image includes the current target region of interest when the photographing instruction is obtained;
and a cropping module 36, configured to crop a target image corresponding to the target region of interest from the preprocessed image if the preprocessed image contains the current target region of interest.
In practical use, the photographing processing apparatus provided in the embodiment of the present application may be configured in any electronic device to execute the photographing processing method.
According to the photographing processing device provided by the embodiment of the application, in the previewing process, when it is monitored that the distance between the current position of the camera module and its position at the previous moment is smaller than or equal to a first threshold, the image frames currently acquired by the camera module are stored. When the number of currently stored image frames is greater than or equal to a second threshold, the stored image frames are synthesized to generate a preprocessed image. Then, when a photographing instruction is acquired and the preprocessed image contains the current target region of interest, the target image corresponding to the target region of interest is cropped from the preprocessed image. Therefore, before the photographing instruction is acquired, the image frames acquired during previewing are synthesized to generate the preprocessed image, so that the target image can be cropped directly from the preprocessed image when the photographing instruction is acquired, which improves the image quality, increases the photographing speed, and improves the user experience.
In a possible implementation form of the present application, the photographing processing apparatus 30 further includes:
and an updating module, configured to update the generated preprocessed image according to the currently stored image frames if the distance between the current position of the camera module and its position at the previous moment is greater than the first threshold.
Further, in another possible implementation form of the present application, the photographing processing apparatus 30 further includes:
and a returning module, configured to, if the number of currently stored image frames is less than the second threshold, return to continue the operation of monitoring whether the distance between the position of the camera module and its position at the previous moment is greater than the first threshold, until the distance is greater than the first threshold or the number of stored image frames is not less than the second threshold, and then perform synthesis processing on the stored image frames.
Further, in another possible implementation form of the present application, the photographing processing apparatus 30 further includes:
and the deleting module is used for deleting the stored image frames.
Further, in another possible implementation form of the present application, the photographing processing apparatus 30 further includes:
and the processing module is used for processing the image currently acquired by the camera module to generate a target image if the preprocessed image does not contain the current target region of interest.
Further, in another possible implementation form of the present application, the photographing processing apparatus 30 further includes:
the second monitoring module is used for monitoring whether the distance between the current position of the camera module and the position of the camera module when the photographing instruction is obtained is larger than a first threshold value or not;
and an optimization module, configured to optimize the target image by using the image currently acquired by the camera module if the distance is not greater than the first threshold.
Further, in another possible implementation form of the present application, the photographing processing apparatus 30 further includes:
and the determining module is used for determining a second threshold value according to the size of the image frame acquired by the camera module and the storage area corresponding to the camera module.
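The determining module's computation might look like the following sketch. The division-based formula and the `cap` parameter are assumptions for illustration only; the disclosure states merely that the second threshold is determined from the size of the acquired image frames and the storage area corresponding to the camera module.

```python
def determine_second_threshold(frame_bytes, buffer_bytes, cap=8):
    """Hypothetical rule: the second threshold is how many preview frames fit
    in the storage area reserved for the camera module, capped so that
    synthesis latency stays bounded, and at least 1 so a frame is always kept.
    """
    return max(1, min(cap, buffer_bytes // frame_bytes))
```

For example, a 5 MB buffer holding 1 MB frames would yield a threshold of 5, while a much larger buffer would be limited by the cap.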
It should be noted that the foregoing explanation of the embodiment of the photographing processing method shown in fig. 1 and fig. 3 is also applicable to the photographing processing device 30 of this embodiment, and is not repeated herein.
According to the photographing processing device provided by the embodiment of the application, in the previewing process, when it is monitored that the distance between the current position of the camera module and its position at the previous moment is smaller than or equal to a first threshold, the image frames currently acquired by the camera module are stored. When the number of currently stored image frames is greater than or equal to a second threshold, the stored image frames are synthesized to generate a preprocessed image. Then, when a photographing instruction is acquired and the preprocessed image contains the current target region of interest, the target image corresponding to the target region of interest is cropped from the preprocessed image; when the preprocessed image does not contain the current target region of interest, the image currently acquired by the camera module is processed to generate the target image. Further, when the distance between the current position of the camera module and its position when the photographing instruction was acquired is smaller than or equal to the first threshold, the target image is optimized by using the image currently acquired by the camera module. Therefore, the target image is cropped directly from the preprocessed image when the photographing instruction is acquired, and when the camera module is still in a stable state after the photographing instruction is acquired, the target image is further optimized, which increases the photographing speed, further improves the image quality, and improves the user experience.
In order to implement the above embodiments, the present application further provides an electronic device.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
As shown in fig. 5, the electronic device 200 includes:
a memory 210 and a processor 220, a bus 230 connecting different components (including the memory 210 and the processor 220), wherein the memory 210 stores a computer program, and when the processor 220 executes the program, the photographing processing method according to the embodiment of the present application is implemented.
Bus 230 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 200 typically includes a variety of electronic device readable media. Such media may be any available media that is accessible by electronic device 200 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 240 and/or cache memory 250. The electronic device 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 230 by one or more data media interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 280 having a set (at least one) of program modules 270, including but not limited to an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment, may be stored in, for example, the memory 210. The program modules 270 generally perform the functions and/or methodologies of the embodiments described herein.
Electronic device 200 may also communicate with one or more external devices 290 (e.g., keyboard, pointing device, display 291, etc.), with one or more devices that enable a user to interact with electronic device 200, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 292. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 293. As shown, the network adapter 293 communicates with the other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 220 executes various functional applications and data processing by executing programs stored in the memory 210.
It should be noted that, for the implementation process and the technical principle of the electronic device of this embodiment, reference is made to the foregoing explanation of the photographing processing method of the embodiment of the present application, and details are not described here again.
The electronic device provided by the embodiment of the application can execute the photographing processing method described above. In the previewing process, when it is monitored that the distance between the current position of the camera module and its position at the previous moment is smaller than or equal to a first threshold, the image frames currently acquired by the camera module are stored; when the number of currently stored image frames is greater than or equal to a second threshold, the stored image frames are synthesized to generate a preprocessed image; then, when a photographing instruction is acquired and the preprocessed image contains the current target region of interest, the target image corresponding to the target region of interest is cropped from the preprocessed image. Therefore, before the photographing instruction is acquired, the image frames acquired during previewing are synthesized to generate the preprocessed image, so that the target image can be cropped directly from the preprocessed image when the photographing instruction is acquired, which improves the image quality, increases the photographing speed, and improves the user experience.
In order to implement the above embodiments, the present application also proposes a computer-readable storage medium.
The computer readable storage medium stores thereon a computer program, and the computer program is executed by a processor to implement the photographing processing method according to the embodiment of the present application.
In order to implement the foregoing embodiments, a further embodiment of the present application provides a computer program, which when executed by a processor, implements the photographing processing method according to the embodiments of the present application.
In an alternative implementation, the embodiments may be implemented in any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (e.g., through the Internet using an Internet service provider).
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. A photographing processing method, comprising:
monitoring whether the distance between the current position and the position at the previous moment of the camera module is greater than a first threshold value or not in the previewing process;
if not, storing the image frame currently acquired by the camera module;
judging whether the number of the image frames stored currently is smaller than a second threshold value;
if not, synthesizing the stored image frames to generate a preprocessed image;
when a photographing instruction is acquired, judging whether the preprocessed image contains a current target region of interest, wherein the current target region of interest refers to image content included in an image frame acquired by the camera module when the photographing instruction is acquired;
if yes, cutting a target image corresponding to the target region of interest from the preprocessed image;
after the photographing instruction is acquired, the method further comprises:
monitoring whether the distance between the current position of the camera module and the position of the camera module when the photographing instruction is obtained is larger than a first threshold value or not;
if not, synthesizing the image currently acquired by the camera module with the target image, so as to optimize the target image.
2. The method of claim 1, wherein after monitoring whether a distance between a current position of the camera module and a position at a previous time is greater than a first threshold, the method further comprises:
and if so, updating the generated preprocessed image according to the currently stored image frame.
3. The method of claim 1, wherein after determining whether the number of currently stored image frames is less than a second threshold, further comprising:
and if so, returning to continue to perform the operation of monitoring whether the distance between the position of the camera module and the position at the previous moment is greater than a first threshold value or not until the distance is greater than the first threshold value or the number of the stored image frames is greater than a second threshold value, and performing synthesis processing on the stored image frames.
4. The method of claim 1, wherein after generating the pre-processed image, further comprising:
deleting the stored image frames.
5. The method according to any one of claims 1-4, wherein after determining whether the preprocessed image contains the current target region of interest, further comprising:
and if not, processing the image currently acquired by the camera module to generate a target image.
6. The method of any of claims 1-4, wherein before determining whether the number of currently stored image frames is less than the second threshold, further comprising:
and determining the second threshold value according to the size of the image frame acquired by the camera module and the storage area corresponding to the camera module.
7. A photographing processing apparatus, comprising:
the first monitoring module is used for monitoring whether the distance between the current position and the position at the previous moment of the camera module is greater than a first threshold value or not in the previewing process;
the storage module is used for storing the image frame currently acquired by the camera module if the distance is not greater than the first threshold;
the first judgment module is used for judging whether the number of the image frames stored at present is less than a second threshold value;
the synthesis module is used for carrying out synthesis processing on the stored image frames to generate a preprocessed image if the image frames are not in the stored image frames;
the second judging module is used for judging whether the preprocessed image contains a current target region of interest when a photographing instruction is obtained, wherein the current target region of interest refers to image content included in an image frame acquired by the camera module when the photographing instruction is obtained;
the cutting module is used for cutting a target image corresponding to the target interesting region from the preprocessed image if the target interesting region is included in the preprocessed image;
the second monitoring module is used for monitoring whether the distance between the current position of the camera module and the position of the camera module when the photographing instruction is obtained is larger than a first threshold value or not;
and the optimization module is used for synthesizing the image currently acquired by the camera module with the target image by using the image currently acquired by the camera module if the target image is not acquired by the camera module, so as to optimize the target image.
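The capture-time path of the apparatus claim (check whether the preprocessed image covers the region of interest, crop it, and optionally refine it with the live frame) might be sketched like this. The pixel representation, the ROI tuple, and the averaging blend used for "optimization" are assumptions for illustration, not the patented method.

```python
def roi_inside(image, roi):
    """True if the (row0, row1, col0, col1) region lies within the image."""
    r0, r1, c0, c1 = roi
    return 0 <= r0 < r1 <= len(image) and 0 <= c0 < c1 <= len(image[0])

def handle_shutter(preprocessed, roi, current_frame, moved_since_shutter):
    """preprocessed: 2-D list of pixels (or None); roi: (row0, row1, col0, col1)."""
    if preprocessed is None or not roi_inside(preprocessed, roi):
        # ROI not covered by the preprocessed image: fall back to processing
        # the live frame (the path of claim 5).
        return current_frame
    r0, r1, c0, c1 = roi
    target = [row[c0:c1] for row in preprocessed[r0:r1]]  # crop the ROI
    if not moved_since_shutter:
        # Camera still near its shutter-time position: blend in the live
        # frame to optimize the cropped target image.
        live = [row[c0:c1] for row in current_frame[r0:r1]]
        target = [[(a + b) / 2 for a, b in zip(ra, rb)]
                  for ra, rb in zip(target, live)]
    return target
```

The blend step runs only when the second monitoring module's distance check passes, mirroring the condition on the optimization module above.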
8. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the photographing processing method according to any one of claims 1-6.
9. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the photographing processing method according to any one of claims 1-6.
CN202010293627.6A 2020-04-15 2020-04-15 Photographing processing method and device, electronic equipment and storage medium Active CN111479059B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010293627.6A CN111479059B (en) 2020-04-15 2020-04-15 Photographing processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111479059A (en) 2020-07-31
CN111479059B (en) 2021-08-13

Family

ID=71752497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010293627.6A Active CN111479059B (en) 2020-04-15 2020-04-15 Photographing processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111479059B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438421B (en) * 2021-06-30 2023-02-17 Oppo广东移动通信有限公司 Image processing method, device, terminal and readable storage medium
CN116701501B (en) * 2022-11-21 2024-04-12 荣耀终端有限公司 Camera calibration data loading method and device and terminal equipment
CN117676331A (en) * 2024-02-01 2024-03-08 荣耀终端有限公司 Automatic focusing method and electronic equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017046301A (en) * 2015-08-28 2017-03-02 オリンパス株式会社 Imaging apparatus
CN105657247A (en) * 2015-11-20 2016-06-08 乐视移动智能信息技术(北京)有限公司 Secondary exposure photographing method and apparatus for electronic device
CN106331510B (en) * 2016-10-31 2019-10-15 维沃移动通信有限公司 A kind of backlight photographic method and mobile terminal
CN107231522A (en) * 2017-05-04 2017-10-03 广东欧珀移动通信有限公司 A kind of mobile terminal and its photographic method, computer-readable recording medium
CN108184051A (en) * 2017-12-22 2018-06-19 努比亚技术有限公司 A kind of main body image pickup method, equipment and computer readable storage medium
CN110062159A (en) * 2019-04-09 2019-07-26 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment based on multiple image
CN110266965B (en) * 2019-06-28 2021-06-01 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110198418B (en) * 2019-06-28 2020-12-22 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110198417A (en) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN111479059A (en) 2020-07-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant