CN109167893B - Shot image processing method and device, storage medium and mobile terminal - Google Patents


Info

Publication number
CN109167893B
Authority
CN
China
Prior art keywords
image
image frames
interferent
moving
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811238342.1A
Other languages
Chinese (zh)
Other versions
CN109167893A (en)
Inventor
王宇鹭
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811238342.1A priority Critical patent/CN109167893B/en
Publication of CN109167893A publication Critical patent/CN109167893A/en
Application granted granted Critical
Publication of CN109167893B publication Critical patent/CN109167893B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/81: Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/147: Scene change detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the application disclose a shot image processing method and device, a storage medium and a mobile terminal, wherein the method comprises the following steps: if a moving interfering object is detected in the preview picture, continuously shooting at least two image frames; denoising the moving interfering object in the at least two image frames according to the time-domain correlation of the image frames; and generating a target shot image according to the at least two processed image frames. The influence of the moving interfering object on the sharpness of the shot image can thereby be avoided, and the shooting quality improved.

Description

Shot image processing method and device, storage medium and mobile terminal
Technical Field
The embodiment of the application relates to the technical field of mobile terminals, in particular to a shot image processing method and device, a storage medium and a mobile terminal.
Background
At present, the photographing function has become a standard feature of most mobile terminals, and a terminal user can easily and quickly take pictures with a portable mobile terminal.
When a terminal user takes a picture with the mobile terminal, if an interfering object suddenly enters the lens, an artifact corresponding to the interfering object appears in the captured picture, making the picture unclear. The image preprocessing function of the mobile terminal therefore still needs improvement.
Disclosure of Invention
The embodiment of the application provides a shot image processing method and device, a storage medium and a mobile terminal, which can improve the shooting quality.
In a first aspect, an embodiment of the present application provides a captured image processing method, including:
if a moving interfering object is detected in the preview picture, continuously shooting at least two image frames;
denoising the moving interfering object in the at least two image frames according to the time-domain correlation of the image frames;
and generating a target shooting image according to the at least two processed image frames.
In a second aspect, an embodiment of the present application provides a processing apparatus for capturing an image, including:
the image shooting module is used for continuously shooting at least two image frames if a moving interfering object is detected in the preview picture;
the interfering object processing module is used for denoising the moving interfering objects in the at least two image frames shot by the image shooting module according to the time-domain correlation of the image frames;
and the image generation module is used for generating a target shot image according to the at least two image frames processed by the interfering object processing module.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the processing method of the captured image according to the present application.
In a fourth aspect, an embodiment of the present application provides a mobile terminal, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the method for processing a captured image according to the embodiment of the present application.
According to the shot image processing scheme, firstly, if a moving interfering object exists in the preview picture, at least two image frames are continuously shot; secondly, the moving interfering object in the at least two image frames is denoised according to the time-domain correlation of the image frames; and finally, a target shot image is generated according to the at least two processed image frames. The influence of the moving interfering object on the sharpness of the shot image can thereby be avoided, and the shooting quality improved.
Drawings
Fig. 1 is a schematic flowchart of a processing method for shooting an image according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another captured image processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another captured image processing method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another captured image processing method according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of another captured image processing method according to an embodiment of the present disclosure;
fig. 6 is a block diagram of a processing apparatus for capturing an image according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
The technical solutions of the invention are further explained below through specific embodiments in combination with the accompanying drawings. It is to be understood that the specific embodiments described herein merely illustrate the invention and do not limit it. It should be further noted that, for convenience of description, the drawings show only some of the structures related to the present invention, not all of them.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
At present, the photographing function becomes a standard configuration of most mobile terminals, and a terminal user can easily and quickly realize photographing operation through a portable mobile terminal. However, when a terminal user uses the mobile terminal to take a picture, if an interfering object suddenly breaks into the lens, an artifact corresponding to the interfering object may appear in the taken picture, which may cause the taken picture to be unclear.
The embodiment of the application provides a shot image processing method, which can be used for continuously shooting a plurality of image frames to perform denoising processing on a moving interfering object when the moving interfering object exists in a preview image, so that a target shot image is generated, further the influence of the moving interfering object on the definition of the shot image is avoided, and the shooting quality is improved. The specific scheme is as follows:
Fig. 1 is a schematic flowchart of a shot image processing method according to an embodiment of the present disclosure. The method is suitable for situations where a moving interfering object appears in the shooting field of view during shooting, and is particularly applicable when photographing in a portrait mode while a moving interfering object is present. The method can be executed by a mobile terminal with a photographing function, such as a smartphone, a tablet computer, or a wearable device (a smart watch or smart glasses). The method specifically comprises the following steps:
step 101, if a moving interfering object is detected to exist in a preview picture, continuously shooting at least two image frames.
The preview picture may be the picture displayed on the display screen of the mobile terminal after the shooting function is started, i.e. the image captured by the camera and displayed in real time. The moving interfering object may be an object in the preview picture that is not the shooting target and is in motion. For example, when a terminal user shoots in portrait mode, the portrait mode may locate multiple faces, but some of them are not the shooting target the user wants, such as a face that suddenly rushes into the shooting range; in this case, that face is the moving interfering object.
Optionally, detecting whether a mobile interfering object exists in the preview picture, which may be detecting whether the mobile interfering object exists in the preview picture in real time or at preset time intervals (e.g., 1 second) after the shooting function of the mobile terminal is started, and updating the detection result in real time; or when the terminal user clicks a trigger to take a picture (such as clicking a shooting button trigger, a voice trigger, a gesture or an expression trigger, etc.), the system starts to detect whether a moving interfering object exists in the preview picture.
Optionally, a moving interfering object is detected in the preview picture if the shot content changes between at least two consecutive image frames or periodically extracted image frames (e.g., image frames extracted every 1 second). Specifically, at least two adjacent or periodically extracted image frames may be acquired and compared; when at least one of an increase, a decrease, a displacement, or a blurring of the shot content occurs between the image frames, a moving interfering object exists in the preview picture. Alternatively, a difference operation may be performed between adjacent or periodically extracted image frames, and if the result exceeds a preset threshold, a moving interfering object exists. If no moving interfering object exists, the content of adjacent or periodically extracted image frames remains basically unchanged, so the difference result is small; when a moving interfering object appears, the content acquired by the camera changes, and the difference result is larger.
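The inter-frame difference operation described above can be sketched in a few lines. The following is an illustrative sketch only, not the patent's implementation; the function name and the threshold value are assumptions, and grayscale frames are assumed to be available as NumPy arrays:

```python
import numpy as np

def has_moving_interferer(frame_a, frame_b, threshold=10.0):
    """Detect a moving interfering object by inter-frame differencing.

    frame_a, frame_b: grayscale frames as 2-D uint8 arrays of equal shape.
    threshold: mean absolute difference above which motion is assumed.
    """
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return float(diff.mean()) > threshold

# Static scene: identical frames, so no motion is flagged.
static = np.full((4, 4), 100, dtype=np.uint8)
print(has_moving_interferer(static, static))   # False

# An object "moves" into the second frame and raises the mean difference.
moved = static.copy()
moved[1:3, 1:3] = 255
print(has_moving_interferer(static, moved))    # True
```

In practice the threshold would be tuned to sensor noise, so that camera shake alone does not trigger detection.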
For example, when a moving interfering object is detected in the preview picture, a multi-frame continuous shooting mode may be started to continuously shoot the at least two image frames. Optionally, the shooting frequency for continuous shooting may be the sampling frequency of the camera configured on the mobile terminal, or may be preset as required; this is not limited in the present application.
Step 102, denoising the moving interfering objects in the at least two image frames according to the time-domain correlation of the image frames.
The time domain correlation of the image frames may be time correlation between adjacent continuously shot image frames, position correlation between characteristic pixel points between image frames, and the like. For example, when at least two image frames are continuously captured at a capturing frequency of 10 image frames per second, the time of two adjacent image frames differs by one tenth of a second, and the moving interfering object moves in the one tenth of a second. Therefore, the positions of the moving interferent in each two adjacent image frames have a certain correlation. For example, to perform denoising processing on the moving interferent in at least two image frames, first, a region where the moving interferent is located in each image frame is determined, and the region where the moving interferent is located in the image frame may be determined according to the time domain correlation of the image frames and the position relationship of the shot content in any two image frames. For example, the positions of other shot contents in at least two image frames which are continuously shot are basically unchanged except for the moving interfering object, and only the moving interfering object changes along with the change of time, so that the shot contents which change can be used as the moving interfering object by comparing whether the positions of the shot contents between the two image frames change or not, and the area where the changed shot contents are located is determined as the area where the moving interfering object is located. And determining the region where the mobile interferent in each acquired image frame is located, and then carrying out denoising processing on the region, so that the mobile interferent does not exist in each processed image frame.
Optionally, in this embodiment of the application, the moving interferent in at least two image frames is denoised, and an area where the interferent is located in each image frame may be used as a noise area to remove the area where the interferent is located in the image frame. Specifically, the pixel value of the pixel point in the area where the interference object is located in the image frame may be set to a fixed gray value (e.g. 0); or determining original shooting content corresponding to the area where the moving interfering object is located in each image frame according to the time domain correlation between the image frames, and replacing the area where the moving interfering object is located in the image frame with the original shooting content. For example, if the location of the area where the moving interfering object is located in the first image frame is S1, the content in the corresponding S1 area in any other captured image except the first image frame may be selected to be the original captured content corresponding to the moving interfering object in the first image frame, and at this time, the content in the corresponding S1 area in any other captured image except the first image frame is substituted for the area where the moving interfering object is located in the first image frame.
Optionally, when denoising the moving interferent in each image frame, in order to improve the processing efficiency and the accuracy of the finally generated target captured image, a part of the at least two captured image frames may be selected for denoising, for example, an image frame with high definition, an image frame with a small interferent area, or an image that does not block the captured target may be selected for denoising.
Step 103, generating a target shot image according to the at least two processed image frames.
Optionally, when the target captured image is generated according to the at least two processed image frames, the at least two processed image frames in step 102 may be fused to generate the target captured image. When at least two image frames are fused, the processed at least two image frames may be fused based on a pixel level fusion algorithm and/or a feature level fusion algorithm to generate a target shot image.
The pixel-level fusion is also called data-level fusion, and may be a process of directly processing image data which is subjected to denoising processing and does not contain a moving interfering object to obtain a fused image. Specifically, the pixel level fusion algorithm may include a spatial domain algorithm, a transform domain algorithm, and the like, and the spatial domain algorithm may further include a plurality of fusion rule methods, such as a logic filtering method, a gray-scale weighted average method, a contrast modulation method, and the like; the transform domain algorithm may also include a pyramid decomposition fusion method, a wavelet transform method, and the like. The pixel level fusion algorithm has the advantage that the detailed information in the image, such as the extraction of edges and textures, can be restored as much as possible. The method can well judge the pixel point information of the image, and keep the image information of the image non-moving interferent as much as possible, so that the definition of the fused image is high.
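As one concrete instance of the gray-scale weighted average method named above, aligned denoised frames can be fused by weighted pixel averaging. A minimal sketch, assuming aligned grayscale frames and with an illustrative function name:

```python
import numpy as np

def fuse_frames(frames, weights=None):
    """Pixel-level fusion by (weighted) gray-value averaging."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    if weights is None:
        weights = np.ones(len(frames)) / len(frames)   # equal weights
    fused = np.tensordot(np.asarray(weights), stack, axes=1)
    return np.clip(fused, 0, 255).astype(np.uint8)

a = np.full((2, 2), 100, dtype=np.uint8)
b = np.full((2, 2), 200, dtype=np.uint8)
print(fuse_frames([a, b])[0, 0])   # 150: the equal-weight average
```

Unequal weights would let sharper frames contribute more to the fused result.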
The feature level image fusion algorithm can be used for extracting feature information of a shot target from an image which is subjected to denoising processing and does not contain a moving interfering object, wherein the feature information is feature information of an area where the target which a terminal user wants to shoot is located, and then analyzing, processing and integrating the feature information to obtain fused image features. The method has the advantages that the display effect of the shot target can be improved by adopting the feature level fusion algorithm, the image information is compressed by the feature level fusion and then analyzed and processed by a computer, the consumed memory and time are relatively small, and the real-time performance of the shot image processing is improved.
Optionally, if the moving interferent in the at least two image frames is denoised in step 102, the pixel value of the pixel point in the area where the interferent is located in the image frames is set to be the fixed gray value, and then the at least two processed image frames are fused to generate the target shooting image, which may be the area without considering the fixed gray value, so as to avoid the fixed gray value from affecting the final fusion effect.
According to the shot image processing method provided by the embodiment of the application, firstly, if a moving interfering object is detected in the preview picture, at least two image frames are continuously shot; secondly, the moving interfering object in the at least two image frames is denoised according to the time-domain correlation of the image frames; and finally, a target shot image is generated from the at least two processed image frames. In the prior art, if a moving interfering object enters the shooting field of view during shooting, the shot image is blurred and of poor quality. In the embodiment of the application, when a moving interfering object exists in the preview picture, multiple image frames are continuously shot and the moving interfering object is denoised, so that a target shot image is generated; the influence of the moving interfering object on the sharpness of the shot image is thereby avoided, and the shooting quality is improved.
Fig. 2 is a schematic flow chart of another captured image processing method provided in an embodiment of the present application, and as a further description of the foregoing embodiment, the method includes the following steps:
step 201, if the moving interfering object exists in the preview picture, continuously shooting at least two image frames.
Step 202, shooting content detection is carried out on at least two continuously shot image frames.
For example, the shot content detection of the at least two continuously shot image frames may be detection of the subjects contained in each image frame, including the target subject, interfering subjects, and the like. The specific detection method is not limited in the embodiments of the present application. For example, an edge detection algorithm (e.g., the Canny, Sobel, or Roberts algorithm) may be used to detect the outline of each subject in each image frame; alternatively, a feature point detection algorithm (such as blob feature detection) may be used to extract the feature points in the shot content, where different feature points correspond to different subjects, so that each subject in the image frame can be selected and the detection of the shot content completed.
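Of the edge detectors mentioned above, a Sobel-style gradient check is the simplest to sketch. The following NumPy version is illustrative only (the threshold and function name are assumptions, and a real pipeline would use a library implementation):

```python
import numpy as np

def sobel_edges(img, thresh=100):
    """Minimal Sobel edge detector: returns a boolean edge map for the
    interior pixels of a grayscale image."""
    img = img.astype(np.float64)
    # Horizontal gradient: kernel [[-1,0,1],[-2,0,2],[-1,0,1]]
    gx = (img[1:-1, 2:] - img[1:-1, :-2]) * 2 \
       + (img[:-2, 2:] - img[:-2, :-2]) \
       + (img[2:, 2:] - img[2:, :-2])
    # Vertical gradient: kernel [[-1,-2,-1],[0,0,0],[1,2,1]]
    gy = (img[2:, 1:-1] - img[:-2, 1:-1]) * 2 \
       + (img[2:, 2:] - img[:-2, 2:]) \
       + (img[2:, :-2] - img[:-2, :-2])
    mag = np.hypot(gx, gy)
    return mag > thresh

# A vertical step edge between two flat regions is detected.
img = np.zeros((6, 6), dtype=np.uint8)
img[:, 3:] = 255
edges = sobel_edges(img)
print(edges.any())   # True: the step boundary is found
```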
Optionally, because at least two continuously captured image frames have a certain correlation, after the content detection of the first image frame is completed, the content detection of the second image frame may be performed with the detection result of the first image frame as a reference, thereby improving the detection efficiency. For example, three shot objects are detected in the detection result of the first image frame, and when the content of the second image frame is detected, the three shot objects in the first image frame can be found in the second image frame, and then whether a newly added shot object exists or not can be checked, so that the detection efficiency of the same shot object is improved.
Step 203, determining the information of the area where the moving interfering object is located in each image frame according to the time-domain correlation of the image frames and the shot content detection results of the at least two image frames.
The information of the area where the moving interfering object is located may be specific information about the moving interfering object in each image frame, and may include position information, area information, the proportion of the moving interfering object in each image frame, feature point information of the area where it is located, whether it blocks the shooting target, and the like.
Optionally, to determine the region information where the moving interferent is located in each image frame, the moving interferent in each image frame is determined first. There are many methods for determining a moving interfering object in an image frame, and this is not limited in the embodiments of the present application. For example, the area where each object in the detection result of the image content of each image frame is located in the corresponding image frame may be determined, the feature point of each object may be extracted, then the area where each object corresponding to two adjacent image frames is located and the feature point may be compared according to the time domain correlation of the image frames, whether there is an object whose feature point is the same but the area where the object is located is changed may be detected, if there is an object whose feature point is the same but the area where the object is located, the object may be used as a moving interfering object, and the area where the object is located may be used as the area where the moving interfering object is located in the image frame. In order to prevent the erroneous determination of the area change due to the calculation error, it may be determined that the area where the photographic subject is located is changed when the position change of the area where the photographic subject is located is greater than a preset change threshold (for example, the number of moving pixels exceeds 10 pixels).
Optionally, the determining of the moving interfering object in the image frame may also be extracting a feature point of the moving interfering object when the moving interfering object is detected in the preview picture in step 201, and since there is a time-domain correlation between image frames, when detecting the area where the moving interfering object is located in each image frame, a captured object having the feature point in the detection result of the captured content of each image frame may be searched as the moving interfering object, extracting the contour of the moving interfering object, and taking the area where the contour of the moving interfering object is located in the image frame as the area where the moving interfering object is located in the image frame.
For example, after the moving interfering object in each image frame is determined, the information of the area where it is located in the corresponding image frame may be determined. For example: the position information of the area occupied by the moving interfering object in each image frame may be determined by calculating the positions of the pixel points in that area; the area (size) information may be determined by counting the pixel points in that area; the proportion of the moving interfering object in the whole image frame may be determined by calculating the percentage of its area in the total area of the image frame; the feature point information of the area may be obtained by extracting the feature points of the area; and whether the moving interfering object blocks the shooting target may be determined by judging the positional relationship between the moving interfering object and the area where the shooting target is located.
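The pixel-counting calculations above can be summarised in one helper. A sketch under the assumption that the interferer region is given as a boolean mask; the function and field names are illustrative:

```python
import numpy as np

def region_info(mask):
    """Summarise the interferer region of one frame.

    mask: boolean array, True where the moving interferer is located.
    Returns its pixel area, its proportion of the whole frame, and a
    bounding box (top, left, bottom, right)."""
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    return {
        "area": area,                      # pixel count of the region
        "ratio": area / mask.size,         # share of the whole frame
        "bbox": (int(ys.min()), int(xs.min()),
                 int(ys.max()), int(xs.max())) if area else None,
    }

mask = np.zeros((10, 10), dtype=bool)
mask[2:4, 2:5] = True                      # a 2x3 interferer region
info = region_info(mask)
print(info["area"], info["ratio"])         # 6 0.06
```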
Step 204, denoising the moving interfering object in each image frame according to the information of the area where it is located in each image frame.
The method for denoising the moving interfering object in each image frame differs according to the specific information of the area where it is located. For example, when the information is whether the moving interfering object blocks the shooting target, only the moving interfering objects that block the shooting target may be denoised, while those that do not block it are left unprocessed; alternatively, the image frames in which the moving interfering object blocks the shooting target may be discarded, and the moving interfering object removed only from the image frames in which the shooting target is not blocked. When the information is the proportion of the moving interfering object in the whole image frame, the images whose proportion exceeds a preset threshold may be denoised while the others are not; or the image frames whose proportion exceeds the preset threshold may be discarded, and only the moving interfering objects in the image frames below the threshold denoised, and so on.
Optionally, when the information of the area where the moving interfering object is located is the size of the area it occupies in each image frame, denoising the moving interfering object in each image frame according to this information includes: removing, from the at least two image frames, the image frames in which the area of the region where the moving interfering object is located is greater than a preset area threshold, to obtain the remaining image frames; and eliminating the moving interfering objects in the remaining image frames according to the time-domain correlation of the image frames.
Specifically, when the area of the region where the moving interfering object is located in an image frame is large, this is equivalent to a large noise region: much of the detail information in the image frame is lost, and the frame would cause interference when the target shot image is subsequently fused, affecting the final fusion effect. Therefore, it can be judged whether the area of the region where the moving interfering object is located in each image frame is greater than a preset area threshold; the image frames exceeding the threshold are discarded, and only the remaining image frames are denoised according to the time-domain correlation of the image frames. This improves the efficiency of denoising the moving interfering objects in the image frames and can improve the quality of the finally generated target shot image.
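The frame-discarding step above reduces to a simple filter. A sketch with illustrative names, assuming each frame is paired with the precomputed pixel area of its interferer region:

```python
def filter_frames(frames_with_area, max_area=50):
    """Discard frames whose interferer region exceeds max_area pixels,
    keeping only the remaining frames for denoising and fusion."""
    return [frame for frame, area in frames_with_area if area <= max_area]

# Frame "f2" has too large an interferer region and is dropped.
frames = [("f1", 12), ("f2", 80), ("f3", 30)]
print(filter_frames(frames))   # ['f1', 'f3']
```

The threshold would in practice be scaled to the frame size, e.g. as a fraction of the total pixel count.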
Step 205, generating a target shot image according to the at least two processed image frames.
The shot image processing method provided by the embodiment of the application can detect the shot content of each image frame when the moving interferent in a plurality of continuously shot image frames is subjected to denoising processing, determine the region information where the moving interferent is located, and perform denoising processing on the moving interferent in each image frame according to the region information, so that a target shot image is generated, the denoising processing precision of the moving interferent in each image frame is improved, and the quality of the generated target shot image is further improved.
Fig. 3 is a schematic flow chart of another captured image processing method provided in an embodiment of the present application, and as a further description of the foregoing embodiment, the method includes the following steps:
step 301, if a moving interfering object is detected to exist in the preview picture, continuously shooting at least two image frames.
Step 302, judging, according to the time-domain correlation of the image frames, whether any of the at least two image frames contains a moving interfering object that blocks the shooting target.
For example, after at least two image frames are continuously shot, the embodiment of the application judges whether a situation that a moving interfering object blocks a shooting target exists in the at least two image frames, and performs denoising processing on the moving interfering object in the image frame which blocks the shooting target to generate a target shooting image, so that the shooting quality is ensured, and the shooting efficiency is improved.
Optionally, in this embodiment of the application, when judging whether a moving interferent occludes the shooting target in any of the at least two image frames, the position of the region where the shooting target is located and the position of the region where the moving interferent is located may be determined in each frame, and the two positions compared for overlap; if an overlapping area exists, the moving interferent is judged to occlude the shooting target in that frame. Alternatively, it may be judged, according to the temporal correlation between image frames, whether the region where the shooting target is located in a frame is complete compared with the corresponding regions in the other frames; if it is complete, the shooting target in that frame is not occluded by the moving interferent. Optionally, when making this completeness comparison, the frame may be compared with all the other continuously captured frames, and the result returned by the majority of those comparisons taken as the final judgment of whether the shooting target in the frame is occluded.
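The first strategy above (comparing the two position regions for overlap) can be sketched with axis-aligned bounding boxes. The box representation and function names are illustrative assumptions; the patent does not fix a particular region representation.

```python
# Hypothetical sketch: decide occlusion by checking whether the bounding box
# of the shooting target overlaps the bounding box of the moving interferent.
# Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
def boxes_overlap(target_box, interferent_box):
    tx1, ty1, tx2, ty2 = target_box
    ix1, iy1, ix2, iy2 = interferent_box
    # No overlap if one box lies entirely to one side of the other.
    if tx2 <= ix1 or ix2 <= tx1 or ty2 <= iy1 or iy2 <= ty1:
        return False
    return True

def frame_is_occluded(target_box, interferent_box):
    # An overlapping area means the interferent occludes the target.
    return boxes_overlap(target_box, interferent_box)

occluded = frame_is_occluded((10, 10, 50, 50), (40, 40, 80, 80))  # boxes overlap
clear = frame_is_occluded((10, 10, 50, 50), (60, 10, 90, 40))     # boxes disjoint
```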
Step 303: if a moving interferent occludes the shooting target in an image frame, denoising the moving interferent in the at least two image frames.
For example, if a moving interferent occludes the shooting target in some image frame, the interferent blocked the target while the at least two frames were being continuously captured, which seriously affects the shooting result, so the acquired frames need to be denoised. In this embodiment of the application there are many ways to denoise the moving interferent in the at least two image frames, and the embodiment does not limit which is used. For example, the continuously captured frames in which the moving interferent occludes the shooting target may be discarded, and only the frames in which the target is not occluded denoised. Alternatively, it may be judged whether the moving interferent occludes an important part of the shooting target; if so, the frame is discarded, and only the frames in which neither the target nor its important parts are occluded are denoised. For example, in portrait mode the face region is the important region, so if a moving interferent occludes a face, that frame is discarded and only the remaining continuously captured frames are denoised.
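One common way to exploit the temporal correlation between the remaining (unoccluded) frames is a per-pixel temporal median. This is an illustrative technique, not the patent's specified algorithm: a transient interferent that covers a given pixel in only a minority of frames is rejected by the median.

```python
import numpy as np

# Illustrative sketch (assumed technique): after discarding occluded frames,
# take the per-pixel median across the remaining aligned frames. A pixel
# corrupted by the moving interferent in a minority of frames is replaced by
# the value seen in the majority of frames.
def temporal_median_denoise(frames):
    stack = np.stack(frames, axis=0)  # shape: (num_frames, H, W)
    return np.median(stack, axis=0)

frames = [np.full((3, 3), 100.0) for _ in range(5)]
frames[2][1, 1] = 0.0  # interferent darkens one pixel in one frame
result = temporal_median_denoise(frames)
```

A real pipeline would first align the frames (e.g. to compensate for residual hand shake) before taking the median.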
Optionally, if no image frame has the shooting target occluded by the moving interferent, the interferent has little effect on the target even though it is present. In that case, the frame with the highest quality score may be selected from the at least two acquired frames, given a simple preprocessing operation (such as beautifying, denoising, or brightening), and used as the target shooting image. Alternatively, it may be judged whether the area of the region occupied by the moving interferent exceeds a preset area threshold: if it does, the moving interferent in the at least two image frames is denoised; if it does not, the highest-scoring frame is selected, simply preprocessed, and used as the target shooting image. It may further be judged, according to the temporal correlation between image frames, whether the moving distance of the interferent exceeds a preset distance: if it does, the moving interferent in the at least two image frames is denoised; if it does not, the highest-scoring frame is selected, simply preprocessed, and used as the target shooting image. The embodiments of the application are not limited in this respect.
And step 304, generating a target shooting image according to the processed at least two image frames.
With the captured-image processing method provided by this embodiment of the application, when denoising the moving interferent in a plurality of continuously captured image frames, the interferent is denoised only when it occludes the shooting target. This ensures the quality of the generated target shooting image while greatly reducing the power consumed by denoising and improving shooting efficiency.
Fig. 4 is a schematic flowchart of another processing method for captured images according to an embodiment of the present application, and as a further description of the foregoing embodiment, the method includes the following steps:
step 401, if it is detected that a moving interfering object exists in the preview screen, continuously shooting at least two image frames.
Step 402, denoising the moving interferent in at least two image frames according to the time domain relevance of the image frames.
And step 403, scoring the at least two processed image frames according to at least one scoring parameter, and determining a target image frame and a candidate image frame from the at least two processed image frames.
Wherein the scoring parameters comprise one or more of a sharpness parameter, a color parameter, an exposure parameter, a size parameter of the moving interferent, or a completeness parameter of the shooting target. Optionally, a scoring scheme may be set in advance for each scoring parameter. For example, for the sharpness parameter, sharpness may be divided into at least two grades according to the display effect, such as the two grades clear and unclear, or the three grades high, medium, and low.
Optionally, in this embodiment of the application, when the at least two processed image frames are scored according to at least one scoring parameter: if there is a single scoring parameter, its score is used as the score of the image frame; if there are multiple scoring parameters, the scores corresponding to each parameter may be averaged to obtain the frame's score, or a weight may be set for each scoring parameter and the weighted sum of the per-parameter scores taken as the frame's score. Optionally, the weight of each scoring parameter may be set by the mobile terminal according to the current shooting mode or shooting scene, or set manually by the user as needed.
Optionally, when determining the target image frame and the candidate image frames from the at least two processed frames, the frame with the highest score among the continuously captured frames may be taken as the target image frame, on which the content fusion is based; each frame whose score is lower than the target frame's but higher than a minimum score threshold is taken as a candidate image frame. The candidate image frames are the other frames used to fuse the target image frame, and there may be one or several of them.
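The weighted scoring and target/candidate selection described above can be sketched as follows. The parameter names, weights, and threshold are illustrative assumptions; the patent leaves them to the terminal's shooting mode or the user.

```python
# Hypothetical sketch: combine per-parameter scores (e.g. sharpness,
# exposure, target completeness) as a weighted sum; the best frame becomes
# the target and frames above a minimum threshold become candidates.
def score_frame(param_scores, weights):
    return sum(param_scores[name] * w for name, w in weights.items())

def select_target_and_candidates(frame_scores, min_score):
    # frame_scores: list of (frame_id, score) pairs.
    ranked = sorted(frame_scores, key=lambda item: item[1], reverse=True)
    target = ranked[0]
    candidates = [f for f in ranked[1:] if f[1] >= min_score]
    return target, candidates

weights = {"sharpness": 0.5, "exposure": 0.3, "completeness": 0.2}
scores = [
    (0, score_frame({"sharpness": 0.9, "exposure": 0.8, "completeness": 1.0}, weights)),
    (1, score_frame({"sharpness": 0.4, "exposure": 0.5, "completeness": 0.6}, weights)),
    (2, score_frame({"sharpness": 0.8, "exposure": 0.7, "completeness": 0.9}, weights)),
]
target, candidates = select_target_and_candidates(scores, min_score=0.6)
```

Frame 0 scores highest and becomes the target; frame 2 clears the threshold and becomes a candidate, while frame 1 is dropped.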
And step 404, fusing the denoising area of the target image frame according to the candidate image frame to generate a target shooting image.
The target image frame has the highest score among the acquired frames, so its quality should be the best. To improve fusion efficiency, when the target image frame is fused with the candidate frames, the whole frame is not fused; only its denoising region is. In this embodiment of the application, denoising an image frame is essentially removing the region where the interferent was located, so the denoising region of the target image frame may be a region filled with a fixed pixel gray value, or a region patched in from other image frames; in either case it is the region that deviates most from the actually captured scene.
Optionally, when the denoising region of the target image frame is fused according to the candidate frames, the position of the denoising region in the target frame may be determined, the image content at the corresponding position in each candidate frame obtained, the contents fused by a preset fusion algorithm, and the fused result written back into the denoising region of the target frame, generating the target shooting image.
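The region fusion step above can be sketched with a simple per-pixel mean standing in for the "preset fusion algorithm". This is an assumed minimal implementation; a real one would align the frames and use a more elaborate blend.

```python
import numpy as np

# Hypothetical sketch: average the candidate frames' pixels inside the
# denoising region and write the result back into the target frame at the
# same position, leaving the rest of the target frame untouched.
def fuse_denoising_region(target, candidates, region_mask):
    stack = np.stack(candidates, axis=0).astype(np.float64)
    fused = stack.mean(axis=0)             # simple mean as the fusion algorithm
    out = target.astype(np.float64).copy()
    out[region_mask] = fused[region_mask]  # patch only the denoising region
    return out

target = np.full((4, 4), 50.0)
candidates = [np.full((4, 4), 80.0), np.full((4, 4), 120.0)]
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                      # denoising region in the target frame
result = fuse_denoising_region(target, candidates, mask)
```

Only the masked region is replaced (here with the mean of 80 and 120); pixels outside it keep the target frame's values.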
With the captured-image processing method provided by this embodiment of the application, the at least two denoised images can be scored, the target image frame and the candidate image frames selected, and the denoising region of the target frame fused to generate the target shooting image. This ensures the quality of the generated target shooting image while greatly reducing the power consumed by the image fusion processing and improving shooting efficiency.
Fig. 5 is a schematic flow chart of another captured image processing method provided in an embodiment of the present application, and as a preferred example of the foregoing embodiments, the method includes the following steps:
step 501, start.
Step 502, judging whether the terminal is in a static state, if so, executing step 503, and if not, continuing to execute step 502.
Optionally, in this embodiment of the application, whether the terminal is in a stationary state may be determined according to a gyroscope configured in the terminal together with positioning information. The gyroscope, also called an angular velocity sensor, measures state information such as the rotation and deflection of the terminal, from which it can be judged whether the terminal is stationary. The positioning information may be the location of the terminal obtained by a positioning module of the terminal (e.g., the Global Positioning System (GPS)).
Specifically, when determining whether the terminal is stationary according to the gyroscope and the positioning information, the rotation and deflection reported by the gyroscope may be checked first: if the terminal produces no rotation information and its deflection remains unchanged, it is preliminarily judged to be stationary. The position of the terminal is then obtained through the positioning module; if the position has not changed either, the terminal is determined to be in a stationary state.
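The two-stage stationary check above can be sketched as follows. The thresholds and the (latitude, longitude) position format are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: the terminal is treated as stationary when the
# gyroscope reports (near-)zero angular velocity on all axes AND the
# positioning module reports no change in location.
def is_terminal_stationary(angular_velocity, prev_position, curr_position,
                           gyro_eps=0.01, pos_eps=1e-6):
    # Stage 1: gyroscope -- no rotation or deflection.
    if any(abs(w) > gyro_eps for w in angular_velocity):
        return False
    # Stage 2: positioning -- location (lat, lon) unchanged.
    moved = any(abs(a - b) > pos_eps
                for a, b in zip(prev_position, curr_position))
    return not moved

still = is_terminal_stationary((0.0, 0.001, 0.0), (23.13, 113.26), (23.13, 113.26))
moving = is_terminal_stationary((0.5, 0.0, 0.0), (23.13, 113.26), (23.13, 113.26))
```

The gyroscope check runs first because it is cheaper and faster to sample than a positioning fix.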
For example, whether the terminal is stationary is judged first because, if it is not, it cannot be determined accurately whether a moving object in the preview picture is caused by the movement of the terminal itself or is an interferent suddenly rushing into the lens, and the shooting target might be mistaken for a moving interferent. Therefore, when the terminal is detected to be stationary, step 503 may be executed to judge whether an interferent exists in the preview picture; otherwise, step 502 is executed again at the next detection time to check whether the terminal is stationary.
Step 503: if the terminal is in a stationary state, judging whether the change of the shooting content in at least two consecutive preview picture frames meets a preset interference rule; if yes, executing step 504, and if not, executing step 508.
The preset interference rule is a criterion for judging whether a moving interferent exists in the preview picture. It may determine this through a two-step judgment on at least two consecutive preview picture frames: the first step judges whether a moving object exists, and the second step further judges whether that moving object is a moving interferent. This prevents a shooting target that moves only slightly from being mistaken for a moving interferent.
Optionally, in the first step of the preset interference rule, whether a moving object exists may be judged from changes of objects in the content of at least two continuously captured frames, such as an object appearing, a position changing, or the image blurring. In the second step, whether the moving object is a moving interferent may be judged by whether the area of the changed object exceeds a preset area threshold, whether its moving distance exceeds a preset distance threshold, whether its moving speed exceeds a preset speed threshold, whether its speed decreases as it approaches the shooting target, and so on.
For example, if the mobile terminal is stationary and the change of the captured content in at least two consecutive preview picture frames meets the preset interference rule, it is determined that a moving interferent exists in those frames, and step 504 is executed. If either step of the preset interference rule is not met, no moving interferent exists in the preview picture, and step 508 is executed to acquire the target shooting image in the conventional way. Specifically, if the first step is not met, no moving object exists in the preview picture, and therefore no moving interferent; if the first step is met but the second is not, a moving object exists in the preview picture but analysis shows it is not a moving interferent. For example, in a group photo some participants may shake slightly, but the magnitude is small, so although they move they are not moving interferents. Or user B, outside the frame, wants to be photographed together with user A inside the frame: B quickly walks over to A and stops, and B's speed decreases while approaching A, so although B also moves, B is not a moving interferent.
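The two-step preset interference rule, including the deceleration exception from the user-B example, can be sketched as follows. All thresholds and the shape of the speed data are illustrative assumptions.

```python
# Hypothetical sketch of the two-step preset interference rule. Step 1 asks
# whether anything moved between consecutive preview frames; step 2 asks
# whether the movement looks like an interferent (large area, long distance,
# high speed that does not decay as it nears the shooting target).
def has_moving_object(displacement, area):
    # Step 1: any changed object at all.
    return displacement > 0 and area > 0

def is_moving_interferent(area, distance, speeds,
                          area_thr=500, dist_thr=30, speed_thr=5.0):
    # Step 2: small or barely moving objects are not interferents.
    if area <= area_thr or distance <= dist_thr:
        return False
    # A subject decelerating as it approaches the target (e.g. a friend
    # joining a group photo) is not treated as an interferent.
    slowing_down = all(b <= a for a, b in zip(speeds, speeds[1:]))
    if slowing_down and speeds[-1] < speed_thr:
        return False
    return max(speeds) > speed_thr

# A bird darting through the frame: large, fast, not slowing down.
bird = is_moving_interferent(area=900, distance=120, speeds=[8.0, 9.0, 10.0])
# User B walking over to user A and stopping: decelerates toward the target.
friend = is_moving_interferent(area=900, distance=120, speeds=[6.0, 3.0, 0.5])
```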
Step 504, if the change of the shot content in at least two consecutive preview image frames meets a preset interference rule, a moving interfering object exists in the preview image.
At step 505, at least two image frames are continuously captured.
Step 506, denoising the moving interferent in at least two image frames according to the time domain relevance of the image frames.
And 507, generating a target shooting image according to the at least two processed image frames.
And step 508, acquiring a target shooting image.
It should be noted that, in this embodiment of the application, detection of whether a moving interferent exists in the preview picture may start in real time, or once every preset time interval, after the shooting function of the mobile terminal is started; or it may start when the terminal user triggers the photographing instruction. If the detection runs after the shooting function is started but before the user triggers the photographing instruction, the detection result needs to be updated in real time; when the mobile terminal receives the photographing instruction and the current result still shows no moving interferent, the target photographed image is acquired directly. If the detection runs after the user triggers the photographing instruction, step 508 may be executed to acquire the target photographed image once it is detected that no moving interferent exists in the preview picture.
With the captured-image processing method provided by this embodiment of the application, when the terminal is in a stationary state, the two-step preset interference rule is used to judge whether a moving interferent exists in the preview picture; if one exists, a plurality of image frames are continuously captured and the moving interferent denoised to generate the target shooting image. This avoids mistaking the shooting target for a moving interferent, ensures that only real moving interferents are processed, and improves shooting quality.
Fig. 6 is a block diagram of a captured-image processing apparatus, which may be implemented by software and/or hardware, is generally integrated in a mobile terminal having a photographing function, and can execute the captured-image processing method of the foregoing embodiments. As shown in fig. 6, the apparatus includes: an image capturing module 601, an interferent processing module 602, and an image generating module 603.
An image capturing module 601, configured to continuously capture at least two image frames if a moving interfering object is detected to be present in the preview image;
the interferent processing module 602 is configured to perform denoising processing on interferents in at least two image frames captured by the image capturing module 601 according to the time domain relevance of the image frames;
an image generating module 603, configured to generate a target captured image according to the at least two image frames processed by the interferent processing module 602.
Further, the image capturing module 601 is configured to detect that a moving interfering object exists in the preview screen, and includes:
if the change of the shooting content in at least two continuous preview image frames meets a preset interference rule, a mobile interference object exists in the preview image.
Further, the interferent processing module 602 is configured to:
performing shooting content detection on at least two continuously shot image frames;
determining the area information of the mobile interferent in each image frame according to the time domain relevance of the image frames and the detection result of the shooting content of the at least two image frames;
and denoising the mobile interferent in each image frame according to the information of the area where the mobile interferent is located in each image frame.
Further, the interferent processing module 602 is configured to perform denoising processing on the mobile interferent in each image frame according to the information of the area where the mobile interferent in each image frame is located, and includes:
removing the image frames with the area of the area where the moving interference object is located larger than a preset area threshold value from the at least two image frames to obtain residual image frames; and eliminating the moving interferents in the residual image frames according to the time domain correlation of the image frames.
Further, the interferent processing module 602 is configured to:
judging whether a moving interfering object exists in the at least two image frames to shield the image frame of the shooting target according to the time domain relevance of the image frames;
and if the moving interferent exists to shield the image frame of the shooting target, denoising the moving interferent in the at least two image frames.
Further, the image generation module 603 is configured to:
scoring the at least two processed image frames according to at least one scoring parameter, and determining a target image frame and a candidate image frame from the at least two processed image frames;
and fusing the denoising area of the target image frame according to the candidate image frame to generate a target shooting image.
Further, the above apparatus further comprises:
the static judgment module is used for judging whether the terminal is in a static state or not; if the terminal is in a still state, the image capturing module 601 detects whether a moving interfering object exists in the preview screen.
With the captured-image processing apparatus provided by this embodiment of the application, first, if the image capturing module 601 detects that a moving interferent exists in the preview picture, at least two image frames are continuously captured; second, the interferent processing module 602 denoises the moving interferent in the at least two image frames according to the temporal correlation between them; finally, the image generation module 603 generates the target shooting image from the processed frames. In the prior art, if a moving interferent appears in the shooting field of view, the captured image is blurred and of poor quality. In this embodiment of the application, when a moving interferent exists in the preview picture, a plurality of image frames are continuously captured and the interferent denoised to generate the target shooting image, so the moving interferent does not affect the sharpness of the captured image, and shooting quality is improved.
The device can execute the methods provided by all the embodiments of the application, and has corresponding functional modules and beneficial effects for executing the methods. For details of the technology not described in detail in this embodiment, reference may be made to the methods provided in all the foregoing embodiments of the present application.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal may include: a housing (not shown), a memory 701, a Central Processing Unit (CPU) 702 (also called a processor, hereinafter referred to as CPU), a computer program stored in the memory 701 and operable on the processor 702, a circuit board (not shown), and a power circuit (not shown). The circuit board is arranged in a space enclosed by the shell; the CPU702 and the memory 701 are provided on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the terminal; the memory 701 is used for storing executable program codes; the CPU702 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 701.
The terminal further comprises: peripheral interfaces 703, RF (Radio Frequency) circuitry 705, audio circuitry 706, speakers 711, power management chip 708, input/output (I/O) subsystems 709, touch screen 712, other input/control devices 710, and external port 704, which communicate over one or more communication buses or signal lines 707.
It should be understood that the illustrated terminal device 700 is merely one example of a terminal, and that the terminal device 700 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The following describes in detail a terminal device provided in this embodiment, where the terminal device is a smart phone as an example.
A memory 701, which may be accessed by the CPU702, the peripheral interface 703, and the like. The memory 701 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
A peripheral interface 703, said peripheral interface 703 may connect input and output peripherals of the device to the CPU702 and the memory 701.
An I/O subsystem 709, which I/O subsystem 709 may connect input and output peripherals on the device, such as a touch screen 712 and other input/control devices 710, to the peripheral interface 703. The I/O subsystem 709 may include a display controller 7091 and one or more input controllers 7092 for controlling other input/control devices 710. Where one or more input controllers 7092 receive electrical signals from or transmit electrical signals to other input/control devices 710, the other input/control devices 710 may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels. It is worth noting that the input controller 7092 may be connected to any one of the following: a keyboard, an infrared port, a USB interface, and a pointing device such as a mouse.
The touch screen 712 may be a resistive type, a capacitive type, an infrared type, or a surface acoustic wave type, according to the operation principle of the touch screen and the classification of a medium for transmitting information. Classified by the installation method, the touch screen 712 may be: external hanging, internal or integral. Classified according to technical principles, the touch screen 712 may be: a vector pressure sensing technology touch screen, a resistive technology touch screen, a capacitive technology touch screen, an infrared technology touch screen, or a surface acoustic wave technology touch screen.
A touch screen 712, the touch screen 712 being an input interface and an output interface between the user terminal and the user, displaying visual output to the user, which may include graphics, text, icons, video, and the like. Optionally, the touch screen 712 sends an electrical signal (e.g., an electrical signal of the touch surface) triggered by the user on the touch screen to the processor 702.
The display controller 7091 in the I/O subsystem 709 receives electrical signals from the touch screen 712 or transmits electrical signals to the touch screen 712. The touch screen 712 detects a contact on the touch screen, and the display controller 7091 converts the detected contact into an interaction with a user interface object displayed on the touch screen 712, i.e., implements a human-computer interaction, and the user interface object displayed on the touch screen 712 may be an icon for running a game, an icon networked to a corresponding network, or the like. It is worth mentioning that the device may also comprise a light mouse, which is a touch sensitive surface that does not show visual output, or an extension of the touch sensitive surface formed by the touch screen.
The RF circuit 705 is mainly used to establish communication between the terminal and a wireless network (i.e., the network side) and to receive and transmit data between the terminal and the wireless network, such as sending and receiving short messages and e-mails.
The audio circuit 706 is mainly used to receive audio data from the peripheral interface 703, convert the audio data into an electric signal, and transmit the electric signal to the speaker 711.
And the loudspeaker 711, used to convert the voice signal received by the terminal from the wireless network through the RF circuit 705 back into sound and play it to the user.
And a power management chip 708 for supplying power and managing power to the hardware connected to the CPU702, the I/O subsystem, and the peripheral interface.
In this embodiment, the central processor 702 is configured to:
if the mobile interferent is detected to exist in the preview picture, continuously shooting at least two image frames;
denoising the moving interferent in the at least two image frames according to the time domain relevance of the image frames;
and generating a target shooting image according to the at least two processed image frames.
Further, the detecting that the moving interfering object exists in the preview screen includes:
if the change of the shooting content in at least two continuous preview image frames meets a preset interference rule, a mobile interference object exists in the preview image.
Further, the denoising processing of the moving interferent in the at least two image frames according to the time domain correlation of the image frames includes:
performing shooting content detection on at least two continuously shot image frames;
determining the area information of the mobile interferent in each image frame according to the time domain relevance of the image frames and the detection result of the shooting content of the at least two image frames;
and denoising the mobile interferent in each image frame according to the information of the area where the mobile interferent is located in each image frame.
Further, the denoising processing of the mobile interferent in each image frame according to the information of the area where the mobile interferent in each image frame is located includes:
removing the image frames with the area of the area where the moving interference object is located larger than a preset area threshold value from the at least two image frames to obtain residual image frames; and eliminating the moving interferents in the residual image frames according to the time domain correlation of the image frames.
Further, the denoising processing of the moving interferent in the at least two image frames according to the time domain correlation of the image frames includes:
judging whether a moving interfering object exists in the at least two image frames to shield the image frame of the shooting target according to the time domain relevance of the image frames;
and if the moving interferent exists to shield the image frame of the shooting target, denoising the moving interferent in the at least two image frames.
Further, the generating of the target shooting image according to the processed at least two image frames includes:
scoring the at least two processed image frames according to at least one scoring parameter, and determining a target image frame and a candidate image frame from the at least two processed image frames;
and fusing the denoising area of the target image frame according to the candidate image frame to generate a target shooting image.
Further, before continuously capturing at least two image frames upon detecting a moving interfering object in the preview screen, the method further includes:
judging whether the terminal is in a static state;
and if the terminal is in a static state, detecting whether a moving interfering object is present in the preview screen.
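One way to implement the static-state check is to examine the variance of recent accelerometer magnitudes: a device at rest reads a near-constant magnitude (gravity only). The window and threshold below are illustrative assumptions, not values from the patent:

```python
import math

def is_terminal_static(accel_samples, variance_threshold=0.02):
    """Judge whether the terminal is at rest from a short window of
    (x, y, z) accelerometer samples: low variance of the acceleration
    magnitude suggests the device is stationary."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_samples]
    mean = sum(magnitudes) / len(magnitudes)
    variance = sum((m - mean) ** 2 for m in magnitudes) / len(magnitudes)
    return variance < variance_threshold
```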
An embodiment of the present application further provides a storage medium containing terminal-device-executable instructions which, when executed by a processor of a terminal device, perform a method of processing a captured image, the method including:
if a moving interfering object is detected in the preview screen, continuously capturing at least two image frames;
denoising the moving interfering object in the at least two image frames according to the temporal correlation of the image frames;
and generating a target captured image from the at least two processed image frames.
Further, the detecting that a moving interfering object is present in the preview screen includes:
determining that a moving interfering object is present in the preview screen if the change in shot content across at least two consecutive preview image frames satisfies a preset interference rule.
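The "preset interference rule" is not spelled out; a simple plausible instance is a frame-difference test on consecutive preview frames, flagging a moving interfering object when enough pixels change sharply. Both thresholds below are illustrative assumptions:

```python
import numpy as np

def moving_interferer_present(prev_frame, cur_frame,
                              pixel_diff_threshold=25,
                              changed_ratio_threshold=0.05):
    """Report a moving interfering object when the fraction of pixels
    whose intensity changed by more than pixel_diff_threshold between
    two consecutive preview frames exceeds changed_ratio_threshold."""
    diff = np.abs(prev_frame.astype(np.int16) - cur_frame.astype(np.int16))
    changed_ratio = np.count_nonzero(diff > pixel_diff_threshold) / diff.size
    return changed_ratio > changed_ratio_threshold
```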
Further, the denoising of the moving interfering object in the at least two image frames according to the temporal correlation of the image frames includes:
performing shot-content detection on the at least two continuously captured image frames;
determining region information of the moving interfering object in each image frame according to the temporal correlation of the image frames and the shot-content detection results of the at least two image frames;
and denoising the moving interfering object in each image frame according to the region information of the moving interfering object in that frame.
Further, the denoising of the moving interfering object in each image frame according to the region information of the moving interfering object in each image frame includes:
removing, from the at least two image frames, any image frame in which the area of the region occupied by the moving interfering object exceeds a preset area threshold, to obtain the remaining image frames; and eliminating the moving interfering object from the remaining image frames according to the temporal correlation of the image frames.
Further, the denoising of the moving interfering object in the at least two image frames according to the temporal correlation of the image frames includes:
judging, according to the temporal correlation of the image frames, whether any of the at least two image frames contains a moving interfering object that occludes the shooting target;
and if such an occluding frame exists, denoising the moving interfering object in the at least two image frames.
Further, the generating of the target captured image from the at least two processed image frames includes:
scoring the at least two processed image frames according to at least one scoring parameter, and determining a target image frame and a candidate image frame from among them;
and fusing the denoised region of the target image frame with the corresponding region of the candidate image frame to generate the target captured image.
Further, before continuously capturing at least two image frames upon detecting a moving interfering object in the preview screen, the method further includes:
judging whether the terminal is in a static state;
and if the terminal is in a static state, detecting whether a moving interfering object is present in the preview screen.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Of course, the storage medium provided in the embodiments of the present application and containing computer-executable instructions is not limited to the operations described above, and may also perform related operations in the method for processing a captured image provided in any embodiment of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. Those skilled in the art will understand that the present application is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions may be made without departing from the scope of the application. Therefore, although the present application has been described in some detail with reference to the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit; the scope of the present application is determined by the scope of the appended claims.

Claims (8)

1. A method of processing a captured image, comprising:
if a moving interfering object is detected in the preview screen, continuously capturing at least two image frames;
denoising the moving interfering object in the at least two image frames according to the temporal correlation of the image frames, so that no moving interfering object remains in the denoised image frames;
scoring the at least two processed image frames according to at least one scoring parameter, and determining a target image frame and a candidate image frame from among the at least two processed image frames;
fusing the denoised region of the target image frame with the corresponding region of the candidate image frame to generate a target captured image;
wherein the denoising of the moving interfering object in the at least two image frames according to the temporal correlation of the image frames includes:
performing shot-content detection on the at least two continuously captured image frames;
extracting the outline of the moving interfering object according to the temporal correlation of the image frames and the shot-content detection results of the at least two image frames, taking the region enclosed by the outline in each image frame as the region where the moving interfering object is located, and determining the region information of the moving interfering object in each image frame;
and denoising the moving interfering object in each image frame according to that region information.
2. The method of claim 1, wherein the detecting that a moving interfering object is present in the preview screen comprises:
determining that a moving interfering object is present in the preview screen if the change in shot content across at least two consecutive preview image frames satisfies a preset interference rule.
3. The method of claim 1, wherein the denoising of the moving interfering object in each image frame according to the region information of the moving interfering object in each image frame comprises:
removing, from the at least two image frames, any image frame in which the area of the region occupied by the moving interfering object exceeds a preset area threshold, to obtain the remaining image frames; and eliminating the moving interfering object from the remaining image frames according to the temporal correlation of the image frames.
4. The method of claim 1, wherein the denoising of the moving interfering object in the at least two image frames according to the temporal correlation of the image frames further comprises:
judging, according to the temporal correlation of the image frames, whether any of the at least two image frames contains a moving interfering object that occludes the shooting target;
and if such an occluding frame exists, denoising the moving interfering object in the at least two image frames.
5. The method according to any one of claims 1-4, further comprising, before the continuously capturing of at least two image frames upon detecting a moving interfering object in the preview screen:
judging whether the terminal is in a static state;
and if the terminal is in a static state, detecting whether a moving interfering object is present in the preview screen.
6. A captured-image processing apparatus, comprising:
an image capturing module, configured to continuously capture at least two image frames if a moving interfering object is detected in the preview screen;
an interfering-object processing module, configured to denoise the interfering object in the at least two image frames captured by the image capturing module according to the temporal correlation of the image frames, so that no moving interfering object remains in any denoised image frame; wherein the denoising of the interfering object in the at least two image frames captured by the image capturing module according to the temporal correlation of the image frames includes:
performing shot-content detection on the at least two continuously captured image frames;
extracting the outline of the moving interfering object according to the temporal correlation of the image frames and the shot-content detection results of the at least two image frames, taking the region enclosed by the outline in each image frame as the region where the moving interfering object is located, and determining the region information of the moving interfering object in each image frame;
denoising the interfering object in each image frame according to the region information of the interfering object in that frame;
and an image generation module, configured to score the at least two processed image frames according to at least one scoring parameter, determine a target image frame and a candidate image frame from among them, and fuse the denoised region of the target image frame with the corresponding region of the candidate image frame to generate a target captured image.
7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of processing a captured image according to any one of claims 1 to 5.
8. A mobile terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method of processing a captured image according to any one of claims 1 to 5.
CN201811238342.1A 2018-10-23 2018-10-23 Shot image processing method and device, storage medium and mobile terminal Active CN109167893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811238342.1A CN109167893B (en) 2018-10-23 2018-10-23 Shot image processing method and device, storage medium and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811238342.1A CN109167893B (en) 2018-10-23 2018-10-23 Shot image processing method and device, storage medium and mobile terminal

Publications (2)

Publication Number Publication Date
CN109167893A CN109167893A (en) 2019-01-08
CN109167893B true CN109167893B (en) 2021-04-27

Family

ID=64878831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811238342.1A Active CN109167893B (en) 2018-10-23 2018-10-23 Shot image processing method and device, storage medium and mobile terminal

Country Status (1)

Country Link
CN (1) CN109167893B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111247790A (en) * 2019-02-21 2020-06-05 深圳市大疆创新科技有限公司 Image processing method and device, image shooting and processing system and carrier
CN110062159A (en) * 2019-04-09 2019-07-26 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment based on multiple image
CN112466003B (en) * 2019-09-06 2023-11-28 顺丰科技有限公司 Vehicle state detection method, device, computer equipment and storage medium
CN113950705A (en) * 2020-08-26 2022-01-18 深圳市大疆创新科技有限公司 Image processing method and device and movable platform
CN112887611A (en) * 2021-01-27 2021-06-01 维沃移动通信有限公司 Image processing method, device, equipment and storage medium
CN113129229A (en) * 2021-03-29 2021-07-16 影石创新科技股份有限公司 Image processing method, image processing device, computer equipment and storage medium
CN114037096A (en) * 2021-10-29 2022-02-11 河南格林循环电子废弃物处置有限公司 Automatic household appliance identification and marking device and use method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101142814A (en) * 2005-03-15 2008-03-12 欧姆龙株式会社 Image processing device and method, program, and recording medium
CN101266685A (en) * 2007-03-14 2008-09-17 中国科学院自动化研究所 A method for removing unrelated images based on multiple photos
CN102592128A (en) * 2011-12-20 2012-07-18 Tcl集团股份有限公司 Method and device for detecting and processing dynamic image and display terminal
CN105744232A (en) * 2016-03-25 2016-07-06 南京第五十五所技术开发有限公司 Method for preventing power transmission line from being externally broken through video based on behaviour analysis technology
CN107872644A (en) * 2016-09-23 2018-04-03 亿阳信通股份有限公司 Video frequency monitoring method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8615111B2 (en) * 2009-10-30 2013-12-24 Csr Technology Inc. Method and apparatus for image detection with undesired object removal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101142814A (en) * 2005-03-15 2008-03-12 欧姆龙株式会社 Image processing device and method, program, and recording medium
CN101266685A (en) * 2007-03-14 2008-09-17 中国科学院自动化研究所 A method for removing unrelated images based on multiple photos
CN102592128A (en) * 2011-12-20 2012-07-18 Tcl集团股份有限公司 Method and device for detecting and processing dynamic image and display terminal
CN105744232A (en) * 2016-03-25 2016-07-06 南京第五十五所技术开发有限公司 Method for preventing power transmission line from being externally broken through video based on behaviour analysis technology
CN107872644A (en) * 2016-09-23 2018-04-03 亿阳信通股份有限公司 Video frequency monitoring method and device

Also Published As

Publication number Publication date
CN109167893A (en) 2019-01-08

Similar Documents

Publication Publication Date Title
CN109167893B (en) Shot image processing method and device, storage medium and mobile terminal
CN110992327A (en) Lens contamination state detection method and device, terminal and storage medium
CN109040523B (en) Artifact eliminating method and device, storage medium and terminal
CN110209273A (en) Gesture identification method, interaction control method, device, medium and electronic equipment
CN109040524B (en) Artifact eliminating method and device, storage medium and terminal
JP2018133019A (en) Information processing system, information processing method, and program
CN109327691B (en) Image shooting method and device, storage medium and mobile terminal
CN110442521B (en) Control unit detection method and device
CN108665510B (en) Rendering method and device of continuous shooting image, storage medium and terminal
CN112308797A (en) Corner detection method and device, electronic equipment and readable storage medium
CN111754386A (en) Image area shielding method, device, equipment and storage medium
CN112749590B (en) Object detection method, device, computer equipment and computer readable storage medium
CN111199169A (en) Image processing method and device
CN109302563B (en) Anti-shake processing method and device, storage medium and mobile terminal
CN115497082A (en) Method, apparatus and storage medium for determining subtitles in video
EP2888716B1 (en) Target object angle determination using multiple cameras
CN109218620B (en) Photographing method and device based on ambient brightness, storage medium and mobile terminal
CN111325701A (en) Image processing method, device and storage medium
CN104933688B (en) Data processing method and electronic equipment
CN108960213A (en) Method for tracking target, device, storage medium and terminal
CN109040604B (en) Shot image processing method and device, storage medium and mobile terminal
CN113642493B (en) Gesture recognition method, device, equipment and medium
CN108540726B (en) Method and device for processing continuous shooting image, storage medium and terminal
CN108647097B (en) Text image processing method and device, storage medium and terminal
CN112489006A (en) Image processing method, image processing device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant