CN109040604B - Shot image processing method and device, storage medium and mobile terminal

Publication number
CN109040604B (application CN201811238344.0A)
Authority
CN
China
Prior art keywords
moving, image, interferent, shooting, interfering object
Legal status
Active
Application number
CN201811238344.0A
Other languages
Chinese (zh)
Other versions
CN109040604A (en)
Inventor
王宇鹭
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811238344.0A priority Critical patent/CN109040604B/en
Publication of CN109040604A publication Critical patent/CN109040604A/en
Application granted granted Critical
Publication of CN109040604B publication Critical patent/CN109040604B/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of this application disclose a shot image processing method and apparatus, a storage medium and a mobile terminal. The method includes the following steps: first, if a moving interfering object is detected in the preview picture, continuously capturing at least two image frames; then, determining movement information of the moving interfering object in the at least two image frames; and finally, determining, according to the movement information of the moving interfering object, whether the moving interfering object is retained in the generated target shot image. This prevents the moving interfering object from degrading the sharpness of the shot image and improves shooting quality.

Description

Shot image processing method and device, storage medium and mobile terminal
Technical Field
The embodiment of the application relates to the technical field of mobile terminals, in particular to a shot image processing method and device, a storage medium and a mobile terminal.
Background
At present, the photographing function becomes a standard configuration of most mobile terminals, and a terminal user can easily and quickly realize photographing operation through a portable mobile terminal.
When a terminal user uses the mobile terminal to take a picture, if an interfering object suddenly breaks into the lens, an artifact corresponding to the interfering object appears in the taken picture, which causes the taken picture to be unclear, and therefore, the image preprocessing function of the mobile terminal still needs to be improved.
Disclosure of Invention
The embodiment of the application provides a processing method and device for shot images, a storage medium and a mobile terminal, which can improve the shooting quality.
In a first aspect, an embodiment of the present application provides a captured image processing method, including:
if the mobile interferent is detected to exist in the preview picture, continuously shooting at least two image frames;
determining movement information of a moving interferent in the at least two image frames;
and determining whether the mobile interferent is reserved in the generated target shooting image or not according to the movement information of the mobile interferent.
In a second aspect, an embodiment of the present application provides a processing apparatus for capturing an image, including:
the image shooting module is used for continuously shooting at least two image frames if the mobile interferent is detected to exist in the preview picture;
the information determining module is used for determining the movement information of the moving interferent in at least two image frames continuously shot by the image shooting module;
and the reservation determining module is used for determining whether the mobile interferent is reserved in the generated target shooting image or not according to the moving speed and/or the moving track information of the mobile interferent determined by the information determining module.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the processing method of the captured image according to the present application.
In a fourth aspect, an embodiment of the present application provides a mobile terminal, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the method for processing a captured image according to the embodiment of the present application.
According to the processing scheme for shooting the image, firstly, if a mobile interference object is detected to exist in a preview picture, at least two image frames are continuously shot; then, determining the movement information of the moving interferent in the at least two image frames; and finally, determining whether the moving interfering object is reserved in the generated target shooting image or not according to the moving information of the moving interfering object. The influence of the moving interference object on the definition of the shot image can be avoided, and the shooting quality is improved.
Drawings
Fig. 1 is a schematic flowchart of a processing method for shooting an image according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another captured image processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another captured image processing method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another captured image processing method according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of another captured image processing method according to an embodiment of the present disclosure;
fig. 6 is a block diagram of a processing apparatus for capturing an image according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
The technical scheme of the application is further explained by the specific implementation mode in combination with the attached drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
At present, the photographing function becomes a standard configuration of most mobile terminals, and a terminal user can easily and quickly realize photographing operation through a portable mobile terminal. However, when a terminal user uses the mobile terminal to take a picture, if an interfering object suddenly breaks into the lens, an artifact corresponding to the interfering object may appear in the taken picture, which causes the taken picture to be unclear.
The embodiment of the application provides a method for processing a shot image, which can continuously shoot a plurality of image frames to determine the movement information of a moving interfering object when the moving interfering object exists in a preview image, so as to judge whether the moving interfering object is reserved in a generated target shot image, further avoid the influence of the moving interfering object on the definition of the shot image, and improve the shooting quality. The specific scheme is as follows:
Fig. 1 is a schematic flowchart of a shot image processing method according to an embodiment of the present disclosure. The method is suitable for situations where a moving interfering object appears in the shooting field of view during shooting, in particular when photographing in a portrait mode while a moving interfering object is present. The method can be executed by a mobile terminal with a photographing function, such as a smart phone, a tablet computer, or a wearable device (a smart watch or smart glasses), and specifically includes the following steps:
step 101, if a moving interfering object is detected to exist in a preview picture, continuously shooting at least two image frames.
The preview screen may be a screen displayed on a display screen of the mobile terminal after the shooting function is started. The picture can be an image which is displayed in real time by the camera after the shooting function is started. The mobile interferent may be a non-shooting target in the preview image and an object in a moving state, for example, a terminal user shoots in a portrait mode, the portrait mode may locate a plurality of faces, but some faces are not a shooting target required by the user, such as a face suddenly rushing into a shooting range, and at this time, the face rushing into the shooting range is the mobile interferent.
Optionally, whether a moving interfering object exists in the preview picture may be detected in real time or at preset time intervals (e.g., every 1 second) after the shooting function of the mobile terminal is started, with the detection result updated in real time; alternatively, the detection may start when the terminal user triggers a photo capture (for example, by tapping the shutter button, by voice, or by a gesture or expression).
Optionally, a moving interfering object is determined to exist in the preview picture if the captured content changes between at least two consecutive image frames or periodically extracted image frames (e.g., image frames extracted every 1 second). Specifically, at least two adjacent image frames or periodically extracted image frames may be compared: when the shot content between the image frames shows at least one of an addition, a removal, a displacement, or blur, a moving interfering object exists in the preview picture. Alternatively, a difference operation may be performed between adjacent image frames or periodically extracted image frames, and if the result exceeds a preset threshold, a moving interfering object exists. If there is no moving interfering object, the content of adjacent or periodically extracted image frames remains essentially unchanged, so the difference result is small; when a moving interfering object appears, the content captured by the camera changes, and the difference result becomes large.
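A minimal sketch of the frame-difference check described above, assuming OpenCV and NumPy are available and using a placeholder threshold value not taken from the disclosure:

```python
import cv2
import numpy as np

def has_moving_interferent(prev_frame, curr_frame, diff_threshold=12.0):
    """Return True if the mean absolute difference between two frames
    exceeds a (hypothetical) threshold, suggesting a moving interfering
    object entered the preview picture."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    # Difference operation between adjacent (or periodically extracted) frames
    diff = cv2.absdiff(prev_gray, curr_gray)
    return float(np.mean(diff)) > diff_threshold
```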
For example, if a moving interfering object is detected to exist in the preview screen, the continuously capturing the at least two image frames may be performed by starting a multi-frame continuous capturing mode and continuously capturing the at least two image frames if a moving interfering object is detected to exist in the preview screen. Optionally, the shooting frequency when continuously shooting the at least two image frames may be a sampling frequency of a camera configured by the mobile terminal, or may be preset according to a requirement, which is not limited in this application.
Step 102, determining movement information of moving interferent in at least two image frames.
The moving information of the moving interfering object may be related information when the moving interfering object moves, and at least includes moving speed information and/or moving trajectory information, and optionally, the moving speed information may further include moving speed of adjacent frames, average moving speed, or moving speed variation trend, and the like. The movement track information may further include: the length of the movement trajectory, the range of the movement trajectory or the extending direction of the movement trajectory, and the like.
Optionally, in this embodiment of the present application, determining the movement information of the moving interferent in the at least two image frames may be determining the movement information of the moving interferent in the at least two image frames according to a time-domain correlation of the image frames, where the time-domain correlation of the image frames may be a time correlation between adjacent image frames that are continuously captured and a position correlation between feature pixel points between image frames. For example, when at least two image frames are continuously shot, the shooting frequency is 10 image frames shot in one second, the corresponding time of two adjacent image frames differs by one tenth of a second, and the moving interfering object moves in the one tenth of a second, so that the positions of the moving interfering object in each two adjacent image frames have a certain change relationship, and the moving interfering object in at least two image frames can be tracked, so that the moving track information of the moving interfering object is fitted; and determining the moving speed information of the moving interferent according to the moving displacement in the fitted moving track information of the moving interferent and the interval time between the image frames.
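As a rough illustration of deriving movement information from the time-domain correlation of continuously captured frames, the sketch below assumes the per-frame center coordinates of the moving interfering object are already available and that the capture rate is 10 frames per second (the example value used above); all names are hypothetical:

```python
import numpy as np

def movement_info(centers, fps=10.0):
    """Given the (x, y) center of the moving interfering object in each of
    the continuously captured frames, derive simple movement information:
    the fitted trajectory points, the accumulated track length, and the
    instantaneous and average moving speeds (in pixels per second)."""
    pts = np.asarray(centers, dtype=float)          # shape (N, 2)
    dt = 1.0 / fps                                  # interval between adjacent frames
    displacements = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    speeds = displacements / dt                     # speed for each pair of frames
    return {
        "trajectory": pts,
        "track_length": float(displacements.sum()),
        "speeds": speeds,
        "average_speed": float(speeds.mean()) if len(speeds) else 0.0,
    }
```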
And 103, determining whether the mobile interferent is remained in the generated target shooting image or not according to the movement information of the mobile interferent.
Optionally, in this embodiment of the application, when determining whether to retain the moving interfering object in the generated target captured image according to its movement information, a determination rule may be set in advance based on the movement information. When setting the determination rule, a single piece of movement information may be used, for example, whether the moving speed is greater than a preset speed threshold; or multiple pieces of movement information may be combined, for example, the moving speed of the moving interfering object is greater than a preset threshold and the moving speed shows no decreasing trend. When determining whether the moving interfering object is retained, a single determination rule may be selected, or multiple determination rules may be combined.
Optionally, when determining whether to retain the moving interfering object in the generated target captured image, the determination may be made according to the moving speed and/or the variation trend of the moving speed of the moving interfering object, and/or according to the length and/or trend of its movement track. For example, it may be determined that the moving interfering object is not retained in the generated target captured image when its moving speed is greater than a preset speed threshold and/or the moving speed shows no decreasing trend, or when the length of its movement track is greater than a preset length threshold and/or the movement track extends toward the edge of the image; and it may be determined that the moving interfering object is retained when neither of the above conditions is satisfied. Selecting determination rules that correspond to multidimensional movement information of the interfering object, and combinations thereof, improves the accuracy of deciding whether the moving interfering object is retained in the target captured image and thus improves the shooting quality.
The moving speed of the moving interfering object may be the average of the per-frame moving speeds, or the instantaneous moving speed corresponding to each pair of image frames. When judging whether the moving speed is greater than the preset speed threshold, it may be judged whether the average moving speed exceeds the threshold, or whether any of the instantaneous moving speeds corresponding to each pair of image frames exceeds it. This application is not limited thereto.
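Building on the movement information sketched earlier, one hypothetical way to encode such a determination rule is shown below; the thresholds are placeholders, not values from the disclosure:

```python
def should_discard_interferent(info, speed_threshold=300.0, length_threshold=150.0):
    """One possible decision rule assembled from the movement information
    above: discard the moving interfering object from the target image if
    it moves fast without slowing down, or sweeps a long trajectory."""
    speeds = info["speeds"]
    speed_not_decreasing = len(speeds) >= 2 and speeds[-1] >= speeds[0]
    too_fast = info["average_speed"] > speed_threshold
    too_long = info["track_length"] > length_threshold
    return (too_fast and speed_not_decreasing) or too_long
```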
Optionally, the moving interferent is retained in the generated target captured image, and may be that each image frame including the moving interferent is subjected to fusion processing, an artifact of the moving interferent is removed, and a final clear target captured image including the moving interferent is generated. The moving interferent is not retained in the generated target captured image, and the moving interferent in each image frame may be removed first, and then the image frames from which the moving interferent is removed are subjected to fusion processing, so as to generate a final clear target captured image without the moving interferent.
According to the shot image processing method provided in this embodiment, first, if a moving interfering object is detected in the preview picture, at least two image frames are continuously captured; then, the movement information of the moving interfering object in the at least two image frames is determined; and finally, whether the moving interfering object is retained in the generated target captured image is determined according to that movement information. In the prior art, if a moving interfering object appears in the shooting field of view during shooting, the captured image is blurred and of poor quality. In this embodiment, when a moving interfering object exists in the preview picture, multiple image frames are continuously captured to determine the movement information of the moving interfering object and thereby decide whether to retain it in the generated target captured image, which prevents the moving interfering object from degrading the sharpness of the captured image and improves the shooting quality.
Fig. 2 is a schematic flow chart of another captured image processing method provided in an embodiment of the present application, and as a further description of the foregoing embodiment, the method includes the following steps:
step 201, if the moving interfering object exists in the preview picture, continuously shooting at least two image frames.
Step 202, detecting the area of the moving interferent in at least two image frames.
The area where the moving interfering object is located may be a corresponding area of the moving interfering object in each image frame.
For example, in the embodiments of the present application, there are many methods for detecting the area where the moving interfering object is located in at least two image frames, and the present application does not limit this. For example, when a moving interfering object is detected to exist in the preview picture, the feature point of the moving interfering object is extracted, when the area where the moving interfering object exists in each image frame is detected, the moving interfering object with the feature point in each image frame is searched, the outline of the moving interfering object is extracted, and the area where the outline of the moving interfering object exists in the image is taken as the area where the moving interfering object exists in the image. Or determining the area where each object is located in the content of each image frame, extracting the feature points of each object, comparing the area where each object is located and the feature points corresponding to two adjacent image frames, detecting whether there is an object whose feature points are the same but the area where the object is located changes, if so, taking the object as a moving interfering object, and taking the area where the object is located as the area where the moving interfering object is located in the image frame. Optionally, in order to prevent the misjudgment of the area change caused by the calculation error, it may be determined that the area where the photographic subject is located changes when the position change of the area where the photographic subject is located is greater than a preset change threshold (for example, the moving pixels exceed 10 pixels).
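A hedged sketch of locating the area where the moving interfering object sits in a frame, using the frame-difference-plus-contour approach mentioned above (OpenCV assumed; the threshold and minimum-area filter are illustrative assumptions):

```python
import cv2

def interferent_region(prev_frame, curr_frame, min_area=500):
    """Locate the region occupied by the moving interfering object by
    thresholding the frame difference and taking the largest contour.
    Returns a bounding box (x, y, w, h), or None if nothing moved."""
    diff = cv2.absdiff(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) > min_area]
    if not contours:
        return None
    # Bounding box of the largest moving contour
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```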
And step 203, calculating the movement information of the mobile interferent according to the shooting parameters and the area where the mobile interferent is located in each image frame.
The shooting parameters may be setting parameters corresponding to a camera configured on the mobile terminal when continuously shooting images, and may include: sampling frequency, resolution, and shooting field size of the camera, and the like. When the movement information of the moving interfering object is calculated, different one or more shooting parameters can be selected for calculation according to different specific contents of the movement information.
Optionally, when the movement track information of the moving interfering object is calculated according to the shooting parameters and the area where the moving interfering object is located in each image frame, fitting may be performed on the area position coordinates corresponding to the moving interfering object in each image frame, so as to determine its movement track. When calculating the length of the movement track, the track length corresponding to each pair of image frames may be calculated separately and then summed. For example, if the center of the area where the moving interfering object is located is (a, b) in the first captured image frame and (c, d) in the second image frame, the track length s between the two frames can be calculated as s = √((a - c)² + (b - d)²). When the direction of the movement track is calculated, the angle formed between the movement track and a given edge of the image can be calculated. It should be noted that the track length calculated by the above method is a pixel distance; to improve the accuracy of determining whether to retain the moving interfering object, the calculated pixel distance may be converted into an actual track length according to the proportional relationship between the shooting field of view and the camera resolution during shooting. For example, if that proportional relationship is m/n and the calculated pixel track length is s, the actual track length is L = s × m/n.
Optionally, when the moving speed information of the interfering object is calculated according to the shooting parameters and the area where the moving interfering object is located in each image frame, the time interval between two adjacent image frames can be determined according to the sampling frequency of the camera; therefore, the moving speed of the moving interferent can be determined according to the moving track length corresponding to the moving interferent in the two adjacent image frames and the time interval between the two adjacent image frames. Optionally, the average speed value and the change trend of the speed corresponding to the moving interfering object may also be determined by calculating the speed value corresponding to each two image frames.
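The track-length and speed calculation described above could be sketched as follows, using the per-pair distance s = √((a - c)² + (b - d)²), the field-of-view/resolution ratio m/n, and the camera sampling frequency; all parameter values are illustrative:

```python
import math

def track_length_and_speed(centers, fps=10.0, field_per_pixel=1.0):
    """Sum the per-pair Euclidean distances to get the pixel track length,
    convert it to an actual length with the field-of-view/resolution ratio
    (m/n in the text), and derive the average speed from the sampling
    frequency (interval between adjacent frames is 1/fps)."""
    pixel_length = 0.0
    for (a, b), (c, d) in zip(centers[:-1], centers[1:]):
        pixel_length += math.hypot(a - c, b - d)
    actual_length = pixel_length * field_per_pixel    # L = s * m/n
    total_time = (len(centers) - 1) / fps
    return actual_length, (actual_length / total_time if total_time > 0 else 0.0)
```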
And step 204, determining whether the mobile interferent is remained in the generated target shooting image or not according to the movement information of the mobile interferent.
According to the processing method for the shot images, after a plurality of images are continuously shot, the area of the moving interferent in each image frame is detected, and the moving information of the moving interferent is calculated by combining the current shooting parameters so as to judge whether the moving interferent is reserved in the generated target shot image, so that the accuracy of calculating the moving information of the moving interferent is improved, guarantee is provided for accurately judging whether the moving interferent is reserved in the generated target shot image, and further the shooting quality is improved.
Fig. 3 is a schematic flowchart of another processing method for captured images according to an embodiment of the present application, and as a preferred example of the foregoing embodiments, the method is suitable for processing captured images when capturing a multi-person group image in a person image capturing mode. The method comprises the following steps:
step 301, start.
Step 302, if the mobile interferent is detected to exist in the preview picture, judging whether the mobile interferent is the target interferent; if yes, go to step 303; if not, go to step 307.
The target interferent may be classified according to the type of interferent, for example, human interferent, animal interferent, vehicle interferent, and the like. The specific type of the target interfering object may be determined according to different photographing modes.
Optionally, since this embodiment mainly applies to the portrait shooting mode, which can locate the faces of multiple people at the same time during group shooting, not all of those people are necessarily the people who should appear in the group photo. The main purpose of this embodiment is to handle people who move during shooting and determine whether to retain the moving person in the finally generated target captured image, so the target interferent in this embodiment may be a human interferent. Preferably, the human interferent may specifically be a human face. It should be noted that the type of the target interferent may be changed according to the actual shooting mode and shooting requirements; for example, if the shooting target is an animal, the target interferent may be determined to be an animal interferent.
For example, when determining whether the moving interfering object is the target interfering object, the feature of the target interfering object may be extracted in advance, and it may be determined whether the moving interfering object in the detection screen has the feature, if yes, it is determined that the moving interfering object is the target interfering object, step 303 is executed, and if not, it is determined that the moving interfering object is not the target interfering object, step 307 is executed. For example, if the target interfering object is a human object, extracting facial features in advance, determining whether the mobile interfering object in the detected image has facial features, if so, indicating that the interfering object is a human being, step 303 may be executed to further determine whether the human being needs to be retained, and if not, indicating that the interfering object is a non-human being, and step 307 may be executed to directly not retain the mobile interfering object in the generated target captured image.
In this embodiment of the application, if the moving interfering object is not the target interferent, whether it remains in the generated target captured image may also be determined according to other rules; execution is not limited to step 307, in which the moving interfering object is not retained in the target captured image.
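One possible (assumed, not prescribed by the disclosure) way to check whether a moving interfering object is a human target interferent is to look for facial features in its region, for example with an OpenCV face detector:

```python
import cv2

# Haar cascade shipped with OpenCV; used only to illustrate checking whether
# the moving interfering object carries human-face features.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_target_interferent(frame, region):
    """Return True if the candidate region (x, y, w, h) of the moving
    interfering object contains a face, i.e. it is a human (target)
    interferent in the portrait-mode example above."""
    x, y, w, h = region
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```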
Step 303, continuously shooting at least two image frames if the moving interfering object is the target interfering object.
Step 304, determining the movement information of the moving interferent in at least two image frames.
Step 305, judging whether the length of the movement track of the mobile interference object is greater than a preset length threshold value or whether the movement speed is greater than a preset speed threshold value; if yes, go to step 307, otherwise go to step 306.
Illustratively, during group photography it is unavoidable that some people sway slightly or move a little, but under normal circumstances a group-photo subject moves slowly and over a small trajectory, whereas an interferent that suddenly intrudes into the lens moves relatively fast and over a large trajectory. Therefore, whether the length of the movement track of the moving interfering object is greater than a preset length threshold, or whether its moving speed is greater than a preset speed threshold, can be used for the judgment. If so, the moving interfering object is not one of the group-photo subjects but an interferent that suddenly intruded into the lens, and step 307 is executed so that it is not retained in the generated target captured image; otherwise, step 306 is executed so that it is retained in the generated target captured image.
Step 306, the moving interfering object is retained in the generated target captured image.
Step 307, the moving interfering object is not retained in the generated target captured image.
According to the processing method for the shot images, when the shot images are the group photo images and the moving interferent is the target interferent, if the length of the moving track is larger than the preset length threshold or the moving speed is larger than the preset speed threshold, the moving interferent is not reserved in the generated target shot images, people moving in a small amplitude during group photo can be reserved, the interference of non-group photo people moving in a large amplitude is removed, and the shooting quality is improved.
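A minimal sketch of the group-photo rule from this embodiment, with placeholder thresholds:

```python
def retain_in_group_photo(track_length, average_speed,
                          length_threshold=120.0, speed_threshold=250.0):
    """Group-photo rule from the flow above: a subject that only sways
    slightly is retained, while an interferent whose trajectory length or
    speed exceeds the (placeholder) thresholds is removed."""
    if track_length > length_threshold or average_speed > speed_threshold:
        return False   # step 307: do not retain
    return True        # step 306: retain
```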
Fig. 4 is a schematic flowchart of another shot image processing method according to an embodiment of the present disclosure. As a preferred example of the foregoing embodiments, the method is suitable for the case where, in portrait mode, the moving interfering object is another shooting target that runs up to the original shooting target during shooting in order to join the photo. The method includes the following steps:
step 401, start.
Step 402, if a mobile interference object is detected to exist in the preview picture, judging whether the mobile interference object is a target interference object; if yes, go to step 403; if not, go to step 407.
And step 403, continuously shooting at least two image frames if the moving interfering object is the target interfering object.
Step 404, determining movement information of the moving interferent in at least two image frames.
Step 405, determining whether the moving speed of the moving interfering object is in a decreasing trend, and whether the moving track of the moving interfering object extends towards the shooting target direction, if so, executing step 406, and if not, executing step 407.
For example, this embodiment is suitable for the case where the moving interfering object is not an object that entered the lens by mistake, but another person who wants to join the photo; for example, user B outside the lens wants to be photographed together with user A and quickly runs to user A's side. A user running into the shot typically has two features: (1) the moving speed after entering the lens gradually decreases, dropping to zero upon reaching the original shooting target; (2) the movement track points toward the original shooting target and finally stops beside it. Therefore, based on these two features, when the moving speed of the moving interfering object is decreasing and its movement track extends toward the shooting target, it is determined that the moving interfering object did not enter the lens by mistake, step 406 is executed, and the moving interfering object is retained in the generated target captured image; otherwise, the moving interfering object is determined to have entered the lens by mistake, and step 407 is executed so that it is not retained in the generated target captured image.
Optionally, when it is determined whether the moving track of the moving interfering object extends toward the shooting target direction, it may be determined whether an included angle between the moving track of the moving interfering object and a straight line from the moving starting point to the shooting target is gradually reduced, and whether a distance from the moving interfering object to the shooting target is gradually reduced, and if both are satisfied, it is determined that the moving track of the moving interfering object extends toward the shooting target direction.
Step 406, the moving interfering object is retained in the generated target captured image.
Step 407, the moving interfering object is not retained in the generated target captured image.
According to the processing method for the shot image, when the shot image is a human object image and the moving interfering object is the target interfering object, if the moving speed is in a decreasing trend and the moving track extends towards the shooting target direction, the moving interfering object is reserved in the generated target shot image, the situation that another shooting target running beside the original shooting target in the lens is mistakenly deleted can be avoided, and the shooting quality is improved.
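A minimal sketch of the portrait-mode rule from this embodiment; for brevity it checks only the decreasing-speed and decreasing-distance criteria, and all inputs are hypothetical:

```python
import math

def retain_running_person(centers, speeds, target):
    """Retain the moving person if their speed decreases frame by frame and
    their distance to the original shooting target also shrinks, i.e. the
    trajectory extends toward the shooting target."""
    speed_decreasing = all(s2 <= s1 for s1, s2 in zip(speeds[:-1], speeds[1:]))
    tx, ty = target
    dists = [math.hypot(tx - x, ty - y) for x, y in centers]
    distance_decreasing = all(d2 <= d1 for d1, d2 in zip(dists[:-1], dists[1:]))
    return speed_decreasing and distance_decreasing
```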
Fig. 5 is a schematic flowchart of another captured image processing method provided in an embodiment of the present application, and as a further description of the foregoing embodiment, the method includes the following steps:
step 501, if a moving interfering object is detected in the preview picture, continuously shooting at least two image frames.
Step 502, determining movement information of moving interferers in at least two image frames.
Step 503, determining whether the mobile interferent is reserved in the generated target shooting image according to the movement information of the mobile interferent, if so, executing step 504, and if not, executing step 505.
And step 504, if the mobile interferent is reserved in the generated target shooting image, fusing at least two image frames containing the mobile interferent according to the time domain relevance of the image frames to generate the target shooting image.
For example, if the moving interfering object is retained in the generated target captured image, the image frames containing it need to be fused. A specific fusion process may determine the correlation between the positions of the area where the moving interfering object is located in each image frame according to the time-domain correlation between the frames: first, the images of the areas where the moving interfering object is located in each frame are fused and its artifact is removed, yielding a clear image of the interferent; then the position of the fused moving interfering object in the captured image is decided in combination with the specific shooting scene. For example, if the moving interfering object is a group-photo subject who sways slightly, the areas where it is located in the image frames may be averaged and the fused moving interfering object placed in that averaged area. If the moving interfering object is another shooting target that ran into the lens to join the photo, the fused moving interfering object may be placed in the area where it is located in the last image frame.
Optionally, when at least two image frames containing the moving interferent are fused, the at least two processed image frames may be fused based on a pixel level fusion algorithm and/or a feature level fusion algorithm to generate a target captured image.
Pixel-level fusion, also called data-level fusion, directly processes the acquired image data containing the moving interfering object to obtain a fused image. Specifically, pixel-level fusion algorithms may include spatial-domain algorithms and transform-domain algorithms; spatial-domain algorithms further include several fusion rules, such as logic filtering, gray-scale weighted averaging, and contrast modulation, while transform-domain algorithms include pyramid-decomposition fusion, wavelet transform, and the like. The advantage of pixel-level fusion is that detailed information in the image, such as edges and textures, can be restored as much as possible: the pixel information of the moving interfering object can be accurately recovered, so that the blurred moving interfering object is restored, while the image information outside the moving interfering object is preserved, ensuring the sharpness of the fused image both inside and outside the area where the moving interfering object is located.
The feature level image fusion algorithm can extract feature information from the collected images containing the moving interferent, wherein the feature information is feature information of the area where the moving interferent is located, and then the feature information is analyzed, processed and integrated to obtain fused image features. The method has the advantages that the definition of the fused mobile interferent is higher, the feature level fusion compresses image information, the image information is analyzed and processed by a computer, the consumed memory and time are relatively small, and the real-time performance of the shot image processing is improved.
Optionally, when at least two image frames containing moving interferents are fused, an image in which an unclear moving interferent containing an artifact is located may be removed from the at least two image frames, and then the image containing a clear moving interferent is fused based on the fusion method.
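As one illustrative (assumed) pixel-level fusion step, a per-pixel median over the burst frames suppresses the transient artifact of a moving interfering object while preserving static content; a real pipeline would also align and weight the frames:

```python
import numpy as np

def fuse_frames_pixel_level(frames):
    """Minimal pixel-level fusion sketch: take the per-pixel median over the
    continuously captured frames so that content present in only a few
    frames (the interferent's artifact) is suppressed."""
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    fused = np.median(stack, axis=0)
    return fused.astype(np.uint8)
```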
And 505, if the mobile interferent is not reserved in the generated target shooting image, removing the mobile interferent in at least two image frames according to the time domain relevance of the image frames to obtain a preprocessed image set.
For example, the pre-processed image set may be a set of image frames from which moving interferents are removed. According to the time domain correlation of the image frames, when the moving interferent in at least two image frames is removed, the original shooting content corresponding to the area where the moving interferent is located in each image frame may be determined according to the time domain correlation between the image frames, and the original shooting content is substituted for the area where the moving interferent is located in the image frame, for example, the position of the area where the moving interferent is located in the first image frame is S1, the content in the corresponding S1 area in any other captured image except the first image frame may be selected as the original shooting content corresponding to the moving interferent in the first image frame, and at this time, the content in the corresponding S1 area in any other captured image except the first image frame may be substituted for the area where the moving interferent is located in the first image frame.
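A rough sketch of the removal step described above, replacing the region occupied by the moving interfering object in each frame with the same region taken from a frame where that area shows only the original scene; the clean-frame index and region list are assumptions for illustration:

```python
def remove_interferent(frames, regions, clean_index):
    """For each frame, overwrite the region (x, y, w, h) where the moving
    interfering object sits with the same region from a frame in which that
    area contains only the original shooting content, producing the
    pre-processed image set."""
    patched = []
    clean = frames[clean_index]
    for frame, (x, y, w, h) in zip(frames, regions):
        out = frame.copy()
        out[y:y + h, x:x + w] = clean[y:y + h, x:x + w]
        patched.append(out)
    return patched
```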
And step 506, fusing the preprocessed image set to generate a target shooting image.
For example, when the preprocessed image set is fused, at least two processed image frames may be fused based on a pixel-level fusion algorithm and/or a feature-level fusion algorithm to generate a target captured image.
According to the processing method for the shot image, provided by the embodiment of the application, according to the movement information of the mobile interference object, when the mobile interference object is determined to be reserved, the image frames containing the mobile interference object are fused to generate the target shot image, and when the mobile interference object is determined not to be reserved, the image frames are fused to generate the target shot image after the mobile interference object is removed. The mobile interferent in the shooting process can be automatically analyzed and processed, and clear high-quality shooting images can be generated no matter whether the interferent is reserved or not.
Fig. 6 is a block diagram of a processing apparatus for capturing images, which may be implemented by software and/or hardware, and is generally integrated in a mobile terminal having a photographing function, and may execute the processing method for capturing images according to the foregoing embodiments. As shown in fig. 6, the apparatus includes: an image capturing module 601, an information determination module 602, and a reservation determination module 603.
An image capturing module 601, configured to continuously capture at least two image frames if a moving interfering object is detected to be present in the preview image;
an information determining module 602, configured to determine movement information of a moving interfering object in at least two image frames continuously captured by the image capturing module 601;
a retention determination module 603, configured to determine whether to retain the moving interfering object in the generated target captured image according to the movement information of the moving interfering object determined by the information determination module 602.
Further, the information determining module 602 is configured to:
detecting a region where a moving interferent is located in the at least two image frames;
and calculating the movement information of the moving interferent according to the shooting parameters and the area where the moving interferent is located in each image frame.
Further, the image capturing module 601 is configured to:
if the preview picture is detected to have the mobile interferent, judging whether the mobile interferent is a target interferent;
and if the mobile interference object is the target interference object, continuously shooting at least two image frames.
Further, the reservation determination module 603 is configured to:
if the shot image is a group photo image, when the length of the movement track of the movement interfering object is greater than a preset length threshold value or the movement speed is greater than a preset speed threshold value, the movement interfering object is not reserved in the generated target shot image.
Further, the reservation determination module 603 is configured to:
if the shooting mode is the portrait mode, when the moving speed of the moving interfering object is in a decreasing trend and the moving track of the moving interfering object extends towards the shooting target direction, the moving interfering object is kept in the generated target shooting image.
Further, the apparatus further includes an image fusion module, configured to, after determining whether the mobile interferent is retained in the generated target captured image, remove the mobile interferent in the at least two image frames according to a time domain correlation of the image frames to obtain a pre-processed image set if the mobile interferent is not retained in the generated target captured image; and fusing the preprocessed image set to generate a target shooting image.
Further, the image fusion module is further configured to, after determining whether the movement interferent is retained in the generated target captured image, fuse at least two image frames including the movement interferent according to the time domain correlation of the image frames to generate the target captured image if the movement interferent is retained in the generated target captured image.
According to the shot image processing apparatus provided in this embodiment, first, if the image capturing module 601 detects that a moving interfering object exists in the preview picture, at least two image frames are continuously captured; then, the information determining module 602 determines the movement information of the moving interfering object in the continuously captured frames; and finally, the retention determining module 603 determines, according to that movement information, whether the moving interfering object is retained in the generated target captured image. In the prior art, if a moving interfering object appears in the shooting field of view during shooting, the captured image is blurred and of poor quality. In this embodiment, when a moving interfering object exists in the preview picture, multiple image frames are continuously captured to determine its movement information and thereby decide whether to retain it in the generated target captured image, which prevents the moving interfering object from degrading the sharpness of the captured image and improves the shooting quality.
The device can execute the methods provided by all the embodiments of the application, and has corresponding functional modules and beneficial effects for executing the methods. For details of the technology not described in detail in this embodiment, reference may be made to the methods provided in all the foregoing embodiments of the present application.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal may include: a housing (not shown), a memory 701, a Central Processing Unit (CPU) 702 (also called a processor, hereinafter referred to as CPU), a computer program stored in the memory 701 and operable on the processor 702, a circuit board (not shown), and a power circuit (not shown). The circuit board is arranged in a space enclosed by the shell; the CPU702 and the memory 701 are provided on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the terminal; the memory 701 is used for storing executable program codes; the CPU702 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 701.
The terminal further comprises: peripheral interfaces 703, RF (Radio Frequency) circuitry 705, audio circuitry 706, speakers 711, power management chip 708, input/output (I/O) subsystems 709, touch screen 712, other input/control devices 710, and external port 704, which communicate over one or more communication buses or signal lines 707.
It should be understood that the illustrated terminal device 700 is merely one example of a terminal, and that the terminal device 700 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The following describes in detail a terminal device provided in this embodiment, where the terminal device is a smart phone as an example.
A memory 701, the memory 701 being accessible by the CPU702, the peripheral interface 703, and the like, the memory 701 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other volatile solid state storage devices.
A peripheral interface 703, said peripheral interface 703 may connect input and output peripherals of the device to the CPU702 and the memory 701.
An I/O subsystem 709, which I/O subsystem 709 may connect input and output peripherals on the device, such as a touch screen 712 and other input/control devices 710, to the peripheral interface 703. The I/O subsystem 709 may include a display controller 7091 and one or more input controllers 7092 for controlling other input/control devices 710. Where one or more input controllers 7092 receive electrical signals from or transmit electrical signals to other input/control devices 710, the other input/control devices 710 may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels. It is worth noting that the input controller 7092 may be connected to any one of the following: a keyboard, an infrared port, a USB interface, and a pointing device such as a mouse.
The touch screen 712 may be a resistive type, a capacitive type, an infrared type, or a surface acoustic wave type, according to the operation principle of the touch screen and the classification of a medium for transmitting information. Classified by the installation method, the touch screen 712 may be: external hanging, internal or integral. Classified according to technical principles, the touch screen 712 may be: a vector pressure sensing technology touch screen, a resistive technology touch screen, a capacitive technology touch screen, an infrared technology touch screen, or a surface acoustic wave technology touch screen.
A touch screen 712, the touch screen 712 being an input interface and an output interface between the user terminal and the user, displaying visual output to the user, which may include graphics, text, icons, video, and the like. Optionally, the touch screen 712 sends an electrical signal (e.g., an electrical signal of the touch surface) triggered by the user on the touch screen to the processor 702.
The display controller 7091 in the I/O subsystem 709 receives electrical signals from the touch screen 712 or transmits electrical signals to the touch screen 712. The touch screen 712 detects a contact on the touch screen, and the display controller 7091 converts the detected contact into an interaction with a user interface object displayed on the touch screen 712, i.e., implements a human-computer interaction, and the user interface object displayed on the touch screen 712 may be an icon for running a game, an icon networked to a corresponding network, or the like. It is worth mentioning that the device may also comprise a light mouse, which is a touch sensitive surface that does not show visual output, or an extension of the touch sensitive surface formed by the touch screen.
The RF circuit 705 is mainly used to establish communication between the terminal and a wireless network (i.e., the network side) and to transmit and receive data between the terminal and the wireless network, such as sending and receiving short messages and e-mails.
The audio circuit 706 is mainly used to receive audio data from the peripheral interface 703, convert the audio data into an electric signal, and transmit the electric signal to the speaker 711.
The speaker 711 is used to restore the voice signal received by the terminal from the wireless network through the RF circuit 705 into sound and play it to the user.
And a power management chip 708 for supplying power and managing power to the hardware connected to the CPU702, the I/O subsystem, and the peripheral interface.
In this embodiment, the central processor 702 is configured to:
if the mobile interferent is detected to exist in the preview picture, continuously shooting at least two image frames;
determining movement information of a moving interferent in the at least two image frames;
and determining whether the mobile interferent is reserved in the generated target shooting image or not according to the movement information of the mobile interferent.
Further, the determining the movement information of the moving interferent in the at least two image frames includes:
detecting a region where a moving interferent is located in the at least two image frames;
and calculating the movement information of the moving interferent according to the shooting parameters and the area where the moving interferent is located in each image frame.
Further, if it is detected that a moving interfering object exists in the preview screen, continuously capturing at least two image frames includes:
if the preview picture is detected to have the mobile interferent, judging whether the mobile interferent is a target interferent;
and if the mobile interference object is the target interference object, continuously shooting at least two image frames.
Further, the determining whether to retain the moving interfering object in the generated target captured image according to the movement information of the moving interfering object includes:
if the shot image is a group photo image, when the length of the movement track of the movement interfering object is greater than a preset length threshold value or the movement speed is greater than a preset speed threshold value, the movement interfering object is not reserved in the generated target shot image.
Further, the determining whether to retain the moving interfering object in the generated target shot image according to the movement information of the moving interfering object includes:
if the shooting mode is the portrait mode, when the moving speed of the moving interfering object shows a decreasing trend and its movement track extends towards the shooting target, the moving interfering object is retained in the generated target shot image.
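A plausible interpretation of the portrait-mode rule is that an object which is slowing down while moving toward the portrait subject is about to join the shot and should therefore be kept. The concrete "decreasing trend" and "extends towards the shooting target" tests below are simple assumptions.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def keep_in_portrait(centers: List[Point], speeds: List[float],
                     subject_center: Point) -> bool:
    """Portrait rule: retain the interfering object when its speed is decreasing
    and its movement track extends towards the shooting target (the subject)."""
    slowing_down = all(speeds[i] > speeds[i + 1] for i in range(len(speeds) - 1))
    moving_toward_subject = (math.dist(centers[-1], subject_center)
                             < math.dist(centers[0], subject_center))
    return slowing_down and moving_toward_subject

# A person decelerating while walking toward the subject at (320, 240) is retained.
print(keep_in_portrait([(50.0, 240.0), (150.0, 240.0), (220.0, 240.0)],
                       [900.0, 600.0], (320.0, 240.0)))   # True
```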
Further, after the determining whether the moving interfering object is retained in the generated target shot image, the method further includes:
if the moving interfering object is not retained in the generated target shot image, removing the moving interfering object from the at least two image frames according to the temporal correlation of the image frames to obtain a preprocessed image set;
and fusing the preprocessed image set to generate the target shot image.
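Using the temporal correlation of the frames can be read as: for pixels covered by the interfering object in one frame, borrow the co-located pixels from frames where the object has moved elsewhere, then fuse. The per-pixel temporal median below is one common realization, offered as an assumption rather than the specific fusion used here, and it presumes the burst frames are already aligned.

```python
import numpy as np
from typing import List, Tuple

Box = Tuple[int, int, int, int]   # (x0, y0, x1, y1) interferent region per frame

def remove_and_fuse(frames: List[np.ndarray], boxes: List[Box]) -> np.ndarray:
    """Fuse an aligned burst into one image with the moving interferent removed.

    For each pixel, only frames in which that pixel lies outside the interferent's
    region contribute, and the contributions are median-fused across time."""
    stack = np.stack([f.astype(np.float32) for f in frames])       # (N, H, W[, C])
    valid = np.ones(stack.shape[:3], dtype=bool)                   # (N, H, W)
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        valid[i, y0:y1 + 1, x0:x1 + 1] = False                     # mask the interferent
    if stack.ndim == 4:                                            # broadcast over channels
        valid = valid[..., None]
    masked = np.where(valid, stack, np.nan)
    fused = np.nanmedian(masked, axis=0)                           # temporal median
    # Fall back to a plain median where the interferent covered every frame.
    fused = np.where(np.isnan(fused), np.median(stack, axis=0), fused)
    return fused.astype(frames[0].dtype)
```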
Further, after the determining whether the moving interfering object is retained in the generated target shot image, the method further includes:
if the moving interfering object is to be retained in the generated target shot image, fusing the at least two image frames containing the moving interfering object according to the temporal correlation of the image frames to generate the target shot image.
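When the interfering object is retained, the burst can still be fused for noise reduction while the object itself is copied from a single frame so that its own motion does not ghost it. Selecting the sharpest frame by Laplacian variance, as below, is an illustrative assumption.

```python
import numpy as np
from typing import List, Tuple

Box = Tuple[int, int, int, int]

def laplacian_variance(gray: np.ndarray) -> float:
    """Simple sharpness score: variance of a 4-neighbour Laplacian (higher = sharper)."""
    lap = (np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0) +
           np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1) - 4.0 * gray)
    return float(lap.var())

def fuse_keeping_interferent(frames: List[np.ndarray], boxes: List[Box]) -> np.ndarray:
    """Average-fuse the aligned burst for the background, but copy the interferent
    region from the frame where it is sharpest, so the retained object is not ghosted."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    fused = stack.mean(axis=0)                                   # background fusion
    scores = []
    for frame, (x0, y0, x1, y1) in zip(frames, boxes):
        crop = frame[y0:y1 + 1, x0:x1 + 1].astype(np.float32)
        if crop.ndim == 3:                                       # collapse colour channels
            crop = crop.mean(axis=2)
        scores.append(laplacian_variance(crop))
    best = int(np.argmax(scores))
    x0, y0, x1, y1 = boxes[best]
    fused[y0:y1 + 1, x0:x1 + 1] = frames[best][y0:y1 + 1, x0:x1 + 1]
    return fused.astype(frames[0].dtype)
```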
An embodiment of the application further provides a storage medium containing instructions executable by a terminal device. When executed by a processor of the terminal device, the instructions perform a method of processing a shot image, the method comprising:
if a moving interfering object is detected in the preview picture, continuously shooting at least two image frames;
determining movement information of the moving interfering object in the at least two image frames;
and determining, according to the movement information of the moving interfering object, whether the moving interfering object is retained in the generated target shot image.
Further, the determining the movement information of the moving interfering object in the at least two image frames includes:
detecting a region where the moving interfering object is located in each of the at least two image frames;
and calculating the movement information of the moving interfering object according to the shooting parameters and the region where the moving interfering object is located in each image frame.
Further, the continuously shooting at least two image frames if a moving interfering object is detected in the preview picture includes:
if a moving interfering object is detected in the preview picture, determining whether the moving interfering object is a target interfering object;
and if the moving interfering object is the target interfering object, continuously shooting at least two image frames.
Further, the determining whether to retain the moving interfering object in the generated target shot image according to the movement information of the moving interfering object includes:
if the shot image is a group photo, when the length of the movement track of the moving interfering object is greater than a preset length threshold or its movement speed is greater than a preset speed threshold, the moving interfering object is not retained in the generated target shot image.
Further, the determining whether to retain the moving interfering object in the generated target shot image according to the movement information of the moving interfering object includes:
if the shooting mode is the portrait mode, when the moving speed of the moving interfering object shows a decreasing trend and its movement track extends towards the shooting target, the moving interfering object is retained in the generated target shot image.
Further, after the determining whether the moving interfering object is retained in the generated target shot image, the method further includes:
if the moving interfering object is not retained in the generated target shot image, removing the moving interfering object from the at least two image frames according to the temporal correlation of the image frames to obtain a preprocessed image set;
and fusing the preprocessed image set to generate the target shot image.
Further, after the determining whether the moving interfering object is retained in the generated target shot image, the method further includes:
if the moving interfering object is to be retained in the generated target shot image, fusing the at least two image frames containing the moving interfering object according to the temporal correlation of the image frames to generate the target shot image.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Of course, the storage medium provided in the embodiments of the present application and containing computer-executable instructions is not limited to the shot image processing operations described above, and may also perform related operations in the shot image processing method provided in any embodiment of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (9)

1. A method of processing a shot image, comprising:
if a moving interfering object is detected in the preview picture, continuously shooting at least two image frames;
determining movement information of the moving interfering object in the at least two image frames;
and determining, according to the movement information of the moving interfering object, whether the moving interfering object is retained in the generated target shot image, which comprises:
if the shooting mode is the portrait mode, when the moving speed of the moving interfering object shows a decreasing trend and its movement track extends towards the shooting target, the moving interfering object is retained in the generated target shot image.
2. The method of claim 1, wherein the determining the movement information of the moving interfering object in the at least two image frames comprises:
detecting a region where the moving interfering object is located in each of the at least two image frames;
and calculating the movement information of the moving interfering object according to the shooting parameters and the region where the moving interfering object is located in each image frame.
3. The method of claim 1, wherein the continuously shooting at least two image frames if a moving interfering object is detected in the preview picture comprises:
if a moving interfering object is detected in the preview picture, determining whether the moving interfering object is a target interfering object;
and if the moving interfering object is the target interfering object, continuously shooting at least two image frames.
4. The method according to claim 1 or 3, wherein the determining whether the moving interfering object is retained in the generated target shot image according to the movement information of the moving interfering object further comprises:
if the shot image is a group photo, when the length of the movement track of the moving interfering object is greater than a preset length threshold or its movement speed is greater than a preset speed threshold, the moving interfering object is not retained in the generated target shot image.
5. The method of claim 1, further comprising, after the determining whether to retain the moving interfering object in the generated target shot image:
if the moving interfering object is not retained in the generated target shot image, removing the moving interfering object from the at least two image frames according to the temporal correlation of the image frames to obtain a preprocessed image set;
and fusing the preprocessed image set to generate the target shot image.
6. The method of claim 1, further comprising, after the determining whether to retain the moving interfering object in the generated target shot image:
if the moving interfering object is to be retained in the generated target shot image, fusing the at least two image frames containing the moving interfering object according to the temporal correlation of the image frames to generate the target shot image.
7. A shot image processing apparatus, comprising:
an image shooting module, configured to continuously shoot at least two image frames if a moving interfering object is detected in the preview picture;
an information determining module, configured to determine the movement information of the moving interfering object in the at least two image frames continuously shot by the image shooting module;
and a retention determining module, configured to determine, according to the movement information of the moving interfering object determined by the information determining module, whether to retain the moving interfering object in the generated target shot image, which includes:
if the shooting mode is the portrait mode, when the moving speed of the moving interfering object shows a decreasing trend and its movement track extends towards the shooting target, the moving interfering object is retained in the generated target shot image.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of processing a shot image according to any one of claims 1 to 6.
9. A mobile terminal, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of processing a shot image according to any one of claims 1 to 6 when executing the computer program.
CN201811238344.0A 2018-10-23 2018-10-23 Shot image processing method and device, storage medium and mobile terminal Active CN109040604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811238344.0A CN109040604B (en) 2018-10-23 2018-10-23 Shot image processing method and device, storage medium and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811238344.0A CN109040604B (en) 2018-10-23 2018-10-23 Shot image processing method and device, storage medium and mobile terminal

Publications (2)

Publication Number Publication Date
CN109040604A (en) 2018-12-18
CN109040604B (en) 2020-09-15

Family

ID=64613854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811238344.0A Active CN109040604B (en) 2018-10-23 2018-10-23 Shot image processing method and device, storage medium and mobile terminal

Country Status (1)

Country Link
CN (1) CN109040604B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111263071B (en) * 2020-02-26 2021-12-10 维沃移动通信有限公司 Shooting method and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8615111B2 (en) * 2009-10-30 2013-12-24 Csr Technology Inc. Method and apparatus for image detection with undesired object removal

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7094164B2 (en) * 2001-09-12 2006-08-22 Pillar Vision Corporation Trajectory detection and feedback system
CN102592128A (en) * 2011-12-20 2012-07-18 Tcl集团股份有限公司 Method and device for detecting and processing dynamic image and display terminal
WO2016018487A8 (en) * 2014-05-09 2016-12-08 Eyefluence, Inc. Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects
CN105046719A (en) * 2015-07-03 2015-11-11 苏州科达科技股份有限公司 Method and system for video monitoring
CN105744232A (en) * 2016-03-25 2016-07-06 南京第五十五所技术开发有限公司 Method for preventing power transmission line from being externally broken through video based on behaviour analysis technology
CN106210542A (en) * 2016-08-16 2016-12-07 深圳市金立通信设备有限公司 The method of a kind of photo synthesis and terminal
CN107872644A (en) * 2016-09-23 2018-04-03 亿阳信通股份有限公司 Video frequency monitoring method and device
CN106598046A (en) * 2016-11-29 2017-04-26 北京智能管家科技有限公司 Robot avoidance controlling method and device
CN107193032A (en) * 2017-03-31 2017-09-22 长光卫星技术有限公司 Multiple mobile object based on satellite video quickly tracks speed-measuring method
CN107087106A (en) * 2017-04-19 2017-08-22 深圳市金立通信设备有限公司 A kind of image pickup method and terminal
CN107563282A (en) * 2017-07-25 2018-01-09 大圣科技股份有限公司 For unpiloted recognition methods, electronic equipment, storage medium and system
CN107343149A (en) * 2017-07-31 2017-11-10 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN107566724A (en) * 2017-09-13 2018-01-09 维沃移动通信有限公司 A kind of panoramic picture image pickup method and mobile terminal
CN107844765A (en) * 2017-10-31 2018-03-27 广东欧珀移动通信有限公司 Photographic method, device, terminal and storage medium
CN107820013A (en) * 2017-11-24 2018-03-20 上海创功通讯技术有限公司 A kind of photographic method and terminal
CN108052883A (en) * 2017-11-30 2018-05-18 广东欧珀移动通信有限公司 User's photographic method, device and equipment
CN108076290A (en) * 2017-12-20 2018-05-25 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108184051A (en) * 2017-12-22 2018-06-19 努比亚技术有限公司 A kind of main body image pickup method, equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN109040604A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN109167893B (en) Shot image processing method and device, storage medium and mobile terminal
EP3163498B1 (en) Alarming method and device
CN108399349B (en) Image recognition method and device
CN108566516B (en) Image processing method, device, storage medium and mobile terminal
CN110992327A (en) Lens contamination state detection method and device, terminal and storage medium
KR102488563B1 (en) Apparatus and Method for Processing Differential Beauty Effect
US20150103193A1 (en) Method and apparatus for long term image exposure with image stabilization on a mobile device
CN109040523B (en) Artifact eliminating method and device, storage medium and terminal
JP7181375B2 (en) Target object motion recognition method, device and electronic device
CN109040524B (en) Artifact eliminating method and device, storage medium and terminal
CN109327691B (en) Image shooting method and device, storage medium and mobile terminal
CN110572636B (en) Camera contamination detection method and device, storage medium and electronic equipment
CN113014846B (en) Video acquisition control method, electronic equipment and computer readable storage medium
CN109218621B (en) Image processing method, device, storage medium and mobile terminal
CN112116624A (en) Image processing method and electronic equipment
CN108665510B (en) Rendering method and device of continuous shooting image, storage medium and terminal
EP3617851B1 (en) Information processing device, information processing method, and recording medium
CN111432245B (en) Multimedia information playing control method, device, equipment and storage medium
CN110597426A (en) Bright screen processing method and device, storage medium and terminal
CN109302563B (en) Anti-shake processing method and device, storage medium and mobile terminal
CN111325701B (en) Image processing method, device and storage medium
CN111586279B (en) Method, device and equipment for determining shooting state and storage medium
CN111986229A (en) Video target detection method, device and computer system
CN109218620B (en) Photographing method and device based on ambient brightness, storage medium and mobile terminal
CN109040604B (en) Shot image processing method and device, storage medium and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant