CN112422828B - Image processing method, image processing apparatus, electronic device, and readable storage medium - Google Patents

Image processing method, image processing apparatus, electronic device, and readable storage medium

Info

Publication number
CN112422828B
CN112422828B (application number CN202011290753.2A)
Authority
CN
China
Prior art keywords
preview image
target information
shooting
pedestrian
pedestrians
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011290753.2A
Other languages
Chinese (zh)
Other versions
CN112422828A (en)
Inventor
魏文俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011290753.2A
Publication of CN112422828A
Application granted
Publication of CN112422828B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a readable storage medium, and belongs to the field of photographing. The image processing method comprises the following steps: acquiring target information in a preview image, the target information including at least one of: pedestrian information, shooting subject information, and shooting scene; and, in the case that the target information satisfies a trigger condition, performing at least one of atomization (fogging) processing and elimination processing on a pedestrian area in the preview image. With the method and apparatus, whether the pedestrian area needs to be processed can be determined directly during shooting, and the processed preview image can be displayed to the user immediately, so that the user can see the preview effect in real time, shoot after obtaining a suitable preview image, and obtain a satisfactory photo, which effectively improves the rate of usable shots.

Description

Image processing method, image processing apparatus, electronic device, and readable storage medium
Technical Field
The present application belongs to the technical field of photographing, and in particular relates to an image processing method, an image processing apparatus, an electronic device, and a readable storage medium.
Background
With the rapid development of photographing technology and steadily improving image quality, more and more people choose to take photos with a mobile phone while travelling, which is convenient and fast. However, during shooting, pedestrians around the shooting subject interfere with the shooting effect, so the rate of usable shots is low; in many cases a satisfactory picture cannot be obtained, leaving a regret of the trip.
In the related art, the technical problem of pedestrians affecting the shooting effect is addressed only by post-processing the captured photo. The inventor of the present application found that, in the related art, pedestrians may be missed or falsely detected during such processing, the restored picture may look inconsistent after pedestrians are removed, the pedestrians that the user wants to remove may not be removed completely, and the effect cannot be previewed in real time during shooting, which makes the operation inconvenient for the user.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a readable storage medium, which can solve the problems that pedestrians affect the shooting effect, the rate of usable shots is low, and the effect cannot be previewed in real time during shooting.
In order to solve the above technical problems, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring target information in a preview image, wherein the target information includes at least one of the following: pedestrian information, shooting subject information, shooting scene; and, in the case that the target information satisfies a trigger condition, performing at least one of atomization processing and elimination processing on a pedestrian area in the preview image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: an acquisition unit configured to acquire target information in a preview image, the target information including at least one of: pedestrian information, shooting subject information, shooting scene; and a control unit configured to perform at least one of atomization processing and elimination processing on a pedestrian area in the preview image in a case where the target information satisfies a trigger condition.
In a third aspect, an embodiment of the present application provides an electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the image processing method according to the first aspect.
In the embodiments of the present application, target information in the preview image is acquired, and whether the target information meets a trigger condition is judged. If the target information meets the trigger condition, pedestrians on the preview interface will affect the shooting effect of the shooting subject, resulting in poor picture quality. In this case, the pedestrian area in the preview image is processed directly, and the preview image with the processed pedestrian area is displayed to the user again, so that the user can directly see the processed picture effect. The target information includes, but is not limited to: pedestrian information, shooting subject information, and shooting scene; processing the pedestrian area in the preview image includes, but is not limited to: atomization processing and elimination processing. That is, in the embodiments of the present application, when at least one of the pedestrian information, the shooting subject information, and the shooting scene satisfies the trigger condition, atomization processing and/or elimination processing may be performed on the pedestrian area in the preview image.
Through the embodiments of the present application, whether the pedestrian area needs to be atomized and/or eliminated can be judged directly during shooting, and the preview image after atomization or elimination is displayed to the user directly, so that the user can see the preview effect in real time, shoot after obtaining a suitable preview image, and obtain a satisfactory photo. This effectively improves the rate of usable shots and avoids the trouble of post-processing the photo.
In particular, because the pedestrian area is processed during shooting, the user directly obtains a processed photo. This solves the pain point that excessive pedestrians degrade the imaging effect at crowded scenic spots, removes the visual interference of excessive pedestrians with the shooting subject, and avoids the various drawbacks of post-processing, in particular the possibility of inconsistent picture restoration after pedestrians are removed and the risk of missed or false detection of pedestrians. In addition, the embodiments of the present application can directly display the preview image with the processed pedestrian area to the user, so that the user knows the shooting effect in real time and can trigger shooting after obtaining a satisfactory preview image, which greatly improves the user's satisfaction with the photo.
Drawings
FIG. 1 is one of the flow diagrams of an image processing method according to an embodiment of the present application;
FIG. 2 is a second flow diagram of an image processing method according to an embodiment of the present application;
FIG. 3 is a structural block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first", "second" and the like in the description and in the claims are used to distinguish between similar objects and are not necessarily used to describe a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method, the image processing apparatus, the electronic device and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
An embodiment of the present application provides an image processing method, as shown in fig. 1, including:
step 102, obtaining target information in the preview image, wherein the target information at least comprises one of the following: pedestrian information, shooting subject information, shooting scene;
step 104, performing at least one of atomization processing and elimination processing on a pedestrian area in the preview image in the case that the target information satisfies the trigger condition.
In the implementation process, the method first acquires the target information in the preview image and then judges whether the target information meets the trigger condition. If the target information meets the trigger condition, pedestrians on the preview interface will affect the shooting effect, resulting in poor picture quality. In this case, the pedestrian area in the preview image is processed directly, and the preview image with the processed pedestrian area is displayed to the user again, so that the user can directly see the processed preview image. In this way, whether the pedestrian area needs to be processed can be judged directly during shooting, and the processed preview image can be displayed to the user directly, so that the user knows the shooting effect in real time, obtains a satisfactory photo, and the rate of usable shots is effectively improved. Here, the preview image is a dynamic picture that displays the captured content in real time.
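The two steps above can be read as a per-frame hook on the preview stream. The Python sketch below is only an illustration of that flow, not the patented implementation; detect_targets, trigger_met, and fog_or_erase are hypothetical callables standing in for the detection, judgment, and processing described in the following sections.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h) in preview-image pixels

@dataclass
class TargetInfo:
    pedestrian_boxes: List[Box] = field(default_factory=list)
    subject_box: Optional[Box] = None
    scene: str = "unknown"  # e.g. "street", "scenic_spot", "mall", "bedroom"

def process_preview_frame(frame, detect_targets, trigger_met, fog_or_erase):
    """Step 102: acquire target information from the frame.
    Step 104: process the pedestrian area when the trigger condition is met.
    Returns the frame that the preview interface displays."""
    info = detect_targets(frame)          # pedestrian / subject / scene information
    if trigger_met(info):
        frame = fog_or_erase(frame, info.pedestrian_boxes)
    return frame
```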
Specifically, in the process of acquiring target information in a preview image, the embodiment of the application includes, but is not limited to, acquiring the following target information:
pedestrian information in the preview image is acquired. That is, the embodiment of the present application acquires pedestrian information in the preview image, and determines whether the trigger condition is satisfied according to the pedestrian information. In particular, the purpose of the embodiment of the present application to determine whether the trigger condition is satisfied is to determine whether the pedestrian area in the preview image needs to be processed. Therefore, the pedestrian information in the preview image is used as a judging standard whether the triggering condition is met, and the judging accuracy can be effectively improved.
Shooting subject information in the preview image is acquired. That is, the present embodiment acquires the shooting subject information in the preview image, and determines whether the trigger condition is satisfied according to the shooting subject information. In particular, the present application aims to process a pedestrian area in a preview image so as to avoid affecting the photographing effect of a photographing subject. Therefore, the shooting subject information in the preview image is used as a judging standard whether the triggering condition is met or not, and the judging accuracy can be effectively improved.
A shooting scene in the preview image is acquired. That is, the present embodiment acquires a shooting scene in a preview image, and determines whether a trigger condition is satisfied according to the shooting scene. In particular, the number of pedestrians is different for different shooting scenes. For example, in shooting scenes such as streets, scenic spots, markets and the like, the number of pedestrians is generally large, and the probability that pedestrians influence shooting effects is high; in the scenes of bedrooms, living rooms and the like, the number of pedestrians is generally small, and the probability that the pedestrians influence the shooting effect is low. Therefore, the shooting scene in the preview image is used as a judging standard whether the triggering condition is met, and the judging accuracy can be effectively improved.
It should be noted that, the method of acquiring the target information in the preview image according to the embodiment of the present application is not limited to the above three cases, and any relevant target information that affects the shooting effect may be used as the criterion for determining whether the triggering condition is satisfied, which is not limited herein.
Specifically, in the process of processing the pedestrian area in the preview image, the embodiment of the application includes, but is not limited to, the following processing on the pedestrian area:
and atomizing the pedestrian area. That is, in the case that the target information meets the trigger condition, the embodiment of the application can perform atomization processing on the pedestrian area in the preview image, so that pedestrians in the preview image are blurred, and visual interference of pedestrians on a shooting subject is avoided. And the pedestrian area is directly atomized in the preview image, so that the trouble of later atomization is avoided, especially the possibility of uncoordinated picture restoration after the pedestrians are removed is avoided, and the condition of missed detection or false detection of the pedestrians is also avoided.
And carrying out elimination processing on the pedestrian area. That is, in the case that the target information satisfies the trigger condition, the embodiment of the application may perform the elimination processing on the pedestrian area in the preview image, so that the pedestrian in the preview image disappears, and the visual interference of the pedestrian on the shooting subject is avoided. And the pedestrian area is directly eliminated in the preview image, so that various defects of post-processing are avoided, especially the possibility of uncoordinated picture restoration after the pedestrian is removed is avoided, and the condition of missing detection or false detection of the pedestrian is also avoided.
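The text does not specify how the atomization and elimination are implemented. As a rough sketch under common assumptions (pedestrians given as bounding boxes from a hypothetical detector), atomization could be a strong blur over each box and elimination could be inpainting over a mask of the boxes:

```python
import cv2
import numpy as np

def fog_pedestrians(frame_bgr, pedestrian_boxes, ksize=31):
    """Atomization: heavily blur each pedestrian bounding box."""
    out = frame_bgr.copy()
    for (x, y, w, h) in pedestrian_boxes:
        roi = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return out

def erase_pedestrians(frame_bgr, pedestrian_boxes):
    """Elimination: inpaint over a mask covering the pedestrian boxes."""
    mask = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    for (x, y, w, h) in pedestrian_boxes:
        mask[y:y + h, x:x + w] = 255
    return cv2.inpaint(frame_bgr, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```

Inpainting fills small, isolated regions plausibly; large or densely overlapping pedestrians would need a more capable fill (or the average-stack approach described later) than this sketch provides.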
In the embodiment of the present application, when the target information includes pedestrian information, the target information satisfies a trigger condition, including: the number of pedestrians in the preview image at any one time is greater than or equal to the first number threshold. The number of pedestrians in the preview image at a certain moment determines whether the shooting effect of the shooting subject is affected. For example, when the number of pedestrians is large at a certain time, the probability that the pedestrians affect the shooting effect is high; when the number of pedestrians is small at a certain moment, the probability that the pedestrians influence the shooting effect is low. Therefore, in the case that the pedestrian information in the preview image is acquired, whether the triggering condition is met or not can be judged by judging whether the number of pedestrians in the preview image at any moment is greater than or equal to the first number threshold.
Specifically, the number of pedestrians in the preview image at a certain moment is acquired, and it is judged whether the number of pedestrians at any moment is greater than or equal to the first number threshold. When the number of pedestrians is greater than or equal to the first number threshold, the probability that pedestrians affect the shooting effect is high; in this case it is judged that the trigger condition is met, and at least one of atomization processing and elimination processing is performed on the pedestrian area in the preview image. When the number of pedestrians is smaller than the first number threshold, the probability that pedestrians affect the shooting effect is low; in this case it is judged that the trigger condition is not met, and the pedestrian area in the preview image does not need to be processed.
It should be noted that, the first number threshold is not a fixed value, and the first number threshold may be within a reasonable parameter interval and designed according to practical situations, as will be understood by those skilled in the art. For example, the first number threshold may be determined according to the shooting scene, and the judgment standard is adjusted according to the shooting scene.
Specifically, different shooting scenes correspond to different first number thresholds. For example, in outdoor shooting scenes, scenic spots, malls and the like, the number of pedestrians is generally large, and the first number threshold may be set to a larger value, for example 20, 30, 40 or 50; in shooting scenes such as bedrooms and living rooms, where there are usually few pedestrians, the first number threshold may be set to a smaller value, for example 5 or 10.
That is, before judging whether the preview image meets the trigger condition according to the number of pedestrians, the embodiment first acquires the shooting scene in the preview image and then selects the value corresponding to that shooting scene as the first number threshold. This design ensures that the first number threshold matches the shooting scene of the actual shot, so the trigger condition can be adjusted according to the actual shooting scene, making the image processing method better adapted to different shooting scenes.
Specifically, taking a street as the shooting scene: when the street is wide and can hold a large number of pedestrians, the first number threshold may be larger, for example 10 or 20; for a narrow path, the first number threshold is smaller, for example 3 or 5. Likewise, for other shooting scenes, the first number threshold may be adjusted based on the capacity of the shooting scene or the number of pedestrians it can hold, as illustrated by the sketch below.
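A simple way to realize this scene-dependent threshold is a lookup table keyed by the recognized scene, reusing the TargetInfo sketch above. The values below are only the illustrative numbers from the preceding paragraphs, not values prescribed by the patent.

```python
# Illustrative scene-dependent values for the first number threshold.
FIRST_NUMBER_THRESHOLD = {
    "scenic_spot": 30,
    "mall": 25,
    "wide_street": 20,
    "narrow_path": 5,
    "living_room": 10,
    "bedroom": 5,
}

def pedestrian_count_triggered(info, default_threshold=10):
    """Trigger when the instantaneous pedestrian count reaches the scene's threshold."""
    threshold = FIRST_NUMBER_THRESHOLD.get(info.scene, default_threshold)
    return len(info.pedestrian_boxes) >= threshold
```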
In the embodiment of the present application, the preview image is a dynamic image, and when the target information includes pedestrian information, the target information satisfies a trigger condition, including: the number of pedestrians in the preview image per unit time is greater than or equal to the second number threshold.
The number of pedestrians in the preview image per unit time determines whether the shooting effect of the shooting subject is affected. For example, when the number of pedestrians in the preview image per unit time is large, the pedestrian flow is large and the probability that pedestrians affect the shooting effect is high; when the number of pedestrians in the preview image per unit time is small, the pedestrian flow is small and the probability that pedestrians affect the shooting effect is low. Therefore, in the case that the pedestrian information in the preview image is acquired, the embodiment can judge whether the trigger condition is met by judging whether the number of pedestrians in the preview image per unit time is greater than or equal to the second number threshold.
Specifically, when the number of pedestrians in the preview image per unit time is greater than or equal to the second number threshold, the pedestrian flow is large and the probability that pedestrians affect the shooting effect is high; in this case it is judged that the trigger condition is met, and at least one of atomization processing and elimination processing is performed on the pedestrian area in the preview image. When the number of pedestrians in the preview image per unit time is smaller than the second number threshold, the pedestrian flow is small and the probability that pedestrians affect the shooting effect is low; in this case it is judged that the trigger condition is not met, and the pedestrian area in the preview image does not need to be processed.
It should be noted that the second number threshold is not a fixed value; it may lie within a reasonable parameter interval and be designed according to the practical situation, as will be understood by those skilled in the art. For example, the second number threshold may be determined according to the shooting scene, so that the judgment standard is adjusted with the shooting scene. The unit time may likewise be set within a reasonable time range according to the practical situation. That is, the number of pedestrians in the preview image per unit time can be understood as the pedestrian flow, as in the sketch below.
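One straightforward reading of "pedestrians per unit time" is the number of distinct pedestrians seen in a sliding time window. The sketch below assumes a detector that assigns each pedestrian a stable track ID across preview frames; that tracking step is an assumption for illustration, not something the patent describes.

```python
import time

class PedestrianFlowMeter:
    """Counts distinct pedestrian track IDs seen within a sliding time window."""

    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.last_seen = {}  # track_id -> timestamp of the most recent sighting

    def update(self, track_ids):
        now = time.monotonic()
        for tid in track_ids:
            self.last_seen[tid] = now
        # Forget pedestrians that have not been seen within the window.
        self.last_seen = {t: s for t, s in self.last_seen.items()
                          if now - s <= self.window}
        return len(self.last_seen)

def pedestrian_flow_triggered(flow_meter, track_ids, second_threshold):
    """Trigger when the pedestrian flow reaches the second number threshold."""
    return flow_meter.update(track_ids) >= second_threshold
```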
Specifically, different shooting scenes correspond to different second number thresholds. For example, in outdoor or scenic-spot scenes, the number of pedestrians is generally large and the pedestrian flow in the preview image is also large, so the second number threshold may be set to a larger value, for example 20, 30, 40 or 50; in scenes such as bedrooms and living rooms, where there are usually few pedestrians and the pedestrian flow in the preview image is also small, the second number threshold may be set to a smaller value, for example 5 or 10. The unit time may be one minute, five minutes, ten minutes, or the like.
That is, before judging whether the target information meets the trigger condition according to the number of pedestrians in the preview image per unit time, the embodiment first acquires the shooting scene in the preview image and then selects the value corresponding to that shooting scene as the second number threshold. This design ensures that the second number threshold matches the actual shooting scene, so the trigger condition can be adjusted according to the actual shooting scene, making the image processing method better adapted to different shooting scenes.
Specifically, taking a street as the shooting scene: when the street is wide and holds a large number of pedestrians, the second number threshold may be larger, for example 10 or 20; for a narrow path, the second number threshold is smaller, for example 3 or 5. Likewise, for other shooting scenes, the second number threshold may be adjusted based on the capacity of the shooting scene or the number of pedestrians it can hold.
Further, in the embodiments of the present application, the preview image is a dynamic image, and when the target information includes pedestrian information and shooting subject information, the target information satisfying the trigger condition includes: the moving speed of any pedestrian relative to the shooting subject is greater than or equal to a speed threshold.
The moving speed of any pedestrian in the preview image relative to the shooting subject determines whether the shooting effect of the shooting subject is affected. For example, when the moving speed of a pedestrian relative to the shooting subject is large, the probability that the pedestrian affects the shooting effect is high; when the moving speed of a pedestrian relative to the shooting subject is small, the probability that the pedestrian affects the shooting effect is low. Therefore, in the case that the pedestrian information and the shooting subject information in the preview image are acquired, the embodiment can further judge whether the trigger condition is met by judging whether the moving speed of any pedestrian relative to the shooting subject is greater than or equal to the speed threshold.
Specifically, the moving speed of any pedestrian in the preview image relative to the shooting subject is acquired, and it is judged whether that moving speed is greater than or equal to the speed threshold. When the moving speed is greater than or equal to the speed threshold, the probability that the pedestrian affects the shooting effect is high; in this case it is judged that the trigger condition is met, and at least one of atomization processing and elimination processing is performed on the pedestrian area in the preview image. When the moving speed is smaller than the speed threshold, the probability that the pedestrian affects the shooting effect is low; in this case it is judged that the trigger condition is not met, and the pedestrian area in the preview image does not need to be processed.
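The patent does not say how the relative moving speed is measured. A simple frame-to-frame estimate on bounding-box centers, in image-space pixels per second rather than real-world units, is one plausible approximation (an assumption for illustration only):

```python
def relative_speed_px_per_s(ped_prev, ped_curr, subj_prev, subj_curr, dt):
    """Approximate a pedestrian's moving speed relative to the shooting subject,
    in pixels per second, from box centers in two consecutive preview frames."""
    rel_dx = (ped_curr[0] - ped_prev[0]) - (subj_curr[0] - subj_prev[0])
    rel_dy = (ped_curr[1] - ped_prev[1]) - (subj_curr[1] - subj_prev[1])
    return (rel_dx ** 2 + rel_dy ** 2) ** 0.5 / dt

def speed_triggered(relative_speeds, speed_threshold):
    # "Any pedestrian": a single fast-moving pedestrian is enough to trigger.
    return any(v >= speed_threshold for v in relative_speeds)
```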
Further, in the embodiment of the present application, when the target information includes shooting scene information, the target information satisfies a trigger condition, including: the shooting scene includes one of: streets, scenic spots, malls.
Different shooting scenes correspond to different shooting environments, so the shooting scene itself helps determine whether the pedestrian area needs to be processed. For example, in scenes such as streets, scenic spots, and malls, there are many pedestrians and the probability that pedestrians affect the shooting effect is high, so the pedestrian area in the preview image is processed directly. In scenes such as bedrooms, living rooms, and other indoor rooms, there are generally few pedestrians, the probability that pedestrians affect the shooting effect is low, and the pedestrian area in the preview image does not need to be processed. Specifically, the preset scene may also be a scenic spot, a street, a park, a road, a mall, a restaurant, or another place with heavy pedestrian traffic.
Specifically, when it is judged that the shooting scene in the preview image is a preset scene such as a street, a scenic spot, or a mall, the probability that pedestrians affect the shooting effect is high; in this case it is judged that the trigger condition is met, and at least one of atomization processing and elimination processing is performed on the pedestrian area in the preview image. When it is judged that the shooting scene in the preview image is not a preset scene such as a street, a scenic spot, or a mall, the probability that pedestrians affect the shooting effect is low; in this case it is judged that the trigger condition is not met, and the pedestrian area in the preview image does not need to be processed.
Further, in the embodiment of the present application, the target information satisfies the triggering condition, including: the shooting scene includes one of: street, scenic spot, mall, and the number of pedestrians in the preview image at any one time is greater than or equal to the first number threshold. That is, in this embodiment, it is first determined whether the shooting scene is a preset scene, and in the case where the shooting scene is a preset scene such as a street, a scenic spot, a mall, or the like, and the number of pedestrians in the preview image is greater than or equal to the first number threshold, it is determined that the target information at this time satisfies the trigger condition. According to the embodiment, the preset scenes and the number of pedestrians are combined, so that the judgment accuracy can be further improved. Further, the first number threshold may be determined according to a shooting scene.
Further, in the embodiments of the present application, the target information satisfying the trigger condition includes: the shooting scene is one of a street, a scenic spot, and a mall, and the number of pedestrians in the preview image per unit time is greater than or equal to the second number threshold. That is, in this embodiment, it is first determined whether the shooting scene is a preset scene; when the shooting scene is a preset scene such as a street, a scenic spot, or a mall, and the number of pedestrians in the preview image per unit time is greater than or equal to the second number threshold, it is determined that the target information satisfies the trigger condition. Combining the preset scene with the pedestrian flow can further improve the accuracy of the judgment. Further, the second number threshold may be determined according to the shooting scene.
Further, in the embodiments of the present application, the target information satisfying the trigger condition includes: the shooting scene is one of a street, a scenic spot, and a mall, and the moving speed of any pedestrian relative to the shooting subject is greater than or equal to the speed threshold. That is, in this embodiment, it is first determined whether the shooting scene is a preset scene; when the shooting scene is a preset scene such as a street, a scenic spot, or a mall, and the moving speed of the pedestrian relative to the shooting subject is greater than or equal to the speed threshold, it is determined that the target information satisfies the trigger condition. Combining the preset scene with the moving speed of the pedestrian relative to the shooting subject can further improve the accuracy of the judgment. A combined check is sketched below.
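For illustration, the three combined conditions above can be folded into a single check. Note that the patent presents each combination as its own embodiment; OR-ing them here is a compact sketch, not the claimed logic.

```python
PRESET_SCENES = {"street", "scenic_spot", "mall"}

def trigger_condition_met(info, pedestrian_flow, relative_speeds,
                          first_threshold, second_threshold, speed_threshold):
    """Preset scene combined with any one of the three pedestrian criteria:
    instantaneous count, flow per unit time, or relative moving speed."""
    if info.scene not in PRESET_SCENES:
        return False
    return (len(info.pedestrian_boxes) >= first_threshold
            or pedestrian_flow >= second_threshold
            or any(v >= speed_threshold for v in relative_speeds))
```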
An embodiment of the present application provides an image processing method, as shown in fig. 2, including:
step 202, judging whether a setting instruction is received, if yes, executing step 204, otherwise, ending;
step 204, obtaining target information in the preview image, wherein the target information at least comprises one of the following: pedestrian information, shooting subject information, shooting scene;
step 206, performing at least one of atomization processing and elimination processing on the pedestrian area in the preview image in the case that the target information satisfies the trigger condition.
That is, it is first judged whether a setting instruction is received. In the case that a setting instruction is received, the target information in the preview image is acquired, and in the case that the target information satisfies the trigger condition, the pedestrian area in the preview image is processed; in the case that no setting instruction is received, shooting and imaging proceed directly without processing the pedestrian area. The target information includes, but is not limited to: pedestrian information, shooting subject information, shooting scene; the processing includes, but is not limited to: atomization processing and elimination processing.
This embodiment allows the user to customize the photographing behaviour. That is, if the user wishes the pedestrian area to be processed automatically in some cases, the user can, through the setting instruction, enable the acquisition of the target information in the preview image so that, in the case that the target information satisfies the trigger condition, at least one of atomization processing and elimination processing is performed on the pedestrian area in the preview image; if the user does not want the pedestrian area to be processed, the user simply does not input the setting instruction.
In this embodiment of the application, by recognizing the shooting scene, the shooting subject, and the number of pedestrians in the current preview image, the pedestrian area in the preview image is processed intelligently: the shooting subject is kept clear while the other pedestrians are atomized or eliminated. This greatly improves the rate of usable shots and makes it convenient for the user to obtain, at a crowded scenic spot, a picture containing only what the user wants to shoot.
Specifically, the shooting scene in the preview image is recognized as a preset scene such as an outdoor scene or a wide street, and there is a shooting subject (which may be a landscape or a person) in the preview image. In that case, if there is a large pedestrian flow around the shooting subject (the pedestrian flow is greater than or equal to the second number threshold), or a large number of pedestrians around the shooting subject (the number of pedestrians is greater than or equal to the first number threshold), atomization processing or elimination processing is performed on the pedestrian area in the preview image: the shooting subject is kept clear, and the surrounding pedestrians are atomized or eliminated.
In a specific embodiment of the present application, when the target information meets the trigger condition, an average-stack algorithm may be applied to the pedestrians in the preview image, so that moving pedestrians are dynamically atomized and gradually disappear. This addresses the pain point that excessive pedestrians at crowded scenic spots degrade the imaging effect of the scene, and removes the visual interference of excessive pedestrians with the shooting subject.
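The text names only an "average stack" algorithm. A running (exponential) average over preview frames, sketched below, is one common realization of that idea; it assumes the frames are already aligned (for example, the device is held steady), which the patent does not state.

```python
import numpy as np

class RunningAverageStack:
    """Exponential running average over preview frames: static content (the
    shooting subject and background) stays sharp while moving pedestrians
    smear and gradually fade. A smaller alpha fades them more strongly."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.acc = None  # floating-point accumulator of the averaged frame

    def update(self, frame_bgr):
        f = frame_bgr.astype(np.float32)
        if self.acc is None:
            self.acc = f
        else:
            self.acc = (1.0 - self.alpha) * self.acc + self.alpha * f
        return np.clip(self.acc, 0, 255).astype(np.uint8)
```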
In a specific embodiment, the user first enters the preview interface of the camera, taps the setting key at the upper-right corner to input the setting instruction, and turns on the pedestrian detection algorithm switch. Then AI scene detection identifies the shooting scene of the current preview image, and when the algorithm recognizes that the current shooting scene is a preset scene such as an outdoor scene or a street, the pedestrian detection algorithm is triggered. Next, when the pedestrian detection algorithm detects a large pedestrian flow in the current preview image (the pedestrian flow is greater than or equal to the second number threshold) or a large number of static pedestrians (the number of pedestrians is greater than or equal to the first number threshold), the pedestrians in the preview image are processed: the shooting subject is kept clear while the surrounding pedestrians are atomized or eliminated, making the subject more prominent. Finally, the pedestrians are atomized or disappear in the preview image in real time, ensuring that the user captures exactly the shooting subject that the user wants to shoot.
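Putting the pieces together, the workflow just described can be sketched as a single per-frame function. Everything here is illustrative: scene_classifier and pedestrian_detector are hypothetical stand-ins for the AI scene detection and pedestrian detection algorithms (the detector is assumed to also return per-pedestrian track IDs), state bundles the switch, thresholds, and flow meter, and the helpers come from the earlier sketches.

```python
def preview_pipeline(frame, state):
    """One preview frame through the described workflow (illustrative only)."""
    if not state.detection_enabled:                 # setting key / algorithm switch
        return frame
    scene = scene_classifier(frame)                 # AI scene detection
    if scene not in PRESET_SCENES:
        return frame
    info = pedestrian_detector(frame)               # boxes plus per-pedestrian track IDs
    flow = state.flow_meter.update(info.track_ids)  # pedestrians per unit time
    if (len(info.pedestrian_boxes) >= state.first_threshold
            or flow >= state.second_threshold):
        # Keep the subject clear; fog (or erase) the surrounding pedestrians.
        frame = fog_pedestrians(frame, info.pedestrian_boxes)
    return frame
```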
An embodiment of the present application provides an image processing apparatus 300. As shown in fig. 3, the image processing apparatus 300 includes:
an acquiring unit 302, configured to acquire target information in the preview image, where the target information includes at least one of: pedestrian information, shooting subject information, shooting scene;
and a control unit 304, configured to perform at least one of atomization processing and elimination processing on the pedestrian area in the preview image in the case that the target information satisfies the trigger condition.
In the image processing apparatus 300 provided in the embodiment of the present application, the acquiring unit 302 acquires the target information in the preview image, and the control unit 304 judges whether the target information satisfies the trigger condition according to the acquisition result of the acquiring unit 302. If the target information meets the trigger condition, pedestrians on the preview interface will affect the shooting effect, resulting in poor picture quality. In that case, the control unit 304 directly performs at least one of atomization processing and elimination processing on the pedestrian area in the preview image, and the preview image with the processed pedestrian area is displayed to the user again, so that the user can directly see the processed picture effect. The target information includes at least one of: pedestrian information, shooting subject information, shooting scene.
With the image processing apparatus 300 provided in the embodiment of the present application, whether the pedestrian area needs to be processed can be judged directly during shooting, and the processed preview image can be displayed to the user directly, so that the user sees the preview effect in real time and can shoot directly after obtaining a suitable preview image, thereby obtaining a satisfactory photo and effectively improving the rate of usable shots.
In particular, because the pedestrian area is processed during shooting, the user directly obtains a processed photo, and the various drawbacks of post-processing are avoided; in particular, the possibility of inconsistent picture restoration after pedestrians are removed is avoided, as is the risk of missed or false detection of pedestrians. In addition, the preview image with the processed pedestrian area can be displayed to the user directly, so that the user knows the shooting effect in real time and can trigger shooting after obtaining a satisfactory preview image, which greatly improves the user's satisfaction with the photo.
The image processing apparatus 300 provided in the embodiment of the present application can implement all the method steps in the embodiment of the method and achieve the same technical effects, and will not be described herein again.
Optionally, an embodiment of the present application further provides an electronic device, including a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic hardware structure of an electronic device 400 implementing an embodiment of the present application.
As shown in fig. 4, the electronic device 400 includes, but is not limited to: radio frequency unit 402, network module 404, audio output unit 406, input unit 408, sensor 410, display unit 412, user input unit 414, interface unit 416, memory 418, and processor 420.
Those skilled in the art will appreciate that the electronic device 400 may further include a power supply 422 (e.g., a battery) for powering the various components, and that the power supply 422 may be logically coupled to the processor 420 via a power management system, so that charging, discharging, and power consumption management functions are performed by the power management system. The structure of the electronic device 400 shown in fig. 4 does not constitute a limitation of the electronic device 400, and the electronic device 400 may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which is not described in detail here.
The processor 420 is configured to acquire target information in the preview image, and to perform at least one of atomization processing and elimination processing on a pedestrian area in the preview image if the target information meets a trigger condition, wherein the target information includes, but is not limited to, one of: pedestrian information, shooting subject information, and shooting scene.
Specifically, the processor 420 is further configured to obtain pedestrian information in the preview image.
Specifically, the processor 420 is further configured to acquire shooting subject information in the preview image.
Specifically, the processor 420 is further configured to acquire shooting scene information in the preview image.
Specifically, the processor 420 is further configured to determine that the target information satisfies the trigger condition when the number of pedestrians in the preview image at any moment is greater than or equal to the first number threshold, and to perform at least one of atomization processing and elimination processing on the pedestrian area in the preview image.
Specifically, the processor 420 is further configured to determine that the target information satisfies the trigger condition when the number of pedestrians in the preview image per unit time is greater than or equal to the second number threshold, and to perform at least one of atomization processing and elimination processing on the pedestrian area in the preview image.
Specifically, the processor 420 is further configured to determine that the target information satisfies the trigger condition when the moving speed of any pedestrian relative to the shooting subject is greater than or equal to the speed threshold, and to perform at least one of atomization processing and elimination processing on the pedestrian area in the preview image.
Specifically, the processor 420 is further configured to determine that the target information satisfies the trigger condition when the shooting scene is one of a street, a scenic spot, and a mall, and to perform at least one of atomization processing and elimination processing on the pedestrian area in the preview image.
Specifically, the processor 420 is further configured to determine that the target information satisfies the trigger condition when the shooting scene is a preset scene such as a street, a scenic spot, or a mall, and the number of pedestrians in the preview image at any moment is greater than or equal to the first number threshold, and to perform at least one of atomization processing and elimination processing on the pedestrian area in the preview image.
Specifically, the processor 420 is further configured to determine that the target information satisfies the trigger condition when the shooting scene is a preset scene such as a street, a scenic spot, or a mall, and the pedestrian flow in the preview image per unit time is greater than or equal to the second number threshold, and to perform at least one of atomization processing and elimination processing on the pedestrian area in the preview image.
Specifically, the processor 420 is further configured to determine that the target information satisfies the trigger condition when the shooting scene is a preset scene such as a street, a scenic spot, or a mall, and the moving speed of any pedestrian relative to the shooting subject is greater than or equal to the speed threshold, and to perform at least one of atomization processing and elimination processing on the pedestrian area in the preview image.
Specifically, the processor 420 is further configured to determine the first number threshold according to the shooting scene before the preview image is judged against the trigger condition.
Specifically, the processor 420 is further configured to determine the second number threshold according to the shooting scene before the preview image is judged against the trigger condition.
It should be understood that, in the embodiment of the present application, the radio frequency unit 402 may be configured to receive and transmit information or signals during a call, and specifically, receive downlink data of a base station or send uplink data to the base station. The radio frequency unit 402 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The network module 404 provides wireless broadband internet access to the user, such as helping the user to email, browse web pages, access streaming media, and the like.
The audio output unit 406 may convert audio data received by the radio frequency unit 402 or the network module 404 or stored in the memory 418 into an audio signal and output as sound. Also, the audio output unit 406 may also provide audio output (e.g., call signal reception sound, message reception sound, etc.) related to a specific function performed by the electronic device 400. The audio output unit 406 includes a speaker, a buzzer, a receiver, and the like.
The input unit 408 is configured to receive an audio or video signal. The input unit 408 may include a graphics processing unit (GPU) 4082 and a microphone 4084. The graphics processing unit 4082 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 412, stored in the memory 418 (or another storage medium), or transmitted via the radio frequency unit 402 or the network module 404. The microphone 4084 may receive sound and process it into audio data; in a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 402 and output.
The electronic device 400 further includes at least one sensor 410, such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, a light sensor, or a motion sensor.
The display unit 412 is used to display information input by a user or information provided to the user. The display unit 412 may include a display panel 4122, and the display panel 4122 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
The user input unit 414 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device 400. Specifically, the user input unit 414 includes a touch panel 4142 and other input devices 4144. The touch panel 4142, also referred to as a touch screen, may collect touch operations performed on or near it by a user. The touch panel 4142 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 420, and receives and executes commands sent from the processor 420. The other input devices 4144 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, a power switch key), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 4142 may be overlaid on the display panel 4122. When the touch panel 4142 detects a touch operation on or near it, the touch operation is transmitted to the processor 420 to determine the type of touch event, and the processor 420 then provides a corresponding visual output on the display panel 4122 according to the type of touch event. The touch panel 4142 and the display panel 4122 may be two independent components or may be integrated into one component.
The interface unit 416 is an interface through which an external device is connected to the electronic device 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 416 may be used to receive input (e.g., data information, power) from an external device and transmit the received input to one or more elements within the electronic device 400, or may be used to transmit data between the electronic device 400 and an external device.
The memory 418 may be used to store software programs and various data. The memory 418 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function and an image playing function), and the data storage area may store data (such as audio data and a phonebook) created according to the use of the mobile terminal. In addition, the memory 418 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 420 performs various functions of the electronic device 400 and processes data by running or executing software programs and/or modules stored in the memory 418 and invoking data stored in the memory 418, thereby monitoring the electronic device 400 as a whole. The processor 420 may include one or more processing units; preferably, the processor 420 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication.
The electronic device 400 may also include a power supply 422 for powering the various components. Preferably, the power supply 422 may be logically coupled to the processor 420 via a power management system, so that charging, discharging, and power consumption management functions are performed by the power management system.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device 400 of the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the above image processing method embodiment and achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-on-chip, a chip system, a system-on-a-chip, or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (7)

1. An image processing method, comprising:
acquiring target information in a preview image, wherein the target information comprises at least one of the following: pedestrian information, shooting subject information, and a shooting scene;
performing at least one of atomization processing and elimination processing on a pedestrian area in the preview image under the condition that the target information meets a trigger condition;
wherein the target information meeting the trigger condition comprises: the shooting scene is any one of a street, a scenic spot, and a shopping mall, and when the number of pedestrians in the preview image at any moment is greater than or equal to a first quantity threshold, the target information is determined to meet the trigger condition at that moment;
wherein after the target information in the preview image is acquired, the method further comprises: determining the first quantity threshold according to the shooting scene.
2. The image processing method according to claim 1, wherein the preview image is a dynamic image, and the target information includes pedestrian information;
wherein the target information meeting the trigger condition further comprises: the number of pedestrians appearing in the preview image per unit time is greater than or equal to a second quantity threshold.
3. The image processing method according to claim 1, wherein the preview image is a dynamic image, and the target information includes pedestrian information and shooting subject information;
wherein the target information meeting the trigger condition further comprises: the running speed of any one of the pedestrians relative to the shooting subject is greater than or equal to a speed threshold.
4. The image processing method according to claim 2, wherein the target information further includes a shooting scene;
wherein after the target information in the preview image is acquired, the method further comprises: determining the second quantity threshold according to the shooting scene.
5. An image processing apparatus, comprising:
an acquisition unit configured to acquire target information in a preview image, the target information including at least one of: pedestrian information, shooting subject information, shooting scene;
a control unit configured to perform at least one of an atomization process and an elimination process on a pedestrian area in the preview image in a case where the target information satisfies a trigger condition;
wherein the target information meeting the trigger condition comprises: the shooting scene is any one of a street, a scenic spot, and a shopping mall, and when the number of pedestrians in the preview image at any moment is greater than or equal to a first quantity threshold, the target information is determined to meet the trigger condition at that moment;
wherein after the target information in the preview image is acquired, the first quantity threshold is determined according to the shooting scene.
6. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 4.
7. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instruction which, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 4.
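The following Python sketch, provided for illustration only, shows one possible reading of the trigger logic recited in claims 1 and 5 and of the subsequent pedestrian-area processing. The scene classifier, the pedestrian detector, the per-scene values of the first quantity threshold, and the use of Gaussian blurring for the atomization processing and OpenCV inpainting for the elimination processing are all assumptions introduced here and are not specified by the patent.

import cv2
import numpy as np

# Hypothetical per-scene values for the "first quantity threshold" of claim 1.
FIRST_QUANTITY_THRESHOLD = {"street": 5, "scenic spot": 3, "shopping mall": 8}

def trigger_condition_met(scene, pedestrian_boxes):
    # Claim 1: the scene must be a street, scenic spot, or shopping mall, and
    # the pedestrian count at this moment must reach the scene-dependent threshold.
    if scene not in FIRST_QUANTITY_THRESHOLD:
        return False
    return len(pedestrian_boxes) >= FIRST_QUANTITY_THRESHOLD[scene]

def process_pedestrian_areas(frame, pedestrian_boxes, mode="atomize"):
    # Apply atomization (blurring) or elimination (inpainting) to the pedestrian areas.
    out = frame.copy()
    if mode == "atomize":
        for x, y, w, h in pedestrian_boxes:
            roi = out[y:y + h, x:x + w]
            out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (31, 31), 0)
        return out
    # "eliminate": mask the pedestrian areas and fill them from surrounding pixels.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for x, y, w, h in pedestrian_boxes:
        mask[y:y + h, x:x + w] = 255
    return cv2.inpaint(out, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

def handle_preview_frame(frame, classify_scene, detect_pedestrians):
    # One preview frame: acquire the target information, test the trigger, then process.
    scene = classify_scene(frame)      # hypothetical helper, e.g. returns "street"
    boxes = detect_pedestrians(frame)  # hypothetical helper, returns (x, y, w, h) boxes
    if trigger_condition_met(scene, boxes):
        return process_pedestrian_areas(frame, boxes, mode="atomize")
    return frame

The dependent claims could be accommodated in the same structure, for example by also counting the pedestrians appearing per unit time against a second quantity threshold (claims 2 and 4) or by comparing each pedestrian's speed relative to the shooting subject against a speed threshold (claim 3); those checks are omitted here to keep the sketch short.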
CN202011290753.2A 2020-11-17 2020-11-17 Image processing method, image processing apparatus, electronic device, and readable storage medium Active CN112422828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011290753.2A CN112422828B (en) 2020-11-17 2020-11-17 Image processing method, image processing apparatus, electronic device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011290753.2A CN112422828B (en) 2020-11-17 2020-11-17 Image processing method, image processing apparatus, electronic device, and readable storage medium

Publications (2)

Publication Number Publication Date
CN112422828A CN112422828A (en) 2021-02-26
CN112422828B (en) 2023-04-28

Family

ID=74831993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011290753.2A Active CN112422828B (en) 2020-11-17 2020-11-17 Image processing method, image processing apparatus, electronic device, and readable storage medium

Country Status (1)

Country Link
CN (1) CN112422828B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114071016B (en) * 2021-11-11 2023-10-27 维沃移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN114143448B (en) * 2021-11-17 2024-04-19 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102090105B1 (en) * 2013-07-16 2020-03-17 삼성전자 주식회사 Apparatus and method for processing an image having a camera device
CN105959561A (en) * 2016-06-13 2016-09-21 Tcl集团股份有限公司 Photographing method and system
CN106453853A (en) * 2016-09-22 2017-02-22 深圳市金立通信设备有限公司 Photographing method and terminal
CN107135351B (en) * 2017-04-01 2021-11-16 宇龙计算机通信科技(深圳)有限公司 Photographing method and photographing device
CN107844765A (en) * 2017-10-31 2018-03-27 广东欧珀移动通信有限公司 Photographic method, device, terminal and storage medium
CN111263071B (en) * 2020-02-26 2021-12-10 维沃移动通信有限公司 Shooting method and electronic equipment

Also Published As

Publication number Publication date
CN112422828A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
US10880495B2 (en) Video recording method and apparatus, electronic device and readable storage medium
CN109361865B (en) Shooting method and terminal
CN107613191B (en) Photographing method, photographing equipment and computer readable storage medium
CN112449120B (en) High dynamic range video generation method and device
CN107786827B (en) Video shooting method, video playing method and device and mobile terminal
CN108605085B (en) Method for acquiring shooting reference data and mobile terminal
CN107959795B (en) Information acquisition method, information acquisition equipment and computer readable storage medium
CN111263071B (en) Shooting method and electronic equipment
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN111416940A (en) Shooting parameter processing method and electronic equipment
CN112422828B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN107730460B (en) Image processing method and mobile terminal
CN111885307B (en) Depth-of-field shooting method and device and computer readable storage medium
CN109922294B (en) Video processing method and mobile terminal
CN109120853B (en) Long exposure image shooting method and terminal
CN111447371A (en) Automatic exposure control method, terminal and computer readable storage medium
CN111601063B (en) Video processing method and electronic equipment
CN110602387B (en) Shooting method and electronic equipment
CN112511741A (en) Image processing method, mobile terminal and computer storage medium
CN110933293A (en) Shooting method, terminal and computer readable storage medium
CN108243489B (en) Photographing control method and mobile terminal
CN110855897B (en) Image shooting method and device, electronic equipment and storage medium
CN110163036B (en) Image recognition method and device
CN108449560B (en) Video recording method and terminal
CN112532904B (en) Video processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant