CN109167917B - Image processing method and terminal equipment - Google Patents

Image processing method and terminal equipment

Info

Publication number
CN109167917B
CN109167917B (granted publication; application CN201811152759.6A)
Authority
CN
China
Prior art keywords
image
images
motion
sequence
motion mask
Prior art date
Legal status
Active
Application number
CN201811152759.6A
Other languages
Chinese (zh)
Other versions
CN109167917A (en)
Inventor
杨威
寇飞
任鹏道
Current Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN201811152759.6A
Publication of CN109167917A
Application granted
Publication of CN109167917B
Legal status: Active

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04N: Pictorial communication, e.g. television
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time (under H04N 23/70, circuitry for compensating brightness variation in the scene)
    • H04N 23/80: Camera processing pipelines; components thereof

Abstract

The embodiments of the invention disclose an image processing method and a terminal device, applied to the field of communication technology, which can solve the problem of poor HDR image quality. The method comprises the following steps: obtaining a first image frame sequence and a second image frame sequence by controlling a first camera and a second camera to expose simultaneously; acquiring a first motion mask sequence from the first image frame sequence and a second motion mask sequence from the second image frame sequence; processing N-1 first images using the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images, where the N-1 first images are the images in the N first images other than a first reference image; and fusing the first reference image with the N-1 first target images to obtain a second target image. The method is applied to image processing scenarios.

Description

Image processing method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image processing method and terminal equipment.
Background
With the popularization of terminal technology, terminal devices are used ever more widely. For example, more and more users choose to take photos with a terminal device.
Generally, when a terminal device shoots in a high-dynamic-range scene, high dynamic range imaging (HDR) technology may be used to capture an image with a high dynamic range. Specifically, when a photo is taken using the HDR technique, an image sequence containing multiple frames with different exposure parameters may be captured, and a high-dynamic-range image is then fused from several frames of that sequence. During fusion, if a moving object exists in the scene, a motion ghost (ghosting) artifact may appear in the fused image. Therefore, before the frames are fused, motion ghosts need to be eliminated through motion estimation and motion compensation. Specifically, one frame of the image sequence may be selected as a reference image; moving objects are determined through motion estimation across the frames; the moving objects in the remaining frames (i.e., the frames in the sequence other than the reference image) are then aligned with the moving objects in the reference image through motion compensation; and finally the motion-compensated remaining frames are fused with the reference image, so that the resulting HDR image avoids the motion ghost phenomenon.
In this method of removing motion ghosts, when the reference image and the remaining frames are used for motion estimation, some regions of the reference image may carry no information because they are too bright or too dark, while the corresponding regions of the remaining frames do carry information. During motion estimation, those information-free regions of the reference image may be mistakenly identified as regions containing moving objects, so after motion compensation is applied to the remaining frames, part of their information is lost. As a result, the HDR image obtained by fusing the motion-compensated remaining frames with the reference image is of poor quality.
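To make the prior-art flow concrete, the following is a minimal illustrative sketch, not taken from the patent: motion estimation is reduced to a thresholded difference of exposure-normalized frames, and motion compensation to copying reference pixels into flagged regions. The function names, the normalization, and the threshold are all assumptions.

```python
import numpy as np

def estimate_motion_mask(reference, frame, threshold=0.1):
    """Toy motion estimation: threshold the absolute difference of
    exposure-normalized (mean-brightness-scaled) frames."""
    ref = reference.astype(np.float32) / max(float(reference.mean()), 1e-6)
    cur = frame.astype(np.float32) / max(float(frame.mean()), 1e-6)
    return (np.abs(ref - cur) > threshold).astype(np.uint8)

def deghost_single_sequence(frames, ref_index=0):
    """Prior-art flow: one mask per remaining frame, derived only from
    the differently exposed sequence itself."""
    reference = frames[ref_index]
    compensated = []
    for i, frame in enumerate(frames):
        if i == ref_index:
            continue
        flagged = estimate_motion_mask(reference, frame).astype(bool)
        aligned = frame.copy()
        # Simplified motion compensation: overwrite flagged pixels with
        # reference pixels. Over- or under-exposed reference regions are
        # falsely flagged, so valid detail in `frame` is discarded here.
        aligned[flagged] = reference[flagged]
        compensated.append(aligned)
    return reference, compensated  # inputs to the subsequent HDR fusion
```

The comment marks exactly the failure mode described above: because the mask is computed across frames with different exposures, saturated or black regions of the reference image are falsely flagged as motion.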
Disclosure of Invention
The embodiments of the invention provide an image processing method and a terminal device to solve the problem of poor HDR image quality in the prior art.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an image processing method is provided, applied to a terminal device having a first camera and a second camera, and includes: obtaining a first image frame sequence and a second image frame sequence by controlling the first camera and the second camera to expose simultaneously; acquiring a first motion mask sequence from the first image frame sequence and a second motion mask sequence from the second image frame sequence; processing N-1 first images using the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images, where the N-1 first images are the images in the N first images other than a first reference image; and fusing the first reference image with the N-1 first target images to obtain a second target image. The first image frame sequence is captured by the first camera and includes N first images with different exposure parameters; the second image frame sequence is captured by the second camera and includes N second images with the same exposure parameters. Each first image corresponds to one second image, each first image and its corresponding second image are captured at the same moment in the same scene, and N is an integer greater than or equal to 2.
In a second aspect, a terminal device is provided, which includes an acquisition module, a processing module, and a fusion module. The acquisition module is configured to obtain a first image frame sequence and a second image frame sequence by controlling a first camera and a second camera to expose simultaneously, and to acquire a first motion mask sequence from the first image frame sequence and a second motion mask sequence from the second image frame sequence. The processing module is configured to process N-1 first images using the first motion mask sequence and the second motion mask sequence acquired by the acquisition module to obtain N-1 first target images, where the N-1 first images are the images in the N first images other than a first reference image. The fusion module is configured to fuse the first reference image with the N-1 first target images to obtain a second target image. The first image frame sequence is captured by the first camera and includes N first images with different exposure parameters; the second image frame sequence is captured by the second camera and includes N second images with the same exposure parameters. Each first image corresponds to one second image, each first image and its corresponding second image are captured at the same moment in the same scene, and N is an integer greater than or equal to 2.
In a third aspect, a terminal device is provided, the terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the image processing method according to the first aspect.
In the embodiments of the present invention, a first image frame sequence and a second image frame sequence can be obtained by controlling a first camera and a second camera to expose simultaneously; a first motion mask sequence is acquired according to the first image frame sequence and a second motion mask sequence according to the second image frame sequence; N-1 first images are processed using the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images, where the N-1 first images are the images in the N first images other than a first reference image; and the first reference image is fused with the N-1 first target images to obtain a second target image. The first image frame sequence is captured by the first camera and includes N first images with different exposure parameters; the second image frame sequence is captured by the second camera and includes N second images with the same exposure parameters; each first image corresponds to one second image, each first image and its corresponding second image are captured at the same moment in the same scene, and N is an integer greater than or equal to 2. With this scheme, in the process of performing motion estimation on the remaining frames (i.e., the first images in the first image frame sequence other than the first reference image) against the first reference image, the N-1 first images can be processed using both the first motion mask sequence and the second motion mask sequence to obtain the N-1 first target images, and the first reference image can be fused with the N-1 first target images to obtain the second target image. Compared with the prior art, in which only one motion mask sequence (i.e., the first motion mask sequence) is used to process the N-1 first images during motion compensation, this avoids losing part of the information in the motion-compensated remaining frames, so that fusing those frames with the first reference image yields a higher-quality HDR image.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
fig. 2 is a first schematic diagram illustrating an image processing method according to an embodiment of the present invention;
fig. 3 is a second schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 4 is a third schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an image processing system according to an embodiment of the present invention;
fig. 7 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first motion mask sequence and the second motion mask sequence, etc. are used to distinguish between different motion mask sequences, rather than to describe a particular order of motion mask sequences.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or advantageous over other embodiments or designs. Rather, the words "exemplary" and "for example" are intended to present related concepts in a concrete fashion.
The terminal device in the embodiments of the present invention may be a terminal device having an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system; the embodiments of the present invention are not specifically limited in this respect.
The following describes a software environment to which an image processing method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image processing method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the image processing method provided by the embodiment of the invention by running the software program in the android operating system.
The embodiments of the present invention provide an image processing method and a terminal device. A first image frame sequence and a second image frame sequence can be obtained by controlling a first camera and a second camera to expose simultaneously; a first motion mask sequence is acquired according to the first image frame sequence and a second motion mask sequence according to the second image frame sequence; N-1 first images are processed using the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images, where the N-1 first images are the images in the N first images other than a first reference image; and the first reference image is fused with the N-1 first target images to obtain a second target image. The first image frame sequence is captured by the first camera and includes N first images with different exposure parameters; the second image frame sequence is captured by the second camera and includes N second images with the same exposure parameters; each first image corresponds to one second image, each first image and its corresponding second image are captured at the same moment in the same scene, and N is an integer greater than or equal to 2. With this scheme, in the process of performing motion estimation on the remaining frames (i.e., the first images in the first image frame sequence other than the first reference image) against the first reference image, the N-1 first images can be processed using both the first motion mask sequence and the second motion mask sequence to obtain the N-1 first target images, and the first reference image can be fused with the N-1 first target images to obtain the second target image. Compared with the prior art, in which only one motion mask sequence (i.e., the first motion mask sequence) is used to process the N-1 first images during motion compensation, this avoids losing part of the information in the motion-compensated remaining frames, so that fusing those frames with the first reference image yields a higher-quality HDR image.
The terminal device in the embodiment of the invention can be a mobile terminal device and can also be a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
The execution subject of the image processing method provided in the embodiment of the present invention may be the terminal device (including a mobile terminal device and a non-mobile terminal device), or may also be a functional module and/or a functional entity capable of implementing the image processing method in the terminal device, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited. The following takes a terminal device as an example to exemplarily explain an image processing method provided by the embodiment of the present invention.
As shown in fig. 2, the image processing method provided by the embodiment of the present invention is applied to a terminal device having a first camera and a second camera, and includes the following steps S10-S13.
S10. The terminal device controls the first camera and the second camera to expose simultaneously to obtain a first image frame sequence and a second image frame sequence.
S11. The terminal device acquires a first motion mask sequence according to the first image frame sequence and acquires a second motion mask sequence according to the second image frame sequence.
The first image frame sequence is shot by a first camera and comprises N first images with different exposure parameters; the second image frame sequence is captured by a second camera and comprises N second images with the same exposure parameters. Each first image corresponds to one second image, each first image and the second image corresponding to one first image are images shot at the same time in the same scene, and N is an integer greater than or equal to 2.
Optionally, the method for acquiring the first image frame sequence and the second image frame sequence by the terminal device may include the following steps a to C.
A. The terminal device determines a dynamic range of a scene in which an image is captured.
Optionally, the terminal device may determine the dynamic range of the current scene from information in the preview image during the shooting preview stage, and then determine, according to that dynamic range, the number of frames with different exposure parameters to capture for the first image frame sequence and the exposure parameter for each frame. The number of frames of the second image in the second image frame sequence is the same as the number of frames of the first image in the first image frame sequence.
B. The terminal device determines the target frame number to be n+1 according to the dynamic range of the scene.
Here, the target frame number is the number of first images to capture for the first image frame sequence (i.e., N = n+1).
C. The terminal device captures n+1 first images with different exposure parameters as the first image frame sequence, and n+1 second images with the same exposure parameters as the second image frame sequence.
Optionally, the terminal device may include a first camera and a second camera, where the first camera may be used to capture the n+1 first images with different exposure parameters in the first image frame sequence, and the second camera may be used to capture the n+1 second images with the same exposure parameters in the second image frame sequence.
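The patent does not give a formula for mapping the dynamic range to the target frame number; the sketch below is one plausible heuristic that estimates the range in stops from the preview's luminance percentiles. The quantile bounds, the stops-per-frame value, and all names are assumptions.

```python
import numpy as np

def estimate_dynamic_range_stops(preview_gray, low_q=0.01, high_q=0.99):
    """Estimate scene dynamic range (in stops) from a grayscale preview
    frame, using robust percentiles instead of raw min/max."""
    lum = preview_gray.astype(np.float32)
    lo = max(float(np.quantile(lum, low_q)), 1.0)
    hi = max(float(np.quantile(lum, high_q)), lo * 2.0)
    return float(np.log2(hi / lo))

def target_frame_count(dynamic_range_stops, stops_per_frame=2.0):
    """Map the estimated dynamic range to n+1 frames, assuming each
    additional bracketed exposure covers roughly stops_per_frame stops."""
    n = int(np.ceil(dynamic_range_stops / stops_per_frame))
    return max(n, 1) + 1  # N = n + 1 >= 2, matching the text above
```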
Optionally, the first motion mask sequence includes N-1 first motion masks, and the second motion mask sequence includes N-1 second motion masks. S11 may specifically include: for each of the N-1 first images and each of the N-1 second images, performing the following steps (1) and (2), respectively, to obtain the N-1 first motion masks and the N-1 second motion masks:
(1) Perform motion estimation on the first reference image and one of the N-1 first images to obtain one first motion mask.
(2) Perform motion estimation on the second reference image and one of the N-1 second images to obtain one second motion mask, where the N-1 second images are the images in the N second images other than the second reference image.
For example, assume N is 3, with the first image frame sequence containing 3 first images (denoted first image a, first image b, and first image c) and the second image frame sequence containing 3 second images (denoted second image a, second image b, and second image c), where first image a corresponds to second image a, first image b corresponds to second image b, and first image c corresponds to second image c. Taking first image a as the first reference image, a first motion mask a can be obtained by performing motion estimation on first image a and first image b, and a first motion mask b by performing motion estimation on first image a and first image c; first motion mask a and first motion mask b constitute the first motion mask sequence. Similarly, taking second image a as the second reference image, a second motion mask a can be obtained by performing motion estimation on second image a and second image b, and a second motion mask b by performing motion estimation on second image a and second image c; second motion mask a and second motion mask b constitute the second motion mask sequence.
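Under the same toy motion-estimation assumption as in the background sketch, steps (1) and (2) can be looped over both sequences as follows; all names are illustrative.

```python
import numpy as np

def estimate_motion_mask(reference, frame, threshold=0.1):
    """Thresholded difference of exposure-normalized frames, a toy
    stand-in for the motion estimation the patent leaves unspecified."""
    ref = reference.astype(np.float32) / max(float(reference.mean()), 1e-6)
    cur = frame.astype(np.float32) / max(float(frame.mean()), 1e-6)
    return (np.abs(ref - cur) > threshold).astype(np.uint8)

def build_mask_sequences(first_seq, second_seq, ref_index=0):
    """Steps (1) and (2): one first motion mask and one second motion
    mask per non-reference index, giving N-1 masks per sequence."""
    first_ref = first_seq[ref_index]    # first reference image
    second_ref = second_seq[ref_index]  # corresponding second reference image
    first_masks, second_masks = [], []
    for i, (img1, img2) in enumerate(zip(first_seq, second_seq)):
        if i == ref_index:
            continue
        first_masks.append(estimate_motion_mask(first_ref, img1))    # step (1)
        second_masks.append(estimate_motion_mask(second_ref, img2))  # step (2)
    return first_masks, second_masks
```

Because the second sequence is captured with constant exposure parameters, its masks respond to actual motion only, whereas the first sequence's masks also respond to exposure differences; this asymmetry is what the processing in S12 exploits.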
S12. The terminal device processes the N-1 first images using the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images.
Here, the N-1 first images are the images in the N first images other than the first reference image.
For example, continuing the example above, the first motion mask sequence includes first motion mask a and first motion mask b, the second motion mask sequence includes second motion mask a and second motion mask b, and the N-1 first images are first image b and first image c. S12 may specifically be: process first image b using first motion mask a and second motion mask a to obtain first target image a, and process first image c using first motion mask b and second motion mask b to obtain first target image b, thereby obtaining 2 first target images (i.e., first target image a and first target image b).
S13. The terminal device fuses the first reference image with the N-1 first target images to obtain a second target image.
For example, S13 may specifically be: fuse first image a, first target image a, and first target image b from the above example to obtain the second target image.
Optionally, in the embodiments of the present invention, the images may be fused in a pyramid manner, which ensures a good tone transition in the fused image. It should be understood that other fusion approaches may also be adopted; the embodiments of the present invention do not limit the specific fusion method.
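The patent does not name a specific pyramid method; the sketch below assumes a plain Laplacian pyramid blend with uniform weights (a real exposure fusion would weight pixels by exposure quality). The level count, the weighting, and the names are assumptions; OpenCV's pyrDown/pyrUp are used.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Decompose an image into `levels` band-pass levels plus a
    low-resolution residual."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    bands = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
             for i in range(levels)]
    bands.append(gauss[-1])
    return bands

def pyramid_fuse(images):
    """Fuse images by averaging their pyramids level by level, then
    collapsing; uniform averaging stands in for real exposure weights."""
    pyramids = [laplacian_pyramid(img) for img in images]
    fused = [np.mean(level, axis=0) for level in zip(*pyramids)]
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=band.shape[1::-1]) + band
    return np.clip(out, 0, 255).astype(np.uint8)
```

Under these assumptions, S13 would correspond to something like second_target = pyramid_fuse([first_reference] + first_targets).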
The embodiment of the present invention provides an image processing method. A first image frame sequence and a second image frame sequence can be obtained by controlling a first camera and a second camera to expose simultaneously; a first motion mask sequence is acquired according to the first image frame sequence and a second motion mask sequence according to the second image frame sequence; N-1 first images are processed using the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images, where the N-1 first images are the images in the N first images other than a first reference image; and the first reference image is fused with the N-1 first target images to obtain a second target image. The first image frame sequence is captured by the first camera and includes N first images with different exposure parameters; the second image frame sequence is captured by the second camera and includes N second images with the same exposure parameters; each first image corresponds to one second image, each first image and its corresponding second image are captured at the same moment in the same scene, and N is an integer greater than or equal to 2. With this scheme, in the process of performing motion estimation on the remaining frames (i.e., the first images in the first image frame sequence other than the first reference image) against the first reference image, the N-1 first images can be processed using both the first motion mask sequence and the second motion mask sequence to obtain the N-1 first target images, and the first reference image can be fused with the N-1 first target images to obtain the second target image. Compared with the prior art, in which only one motion mask sequence (i.e., the first motion mask sequence) is used to process the N-1 first images during motion compensation, this avoids losing part of the information in the motion-compensated remaining frames, so that fusing those frames with the first reference image yields a higher-quality HDR image.
Alternatively, in conjunction with fig. 2, as shown in fig. 3, the above S12 may be replaced with S12a and S12b described below.
S12a. The terminal device processes the N-1 first images using the first motion mask sequence to obtain N-1 third images.
S12b. The terminal device processes the N-1 third images using the second motion mask sequence to obtain the N-1 first target images.
Optionally, in the embodiments of the present invention, processing an image with a motion mask sequence means that motion compensation may be performed on the image according to that motion mask sequence.
Optionally, one first motion mask is used to process one first image, the first motion mask having been obtained by performing motion estimation on that first image against the first reference image; one second motion mask is used to process one third image, the second motion mask having been obtained by performing motion estimation on one second image against the second reference image, where that second image is the one corresponding to the first image processed by the first motion mask.
For example, continuing the example above, the first motion mask sequence includes first motion mask a and first motion mask b, the N-1 first images are first image b and first image c, and the second motion mask sequence includes second motion mask a and second motion mask b. S12a may specifically be: the terminal device processes first image b with first motion mask a to obtain third image a, and processes first image c with first motion mask b to obtain third image b. S12b may then specifically be: the terminal device processes third image a with second motion mask a to obtain first target image a, and processes third image b with second motion mask b to obtain first target image b, thereby obtaining 2 first target images (i.e., first target image a and first target image b).
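A minimal sketch of S12a and S12b under the same illustrative assumptions as above, with masks as binary arrays and "processing" reduced to copy-from-reference motion compensation; the helper names are assumptions, not the patent's terminology.

```python
import numpy as np

def motion_compensate(image, reference, mask):
    """Toy motion compensation: replace pixels flagged by the binary
    mask with the corresponding reference pixels."""
    out = image.copy()
    flagged = mask.astype(bool)
    out[flagged] = reference[flagged]
    return out

def process_sequentially(first_ref, remaining_first, first_masks, second_masks):
    """S12a: the first masks turn the N-1 remaining first images into
    third images; S12b: the corresponding second masks turn those third
    images into the N-1 first target images."""
    first_targets = []
    for image, mask1, mask2 in zip(remaining_first, first_masks, second_masks):
        third = motion_compensate(image, first_ref, mask1)                 # S12a
        first_targets.append(motion_compensate(third, first_ref, mask2))   # S12b
    return first_targets
```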
Alternatively, in conjunction with fig. 2, as shown in fig. 4, the above S12 may be replaced with S12c and S12d described below.
S12c. The terminal device corrects the first motion mask sequence using the second motion mask sequence to obtain a third motion mask sequence.
S12d. The terminal device processes the N-1 first images using the third motion mask sequence to obtain the N-1 first target images.
Optionally, the first motion mask sequence includes N-1 first motion masks, the second motion mask sequence includes N-1 second motion masks, and the third motion mask sequence includes N-1 third motion masks.
Each of the N-1 third motion masks may be obtained by the following step (3):
(3) Correct one first motion mask of the N-1 first motion masks using one second motion mask of the N-1 second motion masks to obtain one third motion mask.
Here, the one second motion mask corresponds to one second image, the one first motion mask corresponds to one first image, and that first image and that second image are images captured at the same moment in the same scene.
In the embodiments of the present invention, a third motion mask obtained as described above may be used to process one first image.
Optionally, the terminal device may process one first image of the N-1 first images using one third motion mask, where that third motion mask is obtained by correcting the first motion mask that was derived from that first image.
For example, in conjunction with the examples in S12a and S12b above, the first motion mask sequence includes first motion mask a and first motion mask b, the second motion mask sequence includes second motion mask a and second motion mask b, and the N-1 first images are first image b and first image c. S12c may specifically be: the terminal device corrects first motion mask a using second motion mask a to obtain third motion mask a, and corrects first motion mask b using second motion mask b to obtain third motion mask b. S12d may then specifically be: the terminal device processes first image b with third motion mask a to obtain first target image a, and processes first image c with third motion mask b to obtain first target image b, thereby obtaining 2 first target images (i.e., first target image a and first target image b).
Optionally, step (3) may be specifically implemented as follows: correct the values of a first target region in the one first motion mask to be the same as the values of a second target region in the one second motion mask.
Here, the first target region is the region in the first motion mask corresponding to a target brightness region of the first reference image, and the second target region is the region in the second motion mask corresponding to that same target brightness region; the target brightness region is a region whose brightness values are greater than a first brightness threshold, or a region whose brightness values are less than a second brightness threshold.
In the embodiments of the present invention, correcting the values of the first target region in a first motion mask to match the values of the second target region in the corresponding second motion mask means that processing the first image according to the corrected mask (i.e., the third motion mask) avoids losing part of the information in the first image, which improves the quality of the HDR image obtained after fusion.
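An illustrative sketch of this correction, assuming 8-bit luminance and example values for the first and second brightness thresholds, both of which the patent leaves unspecified:

```python
import numpy as np

def correct_first_motion_mask(first_mask, second_mask, first_reference,
                              first_brightness_threshold=240,
                              second_brightness_threshold=15):
    """Step (3): inside the target brightness region of the first
    reference image (too bright or too dark to carry information),
    overwrite the first mask's values with the second mask's values."""
    ref = first_reference.astype(np.float32)
    luminance = ref if ref.ndim == 2 else ref.mean(axis=2)
    target_region = ((luminance > first_brightness_threshold) |
                     (luminance < second_brightness_threshold))
    third_mask = first_mask.copy()
    third_mask[target_region] = second_mask[target_region]
    return third_mask  # one third motion mask, applied to a first image in S12d
```

Because the second mask comes from the constant-exposure sequence, it remains reliable where the first reference image is saturated or black, so overwriting the first mask there suppresses the spurious motion detections described in the background.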
As shown in fig. 5, an embodiment of the present invention provides a terminal device 130, where the terminal device 130 includes an obtaining module 131, a processing module 132, and a fusing module 133.
The acquiring module 131 is configured to obtain a first image frame sequence and a second image frame sequence by controlling the first camera and the second camera to expose simultaneously, where the first image frame sequence is captured by the first camera and includes N first images with different exposure parameters, and the second image frame sequence is captured by the second camera and includes N second images with the same exposure parameters; each first image corresponds to one second image, each first image and its corresponding second image are captured at the same moment in the same scene, and N is an integer greater than or equal to 2. The acquiring module 131 is further configured to acquire a first motion mask sequence from the first image frame sequence and a second motion mask sequence from the second image frame sequence.
The processing module 132 is configured to process the N-1 first images by using the first motion mask sequence and the second motion mask sequence acquired by the acquiring module 131 to obtain N-1 first target images, where the N-1 first images are images of the N first images except the first reference image.
The fusion module 133 is configured to fuse the first reference image and the N-1 first target images to obtain a second target image.
Optionally, the processing module 132 is specifically configured to process the N-1 first images by using a first motion mask sequence to obtain N-1 third images; and processing the N-1 third images by adopting a second motion mask sequence to obtain N-1 first target images.
Optionally, the processing module 132 is specifically configured to correct the first motion mask sequence by using the second motion mask sequence to obtain a third motion mask sequence; and processing the N-1 first images by adopting a third motion mask sequence to obtain N-1 first target images.
Optionally, the first motion mask sequence includes N-1 first motion masks, and the second motion mask sequence includes N-1 second motion masks.
The obtaining module 131 is specifically configured to perform the following steps for each of the N-1 first images and each of the N-1 second images, respectively, to obtain N-1 first motion masks and the N-1 second motion masks:
and performing motion estimation on the first reference image and one of the N-1 first images to obtain a first motion mask.
And performing motion estimation on the second reference image and one of the N-1 second images to obtain a second motion mask, wherein the N-1 second images are images except the second reference image in the N second images.
Optionally, a first motion mask is used to process a first image; a second motion mask is used to process a second image.
Optionally, the first motion mask sequence includes N-1 first motion masks, the second motion mask sequence includes N-1 second motion masks, and the third motion mask sequence includes N-1 third motion masks.
The processing module 132 is specifically configured to obtain each of the N-1 third motion masks by:
and correcting one first motion mask in the N-1 first motion masks by adopting one second motion mask in the N-1 second motion masks to obtain a third motion mask, wherein one second motion mask corresponds to one second image, one first motion mask corresponds to one first image, and one first image and one second image are images shot at the same moment in the same scene.
Wherein a third motion mask is used to process a first image.
The terminal device provided by the embodiment of the present invention can implement each process shown in the above method embodiments, and is not described herein again to avoid repetition.
The embodiment of the present invention provides a terminal device. A first image frame sequence and a second image frame sequence can be obtained by controlling a first camera and a second camera to expose simultaneously; a first motion mask sequence is acquired according to the first image frame sequence and a second motion mask sequence according to the second image frame sequence; N-1 first images are processed using the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images, where the N-1 first images are the images in the N first images other than a first reference image; and the first reference image is fused with the N-1 first target images to obtain a second target image. The first image frame sequence is captured by the first camera and includes N first images with different exposure parameters; the second image frame sequence is captured by the second camera and includes N second images with the same exposure parameters; each first image corresponds to one second image, each first image and its corresponding second image are captured at the same moment in the same scene, and N is an integer greater than or equal to 2. With this scheme, in the process of performing motion estimation on the remaining frames (i.e., the first images in the first image frame sequence other than the first reference image) against the first reference image, the N-1 first images can be processed using both the first motion mask sequence and the second motion mask sequence to obtain the N-1 first target images, and the first reference image can be fused with the N-1 first target images to obtain the second target image. Compared with the prior art, in which only one motion mask sequence (i.e., the first motion mask sequence) is used to process the N-1 first images during motion compensation, this avoids losing part of the information in the motion-compensated remaining frames, so that fusing those frames with the first reference image yields a higher-quality HDR image.
As shown in fig. 6, an embodiment of the present invention provides an image processing system, which may be disposed inside a terminal device; the image processing method provided by the embodiments of the present invention may specifically be implemented by the image processing system shown in fig. 6. The image processing system comprises a camera 1, a camera 2, an image signal processing unit, an exposure controller, a synchronous trigger, a display unit, and a main controller. Camera 1 and camera 2 are used to capture images. The image signal processing unit is used to process the signals output by camera 1 and camera 2 and to output the preview image before shooting and the captured images (such as the images in the first image frame sequence and the second image frame sequence). The exposure controller is used to calculate the exposure parameters of the first image frame sequence and the second image frame sequence and to control exposure. The synchronous trigger is used to trigger camera 1 and camera 2 to shoot synchronously. The display unit is used to display the preview image before shooting and the captured image. The main controller may be used to control and execute the flow of the image processing method provided by the embodiments of the present invention.
Fig. 7 is a hardware schematic diagram of a terminal device for implementing various embodiments of the present invention, where the terminal device 100 includes but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 7 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to obtain a first image frame sequence and a second image frame sequence by controlling the first camera and the second camera to be exposed simultaneously, obtain a first motion mask sequence according to the first image frame sequence, obtain a second motion mask sequence according to the second image frame sequence, process the N-1 first images by using the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images, where the N-1 first images are images of the N first images except for the first reference image, and fuse the first reference image and the N-1 first target images to obtain a second target image. The first image frame sequence is shot by a first camera and comprises N first images with different exposure parameters; the second image frame sequence is shot by a second camera and comprises N second images with the same exposure parameters; each first image corresponds to one second image, each first image and the second image corresponding to one first image are images shot at the same time in the same scene, and N is an integer greater than or equal to 2.
The embodiment of the present invention provides a terminal device. A first image frame sequence and a second image frame sequence can be obtained by controlling a first camera and a second camera to expose simultaneously; a first motion mask sequence is acquired according to the first image frame sequence and a second motion mask sequence according to the second image frame sequence; N-1 first images are processed using the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images, where the N-1 first images are the images in the N first images other than a first reference image; and the first reference image is fused with the N-1 first target images to obtain a second target image. The first image frame sequence is captured by the first camera and includes N first images with different exposure parameters; the second image frame sequence is captured by the second camera and includes N second images with the same exposure parameters; each first image corresponds to one second image, each first image and its corresponding second image are captured at the same moment in the same scene, and N is an integer greater than or equal to 2. With this scheme, in the process of performing motion estimation on the remaining frames (i.e., the first images in the first image frame sequence other than the first reference image) against the first reference image, the N-1 first images can be processed using both the first motion mask sequence and the second motion mask sequence to obtain the N-1 first target images, and the first reference image can be fused with the N-1 first target images to obtain the second target image. Compared with the prior art, in which only one motion mask sequence (i.e., the first motion mask sequence) is used to process the N-1 first images during motion compensation, this avoids losing part of the information in the motion-compensated remaining frames, so that fusing those frames with the first reference image yields a higher-quality HDR image.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 7, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
The embodiment of the present invention further provides a terminal device, where the terminal device may include a processor, a memory, and a computer program stored in the memory and capable of running on the processor, and when the computer program is executed by the processor, each process executed by the terminal device in the foregoing method embodiments may be implemented, and the same technical effect may be achieved, and details are not repeated here to avoid repetition.
The computer-readable storage medium according to an embodiment of the present invention is characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process executed by the terminal device in the foregoing method embodiments, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, it is not limited to these embodiments, which are illustrative rather than restrictive. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. An image processing method applied to a terminal device with a first camera and a second camera is characterized by comprising the following steps:
obtaining a first image frame sequence and a second image frame sequence by controlling the first camera and the second camera to be exposed simultaneously, wherein the first image frame sequence is shot by the first camera and comprises N first images with different exposure parameters; the second image frame sequence is shot by the second camera and comprises N second images with the same exposure parameters; each first image corresponds to one second image, each first image and its corresponding second image are images shot at the same moment in the same scene, and N is an integer greater than or equal to 2; acquiring a first motion mask sequence according to the first image frame sequence, and acquiring a second motion mask sequence according to the second image frame sequence;
processing N-1 first images by adopting the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images, wherein the N-1 first images are images except for a first reference image in the N first images;
fusing the first reference image and the N-1 first target images to obtain a second target image;
wherein the processing the N-1 first images by using the first motion mask sequence and the second motion mask sequence to obtain N-1 first target images includes:
processing the N-1 first images by adopting the first motion mask sequence to obtain N-1 third images;
and processing the N-1 third images by adopting the second motion mask sequence to obtain the N-1 first target images.
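The two-stage masking recited in claim 1 can be sketched in a few lines of Python. This is a minimal sketch, not the claimed implementation: it assumes the motion masks are per-pixel arrays in [0, 1] that broadcast against the images (0 marking moving pixels, 1 marking static ones), and every function and variable name here is hypothetical.

```python
import numpy as np

def two_stage_masking(first_images, first_masks, second_masks, ref_idx=0):
    """Gate each non-reference first image with its first motion mask,
    then with the corresponding second motion mask; pixels flagged as
    moving are replaced by the reference image to suppress ghosting."""
    ref = first_images[ref_idx].astype(np.float32)
    non_ref = [im.astype(np.float32)
               for i, im in enumerate(first_images) if i != ref_idx]
    targets = []
    for img, m1, m2 in zip(non_ref, first_masks, second_masks):
        third = m1 * img + (1.0 - m1) * ref     # the "third image" of claim 1
        first_target = m2 * third + (1.0 - m2) * ref
        targets.append(first_target)
    return ref, targets

def fuse(ref, targets):
    """Stand-in fusion: a plain average; a real HDR pipeline would use
    exposure-weighted fusion of the reference and target images."""
    return np.mean([ref] + targets, axis=0)
```

Replacing moving pixels with reference-image content is only one plausible way to "process" an image with a mask; the claim text leaves the exact operation open.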
2. The method of claim 1, wherein processing the N-1 first images using the first and second motion mask sequences to obtain N-1 first target images comprises:
correcting the first motion mask sequence by adopting the second motion mask sequence to obtain a third motion mask sequence;
and processing the N-1 first images by adopting the third motion mask sequence to obtain the N-1 first target images.
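The correction in claim 2 is likewise not pinned down. One plausible reading, sketched here as an assumption rather than the patented method, is an elementwise intersection, so a pixel keeps its static label only when both masks agree; the second sequence is shot at constant exposure, so its masks are less likely to mistake brightness changes for motion.

```python
import numpy as np

def correct_masks(first_masks, second_masks):
    """Hypothetical correction: elementwise minimum, i.e. a pixel stays
    static (mask value 1) only if both the first and the second motion
    mask label it static."""
    return [np.minimum(m1, m2) for m1, m2 in zip(first_masks, second_masks)]
```

The resulting third motion mask sequence would then gate the N-1 first images in a single pass, rather than the two passes of claim 1.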
3. The method of claim 1, wherein the first sequence of motion masks comprises N-1 first motion masks and the second sequence of motion masks comprises N-1 second motion masks;
acquiring a first motion mask sequence and a second motion mask sequence, comprising:
for each of the N-1 first images and each of the N-1 second images, respectively, performing the following steps to obtain the N-1 first motion masks and the N-1 second motion masks:
performing motion estimation on the first reference image and one of the N-1 first images to obtain a first motion mask;
and performing motion estimation on the second reference image and one of the N-1 second images to obtain a second motion mask, wherein the N-1 second images are images except the second reference image in the N second images.
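Claim 3 leaves the motion estimator open. One common choice, offered only as a hedged example, is per-pixel differencing against the reference frame; for the first sequence, the differently exposed frames would first need brightness normalization so that exposure changes are not mistaken for motion.

```python
import numpy as np

def motion_mask(reference, image, threshold=0.1):
    """Illustrative motion estimation: flag a pixel as moving (mask value 0)
    when its absolute difference from the reference frame exceeds a
    threshold. Inputs are assumed exposure-normalized floats in [0, 1]."""
    diff = np.abs(reference.astype(np.float32) - image.astype(np.float32))
    if diff.ndim == 3:                  # reduce color images to one channel
        diff = diff.mean(axis=-1, keepdims=True)
    return (diff <= threshold).astype(np.float32)
```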
4. The method of claim 3, wherein the one first motion mask is used for processing the one first image, and the one second motion mask is used for processing the one third image.
5. The method of claim 2, wherein the first sequence of motion masks includes N-1 first motion masks, the second sequence of motion masks includes N-1 second motion masks, and the third sequence of motion masks includes N-1 third motion masks;
each of the N-1 third motion masks is obtained by:
correcting one first motion mask in the N-1 first motion masks by adopting one second motion mask in the N-1 second motion masks to obtain a third motion mask, wherein the one second motion mask corresponds to the one second image, the one first motion mask corresponds to the one first image, and the one first image and the one second image are images shot at the same time in the same scene;
wherein the one third motion mask is used for processing the one first image.
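Claim 5 fixes the pairing: the third mask at each index is derived from the first and second masks of frames captured at the same instant. Under the same illustrative assumptions as the sketches above (index 0 serving as both reference images, exposure-normalized frames, and the hypothetical helpers motion_mask, correct_masks, and fuse), the pipeline would wire together as follows:

```python
# first_images: N frames at varying exposure; second_images: N frames at
# constant exposure, captured simultaneously (assumed already loaded).
first_masks = [motion_mask(first_images[0], im) for im in first_images[1:]]
second_masks = [motion_mask(second_images[0], im) for im in second_images[1:]]
third_masks = correct_masks(first_masks, second_masks)
first_targets = [m * img + (1.0 - m) * first_images[0]
                 for m, img in zip(third_masks, first_images[1:])]
second_target = fuse(first_images[0], first_targets)  # final HDR result
```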
6. A terminal device, comprising an acquisition module, a processing module, and a fusion module;
the acquisition module is used for controlling the first camera and the second camera to be exposed simultaneously to obtain a first image frame sequence and a second image frame sequence, wherein the first image frame sequence is shot by the first camera and comprises N first images with different exposure parameters; the second image frame sequence is shot by the second camera and comprises N second images with the same exposure parameters; each first image corresponds to one second image, each first image and the second image corresponding to one first image are images shot at the same moment in the same scene, and N is an integer greater than or equal to 2; acquiring a first motion mask sequence according to the first image frame sequence, and acquiring a second motion mask sequence according to the second image frame sequence;
the processing module is configured to process N-1 first images by using the first motion mask sequence and the second motion mask sequence acquired by the acquisition module to obtain N-1 first target images, where the N-1 first images are images of the N first images except for the first reference image;
the fusion module is used for fusing the first reference image and the N-1 first target images to obtain a second target image;
the processing module is specifically configured to process the N-1 first images by using the first motion mask sequence to obtain N-1 third images; and processing the N-1 third images by adopting the second motion mask sequence to obtain the N-1 first target images.
7. The terminal device according to claim 6, wherein the processing module is specifically configured to modify the first motion mask sequence using the second motion mask sequence to obtain a third motion mask sequence; and processing the N-1 first images by adopting the third motion mask sequence to obtain the N-1 first target images.
8. A terminal device, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.
CN201811152759.6A 2018-09-29 2018-09-29 Image processing method and terminal equipment Active CN109167917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811152759.6A CN109167917B (en) 2018-09-29 2018-09-29 Image processing method and terminal equipment


Publications (2)

Publication Number Publication Date
CN109167917A (en) 2019-01-08
CN109167917B (en) 2020-10-20

Family

ID=64877197

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489320A (en) * 2019-01-29 2020-08-04 Huawei Technologies Co., Ltd. Image processing method and device
CN113052056A (en) * 2021-03-19 2021-06-29 Huawei Technologies Co., Ltd. Video processing method and device
CN113237491A (en) * 2021-04-22 2021-08-10 Beijing Aerospace Institute for Metrology and Measurement Technology Frequency characteristic testing device and method for a digital gyroscope

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105577983A (en) * 2014-10-30 2016-05-11 Hanwha Techwin Co., Ltd. Apparatus and method of detecting motion mask
CN107277387A (en) * 2017-07-26 2017-10-20 Vivo Mobile Communication Co., Ltd. High dynamic range image shooting method, terminal and computer-readable storage medium
CN107465882A (en) * 2017-09-22 2017-12-12 Vivo Mobile Communication Co., Ltd. Image shooting method and mobile terminal
CN107507160A (en) * 2017-08-22 2017-12-22 Nubia Technology Co., Ltd. Image fusion method, terminal and computer-readable storage medium
CN108012080A (en) * 2017-12-04 2018-05-08 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and device, electronic device and computer-readable storage medium
CN108419023A (en) * 2018-03-26 2018-08-17 Huawei Technologies Co., Ltd. Method and related device for generating high dynamic range images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10284761B2 (en) * 2016-11-17 2019-05-07 Motorola Mobility Llc Multi-camera capture of a high dynamic range image



Similar Documents

Publication Publication Date Title
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN108307109B (en) High dynamic range image preview method and terminal equipment
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
US20220279116A1 (en) Object tracking method and electronic device
CN109743498B (en) Shooting parameter adjusting method and terminal equipment
CN110719402B (en) Image processing method and terminal equipment
CN110096326B (en) Screen capturing method, terminal equipment and computer readable storage medium
CN110913131B (en) Moon shooting method and electronic equipment
CN110602401A (en) Photographing method and terminal
CN108234894B (en) Exposure adjusting method and terminal equipment
CN108449541B (en) Panoramic image shooting method and mobile terminal
CN107730460B (en) Image processing method and mobile terminal
CN111010511B (en) Panoramic body-separating image shooting method and electronic equipment
CN109005355B (en) Shooting method and mobile terminal
CN110769174B (en) Video viewing method and electronic equipment
CN111145192A (en) Image processing method and electronic device
CN109474784B (en) Preview image processing method and terminal equipment
CN109167917B (en) Image processing method and terminal equipment
CN110798621A (en) Image processing method and electronic equipment
CN112188082A (en) High dynamic range image shooting method, shooting device, terminal and storage medium
CN108174109B (en) Photographing method and mobile terminal
CN111083386B (en) Image processing method and electronic device
CN111147754B (en) Image processing method and electronic device
CN110913133B (en) Shooting method and electronic equipment
CN111182206B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant