CN111654623B - Photographing method and device and electronic equipment - Google Patents

Photographing method and device and electronic equipment

Info

Publication number: CN111654623B
Authority: CN (China)
Prior art keywords: image, target image, target, images, frames
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010478136.9A
Other languages: Chinese (zh)
Other versions: CN111654623A
Inventor: 魏经纬
Current assignee: Vivo Mobile Communication Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010478136.9A
Publication of CN111654623A (application) and CN111654623B (grant)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/71: Circuitry for evaluating the brightness variation
    • H04N23/80: Camera processing pipelines; Components thereof

Abstract

The application discloses a photographing method, a photographing apparatus, and an electronic device, belonging to the technical field of photographing. The method comprises the following steps: obtaining N frames of first images in response to a first input; aligning a second target image with a first target image by taking the first target image as a reference, wherein the first target image is one of the N frames of first images and the second target image is any of the remaining images; and fusing the first target image with the aligned second target image to obtain a photographed image. No additional photographing hardware needs to be operated, and the photos taken by the electronic device need not be fed into other software for post-processing: the user only performs the first input, and the electronic device fuses the first target image in the N frames of first images with the aligned second target image. The operation is therefore simple, and the shooting efficiency can be improved.

Description

Photographing method and device and electronic equipment
Technical Field
The application belongs to the technical field of photographing, and particularly relates to a photographing method and device and electronic equipment.
Background
At present, in order to capture a starry-sky photograph with excellent image quality and tone, an electronic device first needs to shoot multiple images with the help of dedicated hardware such as an equatorial mount, and the multiple images must then be imported into dedicated image-processing software for post-processing to obtain the final image. In other words, the shooting process requires many operation steps, is time-consuming, and has poor shooting efficiency.
Disclosure of Invention
The embodiments of the application aim to provide a photographing method, a photographing apparatus, and an electronic device, which can solve the problem that the existing process for shooting a high-quality starry-sky image is complex and therefore inefficient.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a photographing method, where the method includes:
receiving a first input of a user, and responding to the first input to obtain N frames of first images, wherein N is an integer greater than 1;
aligning a second target image with the first target image by taking the first target image as a reference, wherein the first target image is an image in the N frames of first images, and the second target image is an image except the first target image in the N frames of first images;
and fusing the first target image and the aligned second target image to obtain a photographed image.
In a second aspect, an embodiment of the present application provides a photographing apparatus, including:
the receiving module is used for receiving a first input of a user;
a frame image obtaining module, configured to obtain N frames of first images in response to the first input, where N is an integer greater than 1;
an alignment module, configured to align a second target image with a first target image by using the first target image as a reference, where the first target image is an image in the N frames of first images, and the second target image is another image except the first target image in the N frames of first images;
and a fusion processing module, configured to fuse the first target image with the aligned second target image to obtain a photographed image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, the first target image in the N frames of first images obtained by frame capture is taken as a reference: the second target image is rotated, the rotated second target image is aligned with the first target image, and the first target image and the aligned second target image are fused to obtain the photographed image, which improves the quality of the photographed image. Therefore, in the process of shooting a starry-sky image, no additional photographing hardware needs to be operated, and the photos taken by the electronic device need not be fed into other software for post-processing. The method of the embodiment only requires the user's first input; the electronic device aligns the second target image in the N frames of first images with the first target image and fuses the two to obtain the photographed image. The operation is simple, and the shooting efficiency can be improved.
Drawings
Fig. 1 is a flowchart of a photographing method provided in an embodiment of the present application;
fig. 2 is a second flowchart of a photographing method according to an embodiment of the present application;
fig. 3 is a schematic block diagram of a photographing apparatus according to an embodiment of the present application;
fig. 4 is a schematic diagram of an electronic device provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, such that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The photographing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an embodiment of the present application provides a photographing method, which is applicable to an electronic device, where the electronic device may be a mobile terminal, and the method includes:
step 101: receiving a first input of a user;
step 102: in response to the first input, performing frame-capture processing to obtain N frames of first images, where N is an integer greater than 1.
First, the electronic device starts the photographing mode: the camera of the electronic device is turned on and the device enters the photographing mode, after which the user can perform the first input to trigger shooting. After receiving the user's first input, the electronic device performs frame-capture processing in response to it, obtaining N frames of first images. As an example, the first input may be an input performed by the user in a starry-sky photographing mode and used to shoot in that mode. The electronic device provides multiple photographing modes, including but not limited to a portrait mode, a bokeh (background-blur) mode, a large-aperture mode, and a starry-sky mode; each mode has corresponding photographing parameter values, and those values may differ between modes. The photographing parameter values of the starry-sky mode may be understood as empirical values determined from the photographing parameters of historically captured starry-sky images. As an example, the first input may be a click input, such as clicking a photographing button, or it may be a voice input or a gesture input.
Step 103: and aligning the second target image with the first target image by taking the first target image as a reference.
The first target image is an image in the N frames of first images, and the second target image is other images except the first target image in the N frames of first images.
In order to improve the shooting quality, the second target image needs to be aligned with the first target image, taking the first target image among the N frames of first images as the reference. For example, the second target image may be rotated relative to the first target image so that the rotated second target image is aligned with it. As an example, the first target image may be the sharpest image among the N frames of first images, which helps ensure the quality of the fused photographed image.
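Selecting the sharpest of the N frames as the first target image can be sketched with a variance-of-Laplacian sharpness metric. This is an illustrative Python sketch, not the patent's implementation; the function names and the choice of metric are assumptions.

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness metric: variance of a discrete Laplacian response.

    A sharper image has stronger edge responses, hence a larger variance.
    `img` is a 2-D grayscale array.
    """
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def pick_reference(frames):
    """Return the index of the sharpest frame among the N frames."""
    return max(range(len(frames)), key=lambda i: laplacian_variance(frames[i]))
```

A blurrier frame has weaker edge responses, so its Laplacian variance is smaller; `pick_reference` simply returns the frame maximizing the metric.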
Step 104: and carrying out fusion processing on the first target image and the aligned second target image to obtain a photographed image.
After the second target image has been aligned with the first target image, taking the first target image as reference, the two can be fused to obtain the photographed image. This reduces the trailing artifacts that arise when the other images are misaligned with the first target image, and thus improves the quality of the fused photographed image.
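Once the frames agree on where each star sits, the fusion step can be as simple as mean stacking. A minimal sketch, assuming the frames are already aligned, equal-sized grayscale arrays (the function name is hypothetical):

```python
import numpy as np

def fuse_frames(reference, aligned_frames):
    """Fuse the reference frame with the aligned frames by mean stacking.

    Averaging N aligned exposures suppresses zero-mean sensor noise by
    roughly a factor of sqrt(N) while keeping static detail (the stars)
    sharp, because every frame now places them at the same pixels.
    """
    stack = np.stack([reference] + list(aligned_frames), axis=0)
    return stack.mean(axis=0)
```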
In the photographing method of the embodiment, the first target image in the N frames of first images is used as the reference, the second target image is aligned with it, and the two are fused to obtain the photographed image, improving its quality. Thus, even when shooting a starry-sky image, no extra photographing hardware needs to be operated and the photos taken by the electronic device need not be fed into other software for post-processing: the user only performs the first input, and the electronic device aligns the second target image with the first target image and fuses them into the photographed image. The photographing quality is improved, the operation is simple, and the photographing efficiency can be increased.
In one example, the N first images may be divided into M groups, where M is an integer greater than zero. Each group contains at least one first image, and the groups together contain all N first images. There are M first target images, one per group; the first target image of a group may be the sharpest image in that group. Within each group, the other images are rotated with the group's first target image as reference, aligned with it, and then fused with it to obtain a first fused image for the group. Repeating this for every group yields M first fused images. A reference image is then selected from the M first fused images, for example the middle one (if M is even and two images sit in the middle, either may be chosen). The remaining first fused images are rotated, aligned with the reference image, and fused with it to obtain the photographed image.
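The grouped, two-level fusion described above can be sketched as follows. The `fuse` and `align` callables stand in for whichever fusion and alignment routines are used; their signatures are assumptions, since the patent does not fix them.

```python
import numpy as np

def grouped_fuse(frames, m, fuse, align):
    """Two-level fusion sketch: fuse within M groups, then across groups.

    `fuse(list_of_frames)` merges a list of frames into one image;
    `align(img, ref)` warps an image onto a reference.
    """
    groups = np.array_split(np.arange(len(frames)), m)
    # First pass: one fused image per group.
    first_pass = [fuse([frames[i] for i in g]) for g in groups]
    # Second pass: take the middle fused image as the reference,
    # align the others to it, and fuse everything.
    ref = first_pass[len(first_pass) // 2]
    aligned = [align(img, ref) for img in first_pass if img is not ref]
    return fuse([ref] + aligned)
```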
In one embodiment, before receiving the first input of the user, the method further comprises: obtaining the ambient brightness; acquiring first motion detection information of the electronic device; and outputting prompt information when the ambient brightness is less than a preset brightness and the offset of the first motion detection information relative to second motion detection information is less than a preset offset. The prompt information may be used to indicate that the electronic device is stable. That is, as shown in fig. 2, the present embodiment provides a photographing method comprising:
step 201: and acquiring the ambient brightness.
It is understood that the current brightness of the environment in which the electronic device is located is obtained. Shooting a starry sky requires low ambient brightness, so it must be judged whether the ambient brightness meets the shooting requirement; hence, the ambient brightness is obtained first.
Step 202: first motion detection information of an electronic device is acquired.
The first motion detection information can be understood as the current motion detection information. During photographing, not only must the ambient-brightness condition be met, but whether the electronic device is stable must also be detected; the first motion detection information of the electronic device is therefore acquired to judge stability. As an example, acquiring the first motion detection information may comprise acquiring information detected by a motion detection device in the electronic device, such as a gyroscope (the corresponding information being gyroscope information) or an attitude sensor.
Step 203: and outputting prompt information under the conditions that the ambient brightness is less than the preset brightness and the offset of the first motion detection information relative to the second motion detection information is less than the preset offset.
The prompt information is used to indicate that the electronic device is stable. When the ambient brightness is less than the preset brightness, the brightness requirement for shooting is met. After the ambient brightness and the motion detection information have been obtained, the prompt information can be output when the ambient brightness is less than the preset brightness and the offset of the first motion detection information relative to the second motion detection information is less than the preset offset, prompting the user that the electronic device is stable and that a photo may be taken; the user can then perform the first input. In this embodiment, the prompt is output only when the ambient brightness is below the preset brightness and the electronic device is stable, so the subsequent photo is taken while the device is steady and the brightness meets the requirement, which improves the quality of the photographed image.
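Steps 201 through 203 amount to a simple gate. A hedged sketch, with illustrative threshold values that the patent does not specify:

```python
def ready_to_shoot(ambient_lux, gyro_now, gyro_prev,
                   max_lux=5.0, max_offset=0.01):
    """Gate from steps 201-203: prompt the user only when the scene is
    dark enough and the device is still.

    `gyro_now` / `gyro_prev` are (x, y, z) angular-rate samples; the
    default thresholds are placeholders, not values from the patent.
    """
    offset = max(abs(a - b) for a, b in zip(gyro_now, gyro_prev))
    return ambient_lux < max_lux and offset < max_offset
```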
Step 204: receiving a first input of a user;
step 205: obtaining N frames of first images in response to a first input, N being an integer greater than 1;
step 206: aligning the second target image with the first target image by taking the first target image as a reference;
the first target image is an image in N frames of first images, and the second target image is other images except the first target image in the N frames of first images;
step 207: and carrying out fusion processing on the first target image and the aligned second target image to obtain a photographed image.
Steps 204 to 207 correspond to steps 101 to 104 one to one, and are not described herein again.
In one embodiment, obtaining N frames of first images in response to the first input comprises: in response to the first input, starting frame-capture processing at a target time to obtain the N frames of first images, where the difference between the target time and the reception time of the first input is a preset time length.
In order to prevent the first-input action from shaking the electronic device and affecting the shooting quality, frame capture is started only after the preset time length has elapsed since the first input was received, and the N frames of first images are then obtained.
In one embodiment, obtaining N frames of first images in response to the first input comprises: when the electronic device is in a target photographing mode, performing hyperfocal-distance focusing in response to the first input; and, when focusing is complete, performing frame-capture processing to obtain the N frames of first images.
When a lens is focused at infinity, the distance from the near limit of the depth of field (the closest point that is acceptably sharp) to the lens is called the hyperfocal distance. When the lens is instead focused at the hyperfocal distance, everything from half the hyperfocal distance to infinity is acceptably sharp. Focusing this way in the target photographing mode therefore improves the shooting effect. As an example, the target photographing mode may be understood as the starry-sky photographing mode.
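The hyperfocal distance has a standard closed form, H = f^2 / (N c) + f, where f is the focal length, N the f-number, and c the circle-of-confusion diameter. A small calculator as a sketch (the function name and default are illustrative):

```python
def hyperfocal_distance_mm(focal_length_mm, f_number, coc_mm=0.03):
    """Standard hyperfocal-distance formula H = f^2 / (N * c) + f.

    Focusing at H renders everything from H/2 to infinity acceptably
    sharp; `coc_mm` is the circle-of-confusion diameter (0.03 mm is a
    common full-frame value, used here only as a default for illustration).
    """
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm
```

For a 50 mm lens at f/2 with c = 0.03 mm, H is roughly 41.7 m, so everything from about 20.9 m to infinity would be acceptably sharp.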
In addition, in response to the first input, the shooting parameters can be adjusted according to the shooting environment, and frame capture is performed with the adjusted parameters once focusing is complete to obtain the N frames of first images; adjusting the shooting parameters to the environment improves the shooting effect. As an example, the shooting parameter may be a white-balance parameter, and the shooting environment may include the ambient brightness. During focusing, an AF (autofocus) system can be invoked: if a prominent foreground object exists, focus is locked on it; otherwise, focus is automatically locked on the farthest focus segment, defaulting to hyperfocal-distance focusing.
In one example, starting the frame-capture processing at the target time to obtain the N frames of first images may comprise: when the system time of the electronic device reaches the target time, adjusting the shooting parameters according to the shooting environment and performing photographing focusing; and, when photographing focusing is complete, performing frame capture to obtain the N frames of first images. This prevents shake during shooting and lets the shooting parameters be adapted to the environment, improving the shooting quality.
In one example, before frame capture is performed upon completion of focusing, the shooting duration may be predicted from the ambient brightness, and the frame-capture interval may be determined from the predicted shooting duration and N, where N may be preset. Then, performing frame capture when photographing focusing is complete may comprise capturing one frame per interval: the frame-capture time difference between two adjacent first images equals the frame-capture interval, and the N frames of first images are obtained.
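The interval computation described here is straightforward division; a sketch with illustrative names:

```python
def frame_capture_plan(predicted_duration_s, n_frames):
    """Derive the frame-capture schedule from the predicted total
    shooting duration and the preset frame count N: one frame every
    `interval` seconds, N frames in all."""
    interval = predicted_duration_s / n_frames
    return [i * interval for i in range(n_frames)]
```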
In one embodiment, fusing the first target image with the aligned second target image to obtain the photographed image comprises: fusing the first target image with the aligned second target image to obtain a second image; and, when a first target object is identified in the second image, brightening the sub-region corresponding to a second target object within a first region and brightening a second region of the second image, to obtain the photographed image. The first region is the region of the second image corresponding to the first target object, the second region is the rest of the second image, and the first target object contains the second target object.
That is, in this embodiment, the sub-region of the second target object within the first region (the region of the first target object) of the fused second image is brightened, making the second target object more prominent and giving the first region a greater sense of depth. The second region can also be brightened to improve its visibility, so that more of its detail can be observed.
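The region-wise brightening can be sketched with boolean masks and per-region gains. The masks would come from a segmentation step not shown here, and the gain values are illustrative placeholders, not figures from the patent.

```python
import numpy as np

def brighten_regions(img, sky_mask, star_mask,
                     star_gain=1.5, ground_gain=1.2):
    """Region-wise brightening sketch: boost star pixels inside the sky
    region (the first region) and apply a milder lift to the ground
    region (the second region).

    `sky_mask` / `star_mask` are boolean arrays the same shape as `img`;
    pixel values are assumed normalized to [0, 1].
    """
    out = img.astype(float).copy()
    out[star_mask & sky_mask] *= star_gain   # make dim stars visible
    out[~sky_mask] *= ground_gain            # slightly brighten the ground
    return np.clip(out, 0.0, 1.0)
```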
In one embodiment, fusing the first target image with the aligned second target image to obtain the second image comprises: denoising the first target image and the aligned second target image, and fusing the denoised first and second target images to obtain the second image.
In other words, while obtaining the second image through fusion, the first target image to be fused and the rotated second target image are first denoised, improving their image quality and hence the quality of the resulting second image and of the final photographed image. Many noise-reduction methods exist; the embodiments of the present application do not limit which is used.
In an example, after fusing the first target image with the aligned second target image to obtain the photographed image, the method may further comprise: determining a target tone from the N frames of first images, and processing the photographed image with the target tone to obtain a target image. The colors of the resulting target image thereby better suit the current shooting environment.
The following describes the procedure of the above photographing method in an embodiment. The electronic device is taken as a mobile phone, and shooting of a starry sky is taken as an example for explanation.
Firstly, a mobile phone camera is turned on, a user can select a starry sky photographing mode, and after the electronic equipment enters the starry sky photographing mode, the electronic equipment guides the user to keep the mobile phone stable and use a tripod. The user fixes the mobile phone on the tripod.
Then, the mobile phone judges whether the ambient brightness matches an extremely dark night environment (starry-sky shooting requires low ambient brightness). When the ambient brightness is greater than a preset brightness x, the phone invokes a night-scene algorithm (night-scene shooting mode) and, after receiving the user's first input, completes shooting by processing the image with that algorithm. When the ambient brightness is less than x, whether the phone is stable (for example, placed on the stand) can be judged from the gyroscope information (the first motion detection information): the deviation between the current gyroscope reading and the previously acquired one is compared with a preset deviation (generally set small, to improve the accuracy of the stability judgment); if the deviation is smaller, the phone is judged stable, otherwise unstable. Once the phone is stable, the preparation phase of the starry-sky shooting mode is complete. After the user clicks the shooting key (that is, the first input may be a click on the shooting key), the phone starts a countdown by default to prevent the click from shaking the device; the countdown duration is a preset time length, and frame capture begins when the countdown ends.
During frame capture, the shooting duration can be predicted from the current ambient brightness, the frame-capture interval determined from the preset number N of required frames, the white-balance parameter of the picture adjusted according to the ambient brightness, and the AF autofocus system invoked for automatic focusing. Frames are then captured at the frame-capture interval with the adjusted white-balance parameter to obtain the N frames of first images.
The photographing process of this embodiment uses multi-frame alignment. Based on a first target image among the N frames of first images (for example, the first frame, whose capture time is the earliest), the other images (the second target images) are rotated and aligned with it. The aligned first target image and rotated second target images are then stack-fused; because the frames are aligned, the stars show no trailing (without alignment, the earth's rotation would smear the stars across frames). Noise reduction is performed at the same time during the multi-frame fusion, smoothing away noise and improving the image quality.
After stack fusion is complete, the sky (first target object) and ground-scene regions of the image are identified: the sky is the first region and the ground scene is the second region. The two are segmented and processed separately. The ground-scene region is slightly brightened so that it remains visible with some detail. Meanwhile, the sky region is enhanced: the brightness of the stars (second target objects) is raised so that dim, previously invisible stars become visible and bright stars stand out more, giving the whole picture more depth. For color, the phone presets the tone best suited to starry-sky photography; this tone may be drawn from the shooting parameters of professional photographers or learned from big data.
The photographing method can capture professional scenes such as the starry sky, removing the high threshold, time, and effort that starry-sky photography normally demands of a user. With the method provided in the embodiment of the application, the starry sky can be shot with one key: appropriate parameters are adjusted automatically according to environmental conditions to obtain excellent picture quality without manual adjustment by the user; the N frames of first images are aligned to avoid trailing; the sky and ground scenes are segmented and processed separately; and the stars are brightened so that faint stars invisible to the human eye can be captured. The method requires no professional photography knowledge, no image post-processing skill, and no dedicated hardware such as an equatorial mount, and it improves the efficiency and image quality of shooting with a mobile phone alone.
It should be noted that, in the photographing method provided in the embodiment of the present application, the execution body may be a photographing device, or a control module in the photographing device for executing the photographing method. The embodiment of the present application describes the photographing device by taking as an example the case where the photographing device executes the photographing method.
As shown in fig. 3, the present application further provides a photographing apparatus of an embodiment, where the apparatus 300 includes:
a receiving module 301, configured to receive a first input of a user;
a frame image obtaining module 302, configured to obtain N frames of first images in response to a first input, where N is an integer greater than 1;
an alignment module 303, configured to align a second target image with the first target image based on the first target image, where the first target image is an image in N frames of the first image, and the second target image is another image except the first target image in the N frames of the first image;
and a fusion module 304, configured to perform fusion processing on the first target image and the aligned second target image to obtain a photographed image.
In one embodiment, the apparatus 300 further comprises:
the first acquisition module is used for receiving a first input of a user by the frame image acquisition module, responding to the first input, and acquiring the ambient brightness before the N frames of first images are acquired;
the second acquisition module is used for acquiring first motion detection information of the electronic equipment;
outputting prompt information under the condition that the ambient brightness is smaller than the preset brightness and the offset of the first motion detection information relative to the second motion detection information is smaller than the preset offset;
the second motion detection information is the motion detection information of the electronic equipment which is acquired last before the first motion detection information of the electronic equipment is acquired.
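The darkness-and-stability check that gates the prompt can be sketched as below; the brightness and offset thresholds, and the use of a Euclidean distance between successive motion readings (e.g. accelerometer samples), are illustrative assumptions:

```python
def should_prompt(ambient_lux, motion_now, motion_prev,
                  lux_thresh=1.0, offset_thresh=0.02):
    """Return True when the scene is dark enough and the device is steady.

    `motion_now` / `motion_prev` are successive motion-sensor readings
    (e.g. 3-axis accelerometer tuples); thresholds are illustrative.
    """
    offset = sum((a - b) ** 2 for a, b in zip(motion_now, motion_prev)) ** 0.5
    return ambient_lux < lux_thresh and offset < offset_thresh
```

Both conditions must hold: a dark scene alone, or a steady device alone, does not trigger the "ready to shoot" prompt.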
In one embodiment, obtaining N frames of the first image in response to the first input comprises:
and responding to the first input, starting frame grabbing processing at a target time to obtain N frames of first images, wherein the difference between the target time and the receiving time of the first input is a preset time length.
In one embodiment, a frame image acquisition module includes:
the focusing module is used for responding to the first input and carrying out hyperfocal distance focusing when the electronic equipment is in a target photographing mode;
and the frame capturing processing module is used for performing frame capturing processing under the condition that focusing is finished to obtain N frames of first images.
In one embodiment, a fusion module includes:
the image fusion module is used for carrying out fusion processing on the first target image and the aligned second target image to obtain a second image;
and the image processing module is used for carrying out brightening processing on a sub-area corresponding to the second target object in the first area and carrying out brightening processing on a second area in the second image under the condition that the first target object is identified in the second image to obtain a photographed image, wherein the first area is an area corresponding to the first target object in the second image, the second area is an area except the first area in the second image, and the first target object comprises the second target object.
In one embodiment, the fusing the first target image and the aligned second target image to obtain a second image includes:
and denoising the first target image and the aligned second target image, and fusing the denoised first target image and the denoised second target image to obtain a second image.
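One way to read this step: denoise each frame first, then average the denoised frames. The box filter below is just one possible noise-reduction choice (the patent does not name an algorithm):

```python
import numpy as np

def box_denoise(img, k=3):
    """Simple k-by-k box-filter denoising with edge padding (sketch)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def denoise_and_fuse(ref, aligned):
    """Denoise the reference and each aligned frame, then mean-fuse them."""
    frames = [box_denoise(f) for f in [ref] + list(aligned)]
    return np.mean(frames, axis=0)
```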
In the photographing device of the embodiment of the application, the first target image among the N frames of first images obtained by frame capture is taken as the reference, the second target images are rotated so that they align with the first target image, and the first target image and the rotated, aligned second target images are fused to obtain the photographed image, improving its quality. Even when shooting a starry-sky image, no extra shooting hardware needs to be operated, and the photo taken by the electronic device does not need to be fed into other software for post-processing: the user only needs to perform the first input, the electronic device rotates the second target images among the N frames of first images, and fusing the first target image with the rotated, aligned second target images improves the photographed image. The operation is simple and shooting efficiency is improved.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine. The embodiments of the present application are not particularly limited.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; embodiments of the present application are not specifically limited.
The shooting device provided by the embodiment of the application can realize each process realized by the method embodiments of fig. 1-2, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 4, an electronic device 400 is further provided in this embodiment of the present application, and includes a processor 401, a memory 402, and a program or an instruction stored in the memory 402 and executable on the processor 401, where the program or the instruction is executed by the processor 401 to implement each process of the foregoing photographing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and the like.
Those skilled in the art will appreciate that the electronic device 500 may further include a power supply (e.g., a battery) for supplying power to the various components; the power supply may be logically connected to the processor 510 via a power management system, which manages charging, discharging, and power consumption. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine some components, or arrange components differently; details are omitted here.
The electronic device includes a user input unit 507 configured to receive a first input of a user, and a processor 510 configured to: obtain N frames of first images in response to the first input, where N is an integer greater than 1; align a second target image with a first target image taking the first target image as a reference, where the first target image is an image among the N frames of first images and the second target image is an image other than the first target image among the N frames of first images; and perform fusion processing on the first target image and the aligned second target image to obtain a photographed image.
No extra shooting hardware needs to be operated, and the photos taken by the electronic device do not need to be fed into other software for post-processing: the user only needs to perform the first input, the processor 510 in the electronic device rotates the second target images among the N frames of first images, and the first target image is fused with the aligned second target images. The operation is simple and shooting efficiency is improved.
Optionally, the processor 510 is configured to obtain ambient brightness; acquiring first motion detection information of the electronic equipment; and outputting prompt information under the conditions that the ambient brightness is less than the preset brightness and the offset of the first motion detection information relative to the second motion detection information is less than the preset offset. The second motion detection information is the motion detection information of the electronic equipment which is acquired last before the first motion detection information of the electronic equipment is acquired.
The prompt information is output when the ambient brightness is less than the preset brightness and the offset of the first motion detection information relative to the second motion detection information is less than the preset offset, prompting the user that the electronic device is steady and a picture can be taken, so that the user can then perform the first input on the electronic device. In this embodiment, it is not enough that the ambient brightness is less than the preset brightness; the electronic device must also be steady before the prompt is output. The subsequent photo is therefore taken while the device is steady and the brightness meets the requirement, which improves the quality of the photographed image.
Optionally, the user input unit 507 is configured to receive a first input of a user, and the processor 510 is configured to start, in response to the first input, frame capture processing at a target time to obtain N frames of first images, where a difference between the target time and a receiving time of the first input is a preset time duration.
To prevent the first-input action from shaking the electronic device and affecting shooting quality, frame capture begins only after a preset time has elapsed since the first input was received, yielding the N frames of first images.
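A sketch of the delayed start, where `grab_frame` stands in for a hypothetical camera hook and the delay value is illustrative:

```python
import time

def capture_after_delay(grab_frame, n_frames, delay_s=2.0, interval_s=0.0):
    """Begin frame capture only after `delay_s` seconds, so the tap that
    triggered shooting no longer shakes the device (sketch; values assumed)."""
    time.sleep(delay_s)                  # target time = input time + preset delay
    frames = []
    for _ in range(n_frames):
        frames.append(grab_frame())      # grab_frame: hypothetical camera hook
        if interval_s:
            time.sleep(interval_s)
    return frames
```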
Optionally, the user input unit 507 is configured to receive a first input from a user, and the processor 510 is configured to perform hyperfocal distance focusing in response to the first input; and, when focusing is completed, perform frame capture processing to obtain the N frames of first images.
During shooting, hyperfocal distance focusing is performed so as to improve the shooting effect.
Optionally, the processor 510 is configured to perform fusion processing on the first target image and the aligned second target image to obtain a second image;
and under the condition that the first target object is identified in the second image, carrying out brightening treatment on a sub-area corresponding to the second target object in the first area, and carrying out brightening treatment on a second area in the second image to obtain a photographed image, wherein the first area is an area corresponding to the first target object in the second image, the second area is an area except the first area in the second image, and the first target object comprises the second target object.
That is, in this embodiment, the sub-region of the second target object within the first region (the region of the first target object) in the fused second image is brightened, making the second target object more prominent and giving the first region a more layered look. In addition, the second region is brightened to improve its visibility, so that more of its details can be observed.
Optionally, the processor 510 is configured to perform noise reduction processing on the first target image and the aligned second target image, and perform fusion processing on the noise-reduced first target image and the noise-reduced second target image to obtain a second image.
In other words, in the process of obtaining the second image through fusion, the first target image and the rotated second target images to be fused are first denoised, improving their image quality; this in turn improves the quality of the resulting second image and thus of the photographed image.
It should be understood that in the embodiment of the present application, the input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 507 includes a touch panel 5071, also referred to as a touch screen, and other input devices 5072. The touch panel 5071 may include two parts: a touch detection device and a touch controller. The other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in further detail here. The memory 509 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 510 may integrate an application processor, which primarily handles the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may not be integrated into the processor 510.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the embodiment of the photographing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the foregoing photographing method embodiment, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A method of taking a picture, the method comprising:
receiving a first input of a user;
obtaining N frames of first images in response to the first input, N being an integer greater than 1;
taking a first target image as a reference, rotating a second target image to align the rotated second target image with the first target image, wherein the first target image is an image in the N frames of first images, and the second target image is an image except the first target image in the N frames of first images;
stacking and fusing the first target image and the aligned second target image so that the stars exhibit no trailing, to obtain a photographed image;
wherein the stacking and fusing of the first target image and the aligned second target image so that the stars exhibit no trailing, to obtain the photographed image, comprises: performing fusion processing on the first target image and the aligned second target image to obtain a second image; and, in a case where a first target object is identified in the second image, performing brightening processing on a sub-area corresponding to a second target object in a first area, and performing brightening processing on a second area in the second image, to obtain the photographed image, wherein the first area is an area corresponding to the first target object in the second image, the second area is an area except the first area in the second image, and the first target object comprises the second target object.
2. The method of claim 1, wherein prior to receiving the first input from the user, comprising:
obtaining the ambient brightness;
acquiring first motion detection information of the electronic equipment;
outputting prompt information under the condition that the ambient brightness is smaller than preset brightness and the offset of the first motion detection information relative to the second motion detection information is smaller than a preset offset;
the second motion detection information is the motion detection information of the electronic equipment which is obtained last before the first motion detection information of the electronic equipment is obtained.
3. The method of claim 1, wherein obtaining N frames of the first image in response to the first input comprises:
and responding to the first input, starting frame grabbing processing at a target time to obtain N frames of first images, wherein the difference between the target time and the receiving time of the first input is a preset time length.
4. The method of claim 1, wherein obtaining N frames of the first image in response to the first input comprises:
responding to the first input when the electronic equipment is in a target photographing mode, and performing hyperfocal distance focusing;
and under the condition that focusing is finished, performing frame grabbing processing to obtain the N frames of first images.
5. The method according to claim 1, wherein the fusing the first target image and the aligned second target image to obtain the second image comprises:
and denoising the first target image and the aligned second target image, and fusing the denoised first target image and the denoised second target image to obtain the second image.
6. A photographing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving a first input of a user;
a frame image obtaining module, configured to obtain N frames of first images in response to the first input, where N is an integer greater than 1;
an alignment module, configured to rotate a second target image with a first target image as a reference, so that the rotated second target image is aligned with the first target image, where the first target image is an image in the N frames of first images, and the second target image is another image except the first target image in the N frames of first images;
the fusion module is used for stacking and fusing the first target image and the aligned second target image so that the stars exhibit no trailing, to obtain a photographed image;
the fusion module comprises:
the image fusion module is used for stacking and fusing the first target image and the aligned second target image so that the stars exhibit no trailing, to obtain a second image;
an image processing module, configured to, when a first target object is identified in the second image, perform a brightening process on a sub-area corresponding to a second target object in a first area, and perform a brightening process on a second area in the second image, to obtain the photographed image, where the first area is an area corresponding to the first target object in the second image, the second area is an area other than the first area in the second image, and the first target object includes the second target object.
7. The apparatus of claim 6, further comprising:
the first acquisition module is used for acquiring the ambient brightness before the frame image obtaining module receives the first input of the user and obtains the N frames of first images in response to the first input;
the second acquisition module is used for acquiring first motion detection information of the electronic equipment;
outputting prompt information under the condition that the ambient brightness is smaller than preset brightness and the offset of the first motion detection information relative to the second motion detection information is smaller than a preset offset;
the second motion detection information is the motion detection information of the electronic equipment which is obtained last before the first motion detection information of the electronic equipment is obtained.
8. The apparatus of claim 6, wherein obtaining N frames of a first image in response to the first input comprises:
and responding to the first input, starting frame grabbing processing at a target time to obtain N frames of first images, wherein the difference between the target time and the receiving time of the first input is a preset time length.
9. The apparatus of claim 6, wherein the frame image acquisition module comprises:
the focusing module is used for responding to the first input and carrying out hyperfocal distance focusing when the electronic equipment is in a target photographing mode;
and the frame capturing processing module is used for performing frame capturing processing under the condition that focusing is finished to obtain the N frames of first images.
10. The apparatus according to claim 6, wherein the fusing the first target image and the aligned second target image to obtain the second image comprises:
and denoising the first target image and the aligned second target image, and fusing the denoised first target image and the denoised second target image to obtain the second image.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the photographing method according to any one of claims 1-5.
12. A readable storage medium, on which a program or instructions are stored, which when executed by a processor, carry out the steps of the photographing method according to any one of claims 1-5.
CN202010478136.9A 2020-05-29 2020-05-29 Photographing method and device and electronic equipment Active CN111654623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010478136.9A CN111654623B (en) 2020-05-29 2020-05-29 Photographing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010478136.9A CN111654623B (en) 2020-05-29 2020-05-29 Photographing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111654623A CN111654623A (en) 2020-09-11
CN111654623B true CN111654623B (en) 2022-03-22

Family

ID=72348413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010478136.9A Active CN111654623B (en) 2020-05-29 2020-05-29 Photographing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111654623B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802033B (en) * 2021-01-28 2024-03-19 Oppo广东移动通信有限公司 Image processing method and device, computer readable storage medium and electronic equipment

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109068067A (en) * 2018-08-22 2018-12-21 Oppo广东移动通信有限公司 Exposal control method, device and electronic equipment
CN109151333A (en) * 2018-08-22 2019-01-04 Oppo广东移动通信有限公司 Exposal control method, device and electronic equipment
CN110443766A (en) * 2019-08-06 2019-11-12 厦门美图之家科技有限公司 Image processing method, device, electronic equipment and readable storage medium storing program for executing

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US9990536B2 (en) * 2016-08-03 2018-06-05 Microsoft Technology Licensing, Llc Combining images aligned to reference frame
CN110620873B (en) * 2019-08-06 2022-02-22 RealMe重庆移动通信有限公司 Device imaging method and device, storage medium and electronic device
CN110930329B (en) * 2019-11-20 2023-04-21 维沃移动通信有限公司 Star image processing method and device
CN110958401B (en) * 2019-12-16 2022-08-23 北京迈格威科技有限公司 Super night scene image color correction method and device and electronic equipment

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN109068067A (en) * 2018-08-22 2018-12-21 Oppo广东移动通信有限公司 Exposal control method, device and electronic equipment
CN109151333A (en) * 2018-08-22 2019-01-04 Oppo广东移动通信有限公司 Exposal control method, device and electronic equipment
CN110443766A (en) * 2019-08-06 2019-11-12 厦门美图之家科技有限公司 Image processing method, device, electronic equipment and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN111654623A (en) 2020-09-11

Similar Documents

Publication Publication Date Title
CN112532881B (en) Image processing method and device and electronic equipment
CN112637515B (en) Shooting method and device and electronic equipment
CN112637500B (en) Image processing method and device
CN113794829B (en) Shooting method and device and electronic equipment
CN112333386A (en) Shooting method and device and electronic equipment
CN114125268A (en) Focusing method and device
CN111787230A (en) Image display method and device and electronic equipment
CN112422798A (en) Photographing method and device, electronic equipment and storage medium
CN111770277A (en) Auxiliary shooting method, terminal and storage medium
CN113747067B (en) Photographing method, photographing device, electronic equipment and storage medium
CN112702531B (en) Shooting method and device and electronic equipment
CN111654623B (en) Photographing method and device and electronic equipment
CN113114933A (en) Image shooting method and device, electronic equipment and readable storage medium
CN112672055A (en) Photographing method, device and equipment
WO2022095878A1 (en) Photographing method and apparatus, and electronic device and readable storage medium
CN112653841B (en) Shooting method and device and electronic equipment
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN115134532A (en) Image processing method, image processing device, storage medium and electronic equipment
CN112367464A (en) Image output method and device and electronic equipment
CN112672056A (en) Image processing method and device
CN112399092A (en) Shooting method and device and electronic equipment
CN112887619A (en) Shooting method and device and electronic equipment
CN113873147A (en) Video recording method and device and electronic equipment
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium
CN114040099B (en) Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant