CN109274894B - Shooting method and shooting device


Info

Publication number
CN109274894B
CN109274894B
Authority
CN
China
Prior art keywords
image
factor
camera
imaging effect
quality evaluation
Prior art date
Legal status
Active
Application number
CN201811479252.1A
Other languages
Chinese (zh)
Other versions
CN109274894A (en)
Inventor
董江凯
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811479252.1A
Publication of CN109274894A
Application granted
Publication of CN109274894B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present invention provide a shooting method and a shooting device, and relate to the field of photography. They aim to solve the problem that the picture composition, such as the shooting angle, seen by the user before a camera pops up or screws out differs from the composition after the camera pops up or screws out, so that the user has to manually adjust the shooting angle and perform similar operations, which increases the difficulty of use and affects the shooting quality of the camera. The method includes: acquiring a first image shot by a camera at a first position; and, if a first imaging effect of the first image meets a first preset condition, displaying or storing a second image shot by the camera at the first position, where the second image is the first image or a third image shot by the camera at the first position. The first imaging effect specifically includes at least one of a first exposure equalization degree, a first image distortion degree, and a first portrait deformation degree of the first image.

Description

Shooting method and shooting device
Technical Field
Embodiments of the present invention relate to the field of photography, and in particular to a shooting method and a shooting device.
Background
With the development of terminal device technology and the rise in consumer expectations, the camera functions of terminal devices need to become increasingly powerful in order to meet users' needs.
At present, in pursuit of a larger screen-to-body ratio, the mounting position of the front camera of a full-screen mobile phone has become a very challenging industrial design problem. To solve this problem, front cameras such as lifting (pop-up) cameras and rotating cameras have been introduced. When the user starts the front-facing shooting function, the camera automatically rises at a constant speed, or rotates, to a certain fixed position. When the front-facing shooting function ends, the camera automatically returns to its original position.
However, the position to which the camera lifts or rotates is fixed. Because the shooting distance is short when the user uses the camera, lens distortion occurs easily, and lighting also has a large influence on a lifting or rotating camera. The following situation therefore arises: the picture composition, such as the shooting angle, before the camera pops up or screws out differs to some extent from the composition after the camera pops up or screws out, so the user has to manually adjust the shooting angle and perform similar operations, which increases the difficulty of use and affects the shooting quality of the camera.
Disclosure of Invention
The embodiments of the present invention provide a shooting method and a shooting device, aiming to solve the problem that the picture composition, such as the shooting angle, before the camera pops up or screws out differs from the composition after the camera pops up or screws out, so that the user has to manually adjust the shooting angle and perform similar operations, which increases the difficulty of use and affects the shooting quality of the camera.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a shooting method, where the method includes: acquiring a first image shot by a camera at a first position; if a first imaging effect of the first image meets a first preset condition, displaying or storing a second image shot by the camera at the first position, where the second image is the first image or a third image shot by the camera at the first position; and the first imaging effect specifically includes: at least one of a first exposure equalization degree, a first image distortion degree, and a first portrait deformation degree of the first image.
Preferably, after acquiring the first image taken by the camera at the first position, the method further comprises:
fixing or adjusting the camera to the first position.
Preferably, after acquiring the first image taken by the camera at the first position, the method further includes: performing quality evaluation on the first image to obtain a first quality evaluation result;
wherein the first quality assessment result comprises at least one of a first exposure equalization factor, a first image distortion factor, and a first human image deformation factor for the first image;
the first imaging effect is the first quality evaluation result or an imaging parameter calculated according to the first quality evaluation result.
Preferably, after the quality evaluation is performed on the first image and a first quality evaluation result is obtained, the method further includes:
taking 0.4, 0.3 and 0.3 as the weights of the first exposure equalization factor, the first image distortion factor, and the first portrait deformation factor respectively, and performing a weighted average of the first exposure equalization factor, the first image distortion factor, and the first portrait deformation factor to obtain the imaging parameter.
Preferably, the quality evaluation of the first image to obtain a first quality evaluation result specifically includes:
acquiring a first proportion L1 of over-exposed and/or over-dark pixels of the first image;
and obtaining the first exposure equalization factor according to the first proportion L1.
Preferably, the quality evaluation of the first image to obtain a first quality evaluation result specifically includes:
acquiring a first pitch angle of the first image;
if the first pitch angle is less than or equal to a first threshold, setting the first portrait deformation factor to a first preset value;
if the first pitch angle is greater than the first threshold:
obtaining a value of the first portrait deformation factor according to a first face up-down ratio, a first nose ratio, and a first lip ratio of the first image;
wherein the first face up-down ratio is the ratio of the distance L11 from the eyebrow center to the lower nose boundary to the distance L12 from the lower nose boundary to the chin boundary in the first image;
the first nose ratio is the ratio of the area of the nose region in the first image to the area of the whole portrait region;
the first lip ratio is the ratio of the area of the lip region in the first image to the area of the whole portrait region.
Preferably, before determining whether the first imaging effect of the first image satisfies the first preset condition, the method further includes:
acquiring a fourth image shot by the camera at a second position;
the first imaging effect of the first image meets a first preset condition, and the method specifically comprises the following steps:
the first imaging effect of the first image is better than the fourth imaging effect of the fourth image;
wherein the fourth imaging effect specifically includes: at least one of a fourth degree of exposure equalization, a fourth degree of image distortion, and a fourth degree of portrait distortion for the fourth image.
Preferably, that the first imaging effect of the first image is better than the fourth imaging effect of the fourth image specifically includes:
at least one of the following: the first exposure equalization degree is better than the fourth exposure equalization degree, the first image distortion degree is better than the fourth image distortion degree, or the first portrait deformation degree is better than the fourth portrait deformation degree.
Preferably, after acquiring the fourth image captured by the camera at the second position, the method further includes:
and if the fourth image does not comprise the face area, setting the fourth human image deformation degree as a second preset value.
Preferably, after acquiring the first image taken by the camera at the first position, the method further includes:
performing quality evaluation on the first image to obtain a first quality evaluation result; wherein the first quality assessment result comprises at least one of a first exposure equalization factor, a first image distortion factor, and a first human image deformation factor for the first image;
the first imaging effect is the first quality evaluation result or an imaging parameter obtained by calculating the first quality evaluation result according to a first preset mode;
after acquiring a fourth image taken by the camera at the second position, the method further comprises:
performing quality evaluation on the fourth image to obtain a fourth quality evaluation result; wherein the fourth quality assessment result comprises at least one of a fourth exposure equalization factor, a fourth image distortion factor, and a fourth portrait deformation factor for the fourth image;
the fourth imaging effect is the fourth quality evaluation result or an imaging parameter calculated by the fourth quality evaluation result according to a fourth preset mode.
Preferably, the first preset manner is the same as the fourth preset manner, the first exposure equalization factor corresponds to the fourth exposure equalization factor, the first image distortion factor corresponds to the fourth image distortion factor, and the first human image deformation factor corresponds to the fourth human image deformation factor.
Preferably, the first imaging effect is: an imaging parameter obtained by taking 0.4, 0.3 and 0.3 as the weights of the first exposure equalization factor, the first image distortion factor, and the first portrait deformation factor respectively and performing a weighted average of these three factors;
the fourth imaging effect is: an imaging parameter obtained by taking 0.4, 0.3 and 0.3 as the weights of the fourth exposure equalization factor, the fourth image distortion factor, and the fourth portrait deformation factor respectively and performing a weighted average of these three factors.
Preferably, the quality evaluation of the first image to obtain a first quality evaluation result specifically includes:
acquiring a first proportion L1 of over-exposed and/or over-dark pixels of the first image;
obtaining the first exposure equalization factor according to the first proportion L1;
performing quality evaluation on the fourth image to obtain a fourth quality evaluation result, which specifically includes:
acquiring a fourth proportion L4 of over-exposed and/or over-dark pixels of the fourth image;
and obtaining the fourth exposure equalization factor according to the fourth proportion L4.
Preferably, the quality evaluation of the first image to obtain a first quality evaluation result specifically includes:
acquiring a first pitch angle of the first image;
if the first pitch angle is less than or equal to a second threshold, setting the first portrait deformation factor to a third preset value;
if the first pitch angle is greater than the second threshold:
obtaining a value of the first portrait deformation factor according to a first face up-down ratio, a first nose ratio, and a first lip ratio of the first image;
wherein the first face up-down ratio is the ratio of the distance L11 from the eyebrow center to the lower nose boundary to the distance L12 from the lower nose boundary to the chin boundary in the first image;
the first nose ratio is the ratio of the area of the nose region in the first image to the area of the whole portrait region;
the first lip ratio is the ratio of the area of the lip region in the first image to the area of the whole portrait region;
performing quality evaluation on the fourth image to obtain a fourth quality evaluation result, which specifically includes:
acquiring a fourth pitch angle of the fourth image;
if the fourth pitch angle is less than or equal to the second threshold, setting the fourth portrait deformation factor to the third preset value;
if the fourth pitch angle is greater than the second threshold:
obtaining a value of the fourth portrait deformation factor according to a fourth face up-down ratio, a fourth nose ratio, and a fourth lip ratio of the fourth image;
wherein the fourth face up-down ratio is the ratio of the distance L41 from the eyebrow center to the lower nose boundary to the distance L42 from the lower nose boundary to the chin boundary in the fourth image;
the fourth nose ratio is the ratio of the area of the nose region in the fourth image to the area of the whole portrait region;
the fourth lip ratio is the ratio of the area of the lip region in the fourth image to the area of the whole portrait region.
In a second aspect, an embodiment of the present invention further provides a shooting apparatus, where the shooting apparatus includes: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring a first image shot by the camera at a first position;
the processing module is used for displaying or storing a second image shot by the camera at the first position when a first imaging effect of the first image meets a first preset condition, wherein the second image is the first image or a third image shot by the camera at the first position;
wherein the first imaging effect specifically includes:
at least one of a first degree of exposure equalization, a first degree of image distortion, and a first degree of portrait distortion of the first image.
Preferably, the photographing apparatus further includes a control module;
the control module is used for fixing or adjusting the camera to the first position after the acquisition module acquires the first image shot by the camera at the first position.
Preferably, the photographing apparatus further includes a quality evaluation module;
the quality evaluation module is used for carrying out quality evaluation on the first image to obtain a first quality evaluation result;
wherein the first quality assessment result comprises at least one of a first exposure equalization factor, a first image distortion factor, and a first human image deformation factor for the first image;
the first imaging effect is the first quality evaluation result or an imaging parameter calculated according to the first quality evaluation result.
Preferably, the photographing apparatus further includes:
and the weighting calculation module is used for taking 0.4, 0.3 and 0.3 as the weights of the first exposure equalization factor, the first image distortion factor, and the first portrait deformation factor respectively, and performing a weighted average of the three factors to obtain the imaging parameter.
Preferably, the quality assessment module comprises a first quality assessment sub-module;
the first quality evaluation submodule is configured to:
acquiring a first proportion L1 of over-exposed and/or over-dark pixels of the first image;
and obtaining the first exposure equalization factor according to the first proportion L1.
Preferably, the quality assessment module comprises a second quality assessment sub-module;
the second quality evaluation submodule is configured to:
acquiring a first pitch angle of the first image;
if the first pitch angle is less than or equal to a first threshold, setting the first portrait deformation factor to a first preset value;
if the first pitch angle is greater than the first threshold:
obtaining a value of the first portrait deformation factor according to a first face up-down ratio, a first nose ratio, and a first lip ratio of the first image;
wherein the first face up-down ratio is the ratio of the distance L11 from the eyebrow center to the lower nose boundary to the distance L12 from the lower nose boundary to the chin boundary in the first image;
the first nose ratio is the ratio of the area of the nose region in the first image to the area of the whole portrait region;
the first lip ratio is the ratio of the area of the lip region in the first image to the area of the whole portrait region.
Preferably, the acquisition module comprises a first acquisition submodule;
the first obtaining submodule is used for obtaining a fourth image shot by the camera at a second position;
the photographing apparatus further includes:
the judging module is used for judging whether the first imaging effect of the first image is better than the fourth imaging effect of the fourth image or not, and if the first imaging effect of the first image is better than the fourth imaging effect of the fourth image, the first imaging effect of the first image is determined to meet a first preset condition;
wherein the fourth imaging effect specifically includes: at least one of a fourth degree of exposure equalization, a fourth degree of image distortion, and a fourth degree of portrait distortion for the fourth image.
Preferably, the determining module is specifically configured to:
when at least one of the following is satisfied: the first exposure equalization degree is better than the fourth exposure equalization degree, the first image distortion degree is better than the fourth image distortion degree, or the first portrait deformation degree is better than the fourth portrait deformation degree, determining that the first imaging effect of the first image satisfies the first preset condition.
Preferably, the photographing apparatus further includes a setting module;
the setting module is configured to: and when the fourth image does not comprise a fourth face image, setting the deformation degree of the fourth face image as a second preset value.
Preferably, the quality assessment module comprises a first computation submodule and a second computation submodule;
the first calculation submodule is used for carrying out quality evaluation on the first image to obtain a first quality evaluation result; wherein the first quality assessment result comprises at least one of a first exposure equalization factor, a first image distortion factor, and a first human image deformation factor for the first image;
the second calculation submodule is used for carrying out quality evaluation on the fourth image to obtain a fourth quality evaluation result; wherein the fourth quality assessment result comprises at least one of a fourth exposure equalization factor, a fourth image distortion factor, and a fourth portrait deformation factor for the fourth image;
the first imaging effect is the first quality evaluation result or an imaging parameter obtained by calculating the first quality evaluation result according to a first preset mode;
the fourth imaging effect is the fourth quality evaluation result or an imaging parameter calculated by the fourth quality evaluation result according to a fourth preset mode.
Preferably, the first preset mode in the first computation submodule is the same as the fourth preset mode in the second computation submodule.
Preferably, the first calculating sub-module is specifically configured to calculate the imaging parameter by performing weighted average on the first exposure equalization factor, the first image distortion factor and the first human image deformation factor according to weights of 0.4, 0.3 and 0.3;
the second calculating submodule is specifically configured to perform weighted average on the fourth exposure equalization factor, the fourth image distortion factor, and the fourth human image deformation factor according to weights of 0.4, 0.3, and 0.3 to calculate the imaging parameter.
Preferably, the quality assessment module comprises a third quality assessment submodule and a fourth quality assessment submodule;
the third quality evaluation submodule is configured to:
acquiring a first proportion L1 of over-exposed and/or over-dark pixels of the first image;
obtaining the first exposure equalization factor according to the first proportion L1;
the fourth quality evaluation submodule is configured to:
acquiring a fourth proportion L4 of over-exposed and/or over-dark pixels of the fourth image;
and obtaining the fourth exposure equalization factor according to the fourth proportion L4.
Preferably, the quality assessment module comprises a fifth quality assessment submodule and a sixth quality assessment submodule;
the fifth quality evaluation submodule is configured to:
acquiring a first pitch angle of the first image;
if the first pitch angle is less than or equal to a second threshold, setting the first portrait deformation factor to a third preset value;
if the first pitch angle is greater than the second threshold:
obtaining a value of the first portrait deformation factor according to a first face up-down ratio, a first nose ratio, and a first lip ratio of the first image;
wherein the first face up-down ratio is the ratio of the distance L11 from the eyebrow center to the lower nose boundary to the distance L12 from the lower nose boundary to the chin boundary in the first image;
the first nose ratio is the ratio of the area of the nose region in the first image to the area of the whole portrait region;
the first lip ratio is the ratio of the area of the lip region in the first image to the area of the whole portrait region;
the sixth quality evaluation submodule is configured to:
acquiring a fourth pitch angle of the fourth image;
if the fourth pitch angle is less than or equal to the second threshold, setting the fourth portrait deformation factor to the third preset value;
if the fourth pitch angle is greater than the second threshold:
obtaining a value of the fourth portrait deformation factor according to a fourth face up-down ratio, a fourth nose ratio, and a fourth lip ratio of the fourth image;
wherein the fourth face up-down ratio is the ratio of the distance L41 from the eyebrow center to the lower nose boundary to the distance L42 from the lower nose boundary to the chin boundary in the fourth image;
the fourth nose ratio is the ratio of the area of the nose region in the fourth image to the area of the whole portrait region;
the fourth lip ratio is the ratio of the area of the lip region in the fourth image to the area of the whole portrait region.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the shooting method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the shooting method according to the first aspect.
In the embodiments of the present invention, if the first imaging effect of the first image meets the first preset condition, a second image shot by the camera at the first position is displayed or stored, and a picture meeting the user's shooting requirements can be obtained without the user having to manually adjust the shooting angle or perform similar operations. Therefore, the shooting method provided by the embodiments of the present invention can simplify the user's operations, reduce the difficulty of use, and ensure the shooting quality of the camera.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a shooting method according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of another shooting method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of another photographing method according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of another shooting method according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a possible structure of a photographing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of another possible structure of a photographing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of another possible structure of a photographing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic diagram of another possible structure of a photographing apparatus according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a possible structure of a quality evaluation module in the photographing apparatus according to an embodiment of the present invention;
fig. 11 is a schematic diagram of another possible structure of a quality evaluation module in the photographing apparatus according to an embodiment of the present invention;
fig. 12 is a schematic diagram of another possible structure of a photographing apparatus according to an embodiment of the present invention;
fig. 13 is a schematic diagram of another possible structure of a photographing apparatus according to an embodiment of the present invention;
fig. 14 is a schematic diagram of another possible structure of a quality evaluation module in the photographing apparatus according to an embodiment of the present invention;
fig. 15 is a schematic diagram of another possible structure of a quality evaluation module in the photographing apparatus according to an embodiment of the present invention;
fig. 16 is a schematic diagram of another possible structure of a quality evaluation module in the photographing apparatus according to an embodiment of the present invention;
fig. 17 is a schematic diagram of a hardware structure of a terminal device according to various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" in this context means "or", for example, A/B may mean A or B; "and/or" herein is merely an association describing an associated object, and means that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. "plurality" means two or more than two.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first image and the second image, etc. are for distinguishing different images, rather than for describing a particular order of the images.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the words "exemplary" or "for example" are intended to present related concepts in a concrete fashion.
The terminal device in the embodiments of the present invention may be a terminal device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The following describes a software environment applied to the shooting method provided by the embodiment of the present invention, taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the shooting method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the shooting method may run based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the shooting method provided by the embodiment of the invention by running the software program in the android operating system.
The photographing method according to the embodiment of the present invention will be described with reference to fig. 2. Fig. 2 is a schematic flowchart of a shooting method according to an embodiment of the present invention, and as shown in fig. 2, the shooting method includes S201 and S202:
S201, the terminal equipment acquires a first image shot by the camera at a first position.
Optionally, the camera in the embodiment of the present invention may be a part of the terminal device, or may not belong to the terminal device, and only the first image shot by the camera at the first position is transmitted to the terminal device, which is not specifically limited in the embodiment of the present invention.
It should be noted that, in the embodiment of the present invention, the terminal device may acquire the first image captured at the first position by the camera in multiple ways. The terminal device may directly obtain the first image, or may obtain the first image from other devices, for example, first store the first image captured by the camera at the first position in the other devices, and then obtain the first image through the other devices. The embodiment of the present invention is not particularly limited thereto.
S202, if the first imaging effect of the first image meets a first preset condition, displaying or storing a second image shot by the camera at a first position.
The first imaging effect may specifically include: at least one of a first exposure balance level, a first image distortion level, and a first human image distortion level of the first image.
It can be understood that the judgment on whether the first imaging effect of the first image meets the first preset condition may be completed by a single module of the terminal device, or may be performed by other devices to transmit the result to the terminal device after the judgment is completed, or may be performed by the user, which is not specifically limited in the embodiment of the present invention.
It is to be understood that, if the first image includes portrait information, the first imaging effect may be any one of the first exposure equalization degree, the first image distortion degree, and the first portrait deformation degree, or a combination of any two of them, or a common combination of the three items, and this is not limited in this embodiment of the present invention.
If the first image does not include portrait information, the first imaging effect may be any one of the first exposure equalization degree and the first image distortion degree, or may be a combination of the two, which is not specifically limited in this embodiment of the present invention.
The second image may be the first image or a third image captured by the camera at the first position.
Optionally, after the terminal device displays or stores the second image captured by the camera at the first position, the terminal device may edit or share the target image, which is not specifically limited in the embodiment of the present invention.
Optionally, after the terminal device displays or stores the second image captured by the camera at the first position, the camera may stay at the first position, or may move to another position, which is not specifically limited in this embodiment of the present invention.
In the shooting method provided by the embodiment of the present invention, since the terminal device can display or store the second image shot by the camera at the first position, and the first imaging effect of the first image shot by the camera at the first position already satisfies the first preset condition, it can be considered that the second image also satisfies the first preset condition. Namely, the user can obtain the photo meeting the photographing requirement of the user without manually adjusting the photographing angle and other operations. Therefore, the shooting method provided by the embodiment of the invention can simplify the operation of the user, reduce the difficulty of the user and ensure the shooting quality of the camera.
It should be noted that, in the embodiment of the present invention, after displaying or storing the second image captured by the camera at the first position, the terminal device may further continue to use the capturing method in the embodiment of the present invention to obtain other photos meeting the user's capturing requirement, and the user may select a favorite photo of the user from the multiple finally obtained photos meeting the condition, which is not specifically limited in this embodiment of the present invention.
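To make the flow of S201 and S202 concrete, the following minimal Python sketch captures an image at the first position, evaluates its imaging effect, and displays or stores a second image when the first preset condition is met. The callables capture, evaluate, meets_condition, and display_or_store are hypothetical placeholders introduced only for illustration; the patent does not define such an API.

```python
from typing import Any, Callable, Optional

def shoot_at_position(capture: Callable[[], Any],
                      evaluate: Callable[[Any], float],
                      meets_condition: Callable[[float], bool],
                      display_or_store: Callable[[Any], None]) -> Optional[Any]:
    """Minimal sketch of S201/S202: capture at the first position, evaluate the
    imaging effect, and display or store a second image if the condition holds."""
    first_image = capture()                 # S201: first image at the first position
    effect = evaluate(first_image)          # first imaging effect (e.g. an imaging parameter)
    if meets_condition(effect):             # S202: first preset condition
        second_image = capture()            # the second image may also simply be the first image
        display_or_store(second_image)
        return second_image
    return None

# Example with trivial stand-ins
shoot_at_position(capture=lambda: "frame",
                  evaluate=lambda img: 0.9,
                  meets_condition=lambda e: e >= 0.8,
                  display_or_store=print)
```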
As shown in fig. 3, a possible implementation manner of the shooting method provided by the embodiment of the present invention includes the following steps:
S301, the terminal equipment acquires a first image shot by the camera at a first position.
S302, the terminal equipment carries out quality evaluation on the first image to obtain a first quality evaluation result; and obtaining a first imaging effect according to the first quality evaluation result.
Wherein the first quality assessment result comprises at least one of a first exposure equalization factor, a first image distortion factor, and a first human image deformation factor for the first image.
Optionally, the first image distortion factor may be obtained by: and calculating the curvature corresponding to the linear target appearing in the first image, and then obtaining a first image distortion factor according to the curvature.
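As a rough illustration of this curvature-based approach, the Python sketch below fits a quadratic to pixel coordinates sampled along a target that should appear straight and maps the bending of the fit to a distortion factor. The sampling of edge points, the quadratic fit, and the 1/(1 + 100*curvature) mapping are assumptions made for this sketch; the text only states that the distortion factor is derived from the curvature of straight-line targets.

```python
import numpy as np

def image_distortion_factor(xs, ys):
    """Fit a quadratic to points along a nominally straight target and use the
    magnitude of its curvature term as a distortion measure (illustrative mapping)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    a, b, c = np.polyfit(xs, ys, 2)          # y ~ a*x^2 + b*x + c
    curvature = abs(2 * a)                    # second derivative of the fitted curve
    return 1.0 / (1.0 + 100.0 * curvature)    # more bending -> smaller factor

# Example: a slightly bowed "straight" edge, as produced by lens distortion
x = np.linspace(0, 640, 50)
y = 0.0002 * (x - 320) ** 2 + 100
print(image_distortion_factor(x, y))
```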
Alternatively, the first exposure equalization factor may be obtained as follows: the terminal device acquires a first proportion L1 of over-exposed and/or over-dark pixels of the first image, and obtains the first exposure equalization factor according to the first proportion L1.
The first proportion L1 may be only the proportion of over-exposed pixels of the first image, only the proportion of over-dark pixels, or the total proportion of over-exposed and over-dark pixels, which is not specifically limited in the embodiments of the present invention.
It will be appreciated that if the width and height of the first image are w and h respectively, and the numbers of over-dark and over-bright pixels in the first image are N_dark and N_bright respectively, then the proportion of over-dark pixels is N_dark/(w*h) and the proportion of over-bright pixels is N_bright/(w*h). For example, if the first proportion L1 is the total proportion of over-exposed and over-dark pixels, L1 may be calculated as L1 = (N_dark + N_bright)/(w*h).
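A minimal Python sketch of this computation is shown below; the brightness thresholds used to classify pixels as over-dark or over-exposed, and the mapping from the proportion L1 to the exposure equalization factor (here simply 1 - L1), are illustrative assumptions not fixed by the text.

```python
import numpy as np

def exposure_equalization_factor(gray, dark_thresh=25, bright_thresh=230):
    """Estimate an exposure equalization factor from a grayscale image.

    gray: 2-D numpy array of intensities in [0, 255]. Thresholds and the final
    mapping (1 - L1) are illustrative assumptions.
    """
    h, w = gray.shape
    n_dark = int(np.sum(gray < dark_thresh))      # over-dark pixels
    n_bright = int(np.sum(gray > bright_thresh))  # over-exposed pixels
    l1 = (n_dark + n_bright) / float(w * h)       # first proportion L1
    return 1.0 - l1                               # fewer badly exposed pixels -> higher factor

# Example: a synthetic image with a blown-out corner
img = np.full((480, 640), 128, dtype=np.uint8)
img[:100, :100] = 255
print(exposure_equalization_factor(img))
```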
Optionally, the first portrait deformation factor may be obtained as follows: the terminal device obtains a first pitch angle of the first image. If the first pitch angle is less than or equal to a first threshold, the terminal device sets the first portrait deformation factor to a first preset value. If the first pitch angle is greater than the first threshold, the terminal device obtains a value of the first portrait deformation factor according to a first face up-down ratio, a first nose ratio, and a first lip ratio of the first image.
Wherein the first pitch angle may be a pitch angle of the portrait information when the first image contains the portrait information.
It is understood that the first preset value may be 0, may also be infinite or infinitesimal, and may also be other values, which is not specifically limited in this embodiment of the present invention.
The first face up-down ratio is the ratio of the distance L11 from the eyebrow center to the lower nose boundary to the distance L12 from the lower nose boundary to the chin boundary in the first image;
the first nose ratio is the ratio of the area of the nose region in the first image to the area of the whole portrait region;
the first lip ratio is the ratio of the area of the lip region in the first image to the area of the whole portrait region.
It will be appreciated that the value of the first portrait deformation factor may directly be the value of one of the first face up-down ratio, the first nose ratio, and the first lip ratio, or may be a value calculated from one or more of them, which is not specifically limited in the embodiments of the present invention.
Optionally, the first portrait deformation factor may be a value obtained by a weighted average of one or more of the first face up-down ratio, the first nose ratio, and the first lip ratio of the first image.
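The following Python sketch combines the three ratios into a single portrait deformation factor, assuming facial measurements are already available from a landmark detector. The FaceMeasurements structure, the pitch threshold, the preset value, and the equal weights are hypothetical choices for illustration; the text leaves the exact combination unspecified.

```python
from dataclasses import dataclass

@dataclass
class FaceMeasurements:
    """Hypothetical measurements taken from a portrait by a landmark detector."""
    eyebrow_to_nose: float   # distance L11: eyebrow center to lower nose boundary
    nose_to_chin: float      # distance L12: lower nose boundary to chin boundary
    nose_area: float         # area of the nose region
    lip_area: float          # area of the lip region
    portrait_area: float     # area of the whole portrait region

def portrait_deformation_factor(m: FaceMeasurements, pitch_deg: float,
                                pitch_threshold: float = 10.0,
                                preset_value: float = 1.0,
                                weights=(1/3, 1/3, 1/3)) -> float:
    """Weighted combination of the face up-down ratio, nose ratio and lip ratio.

    Threshold, preset value and weights are assumptions; the text only says the
    factor is obtained from these three ratios when the pitch angle is large.
    """
    if pitch_deg <= pitch_threshold:
        return preset_value                               # small pitch: use the preset value
    face_ratio = m.eyebrow_to_nose / m.nose_to_chin       # first face up-down ratio
    nose_ratio = m.nose_area / m.portrait_area            # first nose ratio
    lip_ratio = m.lip_area / m.portrait_area              # first lip ratio
    w1, w2, w3 = weights
    return w1 * face_ratio + w2 * nose_ratio + w3 * lip_ratio

# Example with made-up measurements
m = FaceMeasurements(eyebrow_to_nose=120, nose_to_chin=110,
                     nose_area=4000, lip_area=2500, portrait_area=90000)
print(portrait_deformation_factor(m, pitch_deg=25.0))
```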
It is understood that, in the embodiment of the present invention, the first image distortion factor, the first exposure equalization factor, and the first human image deformation factor may also be obtained by other methods, which is not particularly limited by the embodiment of the present invention.
It is to be understood that, if the first image includes portrait information, the first quality evaluation result may be any one of the first exposure equalization factor, the first image distortion factor, and the first portrait deformation factor, or a combination of any two of them, or a common combination of the three items, and this is not limited in this embodiment of the present invention.
If the first image does not include portrait information, the first quality evaluation result may be any one of the first exposure equalization factor and the first image distortion factor, or may be a combination of the two, which is not specifically limited in this embodiment of the present invention.
Optionally, the first imaging effect may be a first quality evaluation result, or may be an imaging parameter calculated according to the first quality evaluation result. The embodiment of the present invention is not particularly limited thereto.
Optionally, 0.4, 0.3, and 0.3 may be respectively used as weights of the first exposure equalization factor, the first image distortion factor, and the first human image deformation factor, and the first exposure equalization factor, the first image distortion factor, and the first human image deformation factor of the first image are weighted and averaged to obtain an imaging parameter as the first imaging effect.
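A short Python sketch of this weighting step is given below; the assumption that the three factor values are normalized so that higher means better is made only for illustration.

```python
def imaging_parameter(exposure_factor: float,
                      distortion_factor: float,
                      deformation_factor: float) -> float:
    """Weighted average of the three quality factors with weights 0.4, 0.3 and 0.3."""
    weights = (0.4, 0.3, 0.3)
    factors = (exposure_factor, distortion_factor, deformation_factor)
    return sum(w * f for w, f in zip(weights, factors))

# Example: 0.4*0.95 + 0.3*0.80 + 0.3*0.70 = 0.83
print(imaging_parameter(0.95, 0.80, 0.70))
```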
And S303, if the first imaging effect of the first image is less than or equal to a third threshold value, the terminal equipment fixes or adjusts the camera to a first position, and displays or stores a second image shot by the camera at the first position.
The second image may be the first image or a third image captured by the camera at the first position.
Optionally, the terminal device may fix or adjust the camera first, and then display or store the second image; or the second image can be displayed or stored first and then the camera can be fixed or adjusted; or may be performed simultaneously. The embodiment of the present invention is not particularly limited thereto.
It is to be understood that, in the embodiment of the present invention, the fixing or adjusting the camera to the first position may be directly fixing the camera to the first position, or may be moving the camera to another position and then adjusting the camera to the first position, which is not limited in the embodiment of the present invention.
It is understood that the camera fixed or adjusted to the first position in the embodiment of the present invention may be applied to any embodiment of the present shooting method.
In the embodiment of the present invention, when the first imaging effect of the first image is less than or equal to the third threshold, the terminal device fixes or adjusts the camera to the first position, so that the user can keep the camera at a position suitable for taking a picture without performing operations such as manually adjusting a shooting angle. Therefore, the shooting method provided by the embodiment of the invention can simplify the operation of the user, reduce the difficulty of the user and keep the camera at the position capable of ensuring the shooting quality of the camera.
Further, the terminal device performs quality evaluation on the first image and determines the first imaging effect based on the first quality evaluation result. This makes the basis for judging the first imaging effect concrete, guarantees the imaging effect of the first image, and can improve the user experience.
As shown in fig. 4, a possible implementation manner of the shooting method provided by the embodiment of the present invention includes the following steps:
S401, the terminal equipment acquires a first image shot by the camera at a first position.
S402, the terminal equipment acquires a fourth image shot by the camera at the second position.
It is to be understood that the fourth image may be one image or a plurality of images, and this is not particularly limited by the embodiment of the present invention.
S403, judging whether the first imaging effect of the first image is better than the fourth imaging effect of the fourth image; if the first imaging effect is better than the fourth imaging effect, step S404 is performed, and if the first imaging effect is not better than the fourth imaging effect, step S405 is performed.
The first imaging effect may specifically include: at least one of a first exposure balance level, a first image distortion level, and a first human image distortion level of the first image. The fourth imaging effect specifically includes: at least one of a fourth degree of exposure equalization, a fourth degree of image distortion, and a fourth degree of portrait distortion for the fourth image.
It is to be understood that the first imaging effect may be any one of the first exposure equalization degree, the first image distortion degree and the first human image deformation degree, or a combination of any two of them, or a common combination of the three, and this is not particularly limited in this embodiment of the present invention; the fourth imaging effect may be any one of a fourth exposure equalization degree, a fourth image distortion degree and a fourth human image deformation degree, or a combination of any two of them, or a common combination of the three items, which is not specifically limited in this embodiment of the present invention.
Optionally, that the first imaging effect of the first image is better than the fourth imaging effect of the fourth image specifically means that at least one of the following holds: the first exposure equalization degree is better than the fourth exposure equalization degree, the first image distortion degree is better than the fourth image distortion degree, or the first portrait deformation degree is better than the fourth portrait deformation degree; that is, one, two, or all three of these comparisons may hold, which is not specifically limited in the embodiments of the present invention.
Optionally, if the first image does not include the face region, the first person deformation degree may be set to a second preset value.
Optionally, if the fourth image does not include the face region, the fourth human image deformation degree may be set to a second preset value.
It can be understood that the second preset value may be 0, may also be infinite or infinitesimal, may also be other numerical values, and may also be only a certain degree such as worst or worse, and the embodiment of the present invention does not specifically limit this.
And S404, displaying or storing a second image shot by the camera at the first position.
The second image may be the first image or a third image captured by the camera at the first position.
S405, a third position is taken as a new second position, and the process returns to step S402.
It is understood that, in other embodiments of the present invention, step S402 may be executed before step S401, or the two steps may be executed simultaneously, which is not specifically limited in the embodiments of the present invention.
In the embodiments of the present invention, comparing the imaging effect of the first image with the imaging effect of the fourth image ensures that the imaging effect of the first image is the better one. This scheme makes the basis for judging the first imaging effect concrete, guarantees the imaging effect of the first image, and can improve the user experience.
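The loop of S401 to S405 can be sketched as follows in Python. The capture_at and evaluate callables, the integer position indices, and the convention that a larger imaging parameter means a better imaging effect are placeholders assumed for illustration.

```python
from typing import Any, Callable, Optional, Sequence

def confirm_first_position(capture_at: Callable[[int], Any],
                           evaluate: Callable[[Any], float],
                           first_position: int,
                           other_positions: Sequence[int],
                           display_or_store: Callable[[Any], None]) -> Optional[Any]:
    """Sketch of the fig. 4 flow: the image from the first position is kept once
    its imaging effect is better than that of an image from another position."""
    first_image = capture_at(first_position)          # S401
    first_effect = evaluate(first_image)
    for pos in other_positions:                       # S402 / S405: try further positions
        fourth_image = capture_at(pos)
        fourth_effect = evaluate(fourth_image)
        if first_effect > fourth_effect:              # S403: first effect is better
            display_or_store(first_image)             # S404: display or store
            return first_image
    return None

# Example with synthetic "effects" keyed by position index
effects = {0: 0.9, 1: 0.95, 2: 0.7}
confirm_first_position(capture_at=lambda p: p,
                       evaluate=lambda img: effects[img],
                       first_position=0,
                       other_positions=[1, 2],
                       display_or_store=lambda img: print("kept position", img))
```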
As shown in fig. 5, a possible implementation manner of the shooting method provided by the embodiment of the present invention includes the following steps:
S501, the terminal equipment acquires a first image shot by the camera at a first position.
S502, the terminal device carries out quality evaluation on the first image to obtain a first quality evaluation result.
Wherein the first quality assessment result comprises at least one of a first exposure equalization factor, a first image distortion factor, and a first human image deformation factor for the first image.
Optionally, the first image distortion factor may be obtained by: and calculating the curvature corresponding to the linear target appearing in the first image, and then obtaining a first image distortion factor according to the curvature.
Alternatively, the first exposure equalization factor may be obtained as follows: the terminal device acquires a first proportion L1 of over-exposed and/or over-dark pixels of the first image, and obtains the first exposure equalization factor according to the first proportion L1.
The first proportion L1 may be only the proportion of over-exposed pixels of the first image, only the proportion of over-dark pixels, or the total proportion of over-exposed and over-dark pixels, which is not specifically limited in the embodiments of the present invention.
It will be appreciated that if the width and height of the first image are w and h respectively, and the numbers of over-dark and over-bright pixels in the first image are N_dark and N_bright respectively, then the proportion of over-dark pixels is N_dark/(w*h) and the proportion of over-bright pixels is N_bright/(w*h). For example, if the first proportion L1 is the total proportion of over-exposed and over-dark pixels, L1 may be calculated as L1 = (N_dark + N_bright)/(w*h).
Optionally, the first portrait deformation factor may be obtained as follows: the terminal device obtains a first pitch angle of the first image. If the first pitch angle is less than or equal to a second threshold, the terminal device sets the first portrait deformation factor to a third preset value. If the first pitch angle is greater than the second threshold, the terminal device obtains a value of the first portrait deformation factor according to a first face up-down ratio, a first nose ratio, and a first lip ratio of the first image.
The first pitch angle may be a pitch angle of the portrait information when the first image includes the portrait information.
It can be understood that the third preset value may be 0, may also be infinite or infinitesimal, and may also be other values, which is not specifically limited in this embodiment of the present invention.
The first face up-down ratio is the ratio of the distance L11 from the eyebrow center to the lower nose boundary to the distance L12 from the lower nose boundary to the chin boundary in the first image;
the first nose ratio is the ratio of the area of the nose region in the first image to the area of the whole portrait region;
the first lip ratio is the ratio of the area of the lip region in the first image to the area of the whole portrait region.
It will be appreciated that the value of the first portrait deformation factor may directly be the value of one of the first face up-down ratio, the first nose ratio, and the first lip ratio, or may be a value calculated from one or more of them, which is not specifically limited in the embodiments of the present invention.
Optionally, the first portrait deformation factor may be a value obtained by a weighted average of one or more of the first face up-down ratio, the first nose ratio, and the first lip ratio of the first image.
It is understood that, in the embodiment of the present invention, the first image distortion factor, the first exposure equalization factor, and the first human image deformation factor may also be obtained by other methods, which is not particularly limited by the embodiment of the present invention.
The first imaging effect may be a first quality evaluation result, or may be an imaging parameter calculated from the first quality evaluation result according to a first preset manner, which is not specifically limited in the embodiment of the present invention.
Optionally, if the first image does not include the face region, the first human deformation factor may be set to a fourth preset value.
It can be understood that the fourth preset value may be 0, may also be infinite or infinitesimal, and may also be another numerical value, which is not specifically limited in this embodiment of the present invention.
It is to be understood that, if the first image includes portrait information, the first quality evaluation result may be any one of the first exposure equalization factor, the first image distortion factor, and the first portrait deformation factor, or a combination of any two of them, or a common combination of the three items, and this is not limited in this embodiment of the present invention.
If the first image does not include portrait information, the first quality evaluation result may be any one of the first exposure equalization factor and the first image distortion factor, or may be a combination of the two, which is not specifically limited in this embodiment of the present invention.
Optionally, the first imaging effect is an imaging parameter obtained by calculating the first quality evaluation result according to a first preset manner, and may be an imaging parameter obtained by performing weighted average on the first quality evaluation result, or an imaging parameter obtained by calculating the first quality evaluation result according to another calculation manner.
And S503, the terminal equipment acquires a fourth image shot by the camera at the second position.
And S504, the terminal equipment carries out quality evaluation on the fourth image to obtain a fourth quality evaluation result.
Wherein the fourth quality assessment result comprises at least one of a fourth exposure equalization factor, a fourth image distortion factor, and a fourth portrait deformation factor for the fourth image.
Optionally, the fourth image distortion factor may be obtained by: and calculating the curvature corresponding to the linear target appearing in the fourth image, and then obtaining a fourth image distortion factor according to the curvature.
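As a rough illustration of the curvature-based approach, the sketch below fits a principal direction to each set of points sampled along a nominally straight target and uses the residual spread as a simple curvature measure; averaging these values into a single distortion factor is an assumption, not the exact calculation of the embodiment.

import numpy as np

def image_distortion_factor(line_point_sets):
    # line_point_sets: list of (N, 2) arrays sampled along nominally straight edges.
    curvatures = []
    for pts in line_point_sets:
        pts = np.asarray(pts, dtype=float)
        if pts.shape[0] < 3:
            continue  # too few samples to measure any bend
        centered = pts - pts.mean(axis=0)
        # Singular values give the spread along and across the principal direction;
        # a perfectly straight target has (almost) no spread across it.
        _, s, _ = np.linalg.svd(centered, full_matrices=False)
        curvatures.append(s[1] / max(s[0], 1e-6))
    return float(np.mean(curvatures)) if curvatures else 0.0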
Optionally, the fourth exposure equalization factor may be obtained by: the terminal equipment acquires a fourth ratio L4 of the over-exposed and/or over-dark pixels of the fourth image, and obtains a fourth exposure equalization factor according to the fourth ratio L4.
The fourth ratio L4 may be only the ratio of the overexposed pixels of the fourth image, may also be the ratio of the over-dark pixels, and may also be the total ratio of the overexposed and the over-dark pixels, which is not specifically limited in this embodiment of the present invention.
Optionally, the calculation manner of the fourth ratio L4 may be the same as the calculation manner of the first ratio L1, and this is not specifically limited in this embodiment of the present invention.
It will be appreciated that, if the width and height of the fourth image are w4 and h4, respectively, the proportion of over-dark pixels in the fourth image is N_dark4/(w4 × h4), and the proportion of over-bright pixels is N_bright4/(w4 × h4), where N_dark4 and N_bright4 denote the numbers of over-dark and over-bright pixels. For example, if the fourth ratio L4 is the total ratio of over-exposed and over-dark pixels, L4 may be calculated as L4 = (N_dark4 + N_bright4)/(w4 × h4).
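A minimal sketch of this calculation on a grayscale image is given below; the 8-bit brightness thresholds used to classify pixels as over-dark or over-bright are assumptions, since the embodiment does not fix them here, and how the resulting ratio is mapped to the fourth exposure equalization factor is likewise left open above.

import numpy as np

def exposure_ratio(gray, dark_thresh=30, bright_thresh=225):
    # gray: 2-D uint8 array of shape (h4, w4); returns the ratio L4 in [0, 1].
    h4, w4 = gray.shape
    n_dark = int(np.count_nonzero(gray <= dark_thresh))      # over-dark pixels
    n_bright = int(np.count_nonzero(gray >= bright_thresh))  # over-bright pixels
    return (n_dark + n_bright) / float(w4 * h4)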
Optionally, the fourth portrait deformation factor may be obtained by: the terminal device obtains a fourth pitch angle of the fourth image. If the fourth pitch angle is smaller than or equal to the second threshold, the terminal device sets the fourth portrait deformation factor to a third preset value. If the fourth pitch angle is larger than the second threshold, the terminal device obtains the value of the fourth portrait deformation factor according to the fourth face up-down proportion, the fourth nose proportion and the fourth lip proportion of the fourth image.
Wherein the fourth pitch angle may be a pitch angle of the portrait information when the fourth image contains the portrait information.
It can be understood that the third preset value may be 0, may also be infinite or infinitesimal, and may also be other values, which is not specifically limited in this embodiment of the present invention.
Wherein the fourth face up-down proportion is the ratio of the distance L41 from the eyebrow center to the lower nose boundary to the distance L42 from the lower nose boundary to the chin boundary in the fourth image; the fourth nose proportion is the ratio of the area of the nose region in the fourth image to the area of the whole portrait region; and the fourth lip proportion is the ratio of the lip area in the fourth image to the whole portrait area.
It will be appreciated that the value of the fourth portrait deformation factor may directly be the value of one of the fourth face up-down proportion, the fourth nose proportion and the fourth lip proportion, or may be a value obtained by calculating one or more of them, which is not particularly limited in the embodiment of the present invention.
Optionally, the fourth portrait deformation factor may be a value obtained by weighted averaging of one or more of the fourth face up-down proportion, the fourth nose proportion and the fourth lip proportion of the fourth image.
It is understood that, in the embodiment of the present invention, the fourth image distortion factor, the fourth exposure equalization factor, and the fourth human image deformation factor may also be obtained by other methods, which is not particularly limited in the embodiment of the present invention.
The fourth imaging effect may be a fourth quality evaluation result, or may be an imaging parameter obtained by calculating the fourth quality evaluation result according to a fourth preset mode, which is not specifically limited in the embodiment of the present invention.
Optionally, if the fourth image does not include a face region, the fourth portrait deformation factor may be set to a fourth preset value.
It can be understood that the fourth preset value may be 0, may also be infinite or infinitesimal, and may also be another numerical value, which is not specifically limited in this embodiment of the present invention.
It is to be understood that the fourth quality evaluation result may be any one of the fourth exposure equalization factor, the fourth image distortion factor and the fourth portrait deformation factor, or a combination of any two of them, or a combination of all three, which is not limited in this embodiment of the present invention.
Optionally, when the fourth imaging effect is an imaging parameter obtained by calculating the fourth quality evaluation result according to a fourth preset mode, it may be an imaging parameter obtained by performing a weighted average on the fourth quality evaluation result, or an imaging parameter obtained by another calculation mode, which is not specifically limited in this embodiment of the present invention.
Optionally, the first preset mode and the fourth preset mode may be the same. Specifically, for the first preset mode and the fourth preset mode, the first exposure equalization factor corresponds to the fourth exposure equalization factor, the first image distortion factor corresponds to the fourth image distortion factor, and the first portrait deformation factor corresponds to the fourth portrait deformation factor.
Optionally, in a case that the first preset mode and the fourth preset mode are the same, the first imaging effect of the first image may specifically be: taking 0.4, 0.3 and 0.3 as the weights of the first exposure equalization factor, the first image distortion factor and the first portrait deformation factor respectively, and carrying out a weighted average on the first exposure equalization factor, the first image distortion factor and the first portrait deformation factor to obtain an imaging parameter;
the fourth imaging effect of the corresponding fourth image is specifically: and taking 0.4, 0.3 and 0.3 as the weights of a fourth exposure equalization factor, a fourth image distortion factor and a fourth portrait deformation factor respectively, and carrying out weighted average on the fourth exposure equalization factor, the fourth image distortion factor and the fourth portrait deformation factor to obtain the imaging parameters.
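For example, the weighted averages described above could be computed as in the short Python sketch below; the factor values are made-up numbers used only to show that the same 0.4/0.3/0.3 weights are applied to both images.

def imaging_effect(exposure_factor, distortion_factor, deformation_factor,
                   weights=(0.4, 0.3, 0.3)):
    w_e, w_d, w_p = weights
    return w_e * exposure_factor + w_d * distortion_factor + w_p * deformation_factor

# Identical weights for both positions keep the two imaging parameters comparable in S505.
first_effect = imaging_effect(0.12, 0.05, 0.08)
fourth_effect = imaging_effect(0.20, 0.07, 0.10)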
S505, judging whether the first imaging effect of the first image is less than or equal to the fourth imaging effect of the fourth image; if the first imaging effect is less than or equal to the fourth imaging effect, step S506 is executed, and if the first imaging effect is greater than the fourth imaging effect, step S507 is executed.
And S506, displaying or storing a second image shot by the camera at the first position.
The second image may be the first image or a third image captured by the camera at the first position.
S507, the third position is set as a new second position, and the process returns to step S503.
It is understood that, in the embodiment of the present invention, "the first imaging effect is less than or equal to the fourth imaging effect" is used as the specific determination condition for the first imaging effect being better than the fourth imaging effect. In other examples applying the shooting method of the present invention, other preset conditions may be selected as this determination condition, and the present invention is not limited to the "less than or equal to" case; such other preset conditions also fall within the protection scope of the shooting method.
It is understood that, in other embodiments of the present invention, S503 to S504 may be executed before S501 to S502, or the two groups of steps may be executed at the same time, which is not specifically limited in this embodiment of the present invention.
Based on this scheme, the terminal device performs quality evaluation on the first image and the fourth image, and then compares the imaging effects of the two images based on the quality evaluation results, thereby ensuring that the imaging effect of the first image is indeed better than that of the fourth image. The scheme makes the determination basis of the first imaging effect concrete, guarantees the imaging effect of the first image, and can improve the user experience.
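As a hedged illustration of the control flow of S503 to S507, the following Python sketch keeps evaluating new candidate positions until the image at the first position scores no worse than the candidate; capture_image, evaluate and next_position are hypothetical helpers standing in for the camera capture, the quality evaluation above and the position adjustment, and the iteration limit is an added safety assumption.

def select_first_position_image(first_position, candidate_position,
                                capture_image, evaluate, next_position,
                                max_tries=10):
    first_image = capture_image(first_position)
    first_effect = evaluate(first_image)
    for _ in range(max_tries):
        fourth_image = capture_image(candidate_position)        # S503
        fourth_effect = evaluate(fourth_image)                   # S504
        if first_effect <= fourth_effect:                        # S505: "less than or equal" counts as better here
            return first_image                                   # S506: display or store the first-position image
        candidate_position = next_position(candidate_position)   # S507: try a new second position
    return first_image  # fall back to the first-position image after max_tries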
Fig. 6 is a schematic diagram of a possible structure of a shooting device according to an embodiment of the present invention. As shown in fig. 6, the shooting device 600 includes: an acquisition module 601 and a processing module 602; the acquisition module 601 is configured to acquire a first image captured by the camera at a first position; the processing module 602 is configured to display or store a second image captured by the camera at the first position when the first imaging effect of the first image meets a first preset condition, where the second image may be the first image or a third image captured by the camera at the first position;
the first imaging effect specifically includes at least one of a first exposure equalization degree, a first image distortion degree and a first human image deformation degree of the first image.
Optionally, with reference to fig. 6, as shown in fig. 7, the shooting device 600 further includes a control module 603; the control module 603 is configured to fix or adjust the camera to the first position before or after the processing module displays or stores the second image taken by the camera at the first position.
Optionally, with reference to fig. 6, as shown in fig. 8, the shooting device 600 further includes a quality evaluation module 604; the quality evaluation module 604 is configured to perform quality evaluation on the first image to obtain a first quality evaluation result.
Wherein the first quality assessment result comprises at least one of a first exposure equalization factor, a first image distortion factor, and a first human image deformation factor for the first image.
The first imaging effect is a first quality evaluation result or an imaging parameter calculated according to the first quality evaluation result.
Optionally, with reference to fig. 8, as shown in fig. 9, the shooting device 600 further includes a weighting calculation module 605; the weighting calculation module 605 performs a combined calculation on the first exposure equalization factor, the first image distortion factor and the first portrait deformation factor to obtain an imaging parameter.
Optionally, the weighting calculation module 605 may use 0.4, 0.3 and 0.3 as the weights of the first exposure equalization factor, the first image distortion factor and the first portrait deformation factor, respectively, and perform a weighted average on the three factors to obtain the imaging parameter.
Optionally, in conjunction with fig. 8, as shown in fig. 10, the quality evaluation module 604 in the photographing apparatus 600 further includes a first quality evaluation sub-module 6041; the first quality evaluation sub-module 6041 is configured to: acquire a first ratio L1 of over-exposed and/or over-dark pixels of the first image, and obtain a first exposure equalization factor according to the first ratio L1.
Optionally, in conjunction with fig. 8, as shown in fig. 11, the quality evaluation module 604 in the shooting device 600 further includes a second quality evaluation sub-module 6042; the second quality evaluation sub-module 6042 is configured to: acquire a first pitch angle of the first image; if the first pitch angle is smaller than or equal to a first threshold, set the first portrait deformation factor to a first preset value; and if the first pitch angle is larger than the first threshold, obtain the value of the first portrait deformation factor according to the first face up-down proportion, the first nose proportion and the first lip proportion of the first image.
Wherein the first face up-down proportion is the ratio of the distance L11 from the center of the eyebrow to the lower border of the nose to the distance L12 from the lower border of the nose to the chin border in the first image; the first nose proportion is the ratio of the area of the nose region in the first image to the area of the whole portrait region; and the first lip proportion is the ratio of the lip area in the first image to the whole portrait area.
Optionally, with reference to fig. 6, as shown in fig. 12, the obtaining module 601 in the photographing apparatus 600 further includes a first obtaining sub-module 6011; the first obtaining sub-module 6011 is configured to obtain a fourth image captured by the camera at the second position.
Optionally, the shooting device 600 further includes a determining module 606; the determining module 606 is configured to determine whether the first imaging effect of the first image is better than the fourth imaging effect of the fourth image. And if the first imaging effect of the first image is better than the fourth imaging effect of the fourth image, determining that the first imaging effect of the first image meets a first preset condition.
Wherein the fourth imaging effect specifically includes: at least one of a fourth degree of exposure equalization, a fourth degree of image distortion, and a fourth degree of portrait distortion for the fourth image.
Optionally, the determining module 606 is specifically configured to: and when at least one condition of the first exposure balance degree being superior to the fourth exposure balance degree, the first image distortion degree being superior to the fourth image distortion degree and the first portrait deformation degree being superior to the fourth portrait deformation degree is satisfied, determining that the first imaging effect of the first image satisfies a first preset condition.
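A minimal sketch of this determination rule is shown below, assuming that each degree is expressed as a number and that, consistent with the comparison in step S505, a smaller value is treated as better; both assumptions are illustrative.

def first_effect_meets_condition(first_degrees, fourth_degrees):
    # first_degrees / fourth_degrees: dicts with 'exposure', 'distortion', 'deformation' entries.
    return any(first_degrees[k] < fourth_degrees[k]
               for k in ('exposure', 'distortion', 'deformation'))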
Optionally, with reference to fig. 12, as shown in fig. 13, the shooting device 600 further includes a setting module 607; the setting module 607 is configured to: when the fourth image does not include a face region, set the fourth portrait deformation degree to a second preset value.
Optionally, with reference to fig. 12, as shown in fig. 14, the shooting device 600 further includes a quality evaluation module 604; the quality evaluation module 604 includes a first calculation sub-module 6043 and a second calculation sub-module 6044;
the first calculating sub-module 6043 is configured to perform quality evaluation on the first image to obtain a first quality evaluation result.
Wherein the first quality assessment result comprises at least one of a first exposure equalization factor, a first image distortion factor, and a first human image deformation factor of the first image;
and the second calculating submodule 6044 is configured to perform quality evaluation on the fourth image to obtain a fourth quality evaluation result.
Wherein the fourth quality assessment result comprises at least one of a fourth exposure equalization factor, a fourth image distortion factor, and a fourth portrait deformation factor for the fourth image.
The first imaging effect is the first quality evaluation result or the imaging parameter obtained by calculating the first quality evaluation result according to a first preset mode, and the fourth imaging effect is the fourth quality evaluation result or the imaging parameter obtained by calculating the fourth quality evaluation result according to a fourth preset mode.
Optionally, the first preset manner in the first calculation sub-module 6043 and the fourth preset manner in the second calculation sub-module 6044 may be the same.
Optionally, the first calculating sub-module 6043 may be specifically configured to calculate the imaging parameter by performing weighted average on the first exposure equalization factor, the first image distortion factor, and the first human image deformation factor according to weights of 0.4, 0.3, and 0.3; the second calculation sub-module 6044 may be specifically configured to calculate the imaging parameters by weighted averaging of the fourth exposure equalization factor, the fourth image distortion factor, and the fourth human image deformation factor by weights of 0.4, 0.3, and 0.3.
Optionally, with reference to fig. 14, as shown in fig. 15, the first calculation sub-module 6043 in the photographing apparatus 600 further includes a third quality evaluation sub-module 6045, and the second calculation sub-module 6044 further includes a fourth quality evaluation sub-module 6046.
The third quality evaluation sub-module 6045 is configured to: acquire a first ratio L1 of over-exposed and/or over-dark pixels of the first image, and obtain a first exposure equalization factor according to the first ratio L1.
The fourth quality evaluation sub-module 6046 is configured to: acquire a fourth ratio L4 of over-exposed and/or over-dark pixels of the fourth image, and obtain a fourth exposure equalization factor according to the fourth ratio L4.
Optionally, with reference to fig. 14, as shown in fig. 16, the first calculation sub-module 6043 in the photographing apparatus 600 further includes a fifth quality evaluation sub-module 6047; the second calculation sub-module 6044 also includes a sixth quality assessment sub-module 6048.
The fifth quality evaluation sub-module 6047 is configured to: acquire a first pitch angle of the first image; if the first pitch angle is smaller than or equal to a second threshold, set the first portrait deformation factor to a third preset value; and if the first pitch angle is larger than the second threshold, obtain the value of the first portrait deformation factor according to the first face up-down proportion, the first nose proportion and the first lip proportion of the first image.
Wherein the first face up-down proportion is the ratio of the distance L11 from the eyebrow center to the lower nose boundary to the distance L12 from the lower nose boundary to the chin boundary in the first image; the first nose proportion is the ratio of the area of the nose region in the first image to the area of the whole portrait region; and the first lip proportion is the ratio of the lip area in the first image to the whole portrait area.
The sixth quality evaluation sub-module 6048 is configured to: acquire a fourth pitch angle of the fourth image; if the fourth pitch angle is smaller than or equal to the second threshold, set the fourth portrait deformation factor to a third preset value; and if the fourth pitch angle is larger than the second threshold, obtain the value of the fourth portrait deformation factor according to the fourth face up-down proportion, the fourth nose proportion and the fourth lip proportion of the fourth image.
Wherein the fourth face up-down proportion is the ratio of the distance L41 from the eyebrow center to the lower nose boundary to the distance L42 from the lower nose boundary to the chin boundary in the fourth image; the fourth nose proportion is the ratio of the area of the nose region in the fourth image to the area of the whole portrait region; and the fourth lip proportion is the ratio of the lip area in the fourth image to the whole portrait area.
The shooting device 600 provided by the embodiment of the present invention can implement each process implemented by the terminal device in the above method embodiments, and is not described here again to avoid repetition.
According to the shooting device provided by the embodiment of the invention, firstly, the shooting device acquires a first image shot by the camera at a first position. Secondly, if the first imaging effect of the first image meets a first preset condition, the shooting device displays or stores a second image shot by the camera at a first position. The second image shot by the camera at the first position can be displayed or stored by the shooting device, the second image can be the first image or the third image shot by the camera at the first position, and the first imaging effect of the first image shot by the camera at the first position meets the first preset condition, so that the second image can also meet the first preset condition. Namely, the user can obtain the photo meeting the photographing requirement of the user without manually adjusting the photographing angle and other operations. Therefore, the shooting device provided by the embodiment of the invention can simplify the operation of the user, reduce the difficulty of the user and ensure the shooting quality of the camera.
Fig. 17 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, where the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 17 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to obtain a first image captured by the camera at a first position; the display unit 106 is used for displaying a second image shot by the camera at a first position when the first imaging effect of the first image meets a first preset condition; and the memory 109 is used for storing a second image shot by the camera at the first position when the first imaging effect of the first image meets a first preset condition. The second image can be the first image or a third image shot by the camera at the first position; the first imaging effect specifically includes: at least one of a first exposure balance level, a first image distortion level, and a first human image deformation level of the first image.
According to the terminal device provided by the embodiment of the invention, firstly, the terminal device acquires a first image shot by the camera at a first position. And secondly, if the first imaging effect of the first image meets a first preset condition, the terminal equipment displays or stores a second image shot by the camera at the first position. The terminal device may display or store a second image captured by the camera at the first position, where the second image may be the first image or a third image captured by the camera at the first position, and the first imaging effect of the first image captured by the camera at the first position already satisfies the first preset condition, so that the second image may also satisfy the first preset condition. Namely, the user can obtain the photo meeting the photographing requirement of the user without manually adjusting the photographing angle and other operations. Therefore, the terminal equipment provided by the embodiment of the invention can simplify the operation of the user, reduce the difficulty of the user and ensure the photographing quality of the camera.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 17, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which, with reference to fig. 17, includes a processor 110, a memory 109, and a computer program that is stored in the memory 109 and is executable on the processor 110, and when the computer program is executed by the processor 110, the computer program implements each process of the shooting method embodiment, and can achieve the same technical effect, and is not described herein again to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be substantially or partially embodied in the form of a software product, where the software product is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the shooting method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (4)

1. A shooting method is applied to terminal equipment and is characterized by comprising the following steps:
acquiring a first image shot by a camera at a first position;
if the first imaging effect of the first image meets a first preset condition, displaying or storing a second image shot by the camera at the first position, wherein the second image is the first image or a third image shot by the camera at the first position;
wherein the first imaging effect specifically includes:
a first exposure balance degree, a first image distortion degree and a first portrait deformation degree of the first image;
after the acquiring the first image shot by the camera at the first position, the method further comprises: performing quality evaluation on the first image to obtain a first quality evaluation result;
wherein the first quality assessment result comprises a first exposure equalization factor, a first image distortion factor, and a first human image deformation factor for the first image;
the first imaging effect is an imaging parameter calculated according to the first quality evaluation result;
wherein the imaging parameters calculated from the first quality assessment results include:
respectively setting weights of a first exposure equalization factor, a first image distortion factor and a first human image deformation factor, and carrying out weighted average on the first exposure equalization factor, the first image distortion factor and the first human image deformation factor of the first image to obtain an imaging parameter as a first imaging effect;
the first exposure equalization factor is obtained by the following method: acquiring a first ratio L1 of over-exposed and/or over-dark pixels of the first image; and obtaining the first exposure equalization factor according to the first ratio L1;
the first image distortion factor is obtained by the following method: calculating the curvature corresponding to the linear target appearing in the first image, and then obtaining a first image distortion factor according to the curvature;
the first portrait deformation factor is obtained by the following method: acquiring a first pitch angle of the first image; if the first pitch angle is smaller than or equal to a first threshold, setting the first portrait deformation factor to a first preset value; and if the first pitch angle is larger than the first threshold, obtaining the value of the first portrait deformation factor according to the first face up-down proportion, the first nose proportion and the first lip proportion of the first image;
after the acquiring the first image shot by the camera at the first position, the method further comprises:
fixing or adjusting the camera to the first position.
2. The method of claim 1, further comprising, after said acquiring the first image taken by the camera at the first location:
acquiring a fourth image shot by the camera at a second position;
judging whether the first imaging effect of the first image meets a first preset condition or not;
wherein, the first imaging effect of the first image satisfies a first preset condition, and specifically includes:
the first imaging effect of the first image is better than the fourth imaging effect of the fourth image.
3. A photographing apparatus, characterized by comprising: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring a first image shot by the camera at a first position;
the processing module is used for displaying or storing a second image shot by the camera at the first position when a first imaging effect of the first image meets a first preset condition, wherein the second image is the first image or a third image shot by the camera at the first position;
wherein the first imaging effect specifically includes:
a first exposure balance degree, a first image distortion degree and a first portrait deformation degree of the first image;
the shooting device also comprises a quality evaluation module;
the quality evaluation module is used for carrying out quality evaluation on the first image to obtain a first quality evaluation result;
wherein the first quality assessment result comprises a first exposure equalization factor, a first image distortion factor, and a first human image deformation factor for the first image;
the first imaging effect is an imaging parameter calculated according to the first quality evaluation result;
wherein the imaging parameters calculated from the first quality assessment results include:
respectively setting weights of a first exposure equalization factor, a first image distortion factor and a first human image deformation factor, and carrying out weighted average on the first exposure equalization factor, the first image distortion factor and the first human image deformation factor of the first image to obtain an imaging parameter as a first imaging effect;
the first exposure equalization factor is obtained by the following method: acquiring a first ratio L1 of over-exposed and/or over-dark pixels of the first image; and obtaining the first exposure equalization factor according to the first ratio L1;
the first image distortion factor is obtained by the following method: calculating the curvature corresponding to the linear target appearing in the first image, and then obtaining a first image distortion factor according to the curvature;
the first portrait deformation factor is obtained by the following method: acquiring a first pitch angle of the first image; if the first pitch angle is smaller than or equal to a first threshold, setting the first portrait deformation factor to a first preset value; and if the first pitch angle is larger than the first threshold, obtaining the value of the first portrait deformation factor according to the first face up-down proportion, the first nose proportion and the first lip proportion of the first image;
the shooting device also comprises a control module;
the control module is used for fixing or adjusting the camera to the first position.
4. The camera of claim 3, wherein the acquisition module comprises a first acquisition sub-module;
the first obtaining submodule is used for obtaining a fourth image shot by the camera at a second position;
the shooting device also comprises a judging module;
the judging module is configured to judge whether the first imaging effect of the first image is better than a fourth imaging effect of the fourth image, and if the first imaging effect of the first image is better than the fourth imaging effect of the fourth image, determine that the first imaging effect of the first image satisfies a first preset condition.
CN201811479252.1A 2018-12-05 2018-12-05 Shooting method and shooting device Active CN109274894B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811479252.1A CN109274894B (en) 2018-12-05 2018-12-05 Shooting method and shooting device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811479252.1A CN109274894B (en) 2018-12-05 2018-12-05 Shooting method and shooting device

Publications (2)

Publication Number Publication Date
CN109274894A CN109274894A (en) 2019-01-25
CN109274894B true CN109274894B (en) 2021-01-08

Family

ID=65186965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811479252.1A Active CN109274894B (en) 2018-12-05 2018-12-05 Shooting method and shooting device

Country Status (1)

Country Link
CN (1) CN109274894B (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236086B (en) * 2008-01-31 2010-08-11 北京控制工程研究所 Ultraviolet moon sensor output data evaluation and judging method
CN104427234A (en) * 2013-09-02 2015-03-18 联想(北京)有限公司 Image distortion correction method and electronic device
CN106464755B (en) * 2015-02-28 2019-09-03 华为技术有限公司 The method and electronic equipment of adjust automatically camera
CN104754218B (en) * 2015-03-10 2018-03-27 广东欧珀移动通信有限公司 A kind of Intelligent photographing method and terminal
CN104883495B (en) * 2015-04-30 2018-05-29 广东欧珀移动通信有限公司 A kind of photographic method and device
CN105139404B (en) * 2015-08-31 2018-12-21 广州市幸福网络技术有限公司 A kind of the license camera and shooting quality detection method of detectable shooting quality
CN105959542B (en) * 2016-05-17 2019-06-25 联想(北京)有限公司 Image processing method and electronic equipment
CN106027907B (en) * 2016-06-30 2019-08-20 维沃移动通信有限公司 A kind of method and mobile terminal of adjust automatically camera
US10218901B2 (en) * 2017-04-05 2019-02-26 International Business Machines Corporation Picture composition adjustment
CN108111751B (en) * 2017-12-12 2020-06-02 北京小米移动软件有限公司 Shooting angle adjusting method and device

Also Published As

Publication number Publication date
CN109274894A (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN110719402B (en) Image processing method and terminal equipment
CN108307109B (en) High dynamic range image preview method and terminal equipment
CN109743498B (en) Shooting parameter adjusting method and terminal equipment
CN110896451B (en) Preview picture display method, electronic device and computer readable storage medium
CN110505400B (en) Preview image display adjustment method and terminal
CN108234894B (en) Exposure adjusting method and terminal equipment
CN109462745B (en) White balance processing method and mobile terminal
CN107623818B (en) Image exposure method and mobile terminal
CN111031234B (en) Image processing method and electronic equipment
CN111601032A (en) Shooting method and device and electronic equipment
CN109005355B (en) Shooting method and mobile terminal
CN110830713A (en) Zooming method and electronic equipment
CN111083386B (en) Image processing method and electronic device
CN111459233A (en) Display method, electronic device, and storage medium
CN110769154B (en) Shooting method and electronic equipment
CN110868546B (en) Shooting method and electronic equipment
CN110602390B (en) Image processing method and electronic equipment
CN109859718B (en) Screen brightness adjusting method and terminal equipment
CN109104573B (en) Method for determining focusing point and terminal equipment
CN111131722A (en) Image processing method, electronic device, and medium
CN110708475A (en) Exposure parameter determination method, electronic equipment and storage medium
CN111432122B (en) Image processing method and electronic equipment
CN111147754B (en) Image processing method and electronic device
CN110913133B (en) Shooting method and electronic equipment
CN110769162B (en) Electronic equipment and focusing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant