CN110868542A - Photographing method, device and equipment - Google Patents

Photographing method, device and equipment

Info

Publication number
CN110868542A
Authority
CN
China
Prior art keywords
target
terminal
composition
preview image
shot object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911161610.9A
Other languages
Chinese (zh)
Inventor
肖明
李凌志
陆伟峰
朱荣昌
代雪刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Microphone Holdings Co Ltd
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Microphone Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Microphone Holdings Co Ltd filed Critical Shenzhen Microphone Holdings Co Ltd
Priority to CN201911161610.9A priority Critical patent/CN110868542A/en
Publication of CN110868542A publication Critical patent/CN110868542A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces

Abstract

The embodiment of the invention discloses a photographing method, a photographing device and photographing equipment. In the method, a terminal acquires a preview image; the terminal outputs a target adjustment parameter according to the preview image, wherein the target adjustment parameter is used for instructing the terminal to adjust its position or for instructing the photographed subject to adjust its position; and when the terminal receives indication information indicating that the adjustment is completed, the terminal shoots. The method can solve the problem that the photographing function of existing terminals relies mainly on manual control, and enables the terminal to adjust the position of the photographed subject by processing the preview image, thereby improving the quality of photographs taken automatically by the terminal.

Description

Photographing method, device and equipment
Technical Field
The embodiment of the application relates to the field of photographing, in particular to a photographing method, device and equipment.
Background
With the continuous development of terminal equipment, photographing functions are advancing rapidly. Recording life's beautiful moments through photographs has become an essential part of people's lives; a photo taken on a journey or a selfie taken in a moment of leisure adds much pleasure to the record of daily life.
At present, people mainly take selfies by holding a mobile phone or camera in the hand or by using equipment such as a selfie stick, which easily results in unnatural postures. Alternatively, people must ask others to take the photo for them; in a group photo, the person operating the camera cannot appear in the picture, and in sparsely populated places it may even be impossible to find someone to help take the photo at all. Therefore, the photographing function of the terminal needs to develop toward greater intelligence.
Disclosure of Invention
The embodiment of the invention discloses a photographing method, a photographing device and photographing equipment, which can solve the problem that the photographing function of existing terminals relies mainly on manual control; the terminal can adjust the position of the photographed subject by processing the preview image, so that the quality of photographs taken automatically by the terminal is improved.
In a first aspect, an embodiment of the present application provides a photographing method, including:
the terminal acquires a preview image;
the terminal determines a target adjustment parameter according to the position of the shot object in the preview image, wherein the target adjustment parameter is used for indicating the adjustment of the position of the shot object in the preview image;
and after receiving indication information for indicating the completion of the adjustment, the terminal shoots.
As a possible implementation manner, the terminal determines the target adjustment parameter according to the position of the captured object in the preview image, including the terminal determining the target adjustment parameter according to a difference between the position of the captured object in the preview image and a target position in a target composition, where the target position is used for indicating the position of the captured object in the target composition, and the method further includes:
and the terminal outputs the target adjusting parameter.
As a possible implementation manner, before determining the target adjustment parameter according to the difference between the position of the subject in the preview image and the target position in the target composition, the method further includes one of the following steps:
the terminal determines the target composition as a composition corresponding to the shape presented by the position of the shot object in the preview image according to the corresponding relation between the shape and the composition;
the terminal determines the target composition as the composition corresponding to the depth of the shot object and the proportion of the pixel points of the shot object in the pixel points of the preview image according to the corresponding relation among the depth, the proportion and the composition;
and the terminal determines the target composition to be the composition corresponding to the number of the shot objects according to the corresponding relation between the number and the composition.
As a possible implementation, the determining the target adjustment parameter according to the difference between the position of the captured object in the preview image and the target position in the target composition includes:
the terminal determines the target adjusting parameters according to the difference between the position of the shot object in the preview image and the target position in the target composition and the depth of the shot object, wherein the target adjusting parameters comprise the translation amount and the rotation amount of the terminal, and the target adjusting parameters are used for indicating the terminal to adjust the position through the target adjusting parameters.
As a possible implementation manner, the outputting, by the terminal, the target adjustment parameter includes:
and the terminal sends the target adjustment parameters to a support so that the support adjusts the terminal according to the target adjustment parameters, wherein the support is in communication connection with the terminal.
As a possible implementation, the determining the target adjustment parameter according to the difference between the position of the captured object in the preview image and the target position in the target composition includes:
and determining the target adjustment parameter according to the difference between the position of the shot object in the preview image and the target position in the target composition and the depth of the shot object, wherein the target adjustment parameter is used for indicating the shot object to carry out position adjustment through the target adjustment parameter.
As a possible implementation manner, after receiving the indication information indicating that the adjustment is completed, before the terminal performs shooting, the method further includes:
and the terminal outputs the target audio and video.
As a possible implementation manner, the subject is a person, and before the terminal outputs the target audio and video, the method further includes:
the terminal identifies the age of the shot object;
and the terminal determines the audio and video corresponding to the age of the shot object in an audio and video database according to the age of the shot object, wherein the audio and video database comprises the audio and video corresponding to a plurality of age groups.
As a possible implementation manner, after receiving the indication information indicating that the adjustment is completed, before the terminal performs shooting, the method further includes:
the terminal outputs indication information indicating a posture of the subject.
In a second aspect, an embodiment of the present application discloses a photographing apparatus, including:
an acquisition unit configured to acquire a preview image;
a parameter determining unit, configured to determine a target adjustment parameter according to a position of a captured object in the preview image, where the target adjustment parameter is used to instruct to adjust the position of the captured object in the preview image;
a receiving unit, configured to receive indication information indicating that adjustment is completed;
and the shooting unit is used for shooting after the receiving unit receives the indication information for indicating the completion of the adjustment.
As a possible implementation manner, the parameter determining unit is specifically configured to determine the target adjustment parameter according to a difference between a position of the subject in the preview image and a target position in a target composition, where the target position is used to indicate the position of the subject in the target composition, and the apparatus further includes:
and the first output unit is used for outputting the target adjustment parameter.
As a possible implementation manner, the apparatus further includes a matching unit, and the matching unit is specifically configured to perform one of the following steps:
determining the target composition as a composition corresponding to the shape presented by the position of the shot object in the preview image according to the corresponding relation between the shape and the composition;
the terminal determines the target composition as the composition corresponding to the depth of the shot object and the proportion of the pixel points of the shot object in the pixel points of the preview image according to the corresponding relation among the depth, the proportion and the composition;
and the terminal determines the target composition to be the composition corresponding to the number of the shot objects according to the corresponding relation between the number and the composition.
As a possible implementation manner, the parameter determining unit is specifically configured to:
determining the target adjustment parameters according to the difference between the position of the shot object in the preview image and the target position in the target composition and the depth of the shot object, wherein the target adjustment parameters comprise the translation amount and the rotation amount of the terminal, and the target adjustment parameters are used for indicating the terminal to carry out position adjustment through the target adjustment parameters.
As a possible implementation, the first output unit is specifically configured to:
and sending the target adjustment parameters to a support so that the support adjusts the terminal according to the target adjustment parameters, wherein the support is in communication connection with the terminal.
As a possible implementation manner, the parameter determining unit is specifically configured to:
and determining the target adjustment parameter according to the difference between the position of the shot object in the preview image and the target position in the target composition and the depth of the shot object, wherein the target adjustment parameter is used for indicating the shot object to carry out position adjustment through the target adjustment parameter.
As a possible implementation, the apparatus further comprises:
and the second output unit is used for outputting the target audio and video after the receiving unit receives the indication information for indicating the completion of the adjustment and before the shooting unit shoots.
As a possible implementation manner, the subject is a person, and before the terminal outputs the target audio and video, the apparatus further includes:
an identification unit configured to identify an age of the subject;
and the age determining unit is used for determining the audio and video corresponding to the age of the shot object in an audio and video database according to the age of the shot object, wherein the audio and video database comprises the audio and video corresponding to a plurality of age groups.
As a possible implementation manner, after receiving the indication information indicating that the adjustment is completed, before the terminal performs shooting, the apparatus further includes:
a third output unit configured to output instruction information indicating a posture of the subject.
A third aspect discloses a photographing apparatus, which includes a processor and a memory, wherein the processor is connected to the memory, the memory is used for storing program codes, and the processor is used for calling the program codes to implement the photographing method disclosed in the first aspect or any embodiment of the first aspect.
A fourth aspect discloses a computer-readable storage medium storing a computer program or computer instructions which, when executed, implement the photographing method as disclosed in the first aspect or any of the embodiments of the first aspect.
In the embodiment of the invention, a terminal acquires a preview image; the terminal determines a target adjustment parameter according to the position of the photographed subject in the preview image, wherein the target adjustment parameter is used for indicating adjustment of the position of the photographed subject in the preview image; and after receiving indication information indicating that the adjustment is completed, the terminal shoots. By implementing the embodiment of the invention, the problem that the photographing function of existing terminals relies mainly on manual control, so that the photographed subject cannot be photographed freely, can be solved; the terminal can adjust the position of the photographed subject in the preview image according to the composition corresponding to the preview image, so that the quality of photographs taken automatically by the terminal is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of a photographing system architecture according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an exemplary imaging principle disclosed in an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating a photographing method according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating an embodiment of outputting target adjustment parameters according to a position of a captured object in a preview image;
FIG. 5 is an exemplary illustration of a method for determining the target adjustment parameter to be output, according to an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating another embodiment of outputting target adjustment parameters according to the position of the captured object in the preview image;
FIG. 7 is an exemplary illustration of another method for determining the target adjustment parameter, in accordance with an embodiment of the present disclosure;
FIG. 8 is a flowchart of a method for determining a composition according to a shape assumed by a position of a subject in a preview image according to an embodiment of the present invention;
FIG. 9 is an exemplary illustration of a method for determining a composition based on the shape presented by the positions of the subjects in a preview image according to embodiments of the present disclosure;
FIG. 10 is an exemplary illustration of a shape to composition correspondence, in accordance with embodiments of the present invention;
fig. 11 is a flowchart of a method for determining a composition according to a depth of a subject and a ratio of pixel points of the subject to be photographed to pixel points of a preview image according to an embodiment of the present invention;
fig. 12 is an exemplary illustration of a method for determining a composition according to the depth of a subject and the proportion of the subject's pixel points among the pixel points of a preview image according to an embodiment of the present invention;
FIG. 13 is an exemplary illustration of a correspondence among depth, proportion and composition according to embodiments of the present invention;
FIG. 14 is a flowchart of a method for determining a composition based on the number of subjects disclosed in an embodiment of the present invention;
fig. 15 is an exemplary illustration of a method for determining a composition according to the number of subjects to be photographed according to an embodiment of the present invention;
FIG. 16 is an exemplary illustration of a correspondence between the number of subjects and the composition according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present invention;
fig. 18 is a schematic structural diagram of another photographing apparatus according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention discloses a photographing method and device, which are used for improving the quality of automatic photographing of a terminal. The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be understood that the terminology used in the embodiments of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic diagram of a photographing system according to an embodiment of the present invention. As shown in fig. 1, the system architecture diagram may include a terminal 101, a first server 102, and a second server 103.
The terminal 101 acquires a preview image, where the preview image may be acquired by a camera of the terminal 101, and the terminal 101 may be a smart phone, a tablet computer, a camera, or other devices including a camera, which is not limited herein.
The terminal 101 identifies the photographed subject in the preview image and matches a corresponding composition (for example, an S-shaped composition, a spiral composition, a golden-ratio composition, a nine-square grid composition, etc.) according to the shape presented by the position distribution of the photographed subject in the preview image. The terminal 101 then determines the target adjustment parameter required by the composition according to the difference between the target position determined by the matched composition and the position distribution presented by the photographed subject in the preview image. The target adjustment parameter can be used to instruct the terminal to adjust its position according to the target adjustment parameter, and can also be used to instruct the photographed subject to adjust its position according to the target adjustment parameter. For example, the terminal 101 may output the target adjustment parameter to a support holding the terminal 101, and the support adjusts the position of the terminal 101 accordingly; or the terminal 101 may output the target adjustment parameter to the photographed subject, who then adjusts his or her position according to the parameter, for example when the terminal 101 announces by voice how the subject should move. The position of the subject in the preview image, the shooting angle, and the like can thus be adjusted through the target adjustment parameter.
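As an illustrative note (not part of the original disclosure), the routing of the target adjustment parameter described above can be sketched as follows; the `Support` methods and the `speak` callback are hypothetical placeholders for the support's control interface and a voice broadcast helper.

```python
from dataclasses import dataclass

@dataclass
class TargetAdjustmentParameter:
    direction: str              # e.g. "right", "left", "forward"
    distance_m: float           # translation amount in metres
    rotation_deg: float = 0.0   # rotation amount, used when the terminal itself moves

def dispatch_adjustment(param: TargetAdjustmentParameter, support=None, speak=None):
    """Route the target adjustment parameter either to the support that holds
    the terminal or to the photographed subject via a voice prompt."""
    if support is not None:
        # The support is communicatively connected to the terminal and moves it.
        support.translate(param.direction, param.distance_m)
        support.rotate(param.rotation_deg)
    elif speak is not None:
        # Otherwise the subject is asked to move, e.g. by voice broadcast.
        speak(f"Please move {param.distance_m:.1f} metres to the {param.direction}.")
```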
In some embodiments of the present application, after the terminal 101 identifies the photographed subject in the preview image, a corresponding composition (for example, a diagonal composition, a trisection composition, and the like) may be determined according to the depth of the photographed subject and the proportion of the subject's pixel points among the pixel points of the preview image. The depth of the subject refers to the vertical distance between the subject and the camera. Referring to fig. 2, fig. 2 exemplarily shows a schematic diagram of the photographing imaging principle; as shown in fig. 2, the terminal 101 can calculate the depth of the photographed subject and the image distance, that is, the vertical distance between the imaging plane and the lens. The terminal 101 then determines the target adjustment parameter according to the difference between the target position determined by the composition and the position of the subject in the preview image.
In some embodiments of the present application, after the terminal 101 identifies the photographed subjects in the preview image, the number of photographed subjects may also be identified, and the composition corresponding to the preview image (e.g., a triangle composition, a nine-square grid composition, etc.) is determined according to that number. The terminal 101 then determines the target adjustment parameter according to the difference between the target position determined by the composition and the positions of the subjects in the preview image.
Optionally, after the terminal 101 acquires the preview image, the preview image may also be sent to the first server 102. The first server 102 identifies the photographed subject in the preview image, determines a corresponding composition according to the photographed subject, and determines the target adjustment parameter; the first server 102 then sends the target adjustment parameter to the terminal 101, and the terminal 101 outputs it.
After the terminal 101 detects that the photographed subject no longer needs position adjustment, the terminal 101 can identify the age of the subject and output a target audio and video from the audio and video library according to that age. "No longer needing position adjustment" means that the subject in the preview image has already been adjusted according to the selected composition, or that the terminal 101 has detected a command indicating that the subject's position does not need to be adjusted. The audio and video library may come from the second server 103 or may be pre-stored by the terminal 101. The target audio and video can be used to change the expression of the photographed subject.
After the terminal 101 detects that the subject does not need to adjust the position, the terminal 101 may further output indication information instructing the subject to assume the posture described by the indication information. The indication information may be in the form of pictures, text, voice, video or other media.
The terminal 101 is also configured to detect, in real time, a feature state of the subject in the preview image, the feature state being a facial expression or gesture that can be recognized by the terminal 101. The characteristic state may be a designated state preset by the terminal 101, such as "smiley face", "V-shaped gesture", "blinking eye", and the like. When the terminal 101 detects that the characteristic state preset by the terminal 101 exists in the preview image, the preview image is saved. For example, "smiling face" is a feature state pre-stored in the terminal 101, the terminal 101 detects that a subject to be photographed shows a smiling face in a preview image, and the terminal 101 saves the preview image.
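Purely as an illustration of this feature-state trigger, the following minimal sketch assumes a `camera.preview()` frame generator and a `detect_feature_states` recognizer; both names are assumptions, not interfaces from this disclosure.

```python
PRESET_STATES = {"smiling_face", "v_gesture", "blinking_eye"}

def watch_and_capture(camera, detect_feature_states, save_image):
    """Poll preview frames and save one as soon as a preset feature state appears.

    `camera.preview()` is assumed to yield preview frames; `detect_feature_states(frame)`
    is assumed to return the set of expressions/gestures recognized in that frame.
    """
    for frame in camera.preview():
        if PRESET_STATES & detect_feature_states(frame):
            save_image(frame)   # e.g. the subject shows a smiling face
            break
```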
Optionally, the first server 102 and the second server 103 may not be necessary.
Alternatively, the first server 102 and the second server 103 may be the same server.
Without being limited to the system architecture shown in fig. 1, the photographing system provided in the embodiment of the present application may further include other devices, for example a third-party server, which may be a server for detecting whether the preview image contains suspected illegal content. The first server 102 may provide third-party data and third-party functions for the photographing system through interaction with the third-party server, thereby further safeguarding the services that the photographing system can provide.
Referring to fig. 3, fig. 3 is a schematic flow chart of a photographing method according to an embodiment of the present invention. As shown in fig. 3, the photographing method may be implemented by the photographing system shown in fig. 1, where the terminal may be the terminal 101, and the implementation of the photographing method may include the following steps.
S202, the terminal acquires a preview image.
The terminal can acquire the preview image in real time through the camera.
And S204, the terminal determines a target adjusting parameter according to the position of the shot object in the preview image, wherein the target adjusting parameter is used for indicating the adjustment of the position of the shot object in the preview image.
The terminal determining the target adjustment parameter according to the position of the photographed subject in the preview image may include the terminal determining the target adjustment parameter according to the difference between the position of the subject in the preview image and a target position in the target composition, the target position indicating the position of the subject in the target composition. Then, the terminal outputs the target adjustment parameter. The terminal may output the target adjustment parameter to instruct a support in communication connection with the terminal, or may output the target adjustment parameter so that the photographed subject adjusts its position.
And S206, when receiving the instruction information for instructing the adjustment completion, the terminal shoots.
The terminal identifies the characteristic state in the preview image in real time, and when the characteristic state is identified, the terminal saves the preview image.
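The S202 to S206 sequence can be summarized with the minimal sketch below; every callable it receives (`get_preview`, `determine_adjustment`, `output_adjustment`, `wait_for_done`, `shoot`) is an assumed placeholder rather than a concrete camera API.

```python
def photographing_flow(get_preview, determine_adjustment, output_adjustment,
                       wait_for_done, shoot):
    """S202: acquire a preview image; S204: determine and output the target
    adjustment parameter; S206: shoot once the adjustment-complete indication arrives."""
    preview = get_preview()                   # S202
    param = determine_adjustment(preview)     # S204: based on the subject's position
    if param is not None:
        output_adjustment(param)              # to the support, or to the subject
    wait_for_done()                           # indication that adjustment is completed
    return shoot()                            # S206
```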
Two implementations of outputting the target adjustment parameter according to the position of the subject in the preview image in S204 are described below:
implementation mode (one):
as shown in fig. 4, fig. 4 is a flowchart for outputting the target adjustment parameter according to the position of the captured object in the preview image, and may include the following steps:
s301, the terminal determines target adjusting parameters according to the difference between the position of the shot object in the preview image and the target position in the target composition and the depth of the shot object, wherein the target adjusting parameters comprise the translation amount and the rotation amount of the terminal, and the target adjusting parameters are used for indicating the terminal to carry out position adjustment through the target adjusting parameters.
Referring to fig. 5, fig. 5 illustrates an exemplary method for determining the target adjustment parameter to be output. As shown in fig. 5, G is the actual position of the subject, G1 is the position of G in the preview image, and G2 is the target position specified in the target composition corresponding to the preview image. As can be seen from fig. 5, G1 needs to be moved to the position G2. According to the photographing imaging principle and the geometric relationships, the distance d1 between G1 and the central axis of the imaging plane and the distance d2 between G2 and the central axis can be obtained, and the image distance m and the depth M of the photographed subject can also be obtained. The terminal can thus obtain the path from G1 to G2 in the imaging plane and calculate from it the actual path that the terminal needs to move; the attribute parameters of this actual path, namely direction and distance, constitute the target adjustment parameter. For example, as shown in fig. 5, according to the principle of inverted imaging, the path from G1 to G2 corresponds to "translate right by D'", so the target adjustment parameter obtained is "translate right by D'".
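Assuming the thin-lens relation implied by fig. 2 and fig. 5, the in-plane offset between G1 and G2 scales to a real-world translation by the ratio of the subject depth M to the image distance m. The sketch below only illustrates that proportionality; the variable names follow the figure, and the sign and direction handling is simplified.

```python
def terminal_translation(d1: float, d2: float, image_distance_m: float,
                         subject_depth_M: float) -> float:
    """Map the offset between G1 and G2 on the imaging plane to the real-world
    translation D' that the terminal should make (similar triangles).

    d1, d2: distances of G1 and G2 from the central axis of the imaging plane, in metres.
    """
    in_plane_offset = d2 - d1                   # path from G1 to G2 on the sensor
    scale = subject_depth_M / image_distance_m  # magnification from sensor to scene
    return in_plane_offset * scale              # e.g. "translate right by D'"

# Example: a 2 mm shift on the sensor with m = 4 mm and M = 2 m implies
# the terminal should translate about 1 m.
print(terminal_translation(0.001, 0.003, 0.004, 2.0))  # -> 1.0
```

The same similar-triangle relation underlies implementation (2) below, where the terminal stays still and the subject moves instead.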
S302, the terminal sends the target adjusting parameter to a support so that the support can adjust the terminal according to the target adjusting parameter, and the support is in communication connection with the terminal.
Implementation mode (b):
as shown in fig. 6, fig. 6 is another flow chart for outputting the target adjustment parameter according to the position of the captured object in the preview image, and may include the following steps:
s401, determining a target adjusting parameter according to the difference between the position of the shot object in the preview image and the target position in the target composition and the depth of the shot object, wherein the target adjusting parameter is used for indicating the shot object to carry out position adjustment through the target adjusting parameter.
Referring to fig. 7, fig. 7 illustrates another method for determining the target adjustment parameter. As shown in fig. 7, A is the actual position of the subject, A1 is the position of A in the preview image, A2 is the target position in the target composition corresponding to the preview image, and A' is the adjusted actual position of A obtained from the target position of the target composition. As can be seen from fig. 7, according to the imaging principle and the geometric relationships, the distance P1 between A1 and the central axis of the imaging plane, the distance P2 between A2 and the central axis, and the distance d between A1 and A2 can be obtained, and the image distance m and the depth M of the photographed subject can also be obtained. The terminal can obtain the path from A1 to A2 in the imaging plane and calculate from it the path along which the photographed subject needs to move; the attribute parameters of this path, namely direction and distance, constitute the target adjustment parameter. As shown in fig. 7, the terminal remains stationary and outputs the target adjustment parameter to the subject so that the subject adjusts its own position, moving from position A1 to A2 in the preview image; by the geometric relationships, the path from A1 to A2 in the imaging plane corresponds to the actual path of A being "translate right by D", so the target adjustment parameter "translate right by D" is obtained.
S402, the terminal outputs the target adjusting parameters.
The terminal outputs the target adjustment parameter to the photographed subject; the parameter may be output in the form of voice, in the form of text, or in other forms.
Optionally, in implementation (1) and implementation (2), the path from the position in the preview image to the target position in the target composition may be determined by different algorithms, so different paths, and therefore different target adjustment parameters, may be obtained.
It should be understood that translation in other directions follows the same principle as the left-right translation described above.
Before the terminal determines the target adjustment parameter according to the difference between the position of the shot object in the preview image and the target position in the target composition, the target composition of the preview image needs to be determined. The target composition may be a composition selected by the user in the composition gallery after the user opens the camera application of the terminal and before the shooting in step S206, or a composition selected by the terminal according to the acquired preview image.
Three implementations in which the terminal selects a composition according to the acquired preview image are described below:
implementation mode (one):
and the terminal determines the target composition as the composition corresponding to the shape presented by the position of the shot object in the preview image according to the corresponding relation between the shape and the composition.
Referring to fig. 8, fig. 8 is a flowchart of a method for determining a composition according to a shape presented by a position of a subject in a preview image, which may include the following steps.
S502, recognizing the shot object in the preview image.
The subject in the preview image can be identified by an image recognition algorithm; for example, a person in the preview image is recognized as a subject by a face recognition algorithm. Referring to fig. 9, fig. 9 is an exemplary illustration of a method of determining a composition according to the shape presented by the positions of the subjects in a preview image. As shown in fig. 9, five persons in the image, A, B, C, D and E, can be respectively identified by a face recognition algorithm.
And S504, determining a corresponding composition according to the shape presented by the position of the shot object in the preview image.
Determining the corresponding composition according to the shape assumed by the position of the photographed object in the preview image in S504 may include the following steps.
S5041, the position of the subject is specified.
As shown in fig. 9, after the terminal recognizes a subject, the position of the lips of the photographed person's face may be taken as the position of that subject. Other methods of determining the subject's position may also be used, for example taking the midpoint between the person's two eyes as the subject's position; the present invention is not limited in this respect.
S5042, the shape presented by the positions of the subjects is determined from those positions.
After the position of each photographed subject is determined, the position of each subject in the preview image is obtained, so the shape presented by these positions in the preview image can be determined. As shown in fig. 9, the position distribution of A, B, C, D and E can be obtained and is seen to present a roughly S-shaped curve.
S5043, determining a corresponding composition according to the shape of the subject in the preview image.
The composition corresponding to the preview image can be matched, according to the shape presented by the positions of the subjects in the preview image, against the correspondence between shapes and compositions pre-stored in the terminal. As shown in fig. 10, fig. 10 exemplarily shows a correspondence between shape and composition. For example, the positions of the subjects determined in fig. 9 present an S-shape, so the composition most similar to the preview image that can be matched in the correspondence shown in fig. 10 is the S-shaped composition.
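One way to picture this shape-to-composition matching is the sketch below; the normalized template points and the nearest-template comparison are invented for illustration and are not the matching algorithm of this disclosure.

```python
import numpy as np

# Hypothetical templates: each composition is represented by a few normalized
# anchor points describing the shape it expects the subjects to form.
COMPOSITION_TEMPLATES = {
    "s_shaped": np.array([[0.2, 0.8], [0.5, 0.6], [0.4, 0.4], [0.7, 0.2]]),
    "triangle": np.array([[0.5, 0.2], [0.3, 0.8], [0.7, 0.8]]),
    "diagonal": np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]]),
}

def match_composition(subject_positions: np.ndarray) -> str:
    """Pick the template whose shape is closest to the subjects' position layout.

    subject_positions: (N, 2) array of normalized (x, y) positions, e.g. the lip
    positions of the recognized faces A..E, ordered across the frame.
    """
    best_name, best_cost = None, float("inf")
    for name, template in COMPOSITION_TEMPLATES.items():
        # Resample the subject layout to the template length and compare.
        idx = np.linspace(0, len(subject_positions) - 1, len(template)).round().astype(int)
        cost = np.linalg.norm(subject_positions[idx] - template)
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name
```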
Implementation mode (b):
and the terminal determines the target composition as the composition corresponding to the depth of the shot object and the proportion of the pixel points of the shot object in the pixel points of the preview image according to the corresponding relation among the depth, the proportion and the composition.
Referring to fig. 11, fig. 11 is a flowchart of a method for determining a composition according to a depth of a photographed object and a ratio of pixel points of the photographed object to pixel points of a preview image according to an embodiment of the present application, where the method may include the following steps.
S602, recognizing the shot object in the preview image.
The same as step S502. For example, as shown in fig. 12, fig. 12 exemplarily shows a method of determining a composition according to a depth of a subject and a ratio of pixel points of the subject to pixel points of a preview image. As shown in fig. 12, the terminal can recognize the subject F.
And S604, acquiring the depth of the shot object.
The depth of the object is the vertical distance between the object and the lens.
And S606, acquiring the proportion of the pixel points of the shot object in the pixel points of the preview image.
After the terminal identifies the shot object, the number of the pixel points occupied by the shot object can be obtained, and therefore the proportion of the shot object in the pixel points of the preview image is calculated.
And S608, determining a corresponding composition according to the depth of the shot object and the proportion of the pixel points of the shot object in the pixel points of the preview image.
After acquiring the depth of the photographed subject and the proportion of the subject's pixel points among the pixel points of the preview image, the terminal searches the pre-stored correspondence among depth, proportion and composition for the composition corresponding to the preview image. Referring to fig. 13, fig. 13 exemplarily shows a correspondence among depth, proportion and composition. If the depth of the photographed subject shown in fig. 12 is less than 30 cm and the proportion of the subject's pixel points among the pixel points of the preview image is greater than 45%, the composition of the preview image can be matched, according to the pre-stored correspondence among depth, proportion and composition, to a diagonal composition.
Optionally, other compositions may be preset for other depths and proportions in the correspondence among depth, proportion and composition.
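The depth-and-proportion lookup can be pictured as a small rule table, as in the sketch below; the 30 cm and 45% thresholds are taken from the example above, while the remaining rows and the fallback composition are assumptions.

```python
def composition_from_depth_and_ratio(depth_cm: float,
                                     subject_pixels: int,
                                     total_pixels: int) -> str:
    """Look up the composition from pre-stored depth/proportion rules.

    The 30 cm / 45% thresholds follow the example in the text; the other
    rows are placeholders, not values from the disclosure."""
    ratio = subject_pixels / total_pixels
    if depth_cm < 30 and ratio > 0.45:
        return "diagonal"          # close-up subject filling much of the frame
    if ratio > 0.25:
        return "trisection"        # assumed fallback, not from the disclosure
    return "nine_square_grid"      # assumed default
```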
Implementation mode (c):
and the terminal determines the target composition as the composition corresponding to the number of the shot objects according to the corresponding relation between the number and the composition.
Referring to fig. 14, fig. 14 is a flowchart of a method for determining a composition according to the number of objects to be photographed, which may include the following steps.
S702, the shot object in the preview image is identified.
The same as step S502. Referring to fig. 15, fig. 15 illustrates a method of determining a composition according to the number of subjects to be photographed. As shown in fig. 15, the photographed objects A, B, C, D and E can be recognized in the preview image.
And S704, determining the number of the shot objects.
After recognizing the subject in the preview image, the terminal may acquire the number of subjects. For example, as shown in fig. 15, the number of subjects is 5.
And S706, determining a corresponding composition according to the number of the shot objects.
According to the acquired number of photographed subjects, the composition corresponding to the preview image can be determined from the pre-stored correspondence between number and composition. Referring to fig. 16, fig. 16 exemplarily shows a correspondence between number and composition. For example, in fig. 15 the number of subjects is recognized as 5, so the composition corresponding to the preview image may be determined, from the correspondence between number and composition, to be a triangle composition.
Alternatively, other compositions may be preset for the number according to the correspondence between the number and the composition.
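A toy version of the number-to-composition correspondence is sketched below; only the entry mapping five subjects to a triangle composition follows the example above, and the other entries and the fallback are placeholders.

```python
# Keys are subject counts; values are composition names.  Only the entry for
# five subjects is taken from the example in the text; the others are made up.
NUMBER_TO_COMPOSITION = {
    1: "golden_ratio",
    2: "symmetrical",
    3: "triangle",
    5: "triangle",
}

def composition_from_count(num_subjects: int) -> str:
    # Fall back to a nine-square-grid composition when the count is not listed.
    return NUMBER_TO_COMPOSITION.get(num_subjects, "nine_square_grid")
```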
In some embodiments of the present application, the shapes, patterns and positions specified by the compositions provided in the above three composition methods are not limited.
In the embodiment of the present application, the composition is not limited to the above three ways of determining the composition according to the preview image, and may also be determined according to other attributes in the preview image. For example, the composition mode may also be determined according to the category of the subject recognized in the preview image.
When the photographed subject is a person, the terminal may output a target audio and video after receiving the indication information indicating that the adjustment is completed and before the terminal shoots.
The target audio and video can be an audio and video in an audio and video database stored by the terminal, or an audio and video selected by the terminal from the audio and video database, wherein one implementation of selecting the target audio and video by the terminal can comprise the following steps:
s802, the terminal identifies the age of the shot object.
And S804, determining the audio and video corresponding to the age of the shot object in an audio and video database by the terminal according to the age of the shot object, wherein the audio and video database comprises the audio and video corresponding to a plurality of age groups.
For example, the terminal recognizes that the photographed subject is a 7-year-old child; upon receiving the indication information indicating that the adjustment is completed, it may search the audio and video library for a target audio and video. For example, the terminal may output a piece of audio: "A polar bear was sitting alone on the ice feeling bored, so it began pulling out its own fur to pass the time, one, two, three, until not a single hair was left, and then it froze to death."
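The age-based selection can be sketched as a lookup over age bands, as below; the band boundaries and clip names are illustrative, and `estimate_age` stands in for whatever age-recognition step the terminal uses.

```python
import bisect

# Upper bounds of the age bands and the clip associated with each band.
# Both the bands and the clip names are illustrative assumptions.
AGE_BANDS   = [12, 18, 40, 200]
AUDIO_CLIPS = ["children_story.mp3", "teen_pop.mp3", "light_music.mp3", "classic_song.mp3"]

def pick_audio_for_subject(face_image, estimate_age) -> str:
    """Choose the target audio/video according to the recognized age of the subject.

    `estimate_age(face_image)` is an assumed helper returning an age in years."""
    age = estimate_age(face_image)
    band = bisect.bisect_left(AGE_BANDS, age)
    return AUDIO_CLIPS[min(band, len(AUDIO_CLIPS) - 1)]
```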
The terminal may be further configured to output, after receiving the instruction information for instructing completion of the adjustment and before the terminal performs the photographing, instruction information for instructing a posture of the subject to be photographed.
For example, the terminal outputs and displays an image of a V-shaped gesture.
The following describes apparatuses and devices according to embodiments of the present application.
Referring to fig. 17, fig. 17 is a schematic structural diagram of a photographing device according to an embodiment of the present invention. As shown in fig. 17, the photographing apparatus 1700 may be applied to the terminal in the corresponding embodiments of fig. 3, fig. 8, fig. 11 and fig. 14, and the apparatus 1700 may include:
an acquisition unit 1701 for acquiring a preview image;
a parameter determining unit 1702, configured to determine a target adjustment parameter according to a position of the captured object in the preview image, where the target adjustment parameter is used to instruct to adjust the position of the captured object in the preview image;
a receiving unit 1703, configured to receive indication information for indicating that the adjustment is completed;
a shooting unit 1704, configured to perform shooting after the receiving unit receives instruction information for instructing completion of adjustment.
In an implementation of the embodiment of the present application, the parameter determining unit 1702 is specifically configured to determine the target adjustment parameter according to a difference between a position of the captured subject in the preview image and a target position in a target composition, where the target position is used to indicate the position of the captured subject in the target composition, and the apparatus further includes:
a first output unit 1705, configured to output the target adjustment parameter.
In an implementation of the embodiment of the present application, the apparatus further includes a matching unit 1706, where the matching unit 1706 is specifically configured to perform one of the following steps:
determining the target composition as a composition corresponding to the shape presented by the position of the shot object in the preview image according to the corresponding relation between the shape and the composition;
according to the corresponding relation among the depth, the proportion and the composition, determining the target composition as the composition corresponding to the depth of the shot object and the proportion of the pixel points of the shot object in the pixel points of the preview image;
and determining the target composition as the composition corresponding to the number of the shot objects according to the corresponding relation between the number and the composition.
In an implementation of the embodiment of the present application, the parameter determining unit 1702 is specifically configured to:
determining the target adjustment parameters according to the difference between the position of the shot object in the preview image and the target position in the target composition and the depth of the shot object, wherein the target adjustment parameters comprise the translation amount and the rotation amount of the terminal, and the target adjustment parameters are used for indicating the terminal to carry out position adjustment through the target adjustment parameters.
In an implementation of the embodiment of the present application, the first output unit 1705 is specifically configured to:
and sending the target adjustment parameters to a support so that the support adjusts the terminal according to the target adjustment parameters, wherein the support is in communication connection with the terminal.
In an implementation of the embodiment of the present application, the parameter determining unit 1702 is further specifically configured to:
and determining the target adjustment parameter according to the difference between the position of the shot object in the preview image and the target position in the target composition and the depth of the shot object, wherein the target adjustment parameter is used for indicating the shot object to carry out position adjustment through the target adjustment parameter.
In one implementation of this embodiment of the present application, the apparatus 1700 further includes:
and a second output unit 1707, configured to output a target audio and video after the receiving unit receives the indication information indicating that the adjustment is completed and before the shooting unit shoots.
In an implementation of the embodiment of the present application, the photographed subject is a person, and before the terminal outputs the target audio and video, the apparatus 1700 further includes:
an identifying unit 1708 configured to identify an age of the subject;
an age determining unit 1709, configured to determine, according to the age of the captured subject, an audio and video corresponding to the age of the captured subject in an audio and video library, where the audio and video library includes audio and video corresponding to multiple age groups.
In an implementation of the embodiment of the present application, after receiving indication information indicating that the adjustment is completed, before the terminal performs shooting, the apparatus 1700 further includes:
a third output unit 1710 for outputting instruction information indicating the posture of the subject.
It should be understood that, for specific functional implementation manners of the above-mentioned functional units, reference may be made to the related descriptions in the corresponding embodiments of fig. 3, fig. 8, fig. 11, and fig. 14, and details are not described here again.
Referring to fig. 18, fig. 18 is a schematic structural diagram of another photographing apparatus 1800 according to an embodiment of the present invention. As shown in fig. 18, the photographing apparatus 1800 may correspond to the terminal 101 in the embodiment corresponding to fig. 1, and the photographing apparatus 1800 may include a processor 1801, a network interface 1804 and a memory 1805; the photographing apparatus 1800 further includes a user interface 1803 and at least one communication bus 1802. The communication bus 1802 is used to enable connection and communication between these components. The user interface 1803 may include a display and a keyboard, and optionally may also include a standard wired interface and a standard wireless interface. The network interface 1804 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1805 may be a high-speed RAM memory or a non-volatile memory (e.g., at least one disk memory), and may optionally be at least one storage device located remotely from the processor 1801. As shown in fig. 18, the memory 1805, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module and a device control application program.
In the photographing apparatus 1800 shown in fig. 18, the network interface 1804 may provide network communication functions, the user interface 1803 mainly provides an input interface for the user, and the processor 1801 may be configured to invoke the device control application stored in the memory 1805 to perform:
the terminal acquires a preview image;
the terminal determines a target adjustment parameter according to the position of the shot object in the preview image, wherein the target adjustment parameter is used for indicating the adjustment of the position of the shot object in the preview image;
and when receiving indication information for indicating the completion of the adjustment, the terminal performs shooting.
It should be noted that the obtaining unit 1701 and the receiving unit 1703 in fig. 17 may be implemented by the network interface 1804 in fig. 18, and the parameter determining unit 1702, the matching unit 1706, the identifying unit 1708 and the age determining unit 1709 in fig. 17 may be implemented by the processor 1801 in fig. 18.
It should be understood that the photographing apparatus 1800 described in the embodiment of the present invention can perform the description of the photographing method in the embodiment corresponding to any one of fig. 3, fig. 8, fig. 11, and fig. 14, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Further, here, it is to be noted that: an embodiment of the present invention further provides a computer storage medium, and the computer storage medium stores the aforementioned computer program executed by the photographing apparatus 1700 or 1800, and the computer program includes program instructions, and when the processor executes the program instructions, the method executed in the embodiment corresponding to fig. 3, fig. 8, fig. 11, and fig. 14 can be executed, which will not be described herein again.
In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiments of the computer storage medium to which the present invention relates, reference is made to the description of the method embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention and is of course not intended to limit the scope of rights of the present invention; the scope of protection is therefore defined by the appended claims.

Claims (11)

1. A method of taking a picture, comprising:
the terminal acquires a preview image;
the terminal determines a target adjustment parameter according to the position of the shot object in the preview image, wherein the target adjustment parameter is used for indicating the adjustment of the position of the shot object in the preview image;
and after receiving indication information for indicating the completion of the adjustment, the terminal shoots.
2. The method of claim 1, wherein the terminal determining a target adjustment parameter according to a position of the captured subject in the preview image comprises the terminal determining the target adjustment parameter according to a difference between the position of the captured subject in the preview image and a target position in a target composition, the target position indicating the position of the captured subject in the target composition, the method further comprising:
and the terminal outputs the target adjusting parameter.
3. The method according to claim 2, wherein before determining the target adjustment parameter according to a difference between a position of the subject in the preview image and a target position in a target composition, the method further comprises one of:
the terminal determines the target composition as a composition corresponding to the shape presented by the position of the shot object in the preview image according to the corresponding relation between the shape and the composition;
the terminal determines the target composition as the composition corresponding to the depth of the shot object and the proportion of the pixel points of the shot object in the pixel points of the preview image according to the corresponding relation among the depth, the proportion and the composition;
and the terminal determines the target composition to be the composition corresponding to the number of the shot objects according to the corresponding relation between the number and the composition.
4. The method according to claim 2 or 3, wherein the determining the target adjustment parameter according to the difference between the position of the subject in the preview image and the target position in the target composition comprises:
the terminal determines the target adjusting parameters according to the difference between the position of the shot object in the preview image and the target position in the target composition and the depth of the shot object, wherein the target adjusting parameters comprise the translation amount and the rotation amount of the terminal, and the target adjusting parameters are used for indicating the terminal to adjust the position through the target adjusting parameters.
5. The method of claim 4, wherein the terminal outputs the target adjustment parameter, comprising:
and the terminal sends the target adjustment parameters to a support so that the support adjusts the terminal according to the target adjustment parameters, wherein the support is in communication connection with the terminal.
6. The method according to claim 2 or 3, wherein the determining of the target adjustment parameter according to the difference between the position of the shot object in the preview image and the target position in the target composition comprises:
determining the target adjustment parameter according to the difference between the position of the shot object in the preview image and the target position in the target composition, and according to the depth of the shot object, wherein the target adjustment parameter is used to instruct the shot object to perform position adjustment according to the target adjustment parameter.
7. The method according to any one of claims 1 to 3, wherein, after the indication information indicating that the adjustment is completed is received and before the terminal performs shooting, the method further comprises:
the terminal outputs a target audio/video.
8. The method according to claim 7, wherein the shot object is a person, and before the terminal outputs the target audio/video, the method further comprises:
the terminal identifies the age of the shot object;
and the terminal determines, according to the age of the shot object, the audio/video corresponding to that age from an audio/video database, wherein the audio/video database comprises audio/video corresponding to a plurality of age groups.
9. The method according to any one of claims 1 to 3, wherein, after the indication information indicating that the adjustment is completed is received and before the terminal performs shooting, the method further comprises:
the terminal outputs indication information indicating a posture of the shot object.
10. A photographing apparatus comprising a processor and a memory, the processor being coupled to the memory, wherein the memory is configured to store program code and the processor is configured to call the program code to implement the method of any one of claims 1-9.
11. A computer-readable storage medium, in which a computer program or computer instructions are stored which, when executed, implement the method according to any one of claims 1-9.
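
The sketch below is illustrative only and is not the claimed implementation: it shows, in Python, one possible way a terminal could (i) select a target composition from the number of shot objects, as in claim 3, and (ii) convert the difference between the shot object's position in the preview image and the target position, together with the shot object's depth, into a translation amount and a rotation amount, as in claims 4 and 6. Every name in it (Subject, COMPOSITION_BY_COUNT, select_target_composition, target_adjustment), the contents of the correspondence table, and the depth and field-of-view scaling model are assumptions introduced for illustration, not taken from the application.

```python
# Illustrative sketch only: not the claimed implementation. All helper names,
# table contents, and the scaling model are assumptions.
from dataclasses import dataclass


@dataclass
class Subject:
    x: float            # normalized horizontal position in the preview image (0..1)
    y: float            # normalized vertical position in the preview image (0..1)
    depth_m: float      # estimated depth of the shot object, in meters
    pixel_ratio: float  # fraction of preview pixels occupied by the shot object


# Hypothetical correspondence table (cf. claim 3): a target composition,
# keyed by the number of shot objects, each with a normalized target position.
COMPOSITION_BY_COUNT = {
    1: {"name": "rule_of_thirds", "target": (1 / 3, 1 / 3)},
    2: {"name": "centered_pair", "target": (0.5, 0.4)},
}


def select_target_composition(subjects):
    """Pick a target composition from the number of shot objects."""
    count = max(1, min(len(subjects), max(COMPOSITION_BY_COUNT)))
    return COMPOSITION_BY_COUNT[count]


def target_adjustment(subject, composition):
    """Map the difference between the current position and the target position,
    together with the depth of the shot object, to a translation amount and a
    rotation amount of the terminal (cf. claims 4 and 6)."""
    dx = composition["target"][0] - subject.x
    dy = composition["target"][1] - subject.y
    # A more distant shot object requires a larger physical movement to produce
    # the same image-space shift, so the translation is scaled by depth.
    translation_m = (dx * subject.depth_m, dy * subject.depth_m)
    rotation_deg = dx * 30.0  # assumed mapping over a roughly 60-degree field of view
    return {"translation_m": translation_m, "rotation_deg": rotation_deg}


if __name__ == "__main__":
    subj = Subject(x=0.55, y=0.50, depth_m=2.0, pixel_ratio=0.12)
    comp = select_target_composition([subj])
    print(comp["name"], target_adjustment(subj, comp))
```

In the claim-5 variant, the resulting translation and rotation amounts would be sent over the communication connection to the support rather than presented to the shot object.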
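
A second, equally illustrative sketch covers the age-based selection of the target audio/video described in claim 8: the audio/video database is modeled as age ranges mapped to clips, and the clip whose range contains the identified age is chosen. The database contents, clip names, and function name are invented for illustration; the age-identification step itself is not shown.

```python
# Illustrative sketch only: the database contents and clip names are invented.
AV_DATABASE = {
    (0, 12): "children_song.mp3",
    (13, 59): "upbeat_prompt.mp3",
    (60, 200): "classic_melody.mp3",
}


def pick_target_audio_video(estimated_age: int) -> str:
    """Return the clip whose age range contains the estimated age of the shot object."""
    for (low, high), clip in AV_DATABASE.items():
        if low <= estimated_age <= high:
            return clip
    return "default_prompt.mp3"


print(pick_target_audio_video(8))  # -> children_song.mp3
```
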
CN201911161610.9A 2019-11-22 2019-11-22 Photographing method, device and equipment Pending CN110868542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911161610.9A CN110868542A (en) 2019-11-22 2019-11-22 Photographing method, device and equipment

Publications (1)

Publication Number Publication Date
CN110868542A (en) 2020-03-06

Family

ID=69656121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911161610.9A Pending CN110868542A (en) 2019-11-22 2019-11-22 Photographing method, device and equipment

Country Status (1)

Country Link
CN (1) CN110868542A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205883348U (en) * 2016-08-08 2017-01-11 上海大学 Automatic rotation type cell phone stand
US20180183995A1 (en) * 2016-12-28 2018-06-28 Facebook, Inc. Systems and methods for presenting content based on unstructured visual data
CN107509032A (en) * 2017-09-08 2017-12-22 维沃移动通信有限公司 One kind is taken pictures reminding method and mobile terminal
CN107749947A (en) * 2017-09-28 2018-03-02 努比亚技术有限公司 Photographic method, mobile terminal and computer-readable recording medium
CN108289174A (en) * 2018-01-25 2018-07-17 努比亚技术有限公司 A kind of image pickup method, mobile terminal and computer readable storage medium
CN108174108A (en) * 2018-03-08 2018-06-15 广州三星通信技术研究有限公司 The method and apparatus and mobile terminal for effect of taking pictures are adjusted in the terminal

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111757007A (en) * 2020-07-09 2020-10-09 深圳市欢太科技有限公司 Image shooting method, device, terminal and storage medium
WO2022007518A1 (en) * 2020-07-09 2022-01-13 深圳市欢太科技有限公司 Image photographing method and apparatus, terminal, and storage medium
CN111757007B (en) * 2020-07-09 2022-02-08 深圳市欢太科技有限公司 Image shooting method, device, terminal and storage medium
CN112653841A (en) * 2020-12-23 2021-04-13 维沃移动通信有限公司 Shooting method and device and electronic equipment
WO2022178724A1 (en) * 2021-02-24 2022-09-01 深圳市大疆创新科技有限公司 Image photographing method, terminal device, photographing apparatus, and storage medium

Similar Documents

Publication Publication Date Title
CN108764091B (en) Living body detection method and apparatus, electronic device, and storage medium
CN109889724B (en) Image blurring method and device, electronic equipment and readable storage medium
CN104767933B (en) A method of having the portable digital equipment and screening photo of camera function
CN110868542A (en) Photographing method, device and equipment
EP1800471A1 (en) Method and apparatus for processing document image captured by camera
CN105554389B (en) Shooting method and device
US20170161553A1 (en) Method and electronic device for capturing photo
EP2230836A1 (en) Method for Creating Panorama
CN109756723B (en) Method and apparatus for acquiring image, storage medium and electronic device
CN112040115B (en) Image processing apparatus, control method thereof, and storage medium
CN107944367B (en) Face key point detection method and device
US20100165119A1 (en) Method, apparatus and computer program product for automatically taking photos of oneself
CN108154466B (en) Image processing method and device
US20210406532A1 (en) Method and apparatus for detecting finger occlusion image, and storage medium
KR20120118144A (en) Apparatus and method for capturing subject in photographing device
CN112702521A (en) Image shooting method and device, electronic equipment and computer readable storage medium
CN113194254A (en) Image shooting method and device, electronic equipment and storage medium
CN110213486A (en) Image capturing method, terminal and computer readable storage medium
KR101672691B1 (en) Method and apparatus for generating emoticon in social network service platform
JP2014050022A (en) Image processing device, imaging device, and program
CN104735353B (en) A kind of method and device for the photo that pans
CN106954093B (en) Panoramic video processing method, device and system
CN110047115B (en) Star image shooting method and device, computer equipment and storage medium
JP5332493B2 (en) Camera, image sharing server, and image sharing program
KR20090119640A (en) An apparatus for displaying a preference level for an image and a method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200306
