CN111343382B - Photographing method and device, electronic equipment and storage medium


Info

Publication number
CN111343382B
CN111343382B (granted publication of application CN202010158602.5A)
Authority
CN
China
Prior art keywords
point
composition
shooting
determining
preview image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010158602.5A
Other languages
Chinese (zh)
Other versions
CN111343382A (en)
Inventor
Luo Tong (罗彤)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jinsheng Communication Technology Co ltd
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Shanghai Jinsheng Communication Technology Co ltd
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jinsheng Communication Technology Co ltd, Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Shanghai Jinsheng Communication Technology Co ltd
Priority to CN202010158602.5A
Publication of CN111343382A
Priority to PCT/CN2021/074205 (published as WO2021179831A1)
Application granted
Publication of CN111343382B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633: Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses a photographing method and apparatus, an electronic device, and a storage medium, the method comprising: acquiring a preview image of a shooting scene, and calling a key point identification model to perform key point detection on the preview image to obtain target key points of the photographic subject in the shooting scene; determining a current composition type according to the target key points, and determining a positioning point corresponding to the photographic subject according to the composition type and the target key points; determining a composition point corresponding to the photographic subject according to the positioning point and the composition type; when the positioning point does not match the composition point, outputting prompt information for indicating adjustment of the shooting posture of the electronic device; and when the positioning point matches the composition point, shooting the shooting scene to obtain a captured image. Composition suggestions are thus proposed, and the picture is taken, at shooting time.

Description

Photographing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a photographing method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of intelligent devices, more and more people use them to take pictures. During shooting, the person being photographed can only pose under the photographer's guidance, with the photographer selecting a suitable angle based on experience. However, most photographers have no photography experience, so the resulting photos are often not very attractive, and without photography experience the person being photographed cannot assume a suitable pose either.
Disclosure of Invention
The embodiment of the application provides a photographing method and apparatus, an electronic device, and a storage medium, which can provide composition suggestions to the photographic subject and take the photograph.
In a first aspect, an embodiment of the present application provides a photographing method, where the method includes:
acquiring a preview image of a shooting scene, and calling a key point identification model to perform key point detection on the preview image to obtain a target key point of a shooting subject in the shooting scene;
determining a current composition type according to the target key point, and determining a positioning point corresponding to the shooting subject according to the composition type and the target key point;
determining a composition point corresponding to the shooting subject according to the positioning point and the composition type;
when the positioning point is not matched with the composition point, outputting prompt information for indicating the adjustment of the shooting posture of the electronic equipment;
and when the positioning point is matched with the composition point, shooting the shooting scene to obtain a shot image.
In a second aspect, an embodiment of the present application provides a photographing apparatus, including:
the acquisition module is used for acquiring a preview image of a shooting scene and calling a key point identification model to perform key point detection on the preview image to obtain a target key point of a shooting subject in the shooting scene;
the first determining module is used for determining the current composition type according to the target key point and determining a positioning point corresponding to the shooting subject according to the composition type and the target key point;
the second determination module is used for determining a composition point corresponding to the shooting subject according to the positioning point and the composition type;
the prompting module is used for outputting prompt information for indicating adjustment of the shooting posture of the electronic device when the positioning point is not matched with the composition point;
and the photographing module is used for photographing the photographing scene to obtain a photographed image when the positioning point is matched with the composition point.
In a third aspect, a storage medium is provided in this application, where a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute a photographing method as provided in any embodiment of this application.
In a fourth aspect, the electronic device provided in this embodiment of the present application includes a processor and a memory, where the memory has a computer program, and the processor is configured to execute the photographing method provided in any embodiment of the present application by calling the computer program.
In the embodiment of the application, key point detection is performed on the preview image by acquiring a preview image of the shooting scene and calling the key point identification model, to obtain the target key points of the photographic subject in the shooting scene; the current composition type is then determined according to the target key points, and the positioning point corresponding to the photographic subject is determined according to the composition type and the target key points; finally, the composition point corresponding to the photographic subject is determined according to the positioning point and the composition type. When the positioning point does not match the composition point, prompt information for indicating adjustment of the shooting posture of the electronic device is output; when the positioning point matches the composition point, the shooting scene is shot to obtain a captured image. Composition suggestions are thus proposed, and the picture is taken, at shooting time.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a first flowchart of a photographing method according to an embodiment of the present application.
Fig. 2 is a second flow chart of the photographing method according to the embodiment of the present application.
Fig. 3 is a schematic structural diagram of a keypoint identification model provided in an embodiment of the present application.
Fig. 4 is a schematic diagram of a first structure of the second submodel provided in an embodiment of the present application.
Fig. 5 is a second structural diagram of the second submodel provided in the embodiment of the present application.
Fig. 6 is a third structural diagram of the second submodel provided in an embodiment of the present application.
FIG. 7 is a flowchart illustration of determining a composition type provided by an embodiment of the present application.
Fig. 8 is a schematic diagram of the composition thirds lines provided in an embodiment of the present application.
FIG. 9 is a schematic diagram of composition information provided in an embodiment of the present application.
Fig. 10 is a first structural schematic diagram of a photographing device according to an embodiment of the present application.
Fig. 11 is a second structural schematic diagram of the photographing apparatus according to the embodiment of the present application.
Fig. 12 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "module" as used herein may be considered a software object executing on the computing system. The various modules, engines, and services herein may be considered as objects of implementation on the computing system.
The embodiment of the application provides a photographing method whose execution body may be the photographing device provided by the embodiment of the application, or an electronic device integrated with the photographing device. The electronic device may be a smartphone, a smart wearable device, a tablet computer, a Personal Digital Assistant (PDA), or the like. The following is described in detail.
Referring to fig. 1, fig. 1 is a first flowchart of a photographing method according to an embodiment of the present disclosure. The photographing method can provide composition suggestions to the subject and take the photograph. The photographing method may include the following steps:
101. Acquire a preview image of the shooting scene, and call the key point recognition model to perform key point detection on the preview image to obtain the target key points of the photographic subject in the shooting scene.
It can be understood that, when taking a picture, a preview image is generated on the screen of the electronic device, so that the photographer can view the current picture information at any time. When a user takes a picture, the electronic equipment can automatically detect whether a shooting subject exists in the preview image, and when the shooting subject exists in the preview image, the key point recognition model is automatically called to carry out key point recognition on the preview image.
The photographic subject may be any of a variety of photographable subjects, such as a person, an animal, a plant, a doll, or a figurine. The key point identification model can identify a single photographic subject or several at once.
In one embodiment, one or more photographic subjects, each having a corresponding keypoint, may be included in the preview image. Taking the shooting subject as the puppet as an example, the complete image of the puppet includes a plurality of key points, for example, key points exist on the head, the chest, the limbs, the neck, and the joints.
In the preview image, the key points identified by the key point identification model are not necessarily key points on the photographic subject; they may be key points of other photographed objects, for example buildings or passersby at the roadside that are not the actual subject when a portrait is taken. The key points identified by the key point identification model can therefore be screened to obtain the target key points of the photographic subject.
102. Determine the current composition type according to the target key points, and determine the positioning point corresponding to the photographic subject according to the composition type and the target key points.
The target key points of the photographic subject are located at different parts of the photographic subject. For example, if the photographic subject is a doll, key points are arranged on multiple parts of the doll, such as the head, the hands, and the legs. The composition type of the photographic subject can be determined from these key points.
In some embodiments, a specific part of the photographic subject may be recognized according to its target key points. For example, if the target key points are key points on the head of the doll, the current composition type can be determined to be a portrait according to that shooting part. If the shooting parts identified according to the target key points include the head, hands, torso, legs, and feet, the current composition type is a whole-body image.
The composition type may also be determined according to the number of target key points or the ratio between them. If the number of target key points on the head is larger than the number of target key points on the torso, the head is determined to be the main shooting part. Alternatively, if the ratio of the number of head key points to the number of torso key points is larger than a preset proportion threshold, the head is determined to be the main shooting part, and the composition type of the photographic subject is a portrait.
It can be understood that the composition type may also be directly determined according to the distribution of the target key points, for example, if the target key points correspond to the head, the hand, the torso, the legs, the feet, and other parts of the human figure, the composition type of the shooting subject may be determined to be a whole body image according to the distribution of the target key points. It should be noted that different positions and different types of compositions may be provided for different subjects, and the figures in the embodiment of the present application are merely examples of subjects.
In some embodiments, after the composition type of the photographic subject is acquired, the positioning point of the photographic subject may be further determined according to the composition type and the target key point. The anchor point may be a point representing the photographic subject, and the anchor point contains position information of the photographic subject.
For example, if the acquired composition type is a portrait, the midpoint between the two face key points that are farthest apart may be selected as the positioning point. If the composition type is a whole-body image, a golden-section line of the photographic subject can be obtained and the positioning point selected on that line. Alternatively, a central area of the target key point distribution can be determined directly from the distribution of the target key points, and the target key point closest to the geometric center of that area selected as the positioning point.
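As an illustration of the first rule, a minimal Python sketch, assuming key points are given as (x, y) tuples; the function name is a hypothetical helper, not taken from the patent:

```python
import itertools
import math

def portrait_anchor(face_keypoints):
    """Positioning point for the portrait composition type: the midpoint
    of the two face key points that are farthest apart, per the rule
    described above. face_keypoints: iterable of (x, y) tuples."""
    p, q = max(itertools.combinations(face_keypoints, 2),
               key=lambda pair: math.dist(pair[0], pair[1]))
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
```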
103. Determine the composition point corresponding to the photographic subject according to the positioning point and the composition type.
It is understood that after the composition type and the anchor point are determined, a composition point may be generated to guide the photographer and the subject to take a picture.
In some embodiments, the anchor point has detailed location information in the preview image. For example, the preview image is a rectangular image, a rectangular coordinate system is established for the rectangular image, and the positioning point has specific coordinate position information in the rectangular image, and the coordinate position information may represent position information of the photographic subject in the preview image.
When the composition type is determined, for example, when the photographic subject is a doll with composition types such as a whole-body image, a half-body image, and a portrait, the composition point can be determined from the composition type and the positioning point. For example, a specific position of the photographic subject in the preview image is determined according to the positioning point, and a visible composition point is then generated in the preview image according to that position and the composition type. In one implementation, after the positioning point position is obtained, a composition database may be selected according to the composition type; the database stores the matching between positioning point positions and composition point positions, so once the positioning point position is determined, the corresponding composition point position can be looked up directly in the database.
In some embodiments, after the positioning point and the composition type are obtained, the position of the composition point may also be calculated directly through a preset algorithm, and then the position of the composition point is displayed on the preview image.
It is understood that there may be one or more composition points corresponding to the positioning point. When the positioning point and any composition point meet a preset condition, the composition of the photographic subject succeeds: when the Euclidean distance between the positioning point and the composition point is within a preset distance range, the positioning point matches the composition point; if the Euclidean distance is not within the preset distance range, the positioning point does not match the composition point.
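A minimal sketch of this matching test, assuming pixel coordinates; the max_dist threshold is an illustrative placeholder, not a value from the patent:

```python
import math

def points_match(positioning_point, composition_point, max_dist=20.0):
    """The positioning point matches a composition point when their
    Euclidean distance falls within the preset range."""
    return math.dist(positioning_point, composition_point) <= max_dist
```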
104. When the positioning point does not match the composition point, output prompt information for indicating adjustment of the shooting posture of the electronic device.
In some embodiments, the positioning point and the composition point may be matched, and when the positioning point does not match with the composition point, a mark of the positioning point and the composition point is displayed on the preview image, and the user is prompted to adjust the shooting posture. For example, an indication arrow is provided between the positioning point and the composition point to indicate a direction or/and a distance that the positioning point needs to be adjusted, and then the user may adjust a photographing posture of the electronic device or adjust a photographing position of the photographing subject.
105. When the positioning point matches the composition point, shoot the shooting scene to obtain a captured image.
When the positioning point is matched with the composition point, for example, the Euclidean distance between the positioning point and the composition point is within a preset distance range, it is indicated that the shooting subject meets the composition condition at this time, and the shooting scene can be directly shot to obtain a shot image.
In the embodiment of the application, key point detection is performed on the preview image by acquiring a preview image of the shooting scene and calling the key point identification model, to obtain the target key points of the photographic subject in the shooting scene; the current composition type is then determined according to the target key points, and the positioning point corresponding to the photographic subject is determined according to the composition type and the target key points; finally, the composition point corresponding to the photographic subject is determined according to the positioning point and the composition type. When the positioning point does not match the composition point, prompt information indicating adjustment of the shooting posture of the electronic device is output; when the positioning point matches the composition point, the shooting scene is shot to obtain a captured image.
By identifying the target key points of the photographic subject, the method obtains the subject's positioning point and corresponding composition point, realizes composition suggestions during shooting through these two points, and shoots automatically when the positioning point and the composition point meet the preset condition.
Referring to fig. 2, fig. 2 is a second flowchart of the photographing method according to the embodiment of the present disclosure. The photographing method can provide composition suggestions to the subject and take the photograph. The photographing method may include the following steps:
201. Obtain a preview image of the shooting scene, detect the photographic subject in the preview image, and generate a detection frame corresponding to the photographic subject.
When the electronic equipment shoots, a preview image is generated on a camera interface of the electronic equipment, and the preview image is changed according to the shooting posture change of the electronic equipment.
In some embodiments, when the preview image is captured, it may be recognized to judge whether a photographic subject exists, and if so, a detection frame is generated for the photographic subject. The detection frame encloses the subject, and its shape may be regular, such as a rectangle, a circle, or an ellipse, or irregular, such as a contour following the outline of the subject.
202. Judge whether the area proportion of the detection frame in the preview image exceeds a preset threshold.
After the detection frame corresponding to the shooting subject is generated, the area of the detection frame may be calculated, and then the area of the detection frame is divided by the area of the preview picture to obtain the area ratio of the detection frame occupying the preview image. And then judging whether the area ratio exceeds a preset threshold value.
If the area ratio does not exceed the preset threshold, the process continues at step 202; only when the ratio exceeds the threshold does the flow proceed to step 203.
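A sketch of the area-ratio check in steps 201-202; the 0.05 threshold is an illustrative placeholder, not a value from the patent:

```python
def area_ratio_exceeds(box, image_size, threshold=0.05):
    """box: (x, y, w, h) of the subject detection frame;
    image_size: (width, height) of the preview image."""
    _, _, w, h = box
    img_w, img_h = image_size
    return (w * h) / float(img_w * img_h) > threshold

# Proceed with key point detection only when the subject is large enough.
should_detect = area_ratio_exceeds((120, 80, 300, 600), (1080, 1920))
```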
203. Input the preview image into the first submodel to obtain a feature map of the preview image.
In the embodiment of the application, the key point identification model comprises a first submodel and a second submodel. Referring to fig. 3 in detail, fig. 3 is a schematic structural diagram of a keypoint identification model according to an embodiment of the present application. After the preview image is acquired, the preview image may be directly input to the first sub-model for recognition, and a feature map (feature map) of the preview image may be acquired.
In some embodiments, the first submodel may be a MobileNetV2 model, which is lighter and faster at processing pictures. When the electronic device has strong performance and computing power, the first submodel can also adopt a higher-precision model, such as VGG19 or ResNet-50; using these models to extract features can improve the detection precision of the human-body key points.
204. Input the feature map into the second submodel to obtain the target connection feature and the target position feature of the preview image.
As can be seen from fig. 3, the second submodel comprises a plurality of second submodels connected in sequence. The first submodel is connected with the first second submodel, i.e., with second submodel (1) in the figure.
Each of the second submodels outputs a corresponding position feature and connection feature. The position feature may be a three-dimensional matrix of size height × width × keypoints, where height is the height of the picture, width is its width, and keypoints is the number of key point categories; the picture here is the one corresponding to each feature. Concretely, the position feature may be a heat map (heatmap).
The connection feature may likewise be a three-dimensional matrix, of size height × width × limbs, where limbs is the number of connectors. A connector is the connection region between two associated key points; for example, the left-eye-to-right-eye connection may be one connector, namely the connection region between the left eye and the right eye. Each connector corresponds to a height × width × 2 matrix, so the connection feature can be viewed as a two-channel heat map: each position holds two values, an x-value and a y-value, forming a vector (x, y) that indicates the direction of the connector at that position; when both values are zero, there is no connector at that position.
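Concretely, the two outputs can be pictured as tensors of the following shapes; a NumPy sketch with illustrative sizes (the spatial and channel counts are assumptions, not values from the patent):

```python
import numpy as np

H, W = 46, 46      # spatial size of the feature maps (illustrative)
K = 18             # number of key point categories (illustrative)
L = 19             # number of connectors / limbs (illustrative)

position_feature = np.zeros((H, W, K), dtype=np.float32)        # one heat map per key point
connection_feature = np.zeros((H, W, 2 * L), dtype=np.float32)  # an (x, y) vector per connector

# At a pixel lying on connector c, channels (2c, 2c+1) hold a vector pointing
# along the connector; a (0, 0) vector means no connector at that position.
c = 0
vx = connection_feature[10, 10, 2 * c]
vy = connection_feature[10, 10, 2 * c + 1]
has_connector = not (vx == 0.0 and vy == 0.0)
```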
In some embodiments, the input to the first second submodel is the feature map output by the first submodel; the first second submodel processes the feature map to obtain the position feature and connection feature it outputs.
Referring to fig. 4, fig. 4 is a first structural schematic diagram of the second submodel according to an embodiment of the present disclosure. The first second submodel processes the feature map and outputs its position feature and connection feature; that is, second submodel (1) processes the feature map to obtain position feature (1) and connection feature (1).
Position feature (1), connection feature (1), and the feature map are then input into the second second submodel, i.e., second submodel (2), to obtain position feature (2) and connection feature (2) output by second submodel (2).
Position feature (2), connection feature (2), and the feature map are next input into the third second submodel, i.e., second submodel (3), to obtain position feature (3) and connection feature (3) output by second submodel (3). And so on: in each of the remaining second submodels after the first, the connection feature and position feature output by the previous second submodel, together with the feature map output by the first submodel, serve as input, and each second submodel outputs its own connection feature and position feature, until the last second submodel outputs the target connection feature and the target position feature.
Referring to fig. 5, fig. 5 is a second schematic structural diagram of the second submodel according to an embodiment of the present disclosure; specifically, it shows the structure of the first second submodel. The feature map output by the first submodel serves as its input. The second submodel comprises a connection module and a position module, each formed of a plurality of convolutional layers of different types.
For example, the connection module includes a plurality of first convolutional layers and a plurality of second convolutional layers; the first convolutional layers are connected in sequence, the second convolutional layers are connected in sequence, and the last first convolutional layer is connected to the first of the second convolutional layers. The position module has the same arrangement: a plurality of first convolutional layers connected in sequence, followed by a plurality of second convolutional layers, with the last first convolutional layer connected to the first of the second convolutional layers.
In some embodiments, the first convolutional layer may be a 3 × 3 convolutional layer, and the second convolutional layer may be a 1 × 1 convolutional layer. The number of the first convolution layers in the connection module may be three, the number of the second convolution layers may be two, and the structure of the position module may be the same as that of the connection module. In practical applications, the types and the numbers of the first convolution layer and the second convolution layer can be changed according to practical requirements.
As can be seen from fig. 5, the feature map output by the first submodel is input into the first second submodel, and since the second submodel has the connection module and the location module, the connection module and the location module can process the feature map respectively, so as to obtain the connection feature (1) and the location feature (1).
Referring to fig. 6, fig. 6 is a schematic diagram of the third structure of the second submodel according to the embodiment of the present application; specifically, it shows the structure of the remaining second submodels other than the first. Each of these second submodels comprises a connection module and a position module, both formed of several convolutional layers of different types.
For example, the connection module includes a plurality of third convolutional layers and a plurality of second convolutional layers; the third convolutional layers are connected in sequence, the second convolutional layers are connected in sequence, and the last third convolutional layer is connected to the first of the second convolutional layers. The position module is arranged in the same way.
In some embodiments, the third convolutional layer may be a 7 × 7 convolutional layer and the second convolutional layer a 1 × 1 convolutional layer. The number of third convolutional layers in the connection module may be five and the number of second convolutional layers two, and the structure of the position module may be the same as that of the connection module. In practical applications, the types and numbers of the third and second convolutional layers can be changed according to actual requirements.
As can be seen from fig. 6, in the remaining second submodels other than the first, the input of each second submodel is the connection feature and position feature output by the previous second submodel, i.e., connection feature (M-1) and position feature (M-1), together with the feature map output by the first submodel. Each second submodel outputs the connection feature and position feature corresponding to it, i.e., connection feature (M) and position feature (M). It should be noted that the last second submodel outputs the target position feature and the target connection feature.
In some embodiments, in the remaining second submodels other than the first one, the third convolutional layer may be replaced with the first convolutional layer, thereby reducing the amount of computation and parameters, making the second submodel processing task faster.
205. Determine candidate key points among the key points according to the target position feature.
In some embodiments, the position of the maximum value in the target location feature may be selected as the candidate keypoint, for example, the candidate keypoint with the largest pixel value in a heatmap (heatmap) may be selected. In practical application, the heat maps can be maximally pooled, the heat maps before pooling and after pooling are compared, and positions with equal values in the heat maps before pooling and after pooling are used as candidate key points.
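The pooling-based peak selection can be sketched as follows; a minimal illustration in which the 3 × 3 window and the score threshold are assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def candidate_keypoints(heatmap, score_thresh=0.1):
    """Positions whose value is unchanged by 3x3 max pooling are local
    maxima of the heat map; keep those above a small score threshold."""
    pooled = maximum_filter(heatmap, size=3, mode="constant")
    ys, xs = np.nonzero((heatmap == pooled) & (heatmap > score_thresh))
    return list(zip(xs.tolist(), ys.tolist()))  # (x, y) candidates
```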
206. Determine the target key points of the photographic subject according to the target connection feature and the candidate key points.
It can be understood that after the candidate keypoints are acquired, the candidate keypoints can be connected according to the direction of the connecting body in the target connection feature, so as to obtain a complete individual.
In some embodiments, the target connection feature corresponding to one connector may be taken at a time, and the candidate key points at the two ends of that connector connected. The confidence that the two candidate key points come from the same individual can then be obtained, expressed by the following confidence formula:
$$E = \int_{u=0}^{u=1} L_c(p(u)) \cdot \frac{d_{j_2} - d_{j_1}}{\lVert d_{j_2} - d_{j_1} \rVert_2} \, du$$
where d_{j_1} and d_{j_2} denote two different candidate key points, p(u) is the interpolated position between the two candidate key points, and L_c(p(u)) is the value at p(u) in the target connection feature. The specific formula of p(u) is:
$$p(u) = (1 - u)\, d_{j_1} + u\, d_{j_2}$$
It will be appreciated that in practical applications, u is sampled at regular intervals over the interval [0, 1] to approximate the integral.
When only one photographic subject exists on the preview image, all the candidate key points can be determined to be from the same photographic subject, and when all the candidate key points are connected, a complete photographic subject can be represented.
When there are multiple photographed individuals in the preview image, a key point association set may be generated, holding a candidate key point set for each individual. For example, among the candidate key points there are candidates corresponding to the eyes, candidates corresponding to the nose, and so on. In each photographed individual, the eyes and the nose have corresponding candidate key points, as do the wrists and the elbows, and a plurality of candidate key points can form an individual representing the photographic subject. The plurality of candidate key point sets form the association set, within which an optimal association can be found, such as:
$$\max_{Z_c} E_c = \max_{Z_c} \sum_{m \in D_{j_1}} \sum_{n \in D_{j_2}} E_{mn} \cdot z_{j_1 j_2}^{mn}$$
$$\text{s.t.}\quad \forall m \in D_{j_1}: \sum_{n \in D_{j_2}} z_{j_1 j_2}^{mn} \le 1, \qquad \forall n \in D_{j_2}: \sum_{m \in D_{j_1}} z_{j_1 j_2}^{mn} \le 1$$
In the association set Z, j_1 and j_2 represent key point categories (eyes, nose, wrists, etc.), and m and n represent key point numbers within the corresponding category; z_{j_1 j_2}^{mn} ∈ {0, 1} indicates whether candidate key points d_{j_1}^m and d_{j_2}^n are connected. E_{mn} is obtained using the confidence formula above.
E_c is the total confidence of all connected connectors, that is, the total confidence of the individual formed by connecting a plurality of connectors. In the matching process, the Hungarian algorithm can be used to obtain the optimal association set.
It should be noted that when z_{j_1 j_2}^{mn} is 1, the candidate key points d_{j_1}^m and d_{j_2}^n are from the same individual, i.e., they are key points on the same photographic subject.
The association degree between candidate key points is thus determined through the confidence, and thereby the target key points on the photographic subject: the higher the confidence, the higher the association between candidate key points, and the more likely they are from the same individual.
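A sketch of the bipartite matching between two key point categories; the patent names the Hungarian algorithm, for which SciPy's linear_sum_assignment can stand in (illustrative usage, not the patent's own code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_candidates(confidence):
    """confidence[m, n] = E_mn between candidate m of category j1 and
    candidate n of category j2. Maximizing total confidence is the same
    as minimizing its negation; keep only positive-confidence pairs."""
    cost = -np.asarray(confidence, dtype=np.float64)
    rows, cols = linear_sum_assignment(cost)
    return [(m, n) for m, n in zip(rows, cols) if confidence[m][n] > 0.0]
```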
207. Determine the current composition type according to the target key points.
After the target keypoints are acquired, the target keypoints may be identified, for example, whether the target keypoints are mainly focused on the head, or the feet, the chest, the whole body, the local body, and so on.
In some embodiments, a specific part of the photographic subject may be recognized according to its target key points. For example, if the target key points are key points on the head of the doll, the current composition type can be determined to be a portrait according to that shooting part. If the shooting parts identified according to the target key points include the head, hands, torso, legs, and feet, the current composition type is a whole-body image.
Referring to fig. 7 in detail, fig. 7 is a schematic flowchart illustrating a process for determining a composition type according to an embodiment of the present application. Specifically, the composition type may be determined according to the attribute of the target keypoint. Such as:
301. Acquire the target key points.
302. Judge whether the target key points contain head key points. If no head key point is included, go to step 303: the composition is a local-body close-up.
If a head key point is included, go to step 304 and check whether only head key points are included. If so, go to step 305: the composition is a facial close-up.
If not, go to step 306 and detect whether foot key points are included. If foot key points are included, go to step 307: the composition is a whole-body close-up.
If no foot key points are included, go to step 308 and judge whether hip key points are included.
If a hip key point is included, go to step 309: the composition is a chest close-up.
If no hip key points are included, go to step 310: the composition is a seven-part body close-up. The whole flow can be summarized in the sketch below.
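A code summary of the decision flow of steps 301-310; the part names are illustrative labels for groups of key points, not identifiers from the patent:

```python
def composition_type(parts):
    """parts: set of body-part names covered by the target key points."""
    if "head" not in parts:
        return "local body close-up"      # step 303
    if parts == {"head"}:
        return "facial close-up"          # step 305
    if "foot" in parts:
        return "whole body close-up"      # step 307
    if "hip" in parts:
        return "chest close-up"           # step 309
    return "seven-part body close-up"     # step 310
```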
208. Determine the preset positioning-point selection manner corresponding to the composition type.
In some embodiments, different composition types correspond to different preset positioning-point selection manners. For example, for a local-body close-up, the center point of the detection frame is taken as the positioning point.
209. Determine the positioning point according to the preset positioning-point selection manner and the position information of the target key points.
After the preset positioning-point selection manner is determined, the corresponding positioning point can be determined from the coordinate information of the key points. For example, under the selection manner corresponding to a facial close-up, the coordinate information of the target key points is obtained; if the mean abscissa of the eye, nose, and mouth key points lies within a range of 1/4 of the detection-box length in the transverse direction, the shot is considered a side face, and the center of the detection box is taken as the positioning point.
210. Determine the composition point corresponding to the photographic subject according to the positioning point and the composition type.
In some embodiments, the composition types may be divided into two broad categories: face close-ups, containing only the head, and body close-ups, containing the body with or without the head. After the composition type is acquired, it can be determined whether it is a face close-up or a body close-up.
The landscape/portrait orientation of the electronic device is acquired, and thirds lines are generated on the preview image interface. Specifically, fig. 8 is a schematic diagram of the composition thirds lines provided in the embodiment of the present application: thirds lines A and B connect the longer sides of the preview image, and thirds lines C and D connect the shorter sides. As shown in fig. 8, when C and D are horizontal lines, the preview image is in landscape orientation; when C and D are vertical lines, it is in portrait orientation.
If the composition type is a facial close-up, the landscape/portrait orientation of the preview image can be acquired. When the preview image is in landscape orientation, the candidate composition points corresponding to the photographic subject may be the center of the preview image, the midpoint of thirds line C, and the intersections of thirds line C with thirds lines A and B. When the preview image is in portrait orientation, the candidate composition points may be the center of the preview image and the midpoint of thirds line A.
If the composition type is a body close-up, the landscape/portrait orientation of the preview image can likewise be acquired. When the preview image is in landscape orientation, the candidate composition points may be the center of the preview image and the midpoint of thirds line C. When the preview image is in portrait orientation, the candidate composition points are the intersections of thirds lines A, B, C, and D.
After the candidate composition points are obtained, the candidate closest to the positioning point may be selected from among them as the final composition point, i.e., the composition point of the photographic subject.
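Putting the two rules together, a sketch that builds the candidate composition points from the thirds lines and picks the one nearest the positioning point; the exact mapping of lines A-D to thirds positions is an assumption based on fig. 8:

```python
import math

def candidate_points(width, height, comp_type, is_landscape):
    """Candidate composition points derived from the rule-of-thirds lines."""
    cx, cy = width / 2.0, height / 2.0
    tx1, tx2 = width / 3.0, 2.0 * width / 3.0     # vertical thirds
    ty1, ty2 = height / 3.0, 2.0 * height / 3.0   # horizontal thirds
    if comp_type == "facial close-up":
        if is_landscape:
            return [(cx, cy), (cx, ty1), (tx1, ty1), (tx2, ty1)]
        return [(cx, cy), (cx, ty1)]
    # body close-up
    if is_landscape:
        return [(cx, cy), (cx, ty1)]
    return [(tx1, ty1), (tx1, ty2), (tx2, ty1), (tx2, ty2)]

def composition_point(positioning_point, candidates):
    """Final composition point: the candidate nearest the positioning point."""
    return min(candidates, key=lambda c: math.dist(c, positioning_point))
```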
211. Judge whether the positioning point matches the composition point.
There are various ways to judge whether the positioning point and the composition point match. For example, it can be judged whether the Euclidean distance between the composition point and the positioning point is smaller than a preset distance threshold, or whether it lies within a preset range.
212. If the positioning point does not match the composition point, output prompt information for indicating adjustment of the shooting posture of the electronic device.
When the Euclidean distance between the composition point and the positioning point is not smaller than the preset distance threshold, or not within the preset range, the positioning point does not match the composition point; a prompt message can then be generated on the screen to prompt the user to adjust the shooting posture of the electronic device.
For example, as shown in fig. 9, fig. 9 is a schematic diagram of composition information provided in the embodiment of the present application. When the positioning point and the composition point are not matched, an indication arrow of the positioning point towards the direction of the composition point can be generated to prompt a user to adjust the shooting posture of the electronic equipment to recompose.
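The prompt can be derived directly from the offset between the two points; a minimal sketch of such a hint, with illustrative wording and pixel units:

```python
def adjustment_hint(positioning_point, composition_point):
    """Direction and distance from the positioning point toward the
    composition point, for rendering the indication arrow."""
    dx = composition_point[0] - positioning_point[0]
    dy = composition_point[1] - positioning_point[1]
    horiz = "right" if dx >= 0 else "left"
    vert = "down" if dy >= 0 else "up"
    return f"move framing {abs(dx):.0f}px {horiz}, {abs(dy):.0f}px {vert}"
```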
213. If the positioning point matches the composition point, shoot the shooting scene to obtain a captured image.
When the Euclidean distance between the composition point and the positioning point is smaller than the preset distance threshold, or within the preset range, the positioning point matches the composition point. At this moment the prompt message on the preview image can disappear, and the electronic device automatically photographs the photographic subject. Alternatively, the prompt information changes color or displays text prompting the user to take a picture, and the user can take the picture manually according to the prompt.
In summary, in the embodiment of the present application, when the photographic subject is detected during shooting, the key points of the preview image can be automatically identified by the key point identification model, and those key points processed to obtain the target key points of the photographic subject. The composition type is then determined according to the target key points, the positioning point determined according to the composition type and the target key points, and the composition point corresponding to the photographic subject determined according to the positioning point and the composition type. When the positioning point does not match the composition point, prompt information indicating adjustment of the shooting posture of the electronic device is output; when the positioning point matches the composition point, the shooting scene is shot to obtain a captured image. A composition suggestion is thereby generated for the photographic subject and the picture taken.
Referring to fig. 10, fig. 10 is a schematic view of a first structure of a photographing device according to an embodiment of the present application. The photographing device comprises an acquisition module 410, a first determination module 420, a second determination module 430, a prompt module 440 and a photographing module 450.
The obtaining module 410 is configured to obtain a preview image of a shooting scene, and call a key point identification model to perform key point detection on the preview image, so as to obtain a target key point of a shooting subject in the shooting scene.
It can be understood that, when taking a picture, a preview image is generated on the screen of the electronic device, so that the photographer can view the current picture information at any time. When the user takes a picture, the obtaining module 410 may automatically detect whether there is a shooting subject in the preview image, and when there is a shooting subject in the preview image, automatically invoke the key point identification model to perform key point identification on the preview image.
In the preview image, the key points identified by the key point identification model are not necessarily key points on the photographic subject; they may be key points of other photographed objects, for example buildings or passersby at the roadside that are not the actual subject when a portrait is taken. The key points identified by the key point identification model can therefore be screened to obtain the target key points of the photographic subject.
Referring to fig. 11, fig. 11 is a second schematic structural diagram of the photographing device according to the embodiment of the present application. The obtaining module 410 includes a first determining sub-module 411 and a second determining sub-module 412.
The first determining submodule 411 is configured to determine candidate keypoints among the keypoints according to the target position feature.
In some embodiments, the first determining sub-module 411 may select the position of the maximum value in the target location feature as the candidate keypoint, for example, select the candidate keypoint with the largest pixel value in a heat map (heatmap). In practical application, the heat maps can be maximally pooled, the heat maps before pooling and after pooling are compared, and positions with equal values in the heat maps before pooling and after pooling are used as candidate key points.
A second determining sub-module 412, configured to determine the target keypoint of the photographic subject according to the target connection feature and the candidate keypoint.
It can be understood that after the candidate keypoints are acquired, the candidate keypoints can be connected according to the direction of the connecting body in the target connection feature, so as to obtain a complete individual.
In some embodiments, the confidence between key points may be obtained; the higher the confidence, the higher the association degree between the key points, so that the target key points of the photographic subject can be determined.
The first determining module 420 is configured to determine a current composition type according to the target key point, and determine a positioning point corresponding to the photographic subject according to the composition type and the target key point.
The target key points of the photographic subject are located at different parts of the photographic subject. For example, if the photographic subject is a doll, key points are arranged on multiple parts of the doll, such as the head, the hands, and the legs. The composition type of the photographic subject can be determined from these key points.
In some embodiments, a specific part of the photographic subject may be recognized according to its target key points. For example, if the target key points are key points on the head of the doll, the current composition type can be determined to be a portrait according to that shooting part. If the shooting parts identified according to the target key points include the head, hands, torso, legs, and feet, the current composition type is a whole-body image.
A second determining module 430, configured to determine a composition point corresponding to the shooting subject according to the positioning point and the composition type.
It is understood that after the composition type and the anchor point are determined, a composition point may be generated to guide the photographer and the subject to take a picture.
In some embodiments, the anchor point has detailed location information in the preview image. For example, the preview image is a rectangular image, a rectangular coordinate system is established for the rectangular image, and the positioning point has specific coordinate position information in the rectangular image, and the coordinate position information may represent position information of the photographic subject in the preview image.
When the composition type is determined, for example, when the photographic subject is a doll with composition types such as a whole-body image, a half-body image, and a portrait, the composition point can be determined from the composition type and the positioning point. For example, a specific position of the photographic subject in the preview image is determined according to the positioning point, and a visible composition point is then generated in the preview image according to that position and the composition type. In one implementation, after the positioning point position is obtained, a composition database may be selected according to the composition type; the database stores the matching between positioning point positions and composition point positions, so once the positioning point position is determined, the corresponding composition point position can be looked up directly in the database.
As shown in fig. 11, the second determining module 430 further includes an obtaining sub-module 431, a third determining sub-module 432, and a selecting sub-module 433.
The obtaining submodule 431 is configured to obtain the landscape/portrait orientation information of the preview image.
The landscape/portrait orientation of the electronic device is acquired, and thirds lines are generated on the preview image interface. Specifically, fig. 8 is a schematic diagram of the composition thirds lines provided in the embodiment of the present application: thirds lines A and B connect the longer sides of the preview image, and thirds lines C and D connect the shorter sides. As shown in fig. 8, when C and D are horizontal lines, the preview image is in landscape orientation; when C and D are vertical lines, it is in portrait orientation.
A third determining submodule 432, configured to determine, according to the horizontal and vertical screen information of the preview image, the positioning point, and the composition type, a plurality of candidate composition points corresponding to the shooting subject.
For example, if the composition type is a facial close-up, the landscape/portrait orientation of the preview image can be acquired. When the preview image is in landscape orientation, the candidate composition points corresponding to the photographic subject may be the center of the preview image, the midpoint of thirds line C, and the intersections of thirds line C with thirds lines A and B. When the preview image is in portrait orientation, the candidate composition points may be the center of the preview image and the midpoint of thirds line A.
If the composition type is a body close-up, the landscape/portrait orientation of the preview image can likewise be acquired. When the preview image is in landscape orientation, the candidate composition points may be the center of the preview image and the midpoint of thirds line C. When the preview image is in portrait orientation, the candidate composition points are the intersections of thirds lines A, B, C, and D.
The selecting submodule 433 is configured to select, from the plurality of candidate composition points, the candidate closest to the positioning point as the composition point.
The prompt module 440 is configured to output prompt information for indicating adjustment of the shooting posture of the electronic device when the positioning point does not match the composition point.
In some embodiments, the positioning point and the composition point may be matched, and when the positioning point does not match with the composition point, a mark of the positioning point and the composition point is displayed on the preview image, and the user is prompted to adjust the shooting posture. For example, an indication arrow is provided between the positioning point and the composition point to indicate a direction or/and a distance that the positioning point needs to be adjusted, and then the user may adjust a photographing posture of the electronic device or adjust a photographing position of the photographing subject.
The photographing module 450 is configured to photograph the shooting scene to obtain a shot image when the positioning point matches the composition point.
When the positioning point matches the composition point, for example, when the Euclidean distance between the positioning point and the composition point is within a preset distance range, the shooting subject satisfies the composition condition at this time, and the shooting scene can be photographed directly to obtain a shot image.
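This match test reduces to a distance comparison, sketched below in Python; the 20-pixel default threshold is an assumed value, not one specified by this application.

import math

def points_match(positioning_point, composition_point, max_distance=20.0):
    """True when the Euclidean distance between the positioning point and
    the composition point falls within the preset distance range."""
    return math.dist(positioning_point, composition_point) <= max_distance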
In summary, in the embodiment of the application, a preview image of a shooting scene is obtained, and a key point identification model is called to perform key point detection on the preview image to obtain a target key point of a shooting subject in the shooting scene; a current composition type is determined according to the target key point, and a positioning point corresponding to the shooting subject is determined according to the composition type and the target key point; a composition point corresponding to the shooting subject is determined according to the positioning point and the composition type; when the positioning point does not match the composition point, prompt information for instructing adjustment of the shooting posture of the electronic device is output; and when the positioning point matches the composition point, the shooting scene is photographed to obtain a shot image. In this way, composition suggestions are provided while the picture is being taken, so that the captured picture is well composed.
Correspondingly, an embodiment of the present application further provides an electronic device, as shown in fig. 12, which is a schematic structural diagram of the electronic device provided in the embodiment of the present application. The electronic device may include, among other components, an input unit 510, a display unit 520, a power supply 530, a WiFi module 540, a sensor 550, a memory 560 including one or more computer-readable storage media, and a processor 570 including one or more processing cores. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 12 does not constitute a limitation of the electronic device, which may include more or fewer components than those shown, combine some components, or arrange the components differently. Wherein:
the input unit 510 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Optionally, the touch-sensitive surface may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 570, and can also receive and execute commands sent by the processor 570. In addition, the touch-sensitive surface may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type. The input unit 510 may include other input devices in addition to the touch-sensitive surface.
The display unit 520 may include a display panel, and optionally, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel; when a touch operation is detected on or near the touch-sensitive surface, it is transmitted to the processor 570 to determine the type of touch event, and the processor 570 then provides a corresponding visual output on the display panel according to the type of touch event. Although in fig. 12 the touch-sensitive surface and the display panel are two separate components implementing input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement the input and output functions.
WiFi is a short-range wireless transmission technology. Through the WiFi module 540, the electronic device can help the user send and receive files, browse web pages, perform WiFi positioning, and the like, and it provides the user with wireless broadband Internet access.
The electronic device may also include at least one sensor 550, such as a light sensor, motion sensor, and other sensors. In particular, the light sensor may include an ambient light sensor and a proximity sensor. The motion sensor can comprise a gravity acceleration sensor, a gyroscope and other sensors; the electronic device may further include other sensors such as barometer, hygrometer, thermometer, infrared sensor, etc., which are not described herein.
The memory 560 may be used to store software programs and modules, and the processor 570 executes various functional applications and data processing by running the software programs and modules stored in the memory 560. The memory 560 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the electronic device. Further, the memory 560 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 560 may also include a memory controller to provide the processor 570 and the input unit 510 with access to the memory 560.
The processor 570 is the control center of the electronic device. It connects the various parts of the entire device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 560 and calling the data stored in the memory 560, thereby monitoring the device as a whole. Optionally, the processor 570 may include one or more processing cores; preferably, the processor 570 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 570.
The electronic device also includes a power supply 530 (e.g., a battery) for powering the various components. Preferably, the power supply is logically coupled to the processor 570 via a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 530 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 570 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 560 according to the following instructions, and the processor 570 runs the application programs stored in the memory 560, so as to implement various functions:
acquiring a preview image of a shooting scene, and calling a key point identification model to perform key point detection on the preview image to obtain a target key point of a shooting subject in the shooting scene;
determining a current composition type according to the target key point, and determining a positioning point corresponding to the shooting subject according to the composition type and the target key point;
determining a composition point corresponding to the shooting subject according to the positioning point and the composition type;
when the positioning point is not matched with the composition point, outputting prompt information for indicating the adjustment of the shooting posture of the electronic equipment;
and when the positioning point is matched with the composition point, shooting the shooting scene to obtain a shot image.
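Purely for illustration, the five functions above can be sketched as one iteration of a Python loop; every callable and value below is a stub assumed here, not an interface defined by this application.

import math

def assisted_shot(get_preview, detect_positioning_point, composition_point_for,
                  capture, prompt, max_distance=20.0):
    """One pass of the flow above; the pluggable callables keep this
    sketch runnable with the stubs wired in below."""
    preview = get_preview()
    anchor = detect_positioning_point(preview)       # positioning point
    target = composition_point_for(anchor, preview)  # composition point
    if math.dist(anchor, target) <= max_distance:    # points match: shoot
        return capture()
    prompt(anchor, target)                           # ask the user to adjust
    return None

# Stub wiring, purely illustrative:
result = assisted_shot(
    get_preview=lambda: (1280, 720),                         # width, height
    detect_positioning_point=lambda preview: (430, 250),
    composition_point_for=lambda a, p: (p[0] / 3, p[1] / 3),
    capture=lambda: "captured_image",
    prompt=lambda a, t: print("adjust shooting posture towards", t),
)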
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the photographing methods provided in the embodiments of the present application. For example, the instructions may perform the steps of:
acquiring a preview image of a shooting scene, and calling a key point identification model to perform key point detection on the preview image to obtain a target key point of a shooting subject in the shooting scene;
determining a current composition type according to the target key point, and determining a positioning point corresponding to the shooting subject according to the composition type and the target key point;
determining a composition point corresponding to the shooting subject according to the positioning point and the composition type;
when the positioning point is not matched with the composition point, outputting prompt information for indicating the adjustment of the shooting posture of the electronic equipment;
and when the positioning point is matched with the composition point, shooting the shooting scene to obtain a shot image.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any photographing method provided in the embodiments of the present application, the beneficial effects achievable by any photographing method provided in the embodiments of the present application can be achieved; for details, refer to the foregoing embodiments, which are not repeated here.
The photographing method and apparatus, electronic device, and storage medium provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the embodiments is only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may, according to the ideas of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (15)

1. A method of taking a picture, the method comprising:
acquiring a preview image of a shooting scene, and calling a key point identification model to perform key point detection on the preview image to obtain a target key point of a shooting subject in the shooting scene;
determining a current composition type according to the target key point, wherein the composition type comprises a portrait, and determining a preset positioning point selection mode corresponding to the composition type according to the composition type;
determining the positioning points of the shooting subject according to the preset positioning point selection mode and the position information of the target key points;
determining a composition point corresponding to the shooting subject according to the positioning point and the composition type;
when the positioning point is not matched with the composition point, outputting prompt information for indicating the adjustment of the shooting posture of the electronic equipment;
and when the positioning point is matched with the composition point, shooting the shooting scene to obtain a shot image.
2. The photographing method according to claim 1, wherein the key point recognition model includes: a first submodel and a second submodel;
the method for detecting the key points of the preview image by calling the key point identification model comprises the following steps:
inputting the preview image into the first sub-model to obtain a characteristic diagram of the preview image;
inputting the feature map into the second sub-model to obtain a target connection feature and a target position feature of the preview image;
and carrying out key point detection on the preview image according to the target connection characteristic and the target position characteristic.
3. The photographing method according to claim 2, wherein the keypoint identification model comprises a plurality of the second submodels, the plurality of the second submodels are connected in sequence, and the first submodel is connected with a first one of the second submodels;
the inputting the feature map into the second submodel to obtain the target connection feature and the target position feature of the preview image comprises:
inputting the feature map into the first second submodel to obtain the connection feature and the position feature output by the first second submodel;
and for each second submodel other than the first second submodel, inputting the feature map together with the connection feature and the position feature output by the previous second submodel into the next second submodel to obtain the connection feature and the position feature output by the next second submodel, until the target connection feature and the target position feature output by the last second submodel are obtained.
4. The photographing method of claim 3, wherein the second sub-model comprises: a connection module and a location module;
the connection module of the first and second submodels comprises a plurality of first convolution layers and a plurality of second convolution layers, the first convolution layers are connected in sequence, the second convolution layers are connected in sequence, the last first convolution layer is connected with the first and second convolution layers, and the last second convolution layer outputs the connection characteristics of the first and second submodels;
the position module of the first second submodel comprises a plurality of first convolution layers and a plurality of second convolution layers, the first convolution layers are sequentially connected, the second convolution layers are sequentially connected, the last first convolution layer is connected with the first second convolution layer, and the last second convolution layer outputs the position characteristics of the first second submodel.
5. The photographing method of claim 3, wherein the second sub-model comprises: a connection module and a location module;
the connecting module of the remaining second submodel comprises a plurality of third convolution layers and a plurality of second convolution layers, the third convolution layers are sequentially connected, the second convolution layers are sequentially connected, the last third convolution layer is sequentially connected with the first second convolution layer, and the last second convolution layer outputs the connecting characteristics of the remaining second submodel;
the position module of the remaining second submodel comprises a plurality of third convolution layers and a plurality of second convolution layers, the third convolution layers are sequentially connected, the second convolution layers are sequentially connected, the last third convolution layer is sequentially connected with the first second convolution layer, and the last second convolution layer outputs the position characteristics of the remaining second submodel.
6. The photographing method of claim 2, wherein the calling a key point recognition model to perform key point detection on the preview image to obtain a target key point of a photographic subject in the photographic scene comprises:
determining candidate key points in the key points according to the target position characteristics;
and determining the target key point of the shooting subject according to the target connection feature and the candidate key point.
7. The photographing method according to claim 6, wherein the determining candidate keypoints among the keypoints according to the target position feature comprises:
determining a location of a maximum in the target location feature;
and taking the position of the maximum value as the candidate key point.
8. The photographing method according to any one of claims 1 to 5, wherein before the determining of the composition point corresponding to the photographic subject according to the anchor point and the composition type, the method further comprises:
acquiring horizontal and vertical screen information of the preview image;
the determining of the composition point corresponding to the shooting subject according to the positioning point and the composition type includes:
determining a plurality of candidate composition points corresponding to the shooting subject according to the horizontal and vertical screen information of the preview image, the positioning point, and the composition type;
and selecting the candidate composition point which is closest to the positioning point from the plurality of candidate composition points as the composition point.
9. The photographing method according to any one of claims 1 to 5, wherein before the calling of the keypoint recognition model to perform keypoint detection on the preview image to obtain a target keypoint of a photographic subject in the photographic scene, the method further comprises:
generating a detection frame corresponding to the shooting subject, wherein the detection frame comprises the shooting subject;
judging whether the area proportion of the preview image occupied by the detection frame exceeds a preset threshold value or not;
and if so, calling a key point identification model to perform key point detection on the preview image to obtain a target key point of the shooting subject in the shooting scene.
10. The photographing method according to any one of claims 1 to 5, wherein when the positioning point matches the composition point, photographing the photographing scene to obtain a photographed image, including:
obtaining the distance between the positioning point and the composition point;
and when the distance between the positioning point and the composition point is smaller than a preset distance threshold, determining that the positioning point matches the composition point, and photographing the shooting scene to obtain a shot image.
11. A photographing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a preview image of a shooting scene and calling a key point identification model to perform key point detection on the preview image to obtain a target key point of a shooting subject in the shooting scene;
the first determining module is used for determining a current composition type according to the target key point, wherein the composition type comprises a portrait, and determining a preset positioning point selecting mode corresponding to the composition type according to the composition type; determining the positioning points of the shooting subject according to the preset positioning point selection mode and the position information of the target key points;
the second determination module is used for determining a composition point corresponding to the shooting subject according to the positioning point and the composition type;
the prompting module is used for outputting prompting information used for indicating and adjusting the shooting posture of the electronic equipment when the positioning point is not matched with the composition point;
and the photographing module is used for photographing the photographing scene to obtain a photographed image when the positioning point is matched with the composition point.
12. The photographing device according to claim 11, wherein the acquiring module comprises:
the first determining submodule is used for determining candidate key points in the key points according to the target position characteristics;
and the second determining submodule is used for determining the target key point of the shooting subject according to the target connection feature and the candidate key point.
13. The photographing apparatus according to claim 11, wherein the second determining module includes:
the acquisition submodule is used for acquiring the horizontal and vertical screen information of the preview image;
a third determining submodule, configured to determine, according to the horizontal and vertical screen information of the preview image, the positioning point, and the composition type, a plurality of candidate composition points corresponding to the photographic subject;
and the selection submodule is used for selecting the candidate composition point which is closest to the positioning point from the plurality of candidate composition points as the composition point.
14. An electronic device, comprising:
a memory storing executable program code, a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the steps in the photographing method according to any one of claims 1 to 10.
15. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the photographing method according to any one of claims 1 to 10.
CN202010158602.5A 2020-03-09 2020-03-09 Photographing method and device, electronic equipment and storage medium Active CN111343382B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010158602.5A CN111343382B (en) 2020-03-09 2020-03-09 Photographing method and device, electronic equipment and storage medium
PCT/CN2021/074205 WO2021179831A1 (en) 2020-03-09 2021-01-28 Photographing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010158602.5A CN111343382B (en) 2020-03-09 2020-03-09 Photographing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111343382A CN111343382A (en) 2020-06-26
CN111343382B true CN111343382B (en) 2021-09-10

Family

ID=71187962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158602.5A Active CN111343382B (en) 2020-03-09 2020-03-09 Photographing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111343382B (en)
WO (1) WO2021179831A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111343382B (en) * 2020-03-09 2021-09-10 Oppo广东移动通信有限公司 Photographing method and device, electronic equipment and storage medium
CN111953907B (en) * 2020-08-28 2021-11-23 维沃移动通信有限公司 Composition method and device
CN112843722B (en) * 2020-12-31 2023-05-12 上海米哈游天命科技有限公司 Shooting method, shooting device, shooting equipment and storage medium
CN113055593B (en) * 2021-03-11 2022-08-16 百度在线网络技术(北京)有限公司 Image processing method and device
CN113301251B (en) * 2021-05-20 2023-10-20 努比亚技术有限公司 Auxiliary shooting method, mobile terminal and computer readable storage medium
CN113643376B (en) * 2021-07-13 2024-05-03 杭州群核信息技术有限公司 Camera view angle generation method, device, computing equipment and storage medium
CN117135441A (en) * 2023-02-23 2023-11-28 荣耀终端有限公司 Image snapshot method and electronic equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009088710A (en) * 2007-09-27 2009-04-23 Fujifilm Corp Photographic apparatus, photographing method, and photographing program
JP2015231192A (en) * 2014-06-06 2015-12-21 オリンパス株式会社 Imaging device and exposure control method of imaging device
CN104679831B (en) * 2015-02-04 2020-07-07 腾讯科技(深圳)有限公司 Method and device for matching human body model
CN105357436B (en) * 2015-11-03 2018-07-03 广东欧珀移动通信有限公司 For the image cropping method and system in image taking
US10932006B2 (en) * 2017-12-22 2021-02-23 Facebook, Inc. Systems and methods for previewing content
CN110472462A (en) * 2018-05-11 2019-11-19 北京三星通信技术研究有限公司 Attitude estimation method, the processing method based on Attitude estimation and electronic equipment
CN109660719A (en) * 2018-12-11 2019-04-19 维沃移动通信有限公司 A kind of information cuing method and mobile terminal
CN109788191A (en) * 2018-12-21 2019-05-21 中国科学院自动化研究所南京人工智能芯片创新研究院 Photographic method, device, computer equipment and storage medium
CN111343382B (en) * 2020-03-09 2021-09-10 Oppo广东移动通信有限公司 Photographing method and device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102332034A (en) * 2011-10-21 2012-01-25 中国科学院计算技术研究所 Portrait picture retrieval method and device
CN103402058A (en) * 2013-08-22 2013-11-20 深圳市金立通信设备有限公司 Shot image processing method and device
CN107509032A (en) * 2017-09-08 2017-12-22 维沃移动通信有限公司 One kind is taken pictures reminding method and mobile terminal
CN108769536A (en) * 2018-07-06 2018-11-06 深圳市赛亿科技开发有限公司 Patterning process, patterning apparatus and the computer readable storage medium of photograph taking
CN110336945A (en) * 2019-07-09 2019-10-15 上海泰大建筑科技有限公司 A kind of intelligence assisted tomography patterning process and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Brief Analysis of Photographic Composition Techniques (浅析摄影构图的技巧); Liu Dalin (刘大林); Science & Technology Communication (《科技传播》); 2019-12-25; full text *

Also Published As

Publication number Publication date
CN111343382A (en) 2020-06-26
WO2021179831A1 (en) 2021-09-16

Similar Documents

Publication Publication Date Title
CN111343382B (en) Photographing method and device, electronic equipment and storage medium
CN111327828B (en) Photographing method and device, electronic equipment and storage medium
JP7249390B2 (en) Method and system for real-time 3D capture and live feedback using a monocular camera
CN110473141B (en) Image processing method, device, storage medium and electronic equipment
JP5799521B2 (en) Information processing apparatus, authoring method, and program
CN110059661A (en) Action identification method, man-machine interaction method, device and storage medium
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
WO2021237914A1 (en) Sitting posture monitoring system based on monocular camera sitting posture recognition technology
WO2020093799A1 (en) Image processing method and apparatus
CN103425964B (en) Image processing equipment and image processing method
US11417095B2 (en) Image recognition method and apparatus, electronic device, and readable storage medium using an update on body extraction parameter and alignment parameter
US11508087B2 (en) Texture-based pose validation
WO2022174594A1 (en) Multi-camera-based bare hand tracking and display method and system, and apparatus
CN110399809A (en) The face critical point detection method and device of multiple features fusion
CN105096353B (en) Image processing method and device
CN112348937A (en) Face image processing method and electronic equipment
CN110868538A (en) Method and electronic equipment for recommending shooting posture
JP6651086B1 (en) Image analysis program, information processing terminal, and image analysis system
JP2015005220A (en) Information display device and information display method
CN109858402B (en) Image detection method, device, terminal and storage medium
CN113342157A (en) Eyeball tracking processing method and related device
CN106952217B (en) Intelligent robot-oriented facial expression enhancement method and device
CN115880348B (en) Face depth determining method, electronic equipment and storage medium
US11887252B1 (en) Body model composition update from two-dimensional face images
CN113255586B (en) Face anti-cheating method based on RGB image and IR image alignment and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant