CN111787215A - Shooting method and device, electronic equipment and storage medium - Google Patents

Shooting method and device, electronic equipment and storage medium

Info

Publication number
CN111787215A
Authority
CN
China
Prior art keywords
shooting
gesture
information
key point
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910265224.8A
Other languages
Chinese (zh)
Inventor
吴峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910265224.8A priority Critical patent/CN111787215A/en
Publication of CN111787215A publication Critical patent/CN111787215A/en
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/61 - Control of cameras or camera modules based on recognised objects
    • H04N23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Abstract

The disclosure relates to a shooting method and apparatus, an electronic device, and a storage medium. The method includes: receiving a gesture shooting request, where the gesture shooting request is used to request shooting based on the posture of a shooting object; acquiring a preview image based on the gesture shooting request, and obtaining posture information of the shooting object in the preview image; and triggering a shooting operation if the posture information of the shooting object matches the information of a preset posture. The disclosure can improve shooting flexibility and break through the limitation of limb length, without a stand or a selfie stick and without establishing a wired or wireless connection, so the operation complexity is low; it can also avoid shooting while the shooting object is unprepared, thereby improving the shooting effect.

Description

Shooting method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image technologies, and in particular, to a shooting method and apparatus, an electronic device, and a storage medium.
Background
Shooting refers to framing a scene through the camera of a shooting device, such as a mobile phone or a digital camera, and capturing an image or recording a video of the framed picture. In the related art, there are several shooting modes: in the first, the user clicks a shooting button to start shooting; in the second, the user selects delayed shooting, and shooting starts automatically after a countdown ends; in the third, the user shoots with a stand or a selfie stick. The first mode cannot break through the limitation of limb length. The second mode can break through that limitation to some extent, but the photographed person may not be fully prepared when the countdown ends. The third mode cannot trigger shooting by manually clicking the shooting device, requires a wired or wireless connection to be established in advance, and involves relatively complex operation.
Disclosure of Invention
The present disclosure provides a technical solution for shooting.
According to an aspect of the present disclosure, there is provided a photographing method including:
receiving a gesture shooting request, wherein the gesture shooting request is used for requesting shooting based on the gesture of a shooting object;
acquiring a preview image based on the gesture shooting request, and acquiring gesture information of the shooting object in the preview image;
and if the attitude information of the shot object is matched with the preset attitude information, triggering the shooting operation.
In one possible implementation, the gesture capture request is an image capture request or a video capture request.
In a possible implementation manner, if the gesture shooting request is a video shooting request, after the triggering of the shooting operation, the method further includes:
acquiring attitude information of the shot object in a shot video frame;
and if the posture information of the shot object is matched with the preset posture information, finishing shooting.
In a possible implementation manner, if the posture information of the photographic object matches with the information of the preset posture, triggering a photographing operation includes:
and if a plurality of shot objects exist in the preview image and the posture information of at least one shot object in the plurality of shot objects is matched with the preset posture information, triggering shooting operation.
In one possible implementation, the triggering a shooting operation includes:
the countdown is started and the shooting is started after the countdown is finished.
In one possible implementation, the gesture shooting request includes a gesture shooting mode starting request;
after the receiving the gesture capture request, the method further comprises:
and starting the posture shooting mode based on the posture shooting mode starting request.
In one possible implementation, the gesture capture request further includes a gesture selection request;
after the receiving the gesture capture request, the method further comprises:
determining one or more of the candidate gestures as the preset gesture based on the gesture selection request.
In one possible implementation, the method further includes:
and after shooting is finished, if the shooting object cannot be detected in the preview image, closing the posture shooting mode.
In one possible implementation manner, the pose information of the photographic subject includes relative position information of a first key point of the photographic subject relative to a second key point of the photographic subject, wherein the first key point represents any one key point except the second key point in the key points of the photographic subject;
the information of the preset posture comprises relative position information of a third key point of the preset posture relative to a fourth key point of the preset posture, wherein the third key point represents any one key point except the fourth key point in the key points of the preset posture;
after the acquiring of the pose information of the photographic subject in the preview image, the method further includes:
determining a position deviation value of the first key point according to the relative position information of the first key point relative to the second key point and the relative position information of the third key point relative to the fourth key point;
and judging whether the attitude information of the shot object is matched with the preset attitude information or not according to the position deviation value of the first key point.
In one possible implementation, the pose information of the photographic subject includes coordinates of a plurality of key points of the photographic subject;
the information of the preset gesture comprises coordinates of a plurality of key points of the preset gesture;
after the acquiring of the pose information of the photographic subject in the preview image, the method further includes:
determining the similarity between the plurality of key points of the shot object and the plurality of key points of the preset posture according to the coordinates of the plurality of key points of the shot object and the coordinates of the plurality of key points of the preset posture;
and judging whether the attitude information of the shooting object is matched with the information of the preset attitude or not according to the similarity.
According to another aspect of the present disclosure, there is provided a photographing apparatus including:
the device comprises a receiving module, a shooting module and a shooting module, wherein the receiving module is used for receiving a gesture shooting request, and the gesture shooting request is used for requesting shooting based on the gesture of a shooting object;
the first acquisition module is used for acquiring a preview image based on the gesture shooting request and acquiring gesture information of the shooting object in the preview image;
and the triggering module is used for triggering shooting operation if the posture information of the shooting object is matched with the information of the preset posture.
In one possible implementation, the gesture capture request is an image capture request or a video capture request.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring the attitude information of the shot object in the shot video frame;
and the shooting ending module is used for ending the shooting if the posture information of the shot object is matched with the preset posture information.
In one possible implementation, the triggering module is configured to:
and if a plurality of shot objects exist in the preview image and the posture information of at least one shot object in the plurality of shot objects is matched with the preset posture information, triggering shooting operation.
In one possible implementation, the triggering module is configured to:
the countdown is started and the shooting is started after the countdown is finished.
In one possible implementation, the gesture shooting request includes a gesture shooting mode starting request;
the device further comprises:
and the starting module is used for starting the posture shooting mode based on the posture shooting mode starting request.
In one possible implementation, the gesture capture request further includes a gesture selection request;
the device further comprises:
and the first determination module is used for determining one or more of the alternative gestures as the preset gesture based on the gesture selection request.
In one possible implementation, the apparatus further includes:
and the closing module is used for closing the gesture shooting mode if the shooting object cannot be detected in the preview image after shooting is finished.
In one possible implementation manner, the pose information of the photographic subject includes relative position information of a first key point of the photographic subject relative to a second key point of the photographic subject, wherein the first key point represents any one key point except the second key point in the key points of the photographic subject;
the information of the preset posture comprises relative position information of a third key point of the preset posture relative to a fourth key point of the preset posture, wherein the third key point represents any one key point except the fourth key point in the key points of the preset posture;
the device further comprises:
a second determining module, configured to determine a position deviation value of the first keypoint according to relative position information of the first keypoint with respect to the second keypoint and relative position information of the third keypoint with respect to the fourth keypoint;
and the first judgment module is used for judging whether the attitude information of the shot object is matched with the preset attitude information or not according to the position deviation value of the first key point.
In one possible implementation, the pose information of the photographic subject includes coordinates of a plurality of key points of the photographic subject;
the information of the preset gesture comprises coordinates of a plurality of key points of the preset gesture;
the device further comprises:
a third determining module, configured to determine, according to the coordinates of the multiple key points of the photographic object and the coordinates of the multiple key points of the preset posture, similarities between the multiple key points of the photographic object and the multiple key points of the preset posture;
and the second judgment module is used for judging whether the attitude information of the shot object is matched with the information of the preset attitude or not according to the similarity.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the photographing method described above.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described photographing method.
In the embodiments of the present disclosure, a gesture shooting request is received, a preview image is acquired based on the gesture shooting request, posture information of the shooting object in the preview image is obtained, and a shooting operation is triggered if the posture information of the shooting object matches the information of a preset posture. This improves shooting flexibility and breaks through the limitation of limb length; no stand or selfie stick is required, no wired or wireless connection needs to be established, the operation complexity is low, and shooting while the shooting object is unprepared can be avoided, thereby improving the shooting effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a flowchart of a photographing method according to an embodiment of the present disclosure.
Fig. 2 illustrates a schematic diagram of a gesture photographing button in a photographing method according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of an alternative gesture selection interface in a shooting method according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of a photographing apparatus according to an embodiment of the present disclosure.
Fig. 5 is a block diagram illustrating an electronic device 800 in accordance with an example embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 illustrates a flowchart of a photographing method according to an embodiment of the present disclosure. The subject of execution of the photographing method may be a photographing apparatus. For example, the photographing method may be performed by a terminal device or other processing device. The terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the photographing method may be implemented by a processor calling computer-readable instructions stored in a memory. As shown in fig. 1, the method includes steps S11 through S13.
In step S11, a posture shooting request for requesting shooting based on the posture of the photographic subject is received.
In one possible implementation manner, when the gesture shooting button in the shooting interface is in the off state, if it is detected that the gesture shooting button is clicked, it is determined that the gesture shooting request is received, and the gesture shooting button is switched to the on state.
In one possible implementation, the gesture capture request is an image capture request. In this implementation, the gesture capture request may be for capturing an image.
In another possible implementation, the gesture capture request is a video capture request. In this implementation, the gesture capture request may be for capturing video.
In one possible implementation, the gesture capture request includes a gesture capture mode start request; after receiving the gesture photographing request, the method further includes: and starting the posture shooting mode based on the posture shooting mode starting request.
In one example, the gesture capture request may include only the gesture capture mode turn-on request, and not the gesture selection request. In this example, all of the alternative gestures may be determined to be preset gestures, respectively, without requiring the user to select from among the alternative gestures.
Fig. 2 illustrates a schematic diagram of a gesture photographing button in a photographing method according to an embodiment of the present disclosure. In fig. 2, the posture shooting button 21 is in an off state, that is, the posture shooting mode is in an off state.
In one example, the gesture capture mode is in the off state by default. That is, when the user starts the camera application, the gesture shooting button and the gesture shooting mode are both off by default. Keeping the gesture shooting mode off by default can reduce memory usage.
In one example, as shown in fig. 2, the gesture shooting button is one of the function items of the shooting interface and may be presented at the same level as the delayed-shooting button.
In one possible implementation, the gesture capture request further includes a gesture selection request; after receiving the gesture photographing request, the method further includes: based on the gesture selection request, one or more of the alternative gestures are determined to be a preset gesture. In this implementation, the preset gesture may be selected by the user from the alternative gestures.
Fig. 3 shows a schematic diagram of an alternative gesture selection interface in a shooting method according to an embodiment of the present disclosure. In Fig. 3, the posture shooting button 21 is in the on state, that is, the posture shooting mode is on. In the example shown in Fig. 3, the alternative poses include a "horse dance" pose, a "space step" pose, and a running pose.
It should be noted that although the alternative poses have been described above using the "horse dance" pose, the "space step" pose, and the running pose as examples, those skilled in the art will appreciate that the present disclosure is not limited thereto. Those skilled in the art can flexibly set the alternative poses according to the requirements of the actual application scenario and/or personal preference.
In step S12, a preview image is acquired based on the posture shooting request, and posture information of the photographic subject in the preview image is obtained.
In one possible implementation, the method further includes: and determining the object of the specified type in the preview image as the shooting object. For example, the specified type is a human body or at least a part of a human body.
In one possible implementation manner, the preview image is continuously acquired in response to the gesture shooting request, and the gesture information of the shooting object in the preview image is acquired in real time.
In step S13, if the posture information of the photographic subject matches the information of the preset posture, a photographing operation is triggered.
In the embodiment of the present disclosure, the number of the preset gestures may be one or more.
In a possible implementation manner, if the number of the preset gestures is multiple, when the gesture information of the shooting object matches with the information of any one preset gesture, the shooting operation is triggered.
In a possible implementation manner, the attitude information of the photographic subject may be matched with the information of the preset attitude in real time to determine whether the attitude information of the photographic subject is matched with the information of the preset attitude.
In a possible implementation manner, if the posture information of the photographic object matches with the information of the preset posture, triggering a photographing operation includes: and if the preview image has a plurality of shot objects and the posture information of at least one shot object in the plurality of shot objects is matched with the preset posture information, triggering the shooting operation.
In one possible implementation, triggering a shooting operation includes: the countdown is started and the shooting is started after the countdown is finished. In one example, the countdown may be a 3 second countdown of 3, 2, 1. In this example, in response to the posture information of the photographic subject matching the information of the preset posture, 3, 2, 1 of 3 second countdown is started, and the photographing is started after the countdown is ended.
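For illustration only, and not as a limitation of the disclosure, the flow of steps S11 to S13 can be pictured as a simple polling loop over preview frames. In the Python sketch below, the camera object and the callables estimate_poses and matches_preset are hypothetical placeholders rather than interfaces defined by this disclosure.

```python
import time

def gesture_capture_loop(camera, estimate_poses, matches_preset, presets, countdown_s=3):
    """Illustrative sketch of steps S11-S13: poll preview frames, obtain the
    posture of each detected subject, and trigger the shooting operation once
    any subject matches any preset posture."""
    while True:
        frame = camera.get_preview_frame()            # step S12: acquire a preview image
        poses = estimate_poses(frame)                 # posture information per detected subject
        if any(matches_preset(pose, preset)           # step S13: match against preset postures
               for pose in poses for preset in presets):
            for _ in range(countdown_s, 0, -1):       # optional 3, 2, 1 countdown before shooting
                time.sleep(1)
            return camera.capture()                   # trigger the shooting operation
        time.sleep(0.05)                              # keep polling the preview stream
```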
In the embodiment of the present disclosure, the key point of the photographic subject may be a feature point capable of representing the posture of the photographic subject. For example, the number of key points of the photographic object is 18 or 20, which is not limited by the embodiment of the present disclosure.
In one possible implementation, the key points of the photographic subject are determined at least from the joint points of the photographic subject. For example, the key points of the photographic subject include a plurality of or all of a nose key point, a neck key point, a left shoulder key point, a right shoulder key point, a left upper arm key point, a right upper arm key point, a left elbow key point, a right elbow key point, a left wrist key point, a right wrist key point, a thoracic vertebra key point, a lumbar vertebra key point, a sacral vertebra key point, a caudal vertebra key point, a left hip key point, a right hip key point, a left knee key point, a right knee key point, a left ankle key point, and a right ankle key point.
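As an illustrative assumption for the sketches in this description, and not a representation prescribed by the disclosure, a posture can be stored as a mapping from key point name to (x, y) coordinates in the preview image, using the 20 key point names listed above. The names and example coordinates below are made up for illustration.

```python
# Illustrative representation only: a posture maps each key point name to its
# (x, y) coordinates in the preview image.
KEYPOINT_NAMES = (
    "nose", "neck", "left_shoulder", "right_shoulder",
    "left_upper_arm", "right_upper_arm", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "thoracic_vertebra", "lumbar_vertebra",
    "sacral_vertebra", "caudal_vertebra", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
)

# Example posture of a photographic subject (coordinates are illustrative).
example_pose = {
    "nose": (320.0, 180.0),
    "neck": (320.0, 220.0),
    "lumbar_vertebra": (318.0, 400.0),
}
```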
In one possible implementation manner, the attitude information of the photographic subject comprises relative position information of a first key point of the photographic subject relative to a second key point of the photographic subject, wherein the first key point represents any key point except the second key point in the key points of the photographic subject; the information of the preset posture comprises relative position information of a third key point of the preset posture relative to a fourth key point of the preset posture, wherein the third key point represents any one key point except the fourth key point in the key points of the preset posture; after acquiring the pose information of the photographic subject in the preview image, the method further includes: determining a position deviation value of the first key point according to the relative position information of the first key point relative to the second key point and the relative position information of the third key point relative to the fourth key point; and judging whether the attitude information of the shot object is matched with the preset attitude information or not according to the position deviation value of the first key point.
In one example, the relative position information may be relative coordinates. For example, for a photographic subject, the relative coordinates of a first key point of the photographic subject with respect to a second key point may be determined with the second key point of the photographic subject as the origin; for the preset posture, the relative coordinates of the third key point of the preset posture relative to the fourth key point can be determined by taking the fourth key point of the preset posture as an origin.
In another example, the relative position information may be a vector. For example, the relative position information of the first keypoint with respect to the second keypoint may be a vector of the first keypoint pointing to the second keypoint; alternatively, the relative position information of the first keypoint with respect to the second keypoint may be a vector of the second keypoint pointing to the first keypoint.
In one example, determining whether the posture information of the photographic subject matches the information of the preset posture according to the position deviation value of the first key point includes: if the average value of the position deviation values of the first key points of the shot object is smaller than a first threshold value, judging that the attitude information of the shot object is matched with the information of the preset attitude; and if the average value of the position deviation values of the first key points of the shot object is greater than or equal to the first threshold, judging that the posture information of the shot object is not matched with the preset posture information.
In another example, determining whether the posture information of the photographic subject matches the information of the preset posture according to the position deviation value of the first key point includes: if the sum of the position deviation values of all the first key points of the shot object is smaller than a second threshold value, judging that the attitude information of the shot object is matched with the information of the preset attitude; and if the sum of the position deviation values of the first key points of the shot object is greater than or equal to the second threshold, judging that the posture information of the shot object is not matched with the information of the preset posture.
In one example, the second key point is a lumbar key point of the photographic subject, the fourth key point is a lumbar key point in the preset posture, the first key point of the photographic subject includes a nose key point, a neck key point, a left shoulder key point, a right shoulder key point, a left upper arm key point, a right upper arm key point, a left elbow key point, a right elbow key point, a left wrist key point, a right wrist key point, a thoracic vertebra key point, a sacral vertebra key point, a caudal vertebra key point, a left hip key point, a right hip key point, a left knee key point, a right knee key point, a left ankle key point and a right ankle key point of the photographic subject, and the third key point of the preset posture includes a nose key point, a neck key point, a left shoulder key point, a right shoulder key point, a left upper arm key point, a right upper arm key point, a left elbow key point, a right elbow key point, a left wrist key point, a right wrist key point, a nose key point, a neck key point, Thoracic vertebra, sacral vertebra, coccygeal vertebra, left hip, right hip, left knee, right knee, left ankle and right ankle key points.
For example, if the relative position information is a relative coordinate, a distance between a relative coordinate of a nose key point of the photographic subject with respect to a lumbar key point and a relative coordinate of a nose key point of a preset posture with respect to a lumbar key point may be determined as a position deviation value of the nose key point of the photographic subject. By analogy, the position deviation value of each first key point of the photographic subject can be determined.
For another example, if the relative position information is a vector, an included angle between a vector of the nose key point of the photographic subject relative to the lumbar key point and a vector of the nose key point of the preset posture relative to the lumbar key point may be determined as a position deviation value of the nose key point of the photographic subject. By analogy, the position deviation value of each first key point of the photographic subject can be determined.
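A minimal sketch of this relative-position matching is given below, under the assumption that the relative position information is expressed as relative coordinates and that the lumbar vertebra key point serves as the second and fourth key point; the first threshold value used here is an illustrative assumption, not a value fixed by the disclosure.

```python
import math

def relative_coords(pose, origin_name="lumbar_vertebra"):
    """Express every key point relative to the chosen origin key point
    (the 'second'/'fourth' key point in the description)."""
    ox, oy = pose[origin_name]
    return {name: (x - ox, y - oy)
            for name, (x, y) in pose.items() if name != origin_name}

def deviation_values(subject_pose, preset_pose, origin_name="lumbar_vertebra"):
    """Position deviation value of each 'first' key point: the Euclidean distance
    between its relative coordinates in the subject pose and in the preset pose.
    (With the vector variant, the angle between the two vectors could be used.)"""
    subj = relative_coords(subject_pose, origin_name)
    pres = relative_coords(preset_pose, origin_name)
    return {name: math.dist(subj[name], pres[name])
            for name in subj.keys() & pres.keys()}

def matches_by_deviation(subject_pose, preset_pose, first_threshold=25.0):
    """Judge a match if the average deviation value is below the first threshold
    (comparing the sum against a second threshold works the same way)."""
    devs = deviation_values(subject_pose, preset_pose)
    return bool(devs) and sum(devs.values()) / len(devs) < first_threshold
```

With the posture representation sketched earlier, matches_by_deviation could be plugged in as the matches_preset callable of the earlier loop sketch.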
In another possible implementation, the pose information of the photographic subject includes coordinates of a plurality of key points of the photographic subject; the preset posture information comprises coordinates of a plurality of key points of the preset posture; after acquiring the pose information of the photographic subject in the preview image, the method further includes: determining the similarity between the plurality of key points of the shot object and the plurality of key points of the preset posture according to the coordinates of the plurality of key points of the shot object and the coordinates of the plurality of key points of the preset posture; and judging whether the posture information of the shooting object is matched with the preset posture information or not according to the similarity.
In this implementation, similarity between a plurality of key points of the photographic subject and a plurality of key points of the preset pose may be determined using a feature matching model in the related art.
In one example, if the similarity between the plurality of key points of the photographic object and the plurality of key points of the preset posture is greater than a third threshold, determining that the posture information of the photographic object matches the information of the preset posture; and if the similarity between the plurality of key points of the shooting object and the plurality of key points of the preset posture is smaller than or equal to a third threshold value, judging that the posture information of the shooting object is not matched with the information of the preset posture.
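The disclosure refers to a feature matching model in the related art; purely as a simplified stand-in, the sketch below scores similarity as the cosine similarity of the flattened key point coordinate vectors and compares it against the third threshold. The threshold value is an arbitrary assumption.

```python
import math

def keypoint_similarity(subject_pose, preset_pose):
    """Simple stand-in for a feature matching model: cosine similarity of the
    flattened key point coordinate vectors, over the key points both poses share."""
    common = sorted(subject_pose.keys() & preset_pose.keys())
    a = [c for name in common for c in subject_pose[name]]
    b = [c for name in common for c in preset_pose[name]]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0

def matches_by_similarity(subject_pose, preset_pose, third_threshold=0.95):
    """Judge a match if the similarity exceeds the third threshold."""
    return keypoint_similarity(subject_pose, preset_pose) > third_threshold
```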
In one possible implementation manner, if the gesture shooting request is a video shooting request, after triggering the shooting operation, the method further includes: acquiring attitude information of a shot object in a shot video frame; and if the posture information of the shooting object is matched with the preset posture information, ending the shooting.
In one example, the preset posture corresponding to ending the photographing may be different from the preset posture corresponding to starting the photographing.
In another example, the preset posture corresponding to the end of the photographing may be the same as the preset posture corresponding to the start of the photographing.
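For the video case, the same matching check can run on captured video frames to decide when shooting ends. The sketch below is illustrative only; the recording interface (start_recording, latest_video_frame, stop_recording) is a hypothetical placeholder.

```python
import time

def record_until_end_pose(camera, estimate_poses, matches_preset, end_presets,
                          poll_interval_s=0.1):
    """Illustrative sketch: after shooting has been triggered for a video request,
    keep recording and end the shooting once any subject in a captured video
    frame matches a preset end posture."""
    camera.start_recording()
    try:
        while True:
            frame = camera.latest_video_frame()        # a captured video frame
            if any(matches_preset(pose, preset)
                   for pose in estimate_poses(frame)
                   for preset in end_presets):
                break                                   # end posture matched: finish shooting
            time.sleep(poll_interval_s)
    finally:
        camera.stop_recording()
```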
In the embodiments of the present disclosure, a gesture shooting request is received, a preview image is acquired based on the gesture shooting request, posture information of the shooting object in the preview image is obtained, and a shooting operation is triggered if the posture information of the shooting object matches the information of a preset posture. This improves shooting flexibility and breaks through the limitation of limb length; no stand or selfie stick is required, no wired or wireless connection needs to be established, the operation complexity is low, and shooting while the shooting object is unprepared can be avoided, thereby improving the shooting effect. Because the limitation of limb length is broken through and shooting while the shooting object is unprepared is avoided, shooting comfort can also be improved.
In one possible implementation, the method further includes: after the shooting is finished, if the shooting object cannot be detected in the preview image, the posture shooting mode is closed. In this implementation, the gesture photographing mode may be automatically turned off.
In another possible implementation manner, when the gesture shooting button is in the on state, if it is detected that the gesture shooting button is clicked, the gesture shooting button is switched to the off state, and the gesture shooting mode is turned off. In this implementation, the user may manually turn off the gesture photographing mode.
In one possible implementation, after shooting ends, if the shooting object is still detected in the preview image, that is, the shooting object has not left the shooting interface, the posture shooting mode remains on.
It can be understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from their principles and logic; details are not repeated here due to space limitations.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible internal logic.
In addition, the present disclosure also provides a shooting apparatus, an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any of the shooting methods provided by the present disclosure. For the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method section; details are not repeated here for brevity.
Fig. 4 illustrates a block diagram of a photographing apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the photographing apparatus includes: a receiving module 41 configured to receive a posture shooting request for requesting shooting based on a posture of a shooting object; a first obtaining module 42, configured to collect a preview image based on the gesture shooting request, and obtain gesture information of a shooting object in the preview image; and the triggering module 43 is configured to trigger a shooting operation if the posture information of the shooting object matches with the information of the preset posture.
In one possible implementation, the gesture capture request is an image capture request or a video capture request.
In one possible implementation, the apparatus further includes: the second acquisition module is used for acquiring the attitude information of the shot object in the shot video frame; and the shooting ending module is used for ending the shooting if the posture information of the shot object is matched with the preset posture information.
In one possible implementation, the triggering module 43 is configured to: and if the preview image has a plurality of shot objects and the posture information of at least one shot object in the plurality of shot objects is matched with the preset posture information, triggering the shooting operation.
In one possible implementation, the triggering module 43 is configured to: the countdown is started and the shooting is started after the countdown is finished.
In one possible implementation, the gesture capture request includes a gesture capture mode start request; the device also includes: and the starting module is used for starting the gesture shooting mode based on the gesture shooting mode starting request.
In one possible implementation, the gesture capture request further includes a gesture selection request; the device also includes: and the first determination module is used for determining one or more of the alternative gestures as preset gestures based on the gesture selection request.
In one possible implementation, the apparatus further includes: and the closing module is used for closing the gesture shooting mode if the shooting object cannot be detected in the preview image after the shooting is finished.
In one possible implementation manner, the attitude information of the photographic subject comprises relative position information of a first key point of the photographic subject relative to a second key point of the photographic subject, wherein the first key point represents any key point except the second key point in the key points of the photographic subject; the information of the preset posture comprises relative position information of a third key point of the preset posture relative to a fourth key point of the preset posture, wherein the third key point represents any one key point except the fourth key point in the key points of the preset posture; the device also includes: the second determining module is used for determining the position deviation value of the first key point according to the relative position information of the first key point relative to the second key point and the relative position information of the third key point relative to the fourth key point; and the first judgment module is used for judging whether the attitude information of the shooting object is matched with the information of the preset attitude or not according to the position deviation value of the first key point.
In one possible implementation, the pose information of the photographic subject includes coordinates of a plurality of key points of the photographic subject; the preset posture information comprises coordinates of a plurality of key points of the preset posture; the device also includes: the third determining module is used for determining the similarity between the plurality of key points of the shooting object and the plurality of key points of the preset posture according to the coordinates of the plurality of key points of the shooting object and the coordinates of the plurality of key points of the preset posture; and the second judging module is used for judging whether the attitude information of the shooting object is matched with the information of the preset attitude or not according to the similarity.
In the embodiments of the present disclosure, a gesture shooting request is received, a preview image is acquired based on the gesture shooting request, posture information of the shooting object in the preview image is obtained, and a shooting operation is triggered if the posture information of the shooting object matches the information of a preset posture. This improves shooting flexibility and breaks through the limitation of limb length; no stand or selfie stick is required, no wired or wireless connection needs to be established, the operation complexity is low, and shooting while the shooting object is unprepared can be avoided, thereby improving the shooting effect.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 5 is a block diagram illustrating an electronic device 800 according to an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or a similar terminal.
Referring to fig. 5, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A photographing method, characterized by comprising:
receiving a posture shooting request, wherein the posture shooting request is used for requesting shooting based on a posture of a photographic subject;
acquiring a preview image based on the posture shooting request, and acquiring posture information of the photographic subject in the preview image;
and if the posture information of the photographic subject matches information of a preset posture, triggering a shooting operation.
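For illustration only and not as part of the claimed subject matter, the sketch below shows how this flow could be wired together in Python using OpenCV for the preview stream; `estimate_pose` and `matches_preset` are hypothetical callables standing in for whatever pose-estimation and matching back end is used, and `shot.jpg` is an arbitrary output path.

```python
import cv2


def posture_shooting(estimate_pose, matches_preset, preset_posture,
                     camera_index=0, out_path="shot.jpg"):
    """Acquire preview frames and trigger the shot once the subject's
    posture matches the preset posture."""
    cam = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cam.read()                     # acquire a preview image
            if not ok:
                continue
            info = estimate_pose(frame)                # posture information of the subject
            if info is not None and matches_preset(info, preset_posture):
                cv2.imwrite(out_path, frame)           # trigger the shooting operation
                return frame
    finally:
        cam.release()
```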
2. The method according to claim 1, wherein the posture shooting request is an image shooting request or a video shooting request.
3. The method according to claim 2, wherein if the posture shooting request is a video shooting request, after the triggering of the shooting operation, the method further comprises:
acquiring posture information of the photographic subject in a captured video frame;
and if the posture information of the photographic subject matches the information of the preset posture, ending the shooting.
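A corresponding sketch for the video variant, again purely illustrative: a first posture match starts recording and a later match of a (possibly different) preset posture ends it. The `estimate_pose` and `matches_preset` callables are the same hypothetical helpers as in the sketch under claim 1, and the codec, frame rate, and output path are arbitrary choices.

```python
import cv2


def posture_video_shooting(estimate_pose, matches_preset, start_posture, stop_posture,
                           camera_index=0, out_path="clip.avi", fps=30.0):
    """Start recording on a posture match and stop on a later match."""
    cam = cv2.VideoCapture(camera_index)
    writer = None
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                continue
            info = estimate_pose(frame)
            if writer is None:
                # Not yet recording: a match of the start posture begins the video.
                if info is not None and matches_preset(info, start_posture):
                    h, w = frame.shape[:2]
                    fourcc = cv2.VideoWriter_fourcc(*"XVID")
                    writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
            else:
                writer.write(frame)                    # record the captured video frame
                if info is not None and matches_preset(info, stop_posture):
                    break                              # posture matched again: end the shooting
    finally:
        if writer is not None:
            writer.release()
        cam.release()
```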
4. The method according to claim 1, wherein the triggering of a shooting operation if the posture information of the photographic subject matches the information of the preset posture comprises:
and if a plurality of photographic subjects exist in the preview image and the posture information of at least one of the plurality of photographic subjects matches the information of the preset posture, triggering the shooting operation.
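The multi-subject trigger condition reduces to an existence test; the one-liner below is an illustrative sketch in which `matches_preset` is again a hypothetical matching callable.

```python
def any_subject_matches(subject_postures, preset_posture, matches_preset):
    """True as soon as at least one detected subject's posture matches the preset."""
    return any(matches_preset(info, preset_posture) for info in subject_postures)
```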
5. The method according to claim 1, wherein the triggering of a shooting operation comprises:
starting a countdown, and starting the shooting after the countdown is finished.
6. The method according to claim 1, wherein the posture shooting request comprises a posture shooting mode starting request;
after the receiving of the posture shooting request, the method further comprises:
and starting the posture shooting mode based on the posture shooting mode starting request.
7. The method according to claim 6, wherein the posture shooting request further comprises a posture selection request;
after the receiving of the posture shooting request, the method further comprises:
determining one or more candidate postures as the preset posture based on the posture selection request.
8. The method according to claim 6 or 7, wherein the method further comprises:
and after the shooting is finished, if the photographic subject cannot be detected in the preview image, closing the posture shooting mode.
9. The method according to claim 1, wherein the posture information of the photographic subject comprises relative position information of a first key point of the photographic subject with respect to a second key point of the photographic subject, wherein the first key point represents any one of the key points of the photographic subject other than the second key point;
the information of the preset posture comprises relative position information of a third key point of the preset posture with respect to a fourth key point of the preset posture, wherein the third key point represents any one of the key points of the preset posture other than the fourth key point;
after the acquiring of the posture information of the photographic subject in the preview image, the method further comprises:
determining a position deviation value of the first key point according to the relative position information of the first key point with respect to the second key point and the relative position information of the third key point with respect to the fourth key point;
and judging, according to the position deviation value of the first key point, whether the posture information of the photographic subject matches the information of the preset posture.
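One possible reading of this relative-position comparison, sketched under the assumptions that key points are given as `{name: (x, y)}` dictionaries with a shared naming scheme, that coordinates are normalised (for example to the image size) so a fixed threshold is meaningful, and that the second and fourth key points both act as a common reference point such as the neck. The threshold value is arbitrary and purely illustrative.

```python
import math


def relative_positions(keypoints, reference_name):
    """Relative position of every key point with respect to the reference key point."""
    rx, ry = keypoints[reference_name]
    return {name: (x - rx, y - ry)
            for name, (x, y) in keypoints.items() if name != reference_name}


def matches_by_deviation(subject_kps, preset_kps, reference_name, threshold=0.1):
    """Match subject and preset postures via per-key-point position deviation values."""
    subject_rel = relative_positions(subject_kps, reference_name)
    preset_rel = relative_positions(preset_kps, reference_name)
    for name, (px, py) in preset_rel.items():
        if name not in subject_rel:
            return False                               # a required key point was not detected
        sx, sy = subject_rel[name]
        deviation = math.hypot(sx - px, sy - py)       # position deviation value of this key point
        if deviation > threshold:
            return False
    return True
```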
10. The method according to claim 1, wherein the posture information of the photographic subject comprises coordinates of a plurality of key points of the photographic subject;
the information of the preset posture comprises coordinates of a plurality of key points of the preset posture;
after the acquiring of the posture information of the photographic subject in the preview image, the method further comprises:
determining a similarity between the plurality of key points of the photographic subject and the plurality of key points of the preset posture according to the coordinates of the plurality of key points of the photographic subject and the coordinates of the plurality of key points of the preset posture;
and judging, according to the similarity, whether the posture information of the photographic subject matches the information of the preset posture.
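A sketch of one way such a similarity could be computed. Coordinates are assumed to be given in a consistent key point order; cosine similarity over the mean-centred coordinate sets is only one of many possible similarity measures, and the 0.95 threshold is an illustrative value.

```python
import numpy as np


def posture_similarity(subject_coords, preset_coords):
    """Cosine similarity between the mean-centred key point coordinate sets."""
    a = np.asarray(subject_coords, dtype=float)        # shape (N, 2)
    b = np.asarray(preset_coords, dtype=float)
    a = (a - a.mean(axis=0)).reshape(-1)               # remove translation
    b = (b - b.mean(axis=0)).reshape(-1)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


def matches_by_similarity(subject_coords, preset_coords, threshold=0.95):
    return posture_similarity(subject_coords, preset_coords) >= threshold
```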
11. A photographing apparatus, characterized by comprising:
a receiving module, configured to receive a posture shooting request, wherein the posture shooting request is used for requesting shooting based on a posture of a photographic subject;
a first acquisition module, configured to acquire a preview image based on the posture shooting request and to acquire posture information of the photographic subject in the preview image;
and a triggering module, configured to trigger a shooting operation if the posture information of the photographic subject matches information of a preset posture.
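The module split can be mirrored as a simple composition of injected callables; the sketch below is illustrative only, and every name in it is hypothetical rather than part of the claimed apparatus.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class PhotographingApparatus:
    get_preview: Callable[[], Any]                     # preview acquisition back end
    get_posture: Callable[[Any], Any]                  # pose-estimation back end
    matches_preset: Callable[[Any, Any], bool]         # matching test used before triggering
    shoot: Callable[[Any], None]                       # the actual shooting operation

    def receive_posture_shooting_request(self, preset_posture):
        """Receiving module: handle one posture shooting request end to end."""
        frame = self.get_preview()                     # first acquisition module: preview image
        info = self.get_posture(frame)                 # first acquisition module: posture info
        if info is not None and self.matches_preset(info, preset_posture):
            self.shoot(frame)                          # triggering module: trigger the shot
```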
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 10.
13. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 10.
CN201910265224.8A 2019-04-03 2019-04-03 Shooting method and device, electronic equipment and storage medium Pending CN111787215A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910265224.8A CN111787215A (en) 2019-04-03 2019-04-03 Shooting method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910265224.8A CN111787215A (en) 2019-04-03 2019-04-03 Shooting method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111787215A true CN111787215A (en) 2020-10-16

Family

ID=72754860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910265224.8A Pending CN111787215A (en) 2019-04-03 2019-04-03 Shooting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111787215A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105120144A (en) * 2015-07-31 2015-12-02 小米科技有限责任公司 Image shooting method and device
CN107370942A (en) * 2017-06-30 2017-11-21 广东欧珀移动通信有限公司 Photographic method, device, storage medium and terminal
CN108307116A (en) * 2018-02-07 2018-07-20 腾讯科技(深圳)有限公司 Image capturing method, device, computer equipment and storage medium
CN109005336A (en) * 2018-07-04 2018-12-14 维沃移动通信有限公司 A kind of image capturing method and terminal device
CN109194879A (en) * 2018-11-19 2019-01-11 Oppo广东移动通信有限公司 Photographic method, device, storage medium and mobile terminal

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112843722A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Shooting method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
EP3125154A1 (en) Photo sharing method and device
EP3099042A1 (en) Methods and devices for sending cloud card
CN110928627B (en) Interface display method and device, electronic equipment and storage medium
CN112001321A (en) Network training method, pedestrian re-identification method, network training device, pedestrian re-identification device, electronic equipment and storage medium
CN112991553B (en) Information display method and device, electronic equipment and storage medium
CN112328090B (en) Gesture recognition method and device, electronic equipment and storage medium
CN109325908B (en) Image processing method and device, electronic equipment and storage medium
CN111523346B (en) Image recognition method and device, electronic equipment and storage medium
CN111563138B (en) Positioning method and device, electronic equipment and storage medium
CN110933488A (en) Video editing method and device
CN112541971A (en) Point cloud map construction method and device, electronic equipment and storage medium
CN104850643B (en) Picture comparison method and device
CN113807253A (en) Face recognition method and device, electronic equipment and storage medium
CN112004020B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110929545A (en) Human face image sorting method and device
CN110955800A (en) Video retrieval method and device
CN107948876B (en) Method, device and medium for controlling sound box equipment
CN111787215A (en) Shooting method and device, electronic equipment and storage medium
CN107239490B (en) Method and device for naming face image and computer readable storage medium
CN113506324B (en) Image processing method and device, electronic equipment and storage medium
CN113506325B (en) Image processing method and device, electronic equipment and storage medium
CN113315904B (en) Shooting method, shooting device and storage medium
CN114519794A (en) Feature point matching method and device, electronic equipment and storage medium
CN111586296B (en) Image capturing method, image capturing apparatus, and storage medium
CN110544335B (en) Object recognition system and method, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201016