CN112752016A - Shooting method, shooting device, computer equipment and storage medium - Google Patents

Shooting method, shooting device, computer equipment and storage medium

Info

Publication number
CN112752016A
CN112752016A (application CN202010092824.1A)
Authority
CN
China
Prior art keywords
target
part action
action type
action
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010092824.1A
Other languages
Chinese (zh)
Other versions
CN112752016B (en)
Inventor
覃华峥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010092824.1A priority Critical patent/CN112752016B/en
Publication of CN112752016A publication Critical patent/CN112752016A/en
Application granted granted Critical
Publication of CN112752016B publication Critical patent/CN112752016B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633: Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The embodiments of this application disclose a shooting method, a shooting device, a computer device, and a storage medium. The method displays an image shooting page that includes a real-time preview picture; recognizes the part actions in the real-time preview picture to obtain a part action type set, which contains the recognized part action types; and triggers image shooting when at least two target part action types are detected in the part action type set and the number of each target part action type exceeds the corresponding target number. With this scheme, image shooting is triggered automatically once the real-time preview picture satisfies the preset shooting condition, which significantly improves image shooting efficiency.

Description

Shooting method, shooting device, computer equipment and storage medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a shooting method, an apparatus, a computer device, and a storage medium.
Background
During image shooting, the content of the camera's real-time preview picture must be judged before shooting is triggered. If the user wants to capture only content in the real-time preview picture that matches an expectation, the user has to watch the preview and manually trigger the shot at the moment the expected content appears. In the course of researching and practicing the prior art, the inventor of this application found that this approach suffers from low shooting efficiency.
Disclosure of Invention
The embodiment of the application provides a shooting method, a shooting device, computer equipment and a storage medium, and the shooting efficiency can be remarkably improved.
The embodiment of the application provides a shooting method, which comprises the following steps:
displaying an image shooting page, wherein the image shooting page comprises a real-time preview picture;
identifying the part action in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action type;
and triggering image shooting when detecting that at least two target part action types exist in the part action type set and the quantity of each target part action type exceeds the corresponding target quantity.
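Purely as an illustration of the claimed flow, and not the applicant's implementation, the three steps above could be sketched in Python as follows; every name is a hypothetical placeholder, and "exceeds" is read here as "reaches or exceeds":

```python
# Illustrative sketch only; camera and recognizer are hypothetical objects,
# and targets maps each target part action type to its target number.
from collections import Counter

def should_trigger(recognized_types, targets):
    # Trigger when at least two target part action types are configured and the
    # recognized count of every target type reaches its target number
    # ("exceeds" in the claim is read here as ">=").
    counts = Counter(recognized_types)
    return len(targets) >= 2 and all(counts[t] >= n for t, n in targets.items())

def capture_loop(camera, recognizer, targets):
    while True:
        frame = camera.preview_frame()      # real-time preview picture
        recognized = recognizer(frame)      # e.g. ["salute", "salute", "head-up", "head-up"]
        if should_trigger(recognized, targets):
            return camera.shoot()           # trigger image shooting
```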
Correspondingly, the embodiment of the application provides a shooting device, including:
the shooting page display module is used for displaying an image shooting page, and the image shooting page comprises a real-time preview picture;
the identification module is used for identifying the part action in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action type;
and the shooting module is used for triggering image shooting when at least two target part action types exist in the part action type set and the quantity of each target part action type exceeds the corresponding target quantity.
In some embodiments of the present application, the photographing apparatus further includes:
the device comprises a setting page display module, a position action setting module and a position action setting module, wherein the setting page display module is used for displaying a shooting setting page which comprises a position action setting control;
and the target determining module is used for determining at least two target part action types which trigger shooting and the target number corresponding to each target part action type based on the setting operation aiming at the part action setting control.
In some embodiments of the present application, the image capture page further comprises a capture setting control, and the setting page display module is configured to:
when an operation for a shooting setting control on the image shooting page is detected, the shooting setting page is displayed.
In some embodiments of the present application, the part motion setting control includes at least two part motion types to be selected, and a part motion number setting control corresponding to each part motion type to be selected,
the goal determination module includes:
and the target determining submodule is used for determining the action type of the part to be selected as a target part action type and the target number corresponding to the action type of the target part based on the setting operation of the part action number setting control corresponding to the action type of the part to be selected, so as to obtain at least two action types of the target part and the target number corresponding to each action type of the target part.
In some embodiments of the present application, the shooting setting page further includes a setting determination control, and the target determination sub-module includes:
the set number display unit is used for displaying, based on a setting operation on the part action number setting control corresponding to the part action type to be selected, the set number corresponding to that part action type to be selected;
and the target determining unit is used for determining the action type of the part to be selected as the action type of the target part and determining the set number as the target number of the action type of the target part when the trigger operation aiming at the setting determination control is detected, so as to obtain at least two action types of the target part and the target number corresponding to each action type of the target part.
In some embodiments of the present application, the target determination unit is configured to:
when the trigger operation aiming at the setting determination control is detected, acquiring the setting number corresponding to the action type of the part to be selected;
when the set number of the action types of the parts to be selected is larger than the preset number, determining that the action types of the parts to be selected are target part action types and the set number is the target number corresponding to the action types of the target parts, and obtaining at least two target part action types and the target number corresponding to each target part action type.
In some embodiments of the present application, the goal determination module comprises:
the target type determination submodule is used for determining at least two target part action types based on the setting operation of the part action setting control;
and the target number determining submodule is used for detecting the number of the objects in the real-time preview picture so as to set the number of the objects as the target number corresponding to each target part action type.
In some embodiments of the present application, the photographing setting page further includes a target object number control, and the photographing apparatus further includes:
the target object quantity determining module is used for determining the quantity of the target objects based on the setting operation aiming at the target object quantity control;
at this time, the photographing module is configured to: and triggering image shooting when detecting that at least two target part action types exist in the part action type set, the number of the target objects is matched with the number of all the target part action types, and the number of each target part action type exceeds the corresponding target number.
In some embodiments of the present application, the image capturing page further includes an object part motion setting control, and the capturing apparatus further includes:
the candidate list display module is used for displaying a candidate part action list of the object in the real-time preview picture based on the trigger operation of the object part action setting control;
the determination module is used for taking the object as a target object for triggering shooting, taking the selected action type as a target part action type corresponding to the target object and determining the target quantity corresponding to each target part action type when the selection operation of the candidate part action list aiming at the object is detected.
At this time, the photographing module is configured to: and triggering image shooting when detecting that at least two target part action types exist in the part action type set, the object of each target part action type is the corresponding target object, and the number of each target part action type exceeds the corresponding target number.
In some embodiments of the present application, the candidate list display module is to:
displaying a candidate part action display control of the object based on the trigger operation of the control set for the object part action;
when a determination operation for a candidate part action display control of an object is detected, a candidate part action list of the object is displayed.
In some embodiments of the present application, the part motion set includes a part motion type to which the part motion belongs, and a part motion value of the part motion, where the part motion value represents a probability that the part motion belongs to a standard part motion corresponding to the part motion type, and the capturing module includes:
a candidate type determining submodule for determining a candidate part action type corresponding to the target part action type from the part action type set;
a target type determination sub-module, configured to determine that the candidate part action type is a target part action type when the part action value of the candidate part action type matches a target threshold of the target part action type;
and the shooting sub-module is used for triggering image shooting when at least two target part action types exist and the number of each target part action type exceeds the corresponding target number.
In some embodiments of the present application, the photographing apparatus further includes:
the candidate image shooting module is used for shooting at least two candidate images;
the candidate image identification module is used for identifying the candidate images based on the target part action types to obtain the actual part action types and the corresponding actual quantity;
the comparison module is used for comparing the actual part action type and the corresponding actual quantity of each candidate image with the target part action type and the corresponding target quantity respectively to obtain a recognition difference result;
and the target image determining module is used for determining a target image from the candidate images according to the identification difference result.
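The candidate-image selection these modules describe could look like the sketch below; the function names and the absolute-difference metric are assumptions made for illustration, since the text only requires some recognition difference result:

```python
# Hypothetical sketch: pick the target image from several candidate shots by
# comparing the recognized (actual) counts of each candidate against the target counts.
def recognition_difference(actual_counts, target_counts):
    # Sum of per-type count differences; one possible "recognition difference result".
    types = set(actual_counts) | set(target_counts)
    return sum(abs(actual_counts.get(t, 0) - target_counts.get(t, 0)) for t in types)

def pick_target_image(candidate_images, recognize_counts, target_counts):
    # recognize_counts(image) -> {part action type: actual count} is a placeholder.
    return min(candidate_images,
               key=lambda img: recognition_difference(recognize_counts(img), target_counts))
```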
Correspondingly, the embodiment of the present application further provides a storage medium, where the storage medium stores a computer program, and the computer program is suitable for being loaded by a processor to execute any one of the shooting methods provided by the embodiment of the present application.
Correspondingly, the embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements any one of the shooting methods provided by the embodiment of the present application when executing the computer program.
The method first displays an image shooting page that includes a real-time preview picture; it then recognizes the part actions in the real-time preview picture to obtain a part action type set containing the recognized part action types; finally, image shooting is triggered when at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number. Because the part actions in the real-time preview picture are recognized automatically, and image shooting is triggered automatically once they satisfy the preset shooting condition, image shooting efficiency is significantly improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a scene schematic diagram of a shooting method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a shooting method provided in an embodiment of the present application;
FIG. 3 is a partial page operation diagram of a shooting method provided by an embodiment of the present application;
FIG. 4 is another partial page operation diagram of the shooting method provided by the embodiment of the present application;
FIG. 5 is a schematic view of another part of page operation of the shooting method provided by the embodiment of the present application;
FIG. 6 is another partial page operation diagram of the shooting method provided by the embodiment of the present application;
FIG. 7 is a schematic view of another part of page operation of the shooting method provided by the embodiment of the present application;
FIG. 8 is a schematic view of another part of page operation of the shooting method provided by the embodiment of the present application;
FIG. 9 is a schematic view of another part of page operation of the shooting method provided by the embodiment of the present application;
fig. 10 is another schematic flow chart of a shooting method provided in the embodiment of the present application;
FIG. 11 is a schematic view of another part of page operation of the shooting method provided in the embodiment of the present application;
FIG. 12 is a schematic view of another part of page operation of the shooting method provided in the embodiment of the present application;
fig. 13 is a diagram illustrating an overall flow of a shooting method provided in an embodiment of the present application;
fig. 14 is a partial flowchart illustration of a shooting method provided in an embodiment of the present application;
fig. 15 is a schematic structural diagram of a shooting device provided in an embodiment of the present application;
fig. 16 is another schematic structural diagram of a shooting device provided in the embodiment of the present application;
fig. 17 is another schematic structural diagram of a shooting device provided in the embodiment of the present application;
FIG. 18 is a schematic structural diagram of a computer device provided in an embodiment of the present application;
fig. 19 is an alternative structure diagram of the distributed system 110 applied to the blockchain system according to the embodiment of the present application;
fig. 20 is an alternative schematic diagram of a block structure provided in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a shooting method, a shooting device, computer equipment and a storage medium. Specifically, the embodiment of the application may be integrated in a first shooting device and a second shooting device, the first shooting device may be integrated in a first computer device, the first computer device may be an electronic device such as a terminal or a server, the terminal may be an electronic device capable of shooting images such as a camera, a video camera, a smart phone, a tablet computer, a notebook computer, or a personal computer, and the server may be a single server or a server cluster.
The second photographing apparatus may be integrated in a second computer device, the second computer device may be an electronic device such as a terminal or a server, the terminal may be an electronic device capable of photographing images such as a camera, a video camera, a smart phone, a tablet computer, a notebook computer, and a personal computer, and the server may be a single server or a server cluster. The server may be a web server, an application server, a data server, and the like.
In the following, the shooting method of the embodiments of the present application is described by taking the first computer device as a terminal and the second computer device as a server as an example.
As shown in fig. 1, in the solution of the embodiments of the present application, the terminal interacts with the server. This embodiment describes a shooting system, taking as an example a first shooting device integrated on the terminal 10 and a second shooting device integrated on the server 20.
Specifically, the terminal 10 may display an image capturing page, where the image capturing page includes a real-time preview picture, and then recognize a part motion in the real-time preview picture to obtain a part motion type set, where the part motion type set includes recognized part motion types, and finally, when at least two target part motion types exist in the part motion type set and the number of each target part motion type exceeds the corresponding target number, the terminal 10 receives a capturing instruction sent by the server 20, and triggers image capturing.
Specifically, the server 20 may store a part action type set, preset shooting conditions (the preset shooting conditions may be that at least two target part action types exist and the number of each target part action type exceeds the corresponding target number), a target part action type, and a target number corresponding to each target part action type, and determine a target threshold corresponding to each target part action type, and the server 20 may further detect the part action type set, and send a shooting instruction triggering image shooting to the terminal 10 when it is detected that the part action type set satisfies the preset shooting conditions.
The following are detailed below. It should be noted that the order of description of the following embodiments is not intended to limit the order of the embodiments.
The embodiments of the present invention will be described from the perspective of the first shooting device, which may in particular be integrated in a terminal capable of image shooting.
In an embodiment of the present invention, a shooting method may be executed by a processor of a terminal, and as shown in fig. 2, a flow of the shooting method may be as follows:
201. Display an image shooting page, where the image shooting page includes a real-time preview picture.
In the embodiments of this application, an image is captured with the aid of an electronic device that has a shooting function. A single shot may produce one picture, or a video composed of multiple images. The content of the image depends on the shooting time and on what the electronic device collects, and can be anything that the shooting component of the electronic device can recognize and capture within its range, such as a human expression, the moon, or rain. An image may be shot out of the user's personal interest or for work, or for a special purpose, for example as material in an identity-verification scenario.
The image shooting page is a page on which operations related to image shooting can be performed. It can exist on any computer device capable of image shooting: an electronic device with a built-in shooting component, such as a camera, smartphone, tablet computer, or notebook computer, or an electronic device connected to a shooting component by wired or wireless transmission, such as a server or personal computer with an external camera.
On the image shooting page, shooting conditions can be viewed and adjusted. A shooting condition may be a shooting parameter such as contrast, saturation, or resolution; a setting of an auxiliary shooting tool, such as the flash or reference lines; an auxiliary shooting function, such as a shooting shortcut key or automatic shooting; or a shooting mode, such as portrait mode, landscape mode, or professional mode. The shooting conditions may be viewed directly on the image shooting page, or by triggering a shooting-condition control provided on the page. They may be adjusted automatically by the electronic device according to the actual shooting scene, or manually by the user according to the user's actual needs.
Shooting can also be triggered from the image shooting page: the page may contain a control for triggering shooting, and when an operation on that control is detected, image shooting is performed.
The real-time preview picture is one of the contents that can be displayed on the image shooting page. Before a shooting operation is performed, it shows in real time what the shooting component is collecting, so that content can be observed and judged in real time to determine the best shooting moment. When a shooting condition is adjusted, the content of the real-time preview picture changes accordingly, and the effect of that change can be observed: for example, when shooting a portrait with a beautification mode enabled, the smoothed portrait can be seen in the preview; when the shooting brightness is increased, the content of the real-time preview picture becomes brighter, possibly even glaring.
For example, user Xiao Wang uses the camera software X on a mobile terminal to photograph AA and BB. Xiao Wang opens X, and X displays an image shooting page that includes a real-time preview picture; the real-time preview picture shows the content containing AA and BB collected by the shooting component of the mobile terminal.
For another example, user Xiao Bei logs in to a website Y that verifies identity through a self-shot video. When Xiao Bei opens the web address of website Y, an image shooting page is displayed; the page includes a real-time preview picture, which shows Xiao Bei's head and face as collected by the electronic device.
202. Recognize the part actions in the real-time preview picture to obtain a part action type set, where the part action type set includes the recognized part action types.
A part action is content displayed in real time in the real-time preview picture; it is an action of the object to be photographed. The object to be photographed may be a person, and the part action may be a local action of that object, such as stretching, raising a leg, raising a hand, or bending at the waist; a facial action such as smiling, opening the eyes, or raising the head; a hand action such as making a V sign with the fingers, making a fist, or expressing different values through the number and order of extended fingers; or a part action representing a posture of the object, for example a standing posture (the feet at a set angle, or the shoulders level) or a sitting posture (the legs crossed, or the knees together). A part action may also be formed jointly by at least two objects, for example a hexagon formed by several people's bent arms or a triangle formed by several people together, or it may be a specific part action such as the splits, sneezing, or a salute.
The object to be photographed may also be another living thing whose changing state the photographer cannot capture directly and quickly with the electronic device. If the object is an animal, the part action may be a local action, facial action, or limb action of the animal, such as pricking up the ears, raising a paw, or wagging the tail, or an action performed jointly by at least two animals, such as two beaks touching while passing food (a mother bird feeding its chick). If the object is a plant or an organism other than a human or animal, the part action may be the action of reaching a certain state or carrying out a certain process, such as a carnivorous plant catching mosquitoes or a cell dividing.
Although a part action is an action of an object to be photographed, the object does not always have to be identified when recognizing the part action. For example, if the part action is raising a hand and the user only wants to photograph a raised hand, there is no need to determine which object the raised hand belongs to; only the hand-raising itself needs to be recognized. Conversely, if the part action is smiling and the user wants the smile of a specific object to be photographed, both the smile and the object it belongs to must be recognized.
A part action can be recognized with a common recognition algorithm, such as principal component analysis or histogram-of-oriented-gradients feature extraction, and recognizing a part action yields the part action type to which it belongs.
The types and number of part actions to be recognized can be set according to actual requirements, and may be configured by the electronic device or by the user. For example, the user may select the part actions to be recognized from those offered by the electronic device; the user may input an image or image sequence of a standard part action and let the electronic device recognize its key features; or the user may directly input the key features of the standard part action. The electronic device then takes that standard part action as a part action to be recognized and uses the key features as a reference during recognition. Configuring the types and number of part actions to be recognized need not be completed in one pass; it can be updated continuously as user requirements and the recognition technology of the electronic device evolve.
Recognizing the part actions yields a part action type set. This set is the data against which the preset shooting condition is judged; it is the basis of the technical scheme of this embodiment and the key to efficient, automatic shooting.
For example, X recognizes the part actions of AA and BB in the real-time preview picture collected for user Xiao Wang and displayed on the image shooting page, and obtains a part action type set that may include salute 1, salute 2, salute 3, shoulders-level 1, shoulders-level 2, head-up 1, head-up 2, smile 1, and smile 2.
For another example, website Y recognizes the part actions in the image sequence of the real-time preview picture to obtain a part action type set; the part actions to be recognized may be preset by website Y, and the part action types in the set may include blinking, shaking the head, opening the mouth, raising the head, and so on.
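One possible way to represent the recognition output, including the per-action value that later embodiments compare against a target threshold, is sketched below; the data layout, names, and stubbed recognizer are illustrative assumptions rather than part of the disclosure:

```python
# Illustrative data layout for a recognized part action type set; the recognizer
# itself (e.g. PCA- or HOG-based, as mentioned above) is deliberately left as a stub.
from dataclasses import dataclass

@dataclass
class PartAction:
    action_type: str   # e.g. "salute", "head-up", "smile"
    value: float       # probability that it matches the standard part action (0..1)

def recognize_part_actions(frame) -> list[PartAction]:
    # A real implementation would extract features from the preview frame and
    # classify them; this stub only shows the shape of the output.
    raise NotImplementedError("plug in an actual part action recognizer here")

# What a recognized set might look like for two subjects saluting with heads up:
example_set = [PartAction("salute", 0.92), PartAction("salute", 0.88),
               PartAction("head-up", 0.95), PartAction("head-up", 0.90)]
```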
203. Trigger image shooting when it is detected that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number.
The target part action type is a part action type, determined by the user or by the electronic device, that can trigger image shooting, and the target number is the count that the target part action type must reach in order to trigger image shooting.
Specifically, the target part action types and their corresponding target numbers can be set in various ways. They can be set by the user: for example, before shooting, the user may enter the camera's setting page on the electronic device and set the target part action types and the corresponding target numbers; the user may also set them directly on a setting page of the electronic device, or through a setting area or setting control for target part action types and target numbers on the image shooting page.
The target part action types and the corresponding target numbers can also be set by the electronic device. The electronic device may use a single fixed set of target part action types and target numbers, set them flexibly based on the actual shooting situation, or set them based on the shooting mode or shooting conditions selected by the user. For example, if the shooting mode selected by the user implies that multiple target part action types are required and that the number of each must meet its corresponding target number, the electronic device determines the target part action types and the corresponding target numbers from that shooting mode.
Each target part action type corresponds to a standard part action, and the same part action type can correspond to different target part action types. For example, for the part action type smiling, the target part action type may be smiling or laughing, and the standard part action for smiling differs from the standard part action for laughing.
Image shooting is triggered when it is detected that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number. In addition, between observing content that satisfies the preset shooting condition and triggering the shot, a user needs reaction time, whereas the electronic device only needs program running time; although reaction time differs between users, it is on the whole far longer than the program running time. The precision of images shot automatically by the electronic device is therefore clearly higher than that of images shot manually by the user, where image precision can be understood as the degree to which the content of the actually captured image matches the expected content (the content satisfying the preset shooting condition).
Besides being performed on the terminal, the detection and determination of part action types may also be performed on a server. Specifically, the part action type set, the target part action types, and the corresponding target numbers can be sent to the server, and the server detects whether at least two target part action types exist in the part action type set and whether the number of each target part action type exceeds the corresponding target number. If both conditions are met, the server sends a shooting instruction to the terminal, and the terminal triggers image shooting upon receiving it.
For example, user Xiao Wang may set the preset shooting condition in X to 2 salutes and 2 heads-up. X searches the part action type set for the target part action types in the preset shooting condition (head-up and salute), determines that salute 1, salute 2, head-up 1, and head-up 2 in the set are target part action types, i.e. 2 heads-up and 2 salutes, and automatically triggers image shooting.
For another example, website Y informs Xiao Bei, through a voice prompt or a video demonstration, that the target part action types are opening the mouth, not blinking, and raising the head, with a corresponding target number of 1 for each. When Y detects that the part action type set recognized from the image sequence contains the target part action types opening the mouth, not blinking, and raising the head, it automatically triggers the shooting of the image sequence, that is, the image sequence is saved automatically, and Xiao Bei is notified that the identity verification has passed and the home page of website Y can be accessed.
In one embodiment, the part action type set includes the part action type to which each part action belongs and a part action value for the part action, where the part action value represents the probability that the part action matches the standard part action corresponding to that part action type,
triggering image shooting when detecting that at least two target part action types exist in the part action type set and the quantity of each target part action type exceeds the corresponding target quantity, wherein the step can comprise:
determining a candidate part action type corresponding to the target part action type from a part action type set;
when the part action value of the candidate part action type is matched with the target threshold value of the target part action type, determining the candidate part action type as the target part action type;
when at least two target part action types exist and the number of each target part action type exceeds the corresponding target number, image shooting is triggered.
A part action in the real-time preview picture can correspond to a target part action type, and each target part action type corresponds to a unique standard part action, which is the ideal expression of that type. For example, the standard part action for the target part action type salute may be that the right palm forms a flat plane with the tip of the middle finger level with the eyes, while an actual salute in the real-time preview picture may have the right palm slightly curved and the middle fingertip level with the cheekbone. When the part action is recognized, the probability that the actual salute matches the standard salute can therefore be estimated from the curvature of the palm and the distance between the cheekbone and the eyes, and this probability can be expressed as a numerical value, namely the part action value.
Determining the candidate part action types corresponding to the target part action types from the part action type set removes the part action types that are unrelated to the target part action types. Compared with determining target part action types directly from the whole part action type set, determining them from the candidate part action types clearly reduces the number of comparisons between part action values and target thresholds.
And when the part action value of the candidate part action type is matched with the target threshold value of the target part action type, determining the candidate part action type as the target part action type.
In practical scenarios, a part action that approximates the standard part action can be treated as the standard part action, and the target threshold can be regarded as the minimum part action value at which a part action still counts as that target part action type. In that case, "the part action value of the candidate part action type matches the target threshold of the target part action type" means that the part action value is not less than the target threshold.
It should be noted that, if the comparison standard and the numerical expression are changed, the understanding of the target threshold may also be changed accordingly, and the understanding that the part motion numerical value of the candidate part motion type matches the target threshold of the target part motion type may also be changed, which may be flexibly set in an actual scene, and is not described herein again.
When at least two target part action types exist and the number of each target part action type exceeds the corresponding target number, image shooting is triggered.
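A minimal sketch of this threshold-then-count check, assuming "matched" means the part action value is not less than the target threshold (as discussed above); all names are hypothetical:

```python
# Filter candidate part actions by per-type target thresholds, then count them.
from collections import Counter

def confirmed_target_counts(part_actions, target_thresholds):
    # part_actions: iterable of (action_type, value) pairs from the recognizer.
    # target_thresholds: {target part action type: minimum acceptable value}.
    counts = Counter()
    for action_type, value in part_actions:
        threshold = target_thresholds.get(action_type)
        if threshold is not None and value >= threshold:   # "matched" read as ">="
            counts[action_type] += 1
    return counts

# e.g. confirmed_target_counts([("salute", 0.92), ("salute", 0.70), ("head-up", 0.95)],
#                              {"salute": 0.8, "head-up": 0.8})
# -> Counter({"salute": 1, "head-up": 1}); shooting is then triggered once these
# counts reach the target numbers, as in the earlier sketch.
```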
For example, user Xiao Wang may set target part action types and a target number for each on X, say a preset shooting condition of 2 salutes and 2 heads-up. X determines the target threshold for the target part action type salute and compares it with the part action values of salute 1, salute 2, and salute 3, and from the comparison results determines that salute 1 and salute 2 are target part action types. X likewise determines the target threshold for head-up and compares it with the part action values of head-up 1 and head-up 2, confirming from the results that head-up 1 and head-up 2 are target part action types. Two target part action types (head-up and salute) therefore exist in the part action type set, with 2 heads-up and 2 salutes, so the preset shooting condition is met and image shooting can be triggered.
In an embodiment, the photographing method may further include the steps of:
and displaying a shooting setting page, wherein the shooting setting page comprises a part action setting control, and at least two target part action types triggering shooting and the target quantity corresponding to each target part action type are determined based on the setting operation aiming at the part action setting control.
In this embodiment, the representation of the control may be a button, an input box, or the like.
The part action setting control is used to determine at least two target part action types and the target number corresponding to each. The control may be an image input box: an image or image sequence containing at least two part action types can be entered into it, the image or image sequence is recognized, the at least two part action types are taken as target part action types, and the target number corresponding to each target part action type is confirmed.
The part action setting control may also be a text input box, into which identifiers of at least two target part action types (such as a smile identifier, the digit 15, or the symbol #) and the target number for each can be entered. The target part action types and their target numbers may be entered together in one text box or separately in two text boxes; identifiers of different target part action types may be entered one after another in a single text box or group of text boxes, determining one target part action type and its target number at a time, or entered in at least two text boxes or groups of text boxes so that all target part action types and their target numbers are determined at once.
Similarly, the part motion setting control may also be an audio input control, the terminal recognizes the voice and determines at least two target part motion types and the target number corresponding to each target part motion type, and the specific display form and recognition mode may be flexibly set according to the actual situation, which is not described herein again.
The part action setting control may also be a part action setting button: when the button is triggered, a dedicated part action setting area is displayed, and the at least two target part action types and the target number for each are determined through the controls, text fields, and the like in that area.
For example, referring to fig. 3, based on an input operation in the part action setting input box, the target part action types that trigger shooting and their corresponding numbers may be determined as: 2 heart gestures, 1 five-pointed star, and 5 leg lifts.
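If the part action setting control is a text input box as described above, the entered identifiers and counts might be parsed as in the sketch below; the comma-separated "type:count" format is purely an assumption for illustration:

```python
# Hypothetical parsing of a text-box entry such as "heart:2, five-pointed star:1, leg lift:5"
# into target part action types and their target numbers.
def parse_target_setting(text: str) -> dict[str, int]:
    targets = {}
    for item in text.split(","):
        name, _, count = item.strip().rpartition(":")
        if name and count.strip().isdigit():
            targets[name.strip()] = int(count.strip())
    return targets

print(parse_target_setting("heart:2, five-pointed star:1, leg lift:5"))
# {'heart': 2, 'five-pointed star': 1, 'leg lift': 5}
```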
In an embodiment, the image capture page further includes a capture setting control, and the step of displaying the capture setting page may include: when an operation for a shooting setting control on the image shooting page is detected, the shooting setting page is displayed.
When the image shooting page is displayed, the image shooting page comprises a shooting setting control, the shooting setting control is triggered, and the shooting setting page can be displayed.
The photographing setting page may include a plurality of setting controls for setting image photographing conditions and image photographing parameters, including a part motion setting control.
The shooting setting control may be a button, and triggering the button displays the shooting setting page; it may also be an identifier, and the user displays the shooting setting page by sliding or a similar operation according to the prompt expressed by the identifier.
For example, referring to fig. 4, a shooting setting control is included on the image shooting page, and when a trigger operation for the shooting setting control is detected, the shooting setting page is displayed.
In an embodiment, the part action setting control includes at least two types of part action types to be selected and a part action number setting control corresponding to each type of part action to be selected, and the step of determining at least two types of target part action types for triggering shooting and a target number corresponding to each type of target part action based on the setting operation for the part action setting control may include:
and determining the action type of the part to be selected as a target part action type and the target quantity corresponding to the action type of the target part based on the setting operation of the control set for the part action quantity corresponding to the action type of the part to be selected, and obtaining at least two target part action types and the target quantity corresponding to each target part action type.
In this embodiment, the part action setting control includes at least two part action types to be selected and a part action number setting control corresponding to each part action type to be selected. A part action type to be selected is a part action type that the electronic device can offer as a target part action type; when the electronic device recognizes the part actions in the real-time preview picture, it recognizes the part actions corresponding to all part action types to be selected. A part action type to be selected may be displayed as an identifier (such as text or a symbol) next to its part action number setting control, to indicate which part action type to be selected that number setting control belongs to.
The part action number setting control can set a number not less than 0 for the part action type to be selected that it corresponds to. The at least two target part action types that trigger shooting, and the target number corresponding to each, are then determined based on the setting operations on the part action number setting controls corresponding to the part action types to be selected.
This scheme allows the target numbers to be set flexibly within a given range of target part action types; the target part action types and their corresponding target numbers are determined simply by setting the numbers.
For example, referring to fig. 5, the part action types to be selected include leg lifting, eyes open, and five-pointed star. Setting the number to 2 on the part action number setting control for leg lifting determines that leg lifting is a target part action type with a target number of 2; setting the number to 0 for eyes open determines that eyes open is not a target part action type; and setting the number to 1 for five-pointed star determines that five-pointed star is a target part action type with a target number of 1.
In an embodiment, the shooting setting page further includes a setting determination control, and the step of determining, based on a setting operation on the part action number setting control corresponding to the part action type to be selected, that the part action type to be selected is a target part action type and the target number corresponding to that target part action type, to obtain at least two target part action types and the target number corresponding to each target part action type, may include:
displaying, based on a setting operation on the part action number setting control corresponding to the part action type to be selected, the set number corresponding to that part action type to be selected;
when the trigger operation aiming at the setting determination control is detected, determining the action type of the part to be selected as the action type of the target part and determining the set number as the target number of the action type of the target part to obtain at least two action types of the target part and the target number corresponding to each action type of the target part.
The setting determination control confirms that the settings made on the part action number setting controls are valid. If no setting determination control exists, other confirmation mechanisms are possible: for example, if no setting operation on the part action number setting controls is detected within a certain time, prompt information may be displayed showing the at least two target part action types and the target number for each, as previously determined from the setting operations on the part action number setting controls, and so on.
The setting determination control can be provided in any scenario in which at least two target part action types and the target number corresponding to each are to be determined, with the final determination made through the setting determination control.
For example, referring to fig. 6, based on the setting operations on the part action number setting controls of the part action types to be selected, a set number of 2 for leg lifting, 0 for eyes open, and 1 for five-pointed star may be displayed; when a trigger operation on the determination button is detected, the target part action types and the target number for each are determined: 2 leg lifts and 1 five-pointed star.
In an embodiment, the step of, when a trigger operation for the setting determination control is detected, determining that the part action type to be selected is a target part action type and that the set number is the target number of that target part action type, to obtain at least two target part action types and the target number corresponding to each target part action type, may include:
when the trigger operation aiming at the setting determination control is detected, acquiring the setting number corresponding to the action type of the part to be selected;
and when the set number of the action types of the parts to be selected is larger than the preset number, determining that the action types of the parts to be selected are target part action types and the set number is the target number corresponding to the action types of the target parts, and obtaining at least two target part action types and the target number corresponding to each target part action type.
This approach quickly determines whether a part action type to be selected is a target part action type; the user operation is simple and the detection by the electronic device is simple, which significantly improves shooting efficiency.
For example, if the set number displayed by the part action number setting control corresponding to the part action type to be selected eyes open is 0, eyes open is determined not to be a target part action type; if the set number displayed for the part action type to be selected head-up is 2, head-up is determined to be a target part action type with a corresponding target number of 2.
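A small sketch of this determination step, assuming the preset number defaults to 0 (the text only requires the set number to be larger than some preset number):

```python
# Keep only the part action types to be selected whose user-set number exceeds a
# preset number (assumed to be 0 here); those become the target part action types
# and their target numbers.
def determine_targets(set_numbers, preset_number=0):
    # set_numbers: {part action type to be selected: number set by the user}
    return {t: n for t, n in set_numbers.items() if n > preset_number}

print(determine_targets({"leg lift": 2, "eyes open": 0, "five-pointed star": 1}))
# {'leg lift': 2, 'five-pointed star': 1}
```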
In an embodiment, the step of determining at least two target part action types triggering the shooting and a target number corresponding to each target part action type based on the setting operation for the part action setting control may include:
determining at least two target part action types based on the setting operation of the part action setting control;
and detecting the number of objects in the real-time preview picture, and setting that number of objects as the target number corresponding to each target part action type.
In this embodiment, the target number corresponding to each target part action type is not determined through a control on the shooting setting page; instead, the electronic device may detect the number of objects in the real-time preview picture and use that number as the target number corresponding to each target part action type. When the number of objects in the preview picture is large, or the objects cannot be kept fixed within a certain period, detecting the number of objects in the preview picture ensures that image shooting is triggered only when the part action of every object in the real-time preview picture satisfies the at least two target part action types, which improves the shooting effect.
For example, if 30 objects are detected in the real-time preview picture, the number 30 is set as the target number corresponding to the target part action type eye open and as the target number corresponding to head up.
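As a minimal sketch of this variant (names and structure are assumptions, not taken from the patent), the detected object count could simply be copied as the target number of every target part action type:

def target_numbers_from_object_count(target_types, object_count):
    # Every target part action type gets the detected object count as its target number.
    return {action: object_count for action in target_types}

# 30 objects detected in the live preview picture, as in the example above.
print(target_numbers_from_object_count(["eye_open", "head_up"], 30))
# {'eye_open': 30, 'head_up': 30}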
In an embodiment, the shooting setting page further includes a target object number control, and the shooting method further includes:
determining the number of the target objects based on the setting operation aiming at the target object number control;
in this case, the triggering step, when it is detected that there are at least two target part motion types in the part motion type set and the number of each target part motion type exceeds the corresponding target number, may include:
and triggering image shooting when detecting that at least two target part action types exist in the part action type set, the number of target objects is matched with the number of all target part action types, and the number of each target part action type exceeds the corresponding target number.
For example, referring to fig. 7, based on the setting operation of the number control corresponding to the number of recognized people (i.e., the target object number control), the number of target objects is determined to be 5, and based on the part action number setting controls corresponding to the to-be-selected part action types, the target part action types and the target number corresponding to each target part action type are determined: 2 leg lifts and 3 eye opens. When 2 leg lifts and 3 eye opens are detected in the part action type set, image shooting is triggered.
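The extra matching condition of this embodiment can be sketched as follows; this is only one possible reading of the requirement that the number of target objects matches the number of all target part action types (here taken to mean that the target numbers sum to the configured number of target objects), and all names are assumptions.

def should_trigger(recognized_counts, target_numbers, target_object_number):
    # Assumed reading: the sum of all target numbers must equal the configured
    # number of target objects...
    if sum(target_numbers.values()) != target_object_number:
        return False
    # ...and every target part action type must reach its target number.
    return all(recognized_counts.get(action, 0) >= number
               for action, number in target_numbers.items())

# Fig. 7 example: 5 target objects, 2 leg lifts and 3 eye opens required.
print(should_trigger({"leg_lift": 2, "eye_open": 3},
                     {"leg_lift": 2, "eye_open": 3}, 5))  # True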
In an embodiment, the image shooting page further includes an object part action setting control, and the shooting method further includes:
displaying a candidate part action list of an object in the real-time preview picture based on a trigger operation on the object part action setting control;
when the selection operation of the candidate part action list aiming at the object is detected, the object is used as a target object for triggering shooting, the selected action type is used as a target part action type corresponding to the target object, and the target number corresponding to each target part action type is determined.
In this case, the triggering step, when it is detected that there are at least two target part motion types in the part motion type set and the number of each target part motion type exceeds the corresponding target number, may include:
and triggering image shooting when detecting that at least two target part action types exist in the part action type set, the object of each target part action type is the corresponding target object, and the number of each target part action type exceeds the corresponding target number.
In this embodiment, the target objects correspond to at least two target part action types, and the target part action types included across all target objects, together with the target number corresponding to each target part action type, may be determined through the determination operation on the target part action types of the target objects.
For example, referring to fig. 8, the image shooting page includes an object part action setting control. Based on a trigger operation on the object part action setting control, a candidate part action list of object A is displayed (containing the candidate part action types eye open, head up and smile), and based on a selection operation on the candidate part action list of object A, the target part actions of target object A are determined to be eye open and head up; a candidate part action list of object B is then displayed (also containing the candidate part action types eye open, head up and smile), and based on a selection operation on the candidate part action list of object B, the target part action of target object B is determined to be head up. The target part action types and the target number corresponding to each target part action type are thus determined as: 1 eye open and 2 head ups.
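A small sketch of how the per-object selections of the fig. 8 example translate into target numbers; the data layout is an assumption used only for illustration.

from collections import Counter

# Object A selects eye open and head up, object B selects head up.
object_targets = {"A": ["eye_open", "head_up"], "B": ["head_up"]}

# The target number of each target part action type is the number of target
# objects that selected it.
target_numbers = Counter(action
                         for actions in object_targets.values()
                         for action in actions)
print(dict(target_numbers))  # {'eye_open': 1, 'head_up': 2}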
In an embodiment, the step of displaying a candidate part action list of the object in the real-time preview screen based on the trigger operation of the object part action setting control may include:
and displaying a candidate part action display control of the object based on the trigger operation on the object part action setting control, and displaying the candidate part action list of the object when a determination operation on the candidate part action display control of the object is detected.
For example, referring to fig. 9, based on the trigger operation on the object part action setting control, a display button for object A (i.e., the candidate part action display control of object A) and a display button for object B (i.e., the candidate part action display control of object B) are displayed. When a trigger operation for the display button of object A is detected, the candidate part action list of object A is displayed; when a trigger operation for the display button of object B is detected, the candidate part action list of object B is displayed.
In an embodiment, the photographing method may further include:
shooting at least two candidate images; identifying the candidate images based on the target part action types to obtain the actual part action types and the actual number corresponding to each actual part action type; comparing the actual part action types and corresponding actual numbers of each candidate image with the target part action types and corresponding target numbers, respectively, to obtain a recognition difference result; and determining the target image from the candidate images according to the recognition difference results.
For example, 3 candidate pictures are taken. Based on the previously determined target part action types, eye open and head up, the part actions in each candidate picture are recognized to obtain the actual part action types and the actual number of each actual part action type. The actual part action types and actual numbers recognized in each candidate picture are compared with the target part action types and the target number corresponding to each target part action type to obtain a recognition difference result for that candidate picture; the 3 recognition difference results are then compared, and the target picture is determined from the candidate pictures according to the recognition difference results, the target picture being the candidate picture with the smallest difference from the preset shooting condition.
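To illustrate the selection step, the sketch below scores each candidate picture by the sum of absolute per-type deviations from the target numbers and keeps the lowest score; the patent only requires the smallest difference, so this particular measure, like all the names, is an assumption.

def recognition_difference(actual_counts, target_numbers):
    # Assumed difference measure: sum of absolute per-type deviations.
    return sum(abs(actual_counts.get(action, 0) - number)
               for action, number in target_numbers.items())

def pick_target_image(candidates, target_numbers):
    # candidates: list of (picture identifier, recognized per-type counts) pairs.
    return min(candidates,
               key=lambda item: recognition_difference(item[1], target_numbers))[0]

target_numbers = {"eye_open": 2, "head_up": 1}
candidates = [("picture1", {"eye_open": 2, "head_up": 0}),
              ("picture2", {"eye_open": 2, "head_up": 1}),
              ("picture3", {"eye_open": 1, "head_up": 1})]
print(pick_target_image(candidates, target_numbers))  # picture2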
In the method, an image shooting page is first displayed, the image shooting page including a real-time preview picture; the part actions in the real-time preview picture are then recognized to obtain a part action type set, the part action type set including the recognized part action types; finally, image shooting is triggered when at least two target part action types exist in the part action type set and the number of each target part action type exceeds its corresponding target number. The part actions in the real-time preview picture can thus be recognized automatically, and image shooting is triggered automatically when the part actions satisfy the preset shooting condition, which significantly improves image shooting efficiency.
The method described in the above embodiments is further illustrated in detail by way of example.
The present embodiment takes an example in which a camera is specifically integrated in an electronic device.
In the present embodiment, the shooting method is described in detail by taking head actions as an example of part actions.
As shown in fig. 10, fig. 10 is a schematic flow chart of the shooting method of the present application. The photographing method may include:
301. The electronic equipment displays an image shooting page, and the image shooting page comprises a real-time preview picture and a shooting setting control.
As shown in fig. 11 and 12, the image capturing page includes a trigger setting button (i.e., a capturing setting control) and a finder frame (i.e., a live preview screen), and further includes a capturing button for the user to trigger manual capturing.
302. When the operation aiming at the shooting setting control is detected, the electronic equipment displays a shooting setting page, and the shooting setting page comprises a part action setting control and a setting determination control.
As in fig. 11, when an operation for the trigger setting button is detected, the electronic equipment displays a shooting setting page including trigger condition setting controls (i.e., part action setting controls): a smile number setting control, an eye-opening number setting control and a head-raising number setting control, and further including a determination button (i.e., a setting determination control).
As in fig. 12, when an operation for the trigger setting button is detected, the electronic equipment displays a shooting setting page including trigger condition setting controls: a smiling control, an eye-opening control and a head-raising control, together with a people-number recognition control. The people-number recognition control and the trigger condition setting controls serve as the part action setting controls; the target part action types are determined through the trigger condition setting controls, and the target number corresponding to each target part action type is determined through people-number recognition.
303. Based on the setting operation for the part action setting control and the trigger operation for the setting determination control, the electronic equipment determines at least two target part action types that trigger shooting and the target number corresponding to each target part action type, and displays the image shooting page.
As shown in fig. 11, the target number for smiling is set to 1, the target number for eye opening to 2, and the target number for head raising to 0 through the trigger condition setting controls. When the trigger operation for the determination button is detected, the target part action types and the target number of each target part action type can be determined; equivalently, a trigger condition set TC = {{type = t, num = 1}, {type = p, num = 2}} can be determined according to the setting operation, and an image shooting page containing the live preview picture in the finder frame is displayed.
As shown in fig. 12, the target part action types are determined to be smiling and eye opening through the trigger condition setting controls, and the set number corresponding to each target part action type is displayed through the setting operation on the people-number recognition control. The number can be set to 3 or another value, or, as shown in the figure, to all people, i.e. all objects contained in the live preview picture; in that case the electronic equipment detects the number of objects in the live preview picture and takes the detected number as the target number. When the trigger operation for the determination button is detected, the number of objects in the real-time preview picture can be determined as the target number corresponding to each target part action type, with the target part action types being smiling and eye opening; a trigger condition set TC = {t, p} can be determined according to the setting operation, and an image shooting page containing the preview picture in the finder frame is displayed.
304. The electronic equipment recognizes the head actions of the objects in the real-time preview picture to obtain a part action type set of the objects, where the part action type set includes the recognized part action types of the objects.
For example, based on the application scenario of fig. 11 or fig. 12, the electronic equipment continuously recognizes the head actions of the objects in the real-time preview picture; the head actions recognized by the electronic equipment include smiling, eye opening and head raising. The real-time preview picture contains object 1, object 2, object 3 and object 4. The electronic equipment recognizes the smiling, eye opening and head raising of each object in the real-time preview picture and obtains an action value for each head action of each object, the action value representing the probability that the head action is the standard action corresponding to that head action. All the values are stored in a set to obtain a recognition result set ER = {e1, e2, e3, e4}, where e1, e2, e3 and e4 correspond to object 1, object 2, object 3 and object 4 respectively, and ei (i = 1, 2, 3, 4) contains the action values corresponding to the part actions of that object. Smiling may be marked by t, eye opening by p, and head raising by h.
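Purely to visualize the recognition result set described above, ER could be represented as the following Python dictionary; every action value here is invented for illustration.

# One entry per object; t, p and h are the action values for smiling, eye
# opening and head raising respectively.
ER = {
    "object1": {"t": 0.85, "p": 0.86, "h": 0.10},
    "object2": {"t": 0.40, "p": 0.84, "h": 0.30},
    "object3": {"t": 0.88, "p": 0.87, "h": 0.05},
    "object4": {"t": 0.30, "p": 0.82, "h": 0.20},
}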
305. When the electronic equipment detects that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, image shooting is triggered.
For example, given that the target part action types are smiling and eye opening, the action values corresponding to the smiling and eye opening of each object recognized in ER are determined, and it is judged whether each value falls within the threshold range corresponding to smiling or eye opening; if so, the head action to which that value of the object belongs is determined to be a target part action type. The number of each target part action type is then obtained and compared with the corresponding target number, and when the number of each target part action type exceeds its target number, image shooting can be performed. For instance, the threshold range corresponding to smiling is 0.8 to 0.9; if the detected action values corresponding to the smiles of object 1 and object 3 satisfy this range, it is determined that the image corresponding to ER contains 2 smiles, and similarly it is determined that the image contains 4 open eyes, at which time image shooting can be triggered.
If ER does not satisfy the trigger condition set, the real-time preview picture continues to be recognized and the recognition result is compared with the trigger condition until ER satisfies the trigger condition set, at which point the image is shot.
For another example, if the trigger condition set is TC = {t, p}, the number of objects in the real-time preview picture needs to be determined first; for example, a real-time preview picture containing object 1, object 2, object 3 and object 4 yields an object number of 4. It is then judged whether the acquired ER satisfies the trigger condition set; if so, image shooting can be performed, and if not, the head actions in the real-time preview picture continue to be recognized and compared with the trigger condition until it is satisfied, at which point the image is shot. When there are many objects in the real-time preview picture or the objects are not fixed, for example objects moving in and out, recognizing the number of people can effectively improve shooting efficiency and increase the likelihood that the user obtains a more satisfactory picture.
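The check of ER against the trigger condition set can be sketched as follows. The threshold ranges and function names are assumptions (only the smile range 0.8 to 0.9 is taken from the example above), and the all_objects flag stands in for the TC = {t, p} variant in which the target number of every type is the detected object count.

def satisfies_trigger(er, tc, thresholds, all_objects=False):
    # Count, per target part action type, how many objects have an action value
    # inside the threshold range of that type.
    counts = {condition["type"]: 0 for condition in tc}
    for values in er.values():
        for action in counts:
            low, high = thresholds[action]
            if low <= values.get(action, 0.0) <= high:
                counts[action] += 1
    # Each count must reach the target number; in the "all people" variant the
    # target number of every type is the number of objects in ER.
    for condition in tc:
        target = len(er) if all_objects else condition["num"]
        if counts[condition["type"]] < target:
            return False
    return True

thresholds = {"t": (0.8, 0.9), "p": (0.8, 0.9)}  # assumed ranges
tc = [{"type": "t", "num": 1}, {"type": "p", "num": 2}]
er = {"object1": {"t": 0.85, "p": 0.86}, "object3": {"t": 0.88, "p": 0.87}}
print(satisfies_trigger(er, tc, thresholds))  # True: 2 smiles and 2 open eyes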
306. The electronic equipment shoots at least two candidate images and determines a target image from the candidate images.
For example, after shooting is triggered, the electronic device automatically shoots at least 5 images, selects the candidate image closest to the trigger shooting condition based on that condition, and displays it as the target image on the electronic device.
Referring to fig. 13, fig. 13 illustrates the overall flow of recognizing expressions and shooting automatically. First, the shooting object sets an expression recognition mode, such as the expressions to be recognized and the number of each expression to be recognized (i.e., the target part action types and the target number corresponding to each target part action type). The camera then continuously acquires image data through the camera (the image data may be displayed on the image shooting page in the form of a real-time preview picture) and performs expression recognition on the acquired image data to obtain an expression set for the image data. If the expression set of the image data satisfies the preset expressions to be recognized and the number of each expression to be recognized, the trigger condition is met; if not, the next frame of image data is recognized until image data satisfying the trigger condition is detected. The camera then automatically shoots 3 pictures, recognizes the shot pictures according to the trigger condition, compares the recognition results, and finally obtains the picture closest to the trigger condition (i.e., the target image).
Referring to fig. 14, fig. 14 is a partial flow diagram of the shooting method. The camera picture (i.e., the image shooting page) displays the collected real-time preview picture, and an expression recognition SDK (expression recognition software development kit) performs frame-by-frame detection on the image data contained in the real-time preview picture to obtain expression recognition results for all faces in the image data. An expression trigger condition is obtained through the settings, and it is judged whether the expression recognition results for all faces in the image data satisfy the expression trigger condition; if not, the next image data is recognized and the judgment is repeated until the recognition results satisfy the expression trigger condition, at which point the shooting operation is triggered.
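The frame-by-frame loop of figs. 13 and 14 can be summarized by the following sketch; camera, recognize, condition_met and distance_to_condition are placeholders for the camera pipeline, the expression recognition SDK and the comparison steps, not real APIs.

def auto_shoot(camera, recognize, condition_met, distance_to_condition,
               trigger_condition, burst=3):
    # Recognize live preview frames one by one until the trigger condition is met.
    while True:
        frame = camera.next_frame()          # placeholder: next preview frame
        result_set = recognize(frame)        # placeholder: expression recognition SDK
        if condition_met(result_set, trigger_condition):
            break
    # Shoot a burst of candidate pictures and keep the one closest to the condition.
    candidates = [camera.capture() for _ in range(burst)]
    return min(candidates,
               key=lambda picture: distance_to_condition(recognize(picture),
                                                         trigger_condition))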
The electronic equipment displays an image shooting page, the image shooting page including a real-time preview picture and a shooting setting control. When an operation for the shooting setting control is detected, the electronic equipment displays a shooting setting page, the shooting setting page including a part action setting control and a setting determination control. Based on the setting operation for the part action setting control and the trigger operation for the setting determination control, at least two target part action types that trigger shooting and the target number corresponding to each target part action type are determined, and the image shooting page is displayed. The electronic equipment recognizes the head actions of the objects in the real-time preview picture to obtain a part action type set of the objects, the part action type set including the recognized part action types of the objects. When the electronic equipment detects that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, image shooting is triggered, and finally the electronic equipment shoots at least two candidate images and determines the target image from the candidate images. This scheme can provide to-be-selected part actions, with the target part action types and the target number of each target part action type set by the user; there can be more than one target part action type, and the number of each target part action type can be more than one. This better satisfies the need, in actual shooting scenarios, to shoot higher-quality pictures (with richer part actions), enriches the trigger conditions for automatically triggering shooting, and significantly improves shooting efficiency.
In order to better implement the shooting method provided by the embodiments of the present application, the embodiments of the present application further provide an apparatus based on the shooting method. The terms used have the same meanings as in the shooting method described above, and for specific implementation details, reference may be made to the description in the method embodiments.
As shown in fig. 15, fig. 15 is a schematic structural diagram of a shooting apparatus according to an embodiment of the present disclosure, where the shooting apparatus may include a shooting page display module 401, an identification module 402, and a shooting module 403, where,
a shooting page display module 401, configured to display an image shooting page, where the image shooting page includes a real-time preview picture;
an identifying module 402, configured to identify a part action in a real-time preview screen to obtain a part action type set, where the part action type set includes an identified part action type;
a shooting module 403, configured to trigger image shooting when at least two target part motion types are detected in the part motion type set and the number of each target part motion type exceeds the corresponding target number.
In some embodiments of the present application, referring to fig. 16, the apparatus further comprises:
a setting page display module 404, configured to display a shooting setting page, where the shooting setting page includes a part action setting control;
and a target determining module 405, configured to determine, based on a setting operation for the part motion setting control, at least two target part motion types that trigger shooting, and a target number corresponding to each target part motion type.
In some embodiments of the present application, the image capture page further includes a capture setting control, and the setting page display module 404 is specifically configured to:
when an operation for a shooting setting control on the image shooting page is detected, the shooting setting page is displayed.
In some embodiments of the present application, the part motion setting control includes at least two types of part motion to be selected, and a part motion number setting control corresponding to each type of part motion to be selected, and the target determining module 405 includes:
and the target determining submodule is used for determining the action type of the part to be selected as the action type of the target part and the target number corresponding to the action type of the target part based on the setting operation of the part action number setting control corresponding to the action type of the part to be selected, and obtaining at least two action types of the target part and the target number corresponding to each action type of the target part.
In some embodiments of the present application, the shooting setting page further includes a setting determination control, and the target determination sub-module includes:
the setting quantity display unit is used for setting the setting operation of the control based on the part action quantity corresponding to the action type of the part to be selected and displaying the setting quantity corresponding to the action type of the part to be selected;
and the target determining unit is used for determining the action type of the part to be selected as the action type of the target part and determining the set number as the target number of the action type of the target part when the trigger operation aiming at the setting determination control is detected, so as to obtain at least two action types of the target part and the target number corresponding to each action type of the target part.
In some embodiments of the present application, the target determination unit is specifically configured to:
when the trigger operation aiming at the setting determination control is detected, acquiring the setting number corresponding to the action type of the part to be selected;
and when the set number of the action types of the parts to be selected is larger than the preset number, determining that the action types of the parts to be selected are target part action types and the set number is the target number corresponding to the action types of the target parts, and obtaining at least two target part action types and the target number corresponding to each target part action type.
In some embodiments of the present application, the goal determination module 405 comprises:
the target type determining submodule is used for determining at least two target part action types based on the setting operation of the part action setting control;
and the target number determining submodule is used for detecting the number of the objects in the real-time preview picture so as to set the number of the objects as the target number corresponding to each target part action type.
In some embodiments of the present application, the shooting setting page further includes a target object number control, and the apparatus further includes:
the target object quantity determining module is used for determining the quantity of the target objects based on the setting operation aiming at the target object quantity control;
at this time, the shooting module 403 is configured to: and triggering image shooting when detecting that at least two target part action types exist in the part action type set, the number of target objects is matched with the number of all target part action types, and the number of each target part action type exceeds the corresponding target number.
In some embodiments of the present application, the image shooting page further includes an object part action setting control, and the shooting apparatus further includes:
the candidate list display module is used for displaying a candidate part action list of the object in the real-time preview picture based on the trigger operation of the object part action setting control;
and the determining module is used for taking the object as a target object for triggering shooting, taking the selected action type as a target part action type corresponding to the target object and determining the target quantity corresponding to each target part action type when the selection operation of the candidate part action list aiming at the object is detected.
At this time, the shooting module 403 is configured to: and triggering image shooting when detecting that at least two target part action types exist in the part action type set, the object of each target part action type is the corresponding target object, and the number of each target part action type exceeds the corresponding target number.
In some embodiments of the present application, the candidate list display module is to:
displaying a candidate part action display control of the object based on the trigger operation on the object part action setting control; when a determination operation for the candidate part action display control of the object is detected, a candidate part action list of the object is displayed.
In some embodiments of the present application, the part motion set includes a part motion type to which the part motion belongs, and a part motion value of the part motion, and the part motion value represents a probability that the part motion belongs to a standard part motion corresponding to the part motion type, referring to fig. 17, the capturing module 403 includes:
a candidate type determining submodule 4031 for determining a candidate part action type corresponding to the target part action type from within the part action type set;
a target type determination submodule 4032 for determining the candidate part action type as the target part action type when the part action value of the candidate part action type matches the target threshold value of the target part action type;
a capturing sub-module 4033 for triggering image capturing when there are at least two target site action types and the number of each target site action type exceeds its corresponding target number.
In some embodiments of the present application, the photographing apparatus further includes:
the candidate image shooting module is used for shooting at least two candidate images;
the candidate image identification module is used for identifying the candidate images based on the action types of the target parts to obtain the action types of the actual parts and the corresponding actual quantity of the actual parts;
the comparison module is used for comparing the actual part action type and the corresponding actual quantity of each candidate image with the target part action type and the corresponding target quantity respectively to obtain a recognition difference result;
and the target image determining module is used for determining a target image from the candidate images according to the identification difference result.
The shooting page display module 401 of the embodiment of the application first displays an image shooting page, where the image shooting page includes a real-time preview picture, then the recognition module 402 recognizes a part action in the real-time preview picture to obtain a part action type set, where the part action type set includes recognized part action types, and finally, when it is detected that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the number of targets corresponding to the target part action types, the shooting module 403 triggers image shooting. According to the method and the device, the part action in the real-time preview picture can be automatically identified, and when the part action meets the preset shooting condition, the image shooting is automatically triggered, so that the image shooting efficiency is remarkably improved.
In addition, an embodiment of the present application further provides a computer device, where the computer device may be a terminal or a server, as shown in fig. 18, which shows a schematic structural diagram of the computer device according to the embodiment of the present application, and specifically:
the computer device may include components such as a processor 701 of one or more processing cores, memory 702 of one or more computer-readable storage media, a power supply 703, and an input unit 704. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 18 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components in combination, or a different arrangement of components. Wherein:
the processor 701 is a control center of the computer apparatus, connects various parts of the entire computer apparatus using various interfaces and lines, and performs various functions of the computer apparatus and processes data by running or executing software programs and/or modules stored in the memory 702 and calling data stored in the memory 702, thereby monitoring the computer apparatus as a whole. Optionally, processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 701.
The memory 702 may be used to store software programs and modules, and the processor 701 executes various functional applications and data processing by operating the software programs and modules stored in the memory 702. The memory 702 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the computer device, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 702 may also include a memory controller to provide the processor 701 with access to the memory 702.
The computer device further includes a power supply 703 for supplying power to the various components, and preferably, the power supply 703 is logically connected to the processor 701 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system. The power supply 703 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The computer device may also include an input unit 704, the input unit 704 being operable to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 701 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 702 according to the following instructions, and the processor 701 runs the application program stored in the memory 702, thereby implementing various functions as follows:
displaying an image shooting page, wherein the image shooting page comprises a real-time preview picture; identifying the part action in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action types; and triggering image shooting when detecting that at least two target part action types exist in the part action type set and the quantity of each target part action type exceeds the corresponding target quantity.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application further provides a storage medium, in which a computer program is stored, where the computer program can be loaded by a processor to execute the steps in any one of the shooting methods provided in the present application. For example, the computer program may perform the steps of:
displaying an image shooting page, wherein the image shooting page comprises a real-time preview picture; identifying the part action in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action types; and triggering image shooting when detecting that at least two target part action types exist in the part action type set and the quantity of each target part action type exceeds the corresponding target quantity.
The system related to the embodiment of the application can be a distributed system formed by connecting a client and a plurality of nodes (computer devices in any form in an access network, such as servers and terminals) in a network communication mode.
Taking a blockchain system as an example of a distributed system, referring to fig. 19, fig. 19 is an optional structural schematic diagram of the distributed system 110 applied to the blockchain system provided in the embodiments of the present application. The system is formed by a plurality of nodes 1101 (computing devices of any form in the access network, such as servers and user terminals) and a client 1102; a Peer-to-Peer (P2P) network is formed between the nodes, and the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any machine, such as a server or a terminal, can join and become a node; a node comprises a hardware layer, an intermediate layer, an operating system layer and an application layer.
Referring to the functions of each node in the blockchain system shown in fig. 19, the functions involved include:
1) routing, the basic function that a node has, is used to support communication between nodes.
Besides the routing function, the node may also have the following functions:
2) Application: deployed in the blockchain to implement specific services according to actual service requirements; data related to the implemented functions is recorded to form record data, a digital signature is carried in the record data to indicate the source of the task data, and the record data is sent to other nodes in the blockchain system, so that the other nodes add the record data to a temporary block when the source and integrity of the record data are verified successfully.
For example, the services implemented by the application include:
2.1) Wallet: provides electronic money transaction functions, including initiating a transaction (i.e., sending the transaction record of the current transaction to other nodes in the blockchain system; after the other nodes verify it successfully, the record data of the transaction is stored in a temporary block of the blockchain as acknowledgement that the transaction is valid). The wallet also supports querying the electronic money remaining at an electronic money address;
2.2) Shared ledger: provides functions such as storage, query and modification of account data; record data of operations on the account data is sent to other nodes in the blockchain system, and after the other nodes verify its validity, the record data is stored in a temporary block as acknowledgement that the account data is valid, and a confirmation may be sent to the node that initiated the operation;
2.3) Smart contract: a computerized agreement that can enforce the terms of a contract, implemented by code deployed on the shared ledger and executed when certain conditions are met, used to complete automated transactions according to actual business requirements, such as querying the logistics status of goods purchased by a buyer and transferring the buyer's electronic money to the merchant's address after the buyer signs for the goods. Smart contracts are not limited to contracts for trading; they may also execute contracts that process received information.
3) Blockchain: comprises a series of blocks that are connected to one another in the chronological order in which they were generated; a new block cannot be removed once it has been added to the blockchain, and the record data submitted by nodes in the blockchain system is recorded in the blocks.
The target part action types and the corresponding target numbers, the target object, and the target image in this embodiment may be stored in the shared ledger of the blockchain through the nodes, and the computer device (e.g., a terminal or a server) may acquire the target part action types and the corresponding target numbers, the target object, and the target image based on the data stored in the shared ledger.
Referring to fig. 20, fig. 20 is an optional schematic diagram of a Block Structure (Block Structure) provided in this embodiment, where each Block includes a hash value of a transaction record stored in the Block (hash value of the Block) and a hash value of a previous Block, and the blocks are connected by the hash value to form a Block chain. The block may include information such as a time stamp at the time of block generation. A block chain (Blockchain), which is essentially a decentralized database, is a string of data blocks associated by using cryptography, and each data block contains related information for verifying the validity (anti-counterfeiting) of the information and generating a next block.
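As a toy illustration of the hash linking described for fig. 20 (not the actual storage format of the embodiment), each block can carry the hash of its record data together with the hash of the previous block:

import hashlib
import json
import time

def make_block(record_data, prev_hash):
    # A block holds its record data, the previous block's hash, a timestamp and
    # its own hash computed over the records and the previous hash.
    block = {"records": record_data, "prev_hash": prev_hash, "timestamp": time.time()}
    payload = json.dumps(record_data, sort_keys=True) + prev_hash
    block["hash"] = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return block

genesis = make_block(["target part action types and target numbers"], "0" * 64)
second = make_block(["target image record"], genesis["hash"])
print(second["prev_hash"] == genesis["hash"])  # True: blocks are chained by hash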
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any of the shooting methods provided in the embodiments of the present application, beneficial effects that can be achieved by any of the shooting methods provided in the embodiments of the present application can be achieved, and detailed descriptions thereof are omitted here for the sake of detail in the foregoing embodiments.
The foregoing describes in detail a photographing method, apparatus, computer device and storage medium provided by embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the description of the foregoing embodiments is only used to help understand the method and core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (15)

1. A photographing method, characterized by comprising:
displaying an image shooting page, wherein the image shooting page comprises a real-time preview picture;
identifying the part action in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action type;
and triggering image shooting when detecting that at least two target part action types exist in the part action type set and the quantity of each target part action type exceeds the corresponding target quantity.
2. The method of claim 1, further comprising:
displaying a shooting setting page, wherein the shooting setting page comprises a part action setting control;
and determining at least two target part action types for triggering shooting and the target number corresponding to each target part action type based on the setting operation aiming at the part action setting control.
3. The method of claim 2, wherein the image shooting page further comprises a shooting setting control, and the displaying the shooting setting page comprises:
when an operation for a shooting setting control on the image shooting page is detected, the shooting setting page is displayed.
4. The method according to claim 2, wherein the part motion setting control comprises at least two part motion types to be selected and a part motion number setting control corresponding to each part motion type to be selected,
the determining, based on the setting operation for the part motion setting control, at least two target part motion types that trigger shooting and the number of targets corresponding to each target part motion type includes:
and determining the action type of the part to be selected as a target part action type and the target quantity corresponding to the target part action type based on the setting operation of the part action quantity setting control corresponding to the action type of the part to be selected, and obtaining at least two target part action types and the target quantity corresponding to each target part action type.
5. The method according to claim 4, wherein the shooting setting page further includes a setting determination control,
the determining, based on the setting operation of the part action number setting control corresponding to the to-be-selected part action type, that the to-be-selected part action type is the target part action type and the target number corresponding to the target part action type, to obtain at least two target part action types and the target number corresponding to each target part action type, comprises:
displaying, based on the setting operation of the part action number setting control corresponding to the to-be-selected part action type, the set number corresponding to the to-be-selected part action type;
when the trigger operation aiming at the setting determination control is detected, determining that the action type of the part to be selected is the action type of the target part and determining that the set number is the target number of the action type of the target part, and obtaining at least two action types of the target part and the target number corresponding to each action type of the target part.
6. The method according to claim 5, wherein when the trigger operation for the setting determination control is detected, determining that the to-be-selected part action type is a target part action type and determining that the set number is a target number of the target part action types to obtain at least two target part action types and a target number corresponding to each target part action type comprises:
when the trigger operation aiming at the setting determination control is detected, acquiring the setting number corresponding to the action type of the part to be selected;
when the set number of the action types of the parts to be selected is larger than the preset number, determining that the action types of the parts to be selected are target part action types and the set number is the target number corresponding to the action types of the target parts, and obtaining at least two target part action types and the target number corresponding to each target part action type.
7. The method according to claim 2, wherein the determining at least two target part action types for triggering shooting and the target number corresponding to each target part action type based on the setting operation for the part action setting control comprises:
determining at least two target part action types based on the setting operation of the part action setting control;
and detecting the number of the objects in the real-time preview picture so as to set the number of the objects as the number of the targets corresponding to each target part action type.
8. The method of claim 2, wherein the shot settings page further comprises a target object number control, the method further comprising:
determining the number of the target objects based on the setting operation aiming at the target object number control;
when it is detected that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, image shooting is triggered, and the method comprises the following steps:
and triggering image shooting when detecting that at least two target part action types exist in the part action type set, the number of the target objects is matched with the number of all the target part action types, and the number of each target part action type exceeds the corresponding target number.
9. The method of claim 1, wherein the image shooting page further comprises an object part action setting control, the method further comprising:
displaying a candidate part action list of an object in the real-time preview picture based on a trigger operation on the object part action setting control;
when the selection operation of the candidate part action list aiming at the object is detected, the object is used as a target object for triggering shooting, the selected action type is used as a target part action type corresponding to the target object, and the target number corresponding to each target part action type is determined.
When it is detected that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, image shooting is triggered, and the method comprises the following steps:
and triggering image shooting when detecting that at least two target part action types exist in the part action type set, the object of each target part action type is the corresponding target object, and the number of each target part action type exceeds the corresponding target number.
10. The method according to claim 9, wherein the displaying a candidate part action list of the object in the real-time preview picture based on a trigger operation on the object part action setting control comprises:
displaying a candidate part action display control of the object based on the trigger operation on the object part action setting control;
when a determination operation for a candidate part action display control of an object is detected, a candidate part action list of the object is displayed.
11. The method of claim 1, wherein the part action set comprises a part action type to which a part action belongs and a part action value of the part action, wherein the part action value represents a probability that the part action belongs to a standard part action corresponding to the part action type,
when it is detected that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, image shooting is triggered, and the method comprises the following steps:
determining a candidate part action type corresponding to the target part action type from the part action type set;
when the part motion value of the candidate part motion type is matched with the target threshold value of the target part motion type, determining the candidate part motion type as the target part motion type;
when at least two target part action types exist and the number of each target part action type exceeds the corresponding target number, image shooting is triggered.
12. The method according to claim 1, wherein when it is detected that at least two target part motion types exist in the part motion type set and the number of each target part motion type exceeds the corresponding target number, after triggering image capturing, the method comprises:
shooting at least two candidate images;
based on the target part action type, identifying the candidate images to obtain an actual part action type and an actual number corresponding to the actual part action type;
comparing the actual part action type and the corresponding actual quantity of each candidate image with the target part action type and the corresponding target quantity respectively to obtain a recognition difference result;
and determining a target image from the candidate images according to the identification difference result.
13. A shooting apparatus, comprising:
the shooting page display module is used for displaying an image shooting page, and the image shooting page comprises a real-time preview picture;
the identification module is used for identifying the part action in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action type;
and the shooting module is used for triggering image shooting when at least two target part action types exist in the part action type set and the quantity of each target part action type exceeds the corresponding target quantity.
14. A storage medium, characterized in that it stores a plurality of computer programs adapted to be loaded by a processor for performing the steps of the method according to any one of claims 1 to 12.
15. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method according to any of claims 1 to 12 are implemented when the computer program is executed by the processor.
CN202010092824.1A 2020-02-14 2020-02-14 Shooting method, shooting device, computer equipment and storage medium Active CN112752016B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010092824.1A CN112752016B (en) 2020-02-14 2020-02-14 Shooting method, shooting device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010092824.1A CN112752016B (en) 2020-02-14 2020-02-14 Shooting method, shooting device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112752016A true CN112752016A (en) 2021-05-04
CN112752016B CN112752016B (en) 2023-06-16

Family

ID=75645147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010092824.1A Active CN112752016B (en) 2020-02-14 2020-02-14 Shooting method, shooting device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112752016B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009065382A (en) * 2007-09-05 2009-03-26 Nikon Corp Imaging apparatus
JP2010034685A (en) * 2008-07-25 2010-02-12 Nikon Corp Digital camera
US20190109991A1 (en) * 2011-11-17 2019-04-11 Samsung Electronics Co., Ltd. Method and apparatus for self camera shooting
CN103209304A (en) * 2012-01-16 2013-07-17 卡西欧计算机株式会社 Imaging device and imaging method
CN103312945A (en) * 2012-03-07 2013-09-18 华晶科技股份有限公司 Image pickup device and image pickup method thereof, and figure recognition photo-taking system
US20180211693A1 (en) * 2015-08-03 2018-07-26 Sony Corporation Information processing system, information processing method, and recording medium
CN107911614A (en) * 2017-12-25 2018-04-13 腾讯数码(天津)有限公司 A kind of image capturing method based on gesture, device and storage medium
CN108307116A (en) * 2018-02-07 2018-07-20 腾讯科技(深圳)有限公司 Image capturing method, device, computer equipment and storage medium
CN110337806A (en) * 2018-05-30 2019-10-15 深圳市大疆创新科技有限公司 Group picture image pickup method and device
CN109005336A (en) * 2018-07-04 2018-12-14 维沃移动通信有限公司 A kind of image capturing method and terminal device
CN109343764A (en) * 2018-07-18 2019-02-15 奇酷互联网络科技(深圳)有限公司 The method, apparatus of mobile terminal and control operation control
CN109194879A (en) * 2018-11-19 2019-01-11 Oppo广东移动通信有限公司 Photographic method, device, storage medium and mobile terminal
CN109348135A (en) * 2018-11-21 2019-02-15 Oppo广东移动通信有限公司 Photographic method, device, storage medium and terminal device

Also Published As

Publication number Publication date
CN112752016B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
CN108229369B (en) Image shooting method and device, storage medium and electronic equipment
CN110012210B (en) Photographing method and device, storage medium and electronic equipment
WO2016054989A1 (en) Method and device for establishing photographing template database and providing photographing recommendation information
CN108512670B (en) Group creation method and terminal thereof
CN108875452A (en) Face identification method, device, system and computer-readable medium
US20190332854A1 (en) Hybrid deep learning method for recognizing facial expressions
CN108198177A (en) Image acquiring method, device, terminal and storage medium
CN105704386B (en) A kind of image acquiring method, electronic equipment and electronic device
CN108525305A (en) Image processing method, device, storage medium and electronic equipment
CN108198130A (en) Image processing method, device, storage medium and electronic equipment
CN111652601B (en) Virtual article issuing and receiving method and device
US11783192B2 (en) Hybrid deep learning method for recognizing facial expressions
CN106375193A (en) Remote group photographing method
CN109474785A (en) The focus of electronic device and electronic device tracks photographic method
CN108156385A (en) Image acquiring method and image acquiring device
CN104901939B (en) Method for broadcasting multimedia file and terminal and server
CN112989922A (en) Face recognition method, device, equipment and storage medium based on artificial intelligence
CN113610953A (en) Information processing method and device and computer readable storage medium
CN108600604A (en) Image pickup method, dual-screen mobile terminal and storage medium
Azhaguraj et al. Smart attendance marking system using face recognition
CN112752016B (en) Shooting method, shooting device, computer equipment and storage medium
CN115396715B (en) Table game interaction method, system and storage medium
CN111461005B (en) Gesture recognition method and device, computer equipment and storage medium
CN108898169A (en) Image processing method, picture processing unit and terminal device
CN111711753B (en) Photo uploading method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40043508

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant