CN112752016B - Shooting method, shooting device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112752016B
CN112752016B (application CN202010092824.1A)
Authority
CN
China
Prior art keywords: target, part action, action type, action, shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010092824.1A
Other languages
Chinese (zh)
Other versions
CN112752016A (en)
Inventor
覃华峥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority application: CN202010092824.1A
Publication of CN112752016A (application publication)
Application granted
Publication of CN112752016B (granted publication)
Legal status: Active (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a shooting method, a shooting apparatus, a computer device, and a storage medium. An image shooting page containing a real-time preview picture is displayed; part actions in the real-time preview picture are recognized to obtain a part action type set containing the recognized part action types; image shooting is triggered when it is detected that at least two target part action types exist in the set and the number of each target part action type exceeds its corresponding target number. With this scheme, image shooting can be triggered automatically once the real-time preview picture meets the preset shooting conditions, significantly improving image shooting efficiency.

Description

Shooting method, shooting device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of image recognition technologies, and in particular, to a shooting method, a shooting device, a computer device, and a storage medium.
Background
During image shooting, a user can judge the content of the camera's real-time preview picture and trigger shooting accordingly. If the user wants to capture only content that meets expectations, the user must watch the content of the real-time preview picture and manually trigger shooting at the moment that content appears. In the course of researching and practicing the prior art, the inventors of the present application found that the prior art suffers from low shooting efficiency.
Disclosure of Invention
The embodiment of the application provides a shooting method, a shooting device, computer equipment and a storage medium, which can remarkably improve shooting efficiency.
The embodiment of the application provides a shooting method, which comprises the following steps:
displaying an image shooting page, wherein the image shooting page comprises a real-time preview picture;
identifying part actions in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action types;
when detecting that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, triggering image shooting.
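As an informal sketch (not the patented implementation; the function name `should_trigger` and data shapes are assumptions for illustration), the trigger condition in the last step can be expressed as a count check over the recognized part action types:

```python
from collections import Counter

def should_trigger(recognized_types, targets):
    """Check the preset shooting condition: at least two target part action
    types are configured, and the recognized count of each one exceeds its
    corresponding target number.

    recognized_types: list of part action type labels found in the preview
    targets: dict mapping target part action type -> target number
    """
    counts = Counter(recognized_types)
    # Counter returns 0 for missing types, so absent target types fail the check
    return len(targets) >= 2 and all(counts[t] > n for t, n in targets.items())
```

For instance, with `targets = {"smile": 1, "salute": 0}`, shooting triggers once at least two smiles and at least one salute are recognized in the preview.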
Accordingly, an embodiment of the present application provides a photographing apparatus, including:
the shooting page display module is used for displaying an image shooting page, and the image shooting page comprises a real-time preview picture;
the identification module is used for identifying the part actions in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action types;
and the shooting module is used for triggering image shooting when detecting that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number.
In some embodiments of the present application, the photographing apparatus further includes:
the setting page display module is used for displaying a shooting setting page, and the shooting setting page comprises a part action setting control;
and the target determining module is used for determining at least two target part action types for triggering shooting and the corresponding target quantity of each target part action type based on the setting operation of the part action setting control.
In some embodiments of the present application, the image capturing page further includes a capturing setting control, and the setting page display module is configured to:
when an operation for a photographing setting control on an image photographing page is detected, the photographing setting page is displayed.
In some embodiments of the present application, the part action setting control includes at least two candidate part action types and a part action number setting control corresponding to each candidate part action type.
The target determining module includes:
a target determining submodule, configured to determine, based on a setting operation on the part action number setting control corresponding to a candidate part action type, that the candidate part action type is a target part action type, and to determine the target number corresponding to that target part action type, thereby obtaining at least two target part action types and the target number corresponding to each.
In some embodiments of the present application, the shooting setting page further includes a setting determination control, and the target determining submodule includes:
a set number display unit, configured to display the set number corresponding to a candidate part action type based on a setting operation on the part action number setting control corresponding to that type;
and a target determining unit, configured to, when a trigger operation on the setting determination control is detected, determine that the candidate part action type is a target part action type and that the set number is the target number of that target part action type, thereby obtaining at least two target part action types and the target number corresponding to each.
In some embodiments of the present application, the target determining unit is configured to:
when a trigger operation on the setting determination control is detected, acquire the set number corresponding to the candidate part action type;
when the set number is greater than a preset number, determine that the candidate part action type is a target part action type and that the set number is its corresponding target number, thereby obtaining at least two target part action types and the target number corresponding to each.
In some embodiments of the present application, the target determining module includes:
a target type determining submodule, configured to determine at least two target part action types based on a setting operation on the part action setting control;
and a target number determining submodule, configured to detect the number of objects in the real-time preview picture and set that number as the target number corresponding to each target part action type.
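A minimal sketch of this embodiment (hypothetical names; in practice the object count would come from a face or person detector running on the preview): the detected number of objects becomes the target number of every target part action type, e.g. "everyone in frame must smile".

```python
def targets_from_object_count(target_types, num_objects):
    """Use the number of objects detected in the real-time preview picture
    as the target number of each target part action type."""
    return {t: num_objects for t in target_types}
```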
In some embodiments of the present application, the shooting setting page further includes a target object number control, and the shooting device further includes:
a target object number determining module, configured to determine the number of target objects based on a setting operation on the target object number control;
in this case, the photographing module is configured to: trigger image shooting when at least two target part action types exist in the part action type set, the number of target objects matches the number of all target part action types, and the number of each target part action type exceeds its corresponding target number.
In some embodiments of the present application, the image shooting page further includes an object part action setting control, and the shooting device further includes:
a candidate list display module, configured to display a candidate part action list of an object in the real-time preview picture based on a triggering operation on the object part action setting control;
and a determining module, configured to, when a selection operation on the object's candidate part action list is detected, take the object as a target object for triggering shooting, take the selected action type as a target part action type corresponding to the target object, and determine the target number corresponding to each target part action type.
In this case, the photographing module is configured to: trigger image shooting when detecting that at least two target part action types exist in the part action type set, each target part action type belongs to its corresponding target object, and the number of each target part action type exceeds its corresponding target number.
In some embodiments of the present application, the candidate list display module is configured to:
display a candidate part action display control of an object based on a triggering operation on the object part action setting control;
and display the candidate part action list of the object when a determination operation on the object's candidate part action display control is detected.
In some embodiments of the present application, the part action type set includes the part action type to which each part action belongs and a part action value of the part action, the part action value characterizing the probability that the part action matches the standard part action corresponding to that type. The photographing module includes:
a candidate type determining submodule, configured to determine, from the part action type set, candidate part action types corresponding to the target part action types;
a target type determining submodule, configured to determine that a candidate part action type is a target part action type when its part action value matches the target threshold of that target part action type;
and a shooting submodule, configured to trigger image shooting when at least two target part action types exist and the number of each target part action type exceeds its corresponding target number.
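To illustrate the threshold matching described above (a hedged sketch; the `detections` shape and the ">= threshold" semantics are assumptions, as the patent only says the value "matches" the target threshold), each recognized action carries a part action value and only counts toward a target type when the value reaches that type's target threshold:

```python
def count_matching_actions(detections, target_thresholds):
    """detections: list of (part_action_type, part_action_value) pairs, where
    the value is the probability that the action matches the standard action
    of that type.
    target_thresholds: dict mapping target part action type -> target threshold.
    Returns per-type counts of detections whose value meets the threshold."""
    counts = {t: 0 for t in target_thresholds}
    for action_type, value in detections:
        threshold = target_thresholds.get(action_type)
        # Skip non-target types and detections below the type's threshold
        if threshold is not None and value >= threshold:
            counts[action_type] += 1
    return counts
```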
In some embodiments of the present application, the photographing apparatus further includes:
a candidate image shooting module, configured to shoot at least two candidate images;
a candidate image recognition module, configured to recognize each candidate image based on the target part action types to obtain the actual part action types and their corresponding actual numbers;
a comparison module, configured to compare the actual part action types and corresponding actual numbers of each candidate image with the target part action types and corresponding target numbers to obtain a recognition difference result;
and a target image determining module, configured to determine a target image from the candidate images according to the recognition difference result.
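The candidate image selection above can be sketched as a minimal scoring scheme (hypothetical; the patent does not specify the difference metric, so a sum of absolute count differences is assumed): each candidate image is scored by how far its actual per-type counts differ from the target counts, and the closest image is kept.

```python
def recognition_difference(actual_counts, targets):
    """Sum of absolute differences between actual and target counts
    across all target part action types."""
    return sum(abs(actual_counts.get(t, 0) - n) for t, n in targets.items())

def pick_target_image(candidates, targets):
    """candidates: list of (image_id, actual_counts) pairs for shot images.
    Returns the image_id whose recognition result differs least from the
    target part action types and target numbers."""
    return min(candidates, key=lambda c: recognition_difference(c[1], targets))[0]
```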
Accordingly, the present embodiments also provide a storage medium storing a computer program adapted to be loaded by a processor to perform any of the photographing methods provided in the embodiments of the present application.
Correspondingly, the embodiment of the application also provides computer equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes any shooting method provided by the embodiment of the application when executing the computer program.
In the embodiments of the present application, an image shooting page including a real-time preview picture is first displayed; part actions in the real-time preview picture are then recognized to obtain a part action type set containing the recognized part action types; finally, image shooting is triggered when at least two target part action types exist in the set and the number of each target part action type exceeds its corresponding target number. The scheme can automatically recognize part actions in the real-time preview picture and automatically trigger image shooting when they meet the preset shooting conditions, significantly improving image shooting efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of a shooting method according to an embodiment of the present application;
Fig. 2 is a flow chart of the shooting method provided in an embodiment of the present application;
Fig. 3 is a schematic view of a partial page operation of the shooting method according to an embodiment of the present application;
Fig. 4 is another schematic view of a partial page operation of the shooting method according to an embodiment of the present application;
Fig. 5 is another schematic view of a partial page operation of the shooting method according to an embodiment of the present application;
Fig. 6 is another schematic view of a partial page operation of the shooting method according to an embodiment of the present application;
Fig. 7 is another schematic view of a partial page operation of the shooting method according to an embodiment of the present application;
Fig. 8 is another schematic view of a partial page operation of the shooting method according to an embodiment of the present application;
Fig. 9 is another schematic view of a partial page operation of the shooting method according to an embodiment of the present application;
Fig. 10 is another flow chart of the shooting method provided in an embodiment of the present application;
Fig. 11 is another schematic view of a partial page operation of the shooting method provided in an embodiment of the present application;
Fig. 12 is another schematic view of a partial page operation of the shooting method provided in an embodiment of the present application;
Fig. 13 is a general flowchart of the shooting method provided in an embodiment of the present application;
Fig. 14 is a partial flowchart of the shooting method according to an embodiment of the present application;
Fig. 15 is a schematic structural diagram of a shooting apparatus according to an embodiment of the present application;
Fig. 16 is another schematic structural diagram of a shooting apparatus according to an embodiment of the present application;
Fig. 17 is another schematic structural diagram of a shooting apparatus according to an embodiment of the present application;
Fig. 18 is a schematic diagram of a computer device according to an embodiment of the present application;
Fig. 19 is a schematic diagram of an alternative architecture of a distributed system 110 applied to a blockchain system provided by an embodiment of the present application;
Fig. 20 is an alternative schematic diagram of a block structure according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The embodiments of the present application provide a shooting method, a shooting apparatus, a computer device, and a storage medium. Specifically, the embodiments may be integrated in a first shooting device and a second shooting device. The first shooting device may be integrated in a first computer device, which may be an electronic device such as a terminal or a server; the terminal may be an electronic device capable of image shooting, such as a camera, video camera, smartphone, tablet computer, notebook computer, or personal computer, and the server may be a single server or a server cluster.
The second shooting device may be integrated in a second computer device, which likewise may be a terminal or a server of the kinds listed above; the server may be a web server, an application server, a data server, or the like.
In the following, the shooting method is described by taking the first computer device as a terminal and the second computer device as a server as an example.
As shown in fig. 1, the terminal and the server interact. This embodiment describes the shooting system taking as an example a first shooting device integrated on the terminal 10 and a second shooting device integrated on the server 20.
Specifically, the terminal 10 may display an image shooting page including a real-time preview picture, then recognize part actions in the real-time preview picture to obtain a part action type set containing the recognized part action types; finally, when at least two target part action types exist in the set and the number of each exceeds its corresponding target number, the terminal 10 receives a shooting instruction sent by the server 20 and triggers image shooting.
Specifically, the server 20 may store the part action type set, the preset shooting conditions (namely, that at least two target part action types exist and the number of each exceeds its corresponding target number), the target part action types, and the target number corresponding to each target part action type, all transmitted by the terminal 10, and may determine the target threshold corresponding to each target part action type. The server 20 may monitor the part action type set and, when detecting that it satisfies the preset shooting conditions, send a shooting instruction to the terminal 10 to trigger image shooting.
The following will describe in detail. Note that the order of description of the following embodiments is not limited to the order of the embodiments.
The following embodiment is described from the perspective of the first shooting device, which may be integrated in a terminal capable of image shooting.
The shooting method provided by the embodiment of the present application may be executed by a processor of the terminal. As shown in fig. 2, the flow of the shooting method may be as follows:
201. Display an image shooting page, where the image shooting page includes a real-time preview picture.
In the embodiments of the present application, an electronic device with a shooting function is used to obtain an image. A single shot may produce one picture, or a video composed of multiple images. The content of the image is determined by what the electronic device collects at the moment of shooting, and may be anything the device's shooting component can recognize and capture within its range, such as a person's expression, the moon, or rain. The image may be obtained out of the user's personal interest or for work, or for a specific purpose, for example as material in an authentication scenario.
The image shooting page may be displayed on any computer device capable of image shooting. The computer device may be an electronic device with a built-in shooting component, such as a camera, smartphone, tablet computer, or notebook computer, or an electronic device connected to a shooting component by wired or wireless transmission, such as a server or personal computer with an external camera.
On the image shooting page, shooting conditions may be viewed and adjusted. The shooting conditions may be shooting parameters such as contrast, saturation, or resolution; settings of an auxiliary shooting tool, such as a flash or reference lines; an auxiliary shooting function, such as a shooting shortcut key or automatic shooting; or a shooting mode, such as portrait mode, landscape mode, or professional mode. Specifically, the shooting conditions may be viewed directly on the image shooting page, or by triggering a shooting-condition control set correspondingly on the page. They may be adjusted automatically by the electronic device according to the actual shooting scene, or set manually by the user based on the user's actual needs.
The image capturing page may further include a control for triggering the capturing, and when an operation for the control for triggering the capturing is detected, the image capturing may be performed.
The real-time preview picture is one of the contents that can be displayed on the image shooting page. It shows, in real time and before the shooting operation, the content collected by the shooting component, so that this content can be observed and judged to determine the best shooting moment. When the shooting conditions are adjusted, the content of the real-time preview picture changes with them, so that the effect of the change can be observed. For example, when shooting a portrait, switching to a beautifying mode lets the user observe the retouched portrait in the preview picture; increasing the shooting brightness makes the content of the real-time preview picture visibly brighter, possibly even glaringly so.
For example, user Wang uses camera software X on a mobile terminal to photograph AA and BB. When Wang opens X, X displays an image shooting page that includes a real-time preview picture, and the preview picture shows in real time the content containing AA and BB collected by the shooting component of Wang's mobile terminal.
For another example, when a user logs in to website Y, which verifies identity through a self-timer video, an image shooting page is displayed once the web address corresponding to Y is opened. The image shooting page includes a real-time preview picture, which displays the user's head and face as collected by the electronic device.
202. Identify the part actions in the real-time preview picture to obtain a part action type set, where the part action type set includes the recognized part action types.
A part action is content displayed in real time in the real-time preview picture: an action of the object to be shot. The object to be shot may be a person, and the part action may be a local action of the object, such as stretching out a hand, lifting a leg, raising a hand, or bending over; a facial action, such as smiling, opening the eyes, or raising the head; a hand action, such as forming a V with the fingers, making a fist, or expressing different values through the number and order of extended fingers; or a part action reflecting the object's posture, such as the stance when standing (e.g. the angle between the feet or whether the shoulders are level) or the posture when sitting (e.g. crossed or closed legs). A part action may also be displayed through the cooperation of at least two objects to be shot, such as two people forming a heart shape by bending, several people forming a hexagon with their arms, or a triangle formed by several people stacked together. Part actions may also be other specific part actions, such as a character, a horse, a gift, or a sneeze.
The object to be shot may also be another living thing in a changing state whose desired moment cannot be directly and quickly captured by the photographer through the electronic device. If the object is an animal, the part actions may be local actions, facial actions, or limb actions of the animal, such as pricked-up ears, a raised paw, or a wagging tail, or interactive actions of at least two animals, such as two birds touching beaks to pass food (a parent bird feeding its chick). If the object is a plant or another living thing other than a human or animal, the part action may be the action of reaching a certain state or undergoing a certain process, such as a flower blooming, a carnivorous plant catching mosquitoes, or cell division.
A part action is an action of the object to be shot, but in recognizing the part action it is not always necessary to determine which object it belongs to. For example, if the part action is a raised hand and the user only wants to shoot the raised hand, there is no need to determine which object the hand belongs to; only the raised hand needs to be recognized. If, however, the part action is a smile and the user wants to capture the smile of a specific object, then both the smile and the object it belongs to must be recognized, and so on.
Part actions can be recognized with common recognition algorithms, such as principal component analysis or histogram-of-oriented-gradients feature extraction, yielding the part action type to which each part action belongs.
The types and number of part actions to be recognized can be set according to actual requirements, and may be configured by the electronic device or by the user. For example, the user may select, based on the user's own needs, the part actions to be recognized from those provided by the electronic device. Alternatively, the user may input images or image sequences of standard part actions for the device to recognize their key features, or directly input the key features of standard part actions; the device then takes those standard part actions as the part actions to be recognized and uses the key features as reference standards during recognition. Configuring the types and number of part actions to be recognized need not be completed at one time; it may be a process of continuous updating according to user requirements and the progress of the recognition technology on the device side.
Recognizing the part actions yields a part action type set, which is the database against which the preset shooting condition is judged. It is the basis of the technical solution of this embodiment and the key to whether efficient automatic shooting can be performed.
For example, application X recognizes the part actions of AA and BB in the real-time preview picture, acquired by user Wang and displayed on the image shooting page, and obtains a part action type set, which may include salute 1, salute 2, salute 3, shoulders-level 1, shoulders-level 2, head-up 1, head-up 2, smile 1, and smile 2.
For another example, website Y recognizes the part actions in the image sequence of the real-time preview picture to obtain a part action type set, where the part actions to be recognized may be preset by website Y, and the resulting part action types may include blinking, waving, opening the mouth, raising the head, and the like.
203. When detecting that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, triggering image shooting.
The target part action type is a part action type, determined by the user or the electronic device, that can trigger image shooting; the target number is the number of instances of a target part action type that must be reached before image shooting is triggered.
Specifically, the target part action types and the corresponding target numbers can be set in various ways. They can be set by the user: for example, the user may enter the settings page of the camera in the electronic device before image shooting and set the target part action types and the corresponding target numbers there; the user may also set them directly on the settings page of the electronic device; or the user may set them through a setting area or setting control for the target part action types and target numbers on the image shooting page.
They can also be set by the electronic device. The electronic device may set a single fixed combination of target part action types and target numbers, may set them flexibly based on actual shooting conditions, or may set them based on the shooting mode or shooting conditions selected by the user. For example, a shooting mode selected by the user may imply that multiple target part action types exist and that the number of each must meet a corresponding target number, in which case the target part action types and target numbers are determined from the shooting mode.
The target part action type corresponds to a standard part action, and the same part action type can correspond to different target part action types. For example, if the part action type is smile, the target part action types may be smile, laugh, and the like, where the standard part action for smile differs from the standard part action for laugh.
When it is detected that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, image shooting is triggered. This process is recognized and shot automatically by the electronic device, without the user having to observe the content of the real-time preview picture and choose the shooting moment, so shooting efficiency is significantly improved. In addition, there is a user reaction time between observing content that meets the expected condition and manually triggering shooting, whereas for the electronic device there is only program running time between detecting content that meets the preset shooting condition and automatically triggering shooting. Since user reaction time is far longer than program running time for any user, the image accuracy of automatic shooting by the electronic device is significantly higher than that of manual shooting by the user, where image accuracy can be understood as the degree to which the content of the actually shot image matches the content meeting the expected condition (the content meeting the preset shooting condition).
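The trigger condition of step 203 can be sketched as a small check. This is an illustration, not the claimed implementation: the names are hypothetical, and whether a count must reach or strictly exceed the target number is an assumption (the worked examples in this document trigger at equality, so `>=` is used here):

```python
from collections import Counter

def should_trigger(detected_types, targets):
    """detected_types: action-type labels recognized in the current preview
    frame (the part action type set, with repeats).
    targets: mapping of target part action type -> target number.
    Trigger only when at least two target types are configured and every
    target type's count meets its target number."""
    if len(targets) < 2:
        return False
    counts = Counter(detected_types)
    return all(counts[t] >= n for t, n in targets.items())
```

For instance, with the preset condition of 2 salutes and 2 head-ups, a frame containing two salutes and two head-ups triggers shooting, while a frame with only one of each does not.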
In addition to detecting and judging the part action types at the terminal, these operations may be performed at a server. Specifically, the part action type set, the target part action types, and the corresponding target numbers may be transmitted to the server, and the server detects whether at least two target part action types exist in the part action type set and whether the number of each exceeds the corresponding target number. If both conditions are satisfied, the server sends a shooting instruction to the terminal, and the terminal triggers image shooting upon receiving it.
For example, user Wang sets the preset shooting condition in application X as 2 salutes and 2 head-ups. Application X searches the part action type set for the target part action types in the preset shooting condition (namely head-up and salute), determines that salute 1, salute 2, head-up 1, and head-up 2 in the set are target part action types, that is, 2 head-ups and 2 salutes, and therefore automatically triggers image shooting.
For another example, website Y informs the user, by voice prompt or video demonstration, that the target part action types are mouth opening, no blinking, and head raising, with target numbers of 1 mouth opening, 1 no blinking, and 1 head raising. When website Y detects the target part action types mouth opening, no blinking, and head raising in the part action type set recognized from the image sequence, shooting of the image sequence is automatically triggered, that is, the image sequence is automatically stored, and the user is prompted that identity verification has passed and the homepage of website Y can be accessed.
In one embodiment, the part action type set includes the part action type to which a part action belongs and a part action value for the part action, the part action value characterizing the probability that the part action matches the standard part action corresponding to that part action type.
when detecting that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, triggering image shooting can comprise the following steps:
determining candidate part action types corresponding to the target part action types from the part action type set;
when the part action value of the candidate part action type is matched with the target threshold value of the target part action type, determining the candidate part action type as the target part action type;
when at least two target part action types exist and the number of each target part action type exceeds the corresponding target number, triggering image shooting.
A part action in the real-time preview picture can correspond to a target part action type, and the target part action type corresponds to a unique standard part action, which is the ideal expression of that type. For example, for the target part action type salute, the standard part action may be that the right palm forms a plane with the middle fingertip level with the eyes, whereas an actual salute in the real-time preview picture may be that the right palm forms a curved surface with the middle fingertip level with the cheekbones. When the part action is recognized, the actual salute can be evaluated according to the degree of curvature of the surface formed by the right palm and the distance between the cheekbones and the eyes, yielding the probability that the actual salute counts as a standard salute; this probability can be expressed numerically, namely as the part action value.
Determining the candidate part action types corresponding to the target part action types from the part action type set removes the part action types that are irrelevant to the target part action types. Compared with determining the target part action types from the whole part action type set, determining them from the candidate part action types significantly reduces the number of comparisons between part action values and target thresholds.
And when the part action value of the candidate part action type is matched with the target threshold value of the target part action type, determining the candidate part action type as the target part action type.
In a practical application scenario, a part action that approximates the standard part action can be regarded as the standard part action, and the target threshold can be regarded as the minimum part action value at which a part action of that type still counts as the standard part action. The part action value of a candidate part action type then matches the target threshold of the target part action type when it is not less than that threshold.
It should be noted that, if the comparison standard or the numerical representation changes, the interpretation of the target threshold may also change, as may what it means for the part action value of a candidate part action type to match the target threshold of the target part action type; this can be set flexibly in an actual scene and is not described further here.
When at least two target part action types exist and the number of each target part action type exceeds the corresponding target number, triggering image shooting.
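The threshold-matching variant above can be sketched as follows. This is an illustrative assumption, not the claimed implementation: action labels, threshold values, and the use of `>=` for "not less than the target threshold" follow the description in this embodiment:

```python
from collections import Counter

def confirm_target_actions(candidates, thresholds):
    """candidates: (action_type, action_value) pairs, where action_value is
    the probability that the action matches the standard action of its type.
    thresholds: target part action type -> minimum acceptable value.
    Keeps only candidates whose value matches (is not less than) the
    threshold of a target type."""
    return [(t, v) for t, v in candidates if t in thresholds and v >= thresholds[t]]

def should_trigger_with_values(candidates, thresholds, target_counts):
    """Trigger when at least two target types are configured and the count of
    confirmed actions of every target type meets its target number."""
    counts = Counter(t for t, _ in confirm_target_actions(candidates, thresholds))
    return len(target_counts) >= 2 and all(
        counts[t] >= n for t, n in target_counts.items())
```

In the salute example, a salute with value 0.6 against a threshold of 0.8 is filtered out before the counting step, which is exactly the saving described above.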
For example, user Wang may set on application X the target part action types and the target number for each; suppose the preset shooting condition is 2 salutes and 2 head-ups. Application X determines the target threshold for the salute type and compares it with the part action values of salute 1, salute 2, and salute 3, confirming from the comparison that salute 1 and salute 2 are target part action types; it likewise determines the target threshold for the head-up type and compares it with the part action values of head-up 1 and head-up 2, confirming that head-up 1 and head-up 2 are target part action types. Two target part action types (head-up and salute) thus exist in the part action type set, with 2 salutes and 2 head-ups, so the preset shooting condition is met and image shooting can be triggered.
In an embodiment, the photographing method may further include the steps of:
and displaying a shooting setting page, wherein the shooting setting page comprises a part action setting control, and determining at least two target part action types triggering shooting and the corresponding target quantity of each target part action type based on the setting operation of the part action setting control.
In this embodiment, the expression form of the control may be a button, an input box, or the like.
The part action setting control can determine at least two target part action types and the target number corresponding to each. The control may be an image input box: an image or image sequence containing at least two part action types is input into the box, the image or sequence is recognized, the at least two part action types are determined as target part action types, and the target number corresponding to each is confirmed.
The part action setting control may also be a text input box, into which identifiers of at least two target part action types (such as a smile identifier, the numeric identifier 15, or the symbol identifier #) and the target number for each can be entered. The target part action types and their target numbers may be entered together in one text box or separately in two; identifiers of different target part action types may be entered in sequence in one text box or group of text boxes, determining one target part action type and its target number at a time, or entered separately in at least two text boxes or groups, determining at least two target part action types and the target number for each at once.
Similarly, the part action setting control may be an audio input control: the terminal recognizes the speech and determines at least two target part action types and the target number corresponding to each. The specific display form and recognition method can be set flexibly according to the actual situation and are not described further here.
The part action setting control may also be a part action setting button: when the button is triggered, a dedicated part action setting area is displayed, and the at least two target part action types, the target number for each, and so on are determined through the controls, text, and the like in that area.
For example, referring to fig. 3, based on an input operation on the part action setting input box, the target part action types triggering shooting and their corresponding numbers can be determined: 2 heart gestures, 1 five-pointed star, and 5 leg lifts.
In an embodiment, the image capturing page further includes a capturing setting control, and the step of displaying the capturing setting page may include: when an operation for a photographing setting control on an image photographing page is detected, the photographing setting page is displayed.
When the image shooting page is displayed, it includes a shooting setting control; triggering the shooting setting control displays the shooting setting page.
The photographing setting page may include several setting controls for setting image photographing conditions and image photographing parameters, including a part motion setting control.
The shooting setting control may be a button, and triggering the button displays the shooting setting page; alternatively, the user may display the shooting setting page by sliding or a similar operation according to the prompt information expressed by an identifier.
For example, referring to fig. 4, a photographing setting control is included on an image photographing page, and when a trigger operation for the photographing setting control is detected, the photographing setting page is displayed.
In an embodiment, the location action setting control includes at least two types of location actions to be selected and a location action quantity setting control corresponding to each type of location actions to be selected, and the determining, based on a setting operation for the location action setting control, at least two types of target location actions to trigger shooting and a target quantity corresponding to each type of target location actions may include:
and determining the action type of the part to be selected as the action type of the target part and the target number corresponding to the action type of the target part based on the setting operation of the part action number setting control corresponding to the action type of the part to be selected, so as to obtain at least two target part action types and the target number corresponding to each target part action type.
In this embodiment, the part action setting control includes at least two to-be-selected part action types and a part action quantity setting control corresponding to each. The to-be-selected part action types are part action types, provided by the electronic device, that may serve as target part action types; when recognizing part actions in the real-time preview picture, the electronic device necessarily recognizes the part actions corresponding to all to-be-selected part action types. The to-be-selected part action types may be displayed in the form of identifiers (such as text or symbols) next to their corresponding quantity setting controls, so as to indicate which part action type is being set on each control.
The quantity setting control can set, for each to-be-selected part action type, a number not less than 0. Based on setting operations on the quantity setting controls corresponding to the to-be-selected part action types, at least two target part action types that trigger shooting and the target number corresponding to each are determined.
With this scheme, the target numbers can be set flexibly within the given range of target part action types, and the target part action types and their target numbers are determined simply by setting the numbers.
For example, referring to fig. 5, the to-be-selected part action types include leg lifting, eye opening, and five-pointed star. Based on the number 2 set on the quantity setting control corresponding to leg lifting, leg lifting is determined to be a target part action type with a target number of 2; based on the number 0 set on the control corresponding to eye opening, eye opening is determined not to be a target part action type; based on the number 1 set on the control corresponding to five-pointed star, five-pointed star is determined to be a target part action type with a target number of 1.
In an embodiment, the shooting setting page further includes a setting determination control, and the step of determining, based on a setting operation of the part action quantity setting control corresponding to the part action type to be selected, the part action type to be selected as the target part action type and the target quantity corresponding to the target part action type, to obtain at least two target part action types and the target quantity corresponding to each target part action type may include:
based on the setting operation of the part action quantity setting control corresponding to the part action type to be selected, displaying the setting quantity corresponding to the part action type to be selected;
When the triggering operation aiming at the setting determination control is detected, determining that the action type of the part to be selected is the target part action type, and determining that the set quantity is the target quantity of the target part action type, so as to obtain at least two target part action types and the target quantity corresponding to each target part action type.
The setting determination control can confirm that the settings made on the part action quantity setting controls are valid. If no setting determination control exists, validity can be confirmed in other ways: for example, if no setting operation on the quantity setting controls is detected within a certain time, prompt information may be displayed showing the at least two target part action types previously determined from the setting operations and the target number corresponding to each, and so on.
The setting determination control can be provided in scenarios where the at least two target part action types and their target numbers may be changed at will before being finalized; it serves to finally determine the at least two target part action types and the target number corresponding to each.
For example, referring to fig. 6, based on setting operations on the quantity setting controls corresponding to the to-be-selected part action types, the set number 2 for leg lifting, the set number 0 for eye opening, and the set number 1 for five-pointed star can be displayed; when a trigger operation on the determination button is detected, the target part action types and the target number for each are determined: 2 leg lifts and 1 five-pointed star.
In an embodiment, when detecting a trigger operation for setting the determination control, determining that the to-be-selected part action type is a target part action type and determining that the set number is a target number of target part action types, to obtain at least two target part action types and a target number corresponding to each target part action type may include:
when the triggering operation aiming at the setting determination control is detected, acquiring the setting quantity corresponding to the action type of the part to be selected;
when the set number of the action types of the to-be-selected part is larger than the preset number, determining that the action type of the to-be-selected part is the target part action type and the set number is the target number corresponding to the target part action type, and obtaining at least two target part action types and the target number corresponding to each target part action type.
This approach makes it quick to determine whether a to-be-selected part action type is a target part action type; the user operation is simple and the detection by the electronic device is simple, so shooting efficiency is significantly improved.
For example, if the set number displayed on the quantity setting control corresponding to the to-be-selected part action type eye opening is 0, eye opening is determined not to be a target part action type; if the set number on the control corresponding to head raising is 2, head raising is determined to be a target part action type with a target number of 2.
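The selection rule in this embodiment (a to-be-selected type becomes a target type only when its set number exceeds the preset number) can be sketched as a one-line filter; the names and the preset number of 0 are assumptions taken from the example above:

```python
def targets_from_quantity_controls(set_numbers, preset_number=0):
    """set_numbers: to-be-selected part action type -> number entered on its
    quantity setting control. A type becomes a target part action type, with
    its set number as the target number, only when the set number exceeds
    the preset number (assumed to be 0 here)."""
    return {t: n for t, n in set_numbers.items() if n > preset_number}
```

With the figure 5 / figure 6 values, entering 2 for leg lifting, 0 for eye opening, and 1 for five-pointed star yields the target types leg lifting (2) and five-pointed star (1).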
In an embodiment, the determining at least two target site action types for triggering shooting and the target number corresponding to each target site action type based on the setting operation of the site action setting control may include:
determining at least two target part action types based on the setting operation of the part action setting control;
and detecting the number of objects in the real-time preview picture, and using the number of objects as the target number corresponding to each target part action type.
In this embodiment, the target number corresponding to each target part action type is not determined through a control on the shooting setting page; instead, the electronic device detects the number of objects in the real-time preview picture and uses that number as the target number for each target part action type. When the number of objects in the preview picture is too large, or cannot be fixed within a certain period of time, detecting the number of objects to trigger image shooting ensures that the part actions of every object in the real-time preview picture satisfy the at least two target part action types, optimizing the shooting effect.
For example, if 30 objects exist in the real-time preview picture, the number 30 is used as the target number corresponding to the target part action types eye opening and head raising.
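Deriving target numbers from the detected object count can be written as a trivial mapping; this is an illustrative sketch with hypothetical names, and the object count would in practice come from a person-detection step:

```python
def targets_from_object_count(target_types, object_count):
    """When per-type counts are not set on the shooting setting page, use the
    number of objects detected in the real-time preview picture as the
    target number for every target part action type."""
    return {t: object_count for t in target_types}
```

For the example above, 30 detected objects with target types eye opening and head raising gives a required 30 of each before shooting is triggered.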
In an embodiment, the shooting setting page further includes a target object number control, and the shooting method may further include:
determining the number of target objects based on the setting operation of the number of target objects control;
at this time, when detecting that at least two target site action types exist in the site action type set and the number of each target site action type exceeds the corresponding target number, the step of triggering image capturing may include:
when at least two target part action types exist in the part action type set, the number of target objects is matched with the number of all target part action types, and the number of each target part action type exceeds the corresponding target number, the image shooting is triggered.
For example, referring to fig. 7, based on a setting operation on the number control corresponding to the number of recognized people (i.e., the target object number control), the number of target objects is determined to be 5, and based on the quantity setting controls corresponding to the to-be-selected part action types, the target part action types and the target number for each are determined: 2 leg lifts and 3 eye openings. When 2 leg lifts and 3 eye openings are detected in the part action type set, image shooting is triggered.
In an embodiment, the image capturing page further includes an object part action setting control, and the capturing method may further include:
displaying a candidate part action list of the object in the real-time preview screen based on a trigger operation of the object part action setting control;
when a selection operation of a candidate part action list for an object is detected, the object is taken as a target object for triggering shooting, the selected action type is taken as a target part action type corresponding to the target object, and the target quantity corresponding to each target part action type is determined.
At this time, when detecting that at least two target site action types exist in the site action type set and the number of each target site action type exceeds the corresponding target number, the step of triggering image capturing may include:
when detecting that at least two target part action types exist in the part action type set, each target part action type is a corresponding target object, and the number of each target part action type exceeds the corresponding target number, triggering image shooting.
In this embodiment, a target object corresponds to at least two target part action types; through the determination operations on the target part action types of the target objects, the kinds of target part action types involved across all target objects and the target number corresponding to each can be determined.
For example, referring to fig. 8, the image shooting page includes an object part action setting control. Based on a trigger operation on this control, the candidate part action list of object A is displayed (including the candidate part action types eye opening, head raising, and smiling); from the selection operation on object A's list, the target part actions of object A are determined to be eye opening and head raising. The candidate part action list of object B is then displayed (likewise including eye opening, head raising, and smiling); from the selection operation on object B's list, the target part action of object B is determined to be head raising. The target part action types and the target number for each are thus determined as: 1 eye opening and 2 head raisings.
In an embodiment, the step of displaying the candidate part action list of the object in the real-time preview screen based on the trigger operation of the object part action setting control may include:
and displaying a candidate part action display control of each object based on the trigger operation on the object part action setting control, and displaying an object's candidate part action list when a determination operation on that object's candidate part action display control is detected.
For example, referring to fig. 9, based on a trigger operation on the object part action setting control, a display button for object A (i.e., the candidate part action display control of object A) and a display button for object B (i.e., the candidate part action display control of object B) are displayed. When a trigger operation on object A's display button is detected, object A's candidate part action list is displayed; when a trigger operation on object B's display button is detected, object B's candidate part action list is displayed.
In an embodiment, the photographing method may further include:
at least two candidate images are shot, the candidate images are identified based on the target part action types, the actual part action types and the corresponding actual numbers of the actual part action types are obtained, the actual part action types and the corresponding actual numbers of each candidate image are respectively compared with the target part action types and the corresponding target numbers of the target part action types, an identification difference result is obtained, and the target image is determined from the candidate images according to the identification difference result.
For example, 3 candidate pictures are shot. Based on the previously determined target part action types, eye opening and head raising, the part actions in each candidate picture are recognized to obtain the actual part action types and the actual number of each. The actual part action types and numbers recognized in each candidate picture are compared with the target part action types and the target number for each, yielding a recognition difference result per candidate picture; the 3 results are compared, and the target picture is determined from the candidate images as the candidate picture with the smallest difference from the preset shooting condition.
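One way to sketch this best-candidate selection is to score each candidate by its total deviation from the preset shooting condition and keep the minimum. The difference metric (sum of absolute count deviations) is an assumption for illustration; the document does not specify how the recognition difference is computed:

```python
def recognition_difference(actual_counts, target_counts):
    """Total absolute deviation between a candidate image's recognized
    action counts and the preset shooting condition."""
    types = set(actual_counts) | set(target_counts)
    return sum(abs(actual_counts.get(t, 0) - target_counts.get(t, 0))
               for t in types)

def pick_target_image(candidates, target_counts):
    """candidates: (image_id, actual_counts) pairs. Returns the id of the
    candidate whose recognized actions differ least from the condition."""
    return min(candidates,
               key=lambda c: recognition_difference(c[1], target_counts))[0]
```

For instance, against a condition of 2 eye openings and 2 head raisings, a picture containing exactly that is chosen over pictures missing a head raising or an eye opening.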
In this scheme, an image shooting page including a real-time preview picture is first displayed; the part actions in the real-time preview picture are then recognized to obtain a part action type set including the recognized part action types; finally, when at least two target part action types exist in the part action type set and the number of each exceeds the corresponding target number, image shooting is triggered. The scheme can automatically recognize the part actions in the real-time preview picture and automatically trigger image shooting when they meet the preset shooting condition, thereby significantly improving image shooting efficiency.
The method described in the above embodiments is described in further detail below by way of example.
In this embodiment, an example in which the imaging device is specifically integrated in the electronic device will be described.
In this embodiment, a shooting method will be described in detail using a part motion as an example of a head motion.
Fig. 10 is a flowchart of a photographing method according to the present application. As shown in fig. 10, the photographing method may include:
301. The electronic device displays an image shooting page, where the image shooting page includes a real-time preview picture and a shooting setting control.
As shown in fig. 11 and 12, the image photographing page includes a trigger setting button (i.e., a photographing setting control) and a viewfinder (i.e., a live preview screen), and further includes a photographing button for a user to trigger manual photographing.
302. When an operation for the shooting setting control is detected, the electronic device displays a shooting setting page, and the shooting setting page comprises a part action setting control and a setting determination control.
As shown in fig. 11, when an operation for a trigger setting button is detected, the electronic device displays a shooting setting page including a trigger condition setting control (i.e., a site action setting control): the smile number setting control, the open eye number setting control, and the head up number setting control, and further includes a determination button (i.e., a setting determination control).
As shown in fig. 12, when an operation for the trigger setting button is detected, the electronic device displays a shooting setting page including trigger condition setting controls (a smile control, an eye opening control, and a head raising control) and a number-of-people identification control; the number-of-people identification control and the trigger condition setting controls together form the part action setting controls. The target part action types are determined through the trigger condition setting controls, and the target number corresponding to each target part action type is determined through the number-of-people identification control.
303. Determine at least two target part action types for triggering shooting and the target number corresponding to each target part action type based on the setting operation on the part action setting controls and the trigger operation on the setting determination control, and display the image shooting page.
As shown in fig. 11, through the trigger condition setting controls, the target number corresponding to smiles is set to 1, the target number corresponding to open eyes is set to 2, and the target number corresponding to raised heads is set to 0. When a trigger operation for the determination button is detected, the target part action types and the target number corresponding to each target part action type can be determined as: 1 smile and 2 open eyes. Equivalently, a trigger condition set TC = {{type=t, num=1}, {type=p, num=2}} can be determined, and an image capturing page with a live preview screen in the viewfinder is displayed.
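The trigger condition set TC from this example can be encoded directly. This is a minimal sketch; the list-of-dicts layout and the helper name are assumptions that mirror the type=t, num=1 notation above.

```python
# Trigger condition set TC: at least 1 smile (t) and
# at least 2 pairs of open eyes (p).
trigger_conditions = [
    {"type": "t", "num": 1},  # smile
    {"type": "p", "num": 2},  # open eyes
]

def target_types(tc):
    """Return the set of target part action types contained in TC."""
    return {condition["type"] for condition in tc}
```
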
As shown in fig. 12, the target part action types are determined to be smile and open eyes through the trigger condition setting controls, and the set number corresponding to each target part action type is displayed through the setting operation on the number-of-people identification control. The number may be set to a specific value such as 3, or to all persons, that is, all objects contained in the live preview screen as shown in the figure; in that case the electronic device detects the number of objects in the live preview screen and takes the detected number as the target number. When the trigger operation for the determination button is detected, the number of objects in the live preview screen is determined to be the target number corresponding to each target part action type, the target part action types are smile and open eyes, and a trigger condition set TC = {t, p} can be determined according to the setting operation. The image capturing page with the live preview screen in the viewfinder is then displayed.
304. The electronic device identifies the head actions of the objects in the real-time preview picture to obtain a part action type set of the objects, where the part action type set includes the identified part action types of the objects.
For example, the electronic device continuously identifies the head actions of the objects in the real-time preview picture. Based on the application scenario of fig. 11 or fig. 12, the head actions identified by the electronic device include smiling, opening eyes, and raising the head, and the real-time preview picture includes object 1, object 2, object 3, and object 4. The electronic device identifies the smiling, eye-opening, and head-raising actions of each object in the real-time preview picture and obtains an action value for each head action of each object; the action value characterizes the probability that the head action matches the standard action corresponding to that head action. All the values are stored together to obtain a recognition result set ER = {e1, e2, e3, e4}, where e1, e2, e3, and e4 correspond to the action values of the part actions of object 1, object 2, object 3, and object 4, respectively (ei, with i taking the values 1, 2, 3, and 4). Among the part action types, smiling can be identified by t, open eyes by p, and a raised head by h.
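One way to picture the recognition result set ER is a list with one entry per object, each mapping the action codes (t for smile, p for open eyes, h for raised head) to an action value in [0, 1]. The concrete numbers below are hypothetical.

```python
# ER = {e1, e2, e3, e4}: action values for objects 1..4.
# Each value is the probability that the object's head action matches the
# standard action for that type. All numbers here are illustrative only.
ER = [
    {"t": 0.85, "p": 0.92, "h": 0.10},  # e1: object 1
    {"t": 0.30, "p": 0.95, "h": 0.05},  # e2: object 2
    {"t": 0.88, "p": 0.90, "h": 0.12},  # e3: object 3
    {"t": 0.40, "p": 0.93, "h": 0.07},  # e4: object 4
]
```
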
305. When the electronic device detects that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, image shooting is triggered.
For example, given that the target part action types are smiling and open eyes, the action values corresponding to smiling and open eyes for each object identified in ER are determined, and each value is checked against the threshold range corresponding to smiling or open eyes. If a value satisfies the range, the head action corresponding to that value of that object is counted as the target part action type. The number of occurrences of each target part action type is then obtained and compared with the corresponding target number; when each count exceeds its target number, image shooting can be carried out. For instance, if the threshold range corresponding to smiling is 0.8 to 0.9 and the detected action values corresponding to the smiles of subject 1 and subject 3 satisfy this range, it can be determined that the picture corresponding to ER contains 2 smiles; similarly it can be determined that the picture contains 4 sets of open eyes, and image capturing can then be triggered.
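The threshold check and count comparison just described can be sketched as follows. The threshold ranges and helper names are illustrative assumptions, and the comparison is treated as "reaches or exceeds the target number" to match the worked example.

```python
def count_type(er, action_type, lo, hi):
    """Count objects whose action value for action_type falls inside
    the threshold range [lo, hi]."""
    return sum(1 for e in er if lo <= e.get(action_type, 0.0) <= hi)

def should_trigger(er, conditions, thresholds):
    """True when every target part action type reaches its target number."""
    return all(
        count_type(er, c["type"], *thresholds[c["type"]]) >= c["num"]
        for c in conditions
    )

# Values mirroring the worked example: the smile values of objects 1 and 3
# fall inside [0.8, 0.9], and all four objects have open eyes.
er = [
    {"t": 0.85, "p": 0.92}, {"t": 0.30, "p": 0.95},
    {"t": 0.88, "p": 0.90}, {"t": 0.40, "p": 0.93},
]
conditions = [{"type": "t", "num": 1}, {"type": "p", "num": 2}]
thresholds = {"t": (0.8, 0.9), "p": (0.8, 1.0)}  # assumed ranges
```
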
If ER does not satisfy the trigger condition set, the electronic device continues to identify the real-time preview picture and compares the recognition result with the trigger conditions, until ER satisfies the trigger condition set and the image is shot.
For another example, if the trigger condition set is TC = {t, p}, the number of objects needs to be determined from the real-time preview picture; for a real-time preview picture including object 1, object 2, object 3, and object 4, the number of objects can be determined to be 4. It is then determined whether the acquired ER satisfies the trigger condition set; if so, image capturing can be performed. If not, the electronic device continues to identify the head actions in the real-time preview picture and compares them with the trigger conditions until they are satisfied, and then shoots the image. When there are many objects in the real-time preview picture, or the objects are not fixed (for example, objects enter or leave the frame), identifying the number of people in this way can effectively improve shooting efficiency and increase the possibility that the user shoots a more satisfactory picture.
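The continuous recognition loop with "all persons" as the target number might look like the following sketch. Here `recognize` is a hypothetical stand-in for the recognition step, and the 0.8 threshold is an assumed value.

```python
def wait_for_trigger(frames, target_types, recognize, threshold=0.8):
    """Scan frames until one satisfies the trigger condition, treating
    every detected object as a target ("all persons" mode): each target
    part action type must be shown by every object in the frame.

    recognize(frame) must return one {action_type: action_value} dict
    per detected object.
    """
    for frame in frames:
        er = recognize(frame)
        target_num = len(er)  # number of objects detected in this frame
        if all(
            sum(1 for e in er if e.get(t, 0.0) >= threshold) >= target_num
            for t in target_types
        ):
            return frame  # trigger shooting on this frame
    return None  # no frame satisfied the conditions; keep previewing

# Frames pre-reduced to recognition results for illustration.
frames = [
    [{"t": 0.50, "p": 0.90}, {"t": 0.90, "p": 0.90}],  # one smile missing
    [{"t": 0.90, "p": 0.90}, {"t": 0.85, "p": 0.95}],  # everyone qualifies
]
```
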
306. The electronic device shoots at least two candidate images and determines a target image from the candidate images.
For example, after triggering shooting, the electronic device automatically shoots at least 5 images, and based on the triggering shooting condition, selects a candidate image closest to the triggering shooting condition, takes the candidate image as a target image, and displays the target image on the electronic device.
Referring to fig. 13, fig. 13 is a general flowchart of recognizing expressions and shooting automatically. First, the photographing subject sets an expression recognition mode, such as the expressions to be recognized and the number of each expression to be recognized (i.e., the target part action types and the target number corresponding to each target part action type). The camera then continuously collects image data (which may be displayed as a real-time preview picture on the image shooting page) and performs expression recognition on the collected image data to obtain the expression set of the image data. If the expression set of the image data does not satisfy the preset expressions to be recognized and the number of each expression to be recognized, i.e., does not satisfy the trigger condition, the next frame of image data is recognized, until image data satisfying the trigger condition is detected. The camera then automatically shoots 3 pictures, and finally the picture closest to the trigger condition (i.e., the target image) is obtained according to the trigger condition.
Referring to fig. 14, fig. 14 is a partial flowchart of a shooting method. The acquired real-time preview picture is displayed in the camera view (i.e., the image shooting page). An expression recognition SDK (i.e., an expression recognition software development kit) detects the image data contained in the real-time preview picture frame by frame and obtains the expression recognition results of all faces in the image data. An expression trigger condition is obtained through the settings, and it is determined whether the expression recognition results of all faces in the image data satisfy the expression trigger condition; if not, the next image data is recognized and checked again, until the expression trigger condition is satisfied and the shooting operation is triggered.
The electronic device displays an image shooting page that includes a real-time preview picture and a shooting setting control. When an operation on the shooting setting control is detected, the electronic device displays a shooting setting page that includes part action setting controls and a setting determination control. Based on the setting operations on the part action setting controls and the trigger operation on the setting determination control, at least two target part action types for triggering shooting and the target number corresponding to each target part action type are determined, and the image shooting page is displayed. The electronic device identifies the head actions of the objects in the real-time preview picture to obtain a part action type set of the objects, the part action type set including the identified part action types of the objects. When the electronic device detects that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, it triggers image shooting. Finally, the electronic device shoots at least two candidate images and determines the target image from the candidate images. The method and the device can provide candidate part actions and let the user determine the target part action types and the target number of each target part action type through the user's own settings. There can be more than one target part action type, and the number of each target part action type can be more than one, which better meets the need to shoot higher-quality photos (with richer part actions) in actual shooting scenarios, enriches the trigger conditions for automatically triggered shooting, and remarkably improves shooting efficiency.
In order to facilitate better implementation of the photographing method provided by the embodiment of the application, the embodiment of the application also provides a device based on the photographing method. The meaning of the nouns is the same as that of the shooting method, and specific implementation details can be referred to the description of the method embodiment.
As shown in fig. 15, fig. 15 is a schematic structural diagram of a photographing device according to an embodiment of the present application. The photographing device may include a shooting page display module 401, an identification module 402, and a shooting module 403, wherein:
the shooting page display module 401 is configured to display an image shooting page, where the image shooting page includes a real-time preview screen;
the identifying module 402 is configured to identify a part action in the real-time preview screen, so as to obtain a part action type set, where the part action type set includes the identified part action types;
a shooting module 403, configured to trigger image shooting when it is detected that there are at least two target part action types in the part action type set and the number of each target part action type exceeds the corresponding target number.
In some embodiments of the present application, referring to fig. 16, the apparatus further comprises:
the setting page display module 404 is configured to display a shooting setting page, where the shooting setting page includes a part action setting control;
The target determining module 405 is configured to determine at least two target part action types for triggering shooting and the target number corresponding to each target part action type based on the setting operation for the part action setting control.
In some embodiments of the present application, the image capturing page further includes a capturing setting control, and the setting page display module 404 is specifically configured to:
when an operation for a photographing setting control on an image photographing page is detected, the photographing setting page is displayed.
In some embodiments of the present application, the part action setting control includes at least two candidate part action types and a part action number setting control corresponding to each candidate part action type, and the target determining module 405 includes:
the target determination submodule is used for determining the action type of the part to be selected as the target part action type and the target number corresponding to the target part action type based on the setting operation of the part action number setting control corresponding to the action type of the part to be selected, and obtaining at least two target part action types and the target number corresponding to each target part action type.
In some embodiments of the present application, the shooting setting page further includes a setting determination control, and the target determination submodule includes:
The setting quantity display unit is used for displaying the setting quantity corresponding to the action type of the part to be selected based on the setting operation of the part action quantity setting control corresponding to the action type of the part to be selected;
the target determining unit is used for determining that the action type of the part to be selected is the target part action type and determining that the set quantity is the target quantity of the target part action type when the trigger operation aiming at the setting determining control is detected, so as to obtain at least two target part action types and the target quantity corresponding to each target part action type.
In some embodiments of the present application, the target determining unit is specifically configured to:
when the triggering operation aiming at the setting determination control is detected, acquiring the setting quantity corresponding to the action type of the part to be selected;
when the set number of the action types of the to-be-selected part is larger than the preset number, determining that the action type of the to-be-selected part is the target part action type and the set number is the target number corresponding to the target part action type, and obtaining at least two target part action types and the target number corresponding to each target part action type.
In some embodiments of the present application, the target determining module 405 includes:
The target type determining submodule is used for determining at least two target part action types based on the setting operation of the part action setting control;
the target number determining sub-module is used for detecting the number of objects in the real-time preview picture so as to set the number of objects as the corresponding target number of each target position action type.
In some embodiments of the present application, the shooting setup page further includes a target object quantity control, and the apparatus further includes:
the target object quantity determining module is used for determining the quantity of target objects based on the setting operation of the target object quantity control;
at this time, the photographing module 403 is configured to: when at least two target part action types exist in the part action type set, the number of target objects is matched with the number of all target part action types, and the number of each target part action type exceeds the corresponding target number, the image shooting is triggered.
In some embodiments of the present application, the image shooting page further includes a target part action setting control, and the shooting device further includes:
the candidate list display module is used for displaying a candidate part action list of the object in the real-time preview picture based on the triggering operation of the target part action setting control;
The determining module is used for taking the object as a target object for triggering shooting, taking the selected action type as a target part action type corresponding to the target object and determining the target quantity corresponding to each target part action type when the selected operation of the candidate part action list aiming at the object is detected.
At this time, the photographing module 403 is configured to: when detecting that at least two target part action types exist in the part action type set, each target part action type is a corresponding target object, and the number of each target part action type exceeds the corresponding target number, triggering image shooting.
In some embodiments of the present application, the candidate list display module is configured to:
based on the triggering operation of the target part action setting control, displaying a candidate part action display control of the object; when a determination operation for the candidate part action display control of the object is detected, a candidate part action list of the object is displayed.
In some embodiments of the present application, the part action set includes the part action type to which a part action belongs and a part action value of the part action, where the part action value characterizes the probability that the part action belongs to the standard part action corresponding to the part action type. Referring to fig. 17, the photographing module 403 includes:
A candidate type determination submodule 4031 for determining a candidate part action type corresponding to the target part action type from among the set of part action types;
a target type determination submodule 4032 configured to determine the candidate part action type as the target part action type when the part action value of the candidate part action type matches with the target threshold of the target part action type;
the shooting sub-module 4033 is configured to trigger image shooting when there are at least two target part action types and the number of each target part action type exceeds the corresponding target number.
In some embodiments of the present application, the photographing apparatus further includes:
the candidate image shooting module is used for shooting at least two candidate images;
the candidate image recognition module is used for recognizing the candidate image based on the action type of the target part to obtain the action type of the actual part and the corresponding actual quantity thereof;
the comparison module is used for respectively comparing the actual part action type and the corresponding actual quantity of each candidate image with the target part action type and the corresponding target quantity to obtain an identification difference result;
and the target image determining module is used for determining a target image from the candidate images according to the recognition difference result.
The shooting page display module 401 of the embodiment of the present application displays an image shooting page first, the image shooting page includes a real-time preview picture, then the recognition module 402 recognizes a part action in the real-time preview picture to obtain a part action type set, where the part action type set includes recognized part action types, and finally when at least two target part action types exist in the part action type set and the number of each target part action type exceeds the number of targets corresponding to the target part action types, the shooting module 403 triggers image shooting. The method and the device can automatically identify the part actions in the real-time preview picture, and automatically trigger image shooting when the part actions meet the preset shooting conditions, so that the image shooting efficiency is remarkably improved.
In addition, the embodiment of the present application further provides a computer device, which may be a terminal or a server, as shown in fig. 18, which shows a schematic structural diagram of the computer device according to the embodiment of the present application, specifically:
the computer device may include one or more processors 701 of a processing core, memory 702 of one or more computer readable storage media, power supply 703, and input unit 704, among other components. Those skilled in the art will appreciate that the computer device structure shown in FIG. 18 is not limiting of the computer device and may include more or fewer components than shown, or may be a combination of certain components, or a different arrangement of components. Wherein:
The processor 701 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 702, and calling data stored in the memory 702, thereby performing overall monitoring of the computer device. Optionally, processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 701.
The memory 702 may be used to store software programs and modules, and the processor 701 executes various functional applications and data processing by executing the software programs and modules stored in the memory 702. The memory 702 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data created according to the use of the computer device, etc. In addition, the memory 702 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 702 may also include a memory controller to provide access to the memory 702 by the processor 701.
The computer device further comprises a power supply 703 for powering the various components, preferably the power supply 703 is logically connected to the processor 701 by a power management system, whereby the functions of managing charging, discharging, and power consumption are performed by the power management system. The power supply 703 may also include one or more of any component, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, etc.
The computer device may further comprise an input unit 704, which input unit 704 may be used for receiving input numerical or character information and generating keyboard, mouse, joystick, optical or trackball signal inputs in connection with user settings and function control.
Although not shown, the computer device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 701 in the computer device loads executable files corresponding to the processes of one or more application programs into the memory 702 according to the following instructions, and the processor 701 executes the application programs stored in the memory 702, so as to implement various functions, as follows:
Displaying an image shooting page, wherein the image shooting page comprises a real-time preview picture; identifying part actions in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action types; when detecting that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, triggering image shooting.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the present embodiment also provides a storage medium in which a computer program is stored, the computer program being capable of being loaded by a processor to perform the steps in any of the photographing methods provided in the present embodiment. For example, the computer program may perform the steps of:
Displaying an image shooting page, wherein the image shooting page comprises a real-time preview picture; identifying part actions in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action types; when detecting that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, triggering image shooting.
The system according to the embodiments of the present application may be a distributed system formed by connecting a client and a plurality of nodes (access to any form of computer devices in a network, such as servers and terminals) through a network communication.
Taking the distributed system being a blockchain system as an example, referring to fig. 19, fig. 19 is a schematic diagram of an optional architecture of the distributed system 110 applied to the blockchain system according to the embodiment of the present application. The architecture is formed by a plurality of nodes 1101 (any type of computing devices in the access network, such as servers and user terminals) and clients 1102, and a Peer-to-Peer (P2P) network is formed between the nodes, where the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any machine, such as a server or a terminal, may join to become a node; a node includes a hardware layer, an intermediate layer, an operating system layer, and an application layer.
Referring to the functionality of each node in the blockchain system shown in fig. 19, the functions involved include:
1) Routing, a basic function of every node, used to support communication between nodes.
Besides the routing function, the node can also have the following functions:
2) The application is deployed in the blockchain to implement a specific service according to actual service requirements. It records data related to the implemented function to form record data, carries a digital signature in the record data to indicate the source of the task data, and sends the record data to other nodes in the blockchain system; the other nodes add the record data to a temporary block after verifying the source and integrity of the record data.
For example, the services implemented by the application include:
2.1) Wallet, for providing functions of electronic money transactions, including initiating a transaction (i.e., sending the transaction record of the current transaction to other nodes in the blockchain system; after verifying it, the other nodes store the record data of the transaction in a temporary block of the blockchain as a response acknowledging that the transaction is valid). Of course, the wallet also supports querying the electronic money remaining at an electronic money address.
2.2) Shared ledger, for providing functions such as storing, querying, and modifying account data. Record data of operations on the account data is sent to other nodes in the blockchain system; after the other nodes verify it as valid, they store the record data in a temporary block as a response acknowledging that the account data is valid, and may also send a confirmation to the node that initiated the operation.
2.3) Smart contract, a computerized agreement that can execute the terms of a contract, implemented through code deployed on the shared ledger and executed when certain conditions are met; it is used to complete automated transactions according to actual business demand code, such as querying the logistics status of goods purchased by a buyer, and transferring the buyer's electronic money to the merchant's address after the buyer signs for the goods. Of course, smart contracts are not limited to contracts for executing transactions, and may also execute contracts that process received information.
3) The blockchain comprises a series of blocks (Blocks) that are connected to one another in the chronological order of their generation. New blocks are not removed once they are added to the blockchain, and the blocks record the record data submitted by nodes in the blockchain system.
The target part action types and their corresponding target numbers, the target objects, and the target images in this embodiment may be stored by a node in the shared ledger of the blockchain, and the computer device (for example, the terminal or the server) may acquire the target part action types and their corresponding target numbers, the target objects, and the target images based on the data stored in the shared ledger.
Referring to fig. 20, fig. 20 shows an optional block structure (Block Structure) provided in an embodiment of the present application. Each block includes the hash value of the transaction records stored in the block (the hash value of the block itself) and the hash value of the previous block, and the blocks are linked by these hash values to form a blockchain. In addition, a block may include information such as a timestamp of when the block was generated. A blockchain is essentially a decentralized database: a string of data blocks generated in association with one another using cryptographic methods, where each data block contains information used to verify the validity (anti-counterfeiting) of its information and to generate the next block.
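The hash linking described for fig. 20 can be sketched as follows: each block stores a hash of its own contents and the hash of the previous block, so tampering with any block invalidates the link stored in its successor. The field names and the fixed timestamp are illustrative choices, not taken from the patent.

```python
import hashlib
import json

# Minimal sketch of the block structure of fig. 20: a block holds the hash of
# its transaction records and the hash of the previous block, linking the
# blocks into a chain. Field names here are illustrative.

def make_block(transactions, prev_hash):
    body = {"transactions": transactions, "prev_hash": prev_hash,
            "timestamp": 0}  # fixed timestamp keeps the example deterministic
    block_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": block_hash}

genesis = make_block(["tx0"], prev_hash="0" * 64)
block1 = make_block(["tx1", "tx2"], prev_hash=genesis["hash"])

# block1 stores the hash of genesis, so altering genesis breaks the chain.
assert block1["prev_hash"] == genesis["hash"]
```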
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated here.
The storage medium may include: a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, and the like.
The computer program stored in the storage medium can execute the steps of any of the shooting methods provided in the embodiments of the present application, and can therefore achieve the beneficial effects of any of those methods; details are given in the previous embodiments and are not repeated here.
The shooting method, apparatus, computer device, and storage medium provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
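The trigger condition at the core of the method above (shoot only when at least two configured target part action types are each detected in sufficient number) can be sketched as follows. This is a reading aid with hypothetical names, not the claimed implementation; "exceeds" is read here as "reaches at least" the target number.

```python
# Minimal sketch of the capture-trigger condition: shooting fires only when at
# least two target part action types are configured and the detected count of
# every one of them reaches its target number. Names are illustrative.

def should_trigger(detected_counts, targets):
    """detected_counts: {action_type: count} recognized in the preview frame.
    targets: {action_type: target_number} set via the part action controls."""
    if len(targets) < 2:          # at least two target part action types required
        return False
    return all(detected_counts.get(t, 0) >= n for t, n in targets.items())

targets = {"smile": 1, "v_sign": 1}
assert not should_trigger({"smile": 1}, targets)             # v_sign missing
assert should_trigger({"smile": 2, "v_sign": 1}, targets)    # both satisfied
assert not should_trigger({"smile": 3}, {"smile": 1})        # only one type set
```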

Claims (13)

1. A photographing method, comprising:
displaying an image shooting page, wherein the image shooting page comprises a real-time preview picture and an object part action setting control;
displaying candidate part action lists of the objects at corresponding positions of all the objects in the real-time preview picture based on triggering operation of the object part action setting control;
when a selection operation on the candidate part action list of a corresponding object is detected, taking the object as a target object for triggering shooting and taking the selected action type as the target part action type corresponding to the target object, so as to determine the target number corresponding to each target part action type;
identifying part actions in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action types;
when detecting that at least two target part action types exist in the part action type set, that the object performing each target part action type is its corresponding target object, and that the number of each target part action type exceeds its corresponding target number, triggering image shooting.
2. The method of claim 1, wherein the image shooting page further comprises a shooting setting control, and the method further comprises: displaying a shooting setting page comprising a part action setting control when an operation on the shooting setting control in the image shooting page is detected;
and determining, based on the setting operation on the part action setting control, at least two target part action types for triggering shooting and the target number corresponding to each target part action type.
3. The method of claim 2, wherein the part action setting control comprises at least two to-be-selected part action types and a part action number setting control corresponding to each to-be-selected part action type, and
the determining, based on the setting operation on the part action setting control, at least two target part action types for triggering shooting and the target number corresponding to each target part action type comprises:
determining, based on the setting operation on the part action number setting control corresponding to a to-be-selected part action type, the to-be-selected part action type as a target part action type and the target number corresponding to the target part action type, so as to obtain at least two target part action types and the target number corresponding to each target part action type.
4. The method of claim 3, wherein the shooting setting page further comprises a setting determination control, and
the determining that the to-be-selected part action type is a target part action type and the target number corresponding to the target part action type based on the setting operation of the part action number setting control corresponding to the to-be-selected part action type, and obtaining at least two target part action types and the target number corresponding to each target part action type includes:
displaying the set number corresponding to the to-be-selected part action type based on the setting operation on the part action number setting control corresponding to the to-be-selected part action type;
when the triggering operation on the setting determination control is detected, determining the to-be-selected part action type as a target part action type and determining the set number as the target number of the target part action type, so as to obtain at least two target part action types and the target number corresponding to each target part action type.
5. The method of claim 4, wherein the determining, when the triggering operation on the setting determination control is detected, the to-be-selected part action type as a target part action type and the set number as the target number of the target part action type, to obtain at least two target part action types and the target number corresponding to each target part action type, comprises:
when the triggering operation on the setting determination control is detected, acquiring the set number corresponding to the to-be-selected part action type;
when the set number of a to-be-selected part action type is greater than a preset number, determining the to-be-selected part action type as a target part action type and the set number as the target number corresponding to the target part action type, so as to obtain at least two target part action types and the target number corresponding to each target part action type.
6. The method of claim 2, wherein the determining, based on the setting operation on the part action setting control, at least two target part action types for triggering shooting and the target number corresponding to each target part action type comprises:
determining at least two target part action types based on the setting operation of the part action setting control;
and detecting the number of objects in the real-time preview picture so as to set the number of the objects as the corresponding target number of each target part action type.
7. The method of claim 2, wherein the shooting setting page further comprises a target object number control, and the method further comprises:
determining the number of target objects based on the setting operation on the target object number control;
when detecting that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, triggering image shooting, including:
when at least two target part action types exist in the part action type set, the number of the target objects is matched with the number of all the target part action types, and the number of each target part action type exceeds the corresponding target number, the image shooting is triggered.
8. The method of claim 1, wherein the displaying, based on the triggering operation on the object part action setting control, candidate part action lists of the objects at positions corresponding to all objects in the real-time preview picture comprises:
based on the triggering operation of the object part action setting control, displaying candidate part action display controls of the objects at the corresponding positions of all the objects in the real-time preview picture;
and when the determination operation of the candidate part action display control for the corresponding object is detected, displaying a candidate part action list of the object.
9. The method of claim 1, wherein the part action type set comprises the part action type to which a part action belongs and a part action value of the part action, the part action value characterizing the probability that the part action belongs to the standard part action corresponding to that part action type, and
when detecting that at least two target part action types exist in the part action type set and the number of each target part action type exceeds the corresponding target number, triggering image shooting, including:
determining candidate part action types corresponding to the target part action types from the part action type set;
when the part action value of a candidate part action type matches the target threshold of the target part action type, determining the candidate part action type as the target part action type;
when at least two target part action types exist and the number of each target part action type exceeds the corresponding target number, triggering image shooting.
10. The method of claim 1, wherein after the triggering image shooting when detecting that at least two target part action types exist in the part action type set and the number of each target part action type exceeds its corresponding target number, the method further comprises:
shooting at least two candidate images;
based on the target part action type, identifying candidate images to obtain an actual part action type and the corresponding actual number thereof;
comparing the actual part action type and its corresponding actual number of each candidate image with the target part action type and its corresponding target number, respectively, to obtain a recognition difference result;
and determining a target image from the candidate images according to the recognition difference result.
11. A photographing apparatus, comprising:
The shooting page display module is used for displaying an image shooting page, and the image shooting page comprises a real-time preview picture and an object part action setting control;
the candidate list display module is used for displaying candidate part action lists of the objects at corresponding positions of all the objects in the real-time preview picture based on the triggering operation of the object part action setting control;
the determining module is used for taking the object as a target object for triggering shooting and taking the selected action type as a target part action type corresponding to the target object when detecting the selection operation of the candidate part action list aiming at the corresponding object, so as to determine the target quantity corresponding to each target part action type;
the identification module is used for identifying the part actions in the real-time preview picture to obtain a part action type set, wherein the part action type set comprises the identified part action types;
the shooting module is used for triggering image shooting when detecting that at least two target part action types exist in the part action type set, the object of each target part action type is the corresponding target object, and the number of each target part action type exceeds the corresponding target number.
12. A storage medium storing a plurality of computer programs, wherein the computer programs are adapted to be loaded by a processor to perform the steps of the method according to any one of claims 1 to 10.
13. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 10 when the computer program is executed.
CN202010092824.1A 2020-02-14 2020-02-14 Shooting method, shooting device, computer equipment and storage medium Active CN112752016B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010092824.1A CN112752016B (en) 2020-02-14 2020-02-14 Shooting method, shooting device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010092824.1A CN112752016B (en) 2020-02-14 2020-02-14 Shooting method, shooting device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112752016A CN112752016A (en) 2021-05-04
CN112752016B true CN112752016B (en) 2023-06-16

Family

ID=75645147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010092824.1A Active CN112752016B (en) 2020-02-14 2020-02-14 Shooting method, shooting device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112752016B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009065382A (en) * 2007-09-05 2009-03-26 Nikon Corp Imaging apparatus
CN103209304A (en) * 2012-01-16 2013-07-17 卡西欧计算机株式会社 Imaging device and imaging method
CN103312945A (en) * 2012-03-07 2013-09-18 华晶科技股份有限公司 Image pickup device and image pickup method thereof, and figure recognition photo-taking system
CN109343764A (en) * 2018-07-18 2019-02-15 奇酷互联网络科技(深圳)有限公司 The method, apparatus of mobile terminal and control operation control

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5195120B2 (en) * 2008-07-25 2013-05-08 株式会社ニコン Digital camera
KR101322465B1 (en) * 2011-11-17 2013-10-28 삼성전자주식회사 Method and apparatus for taking a self camera recording
US10535375B2 (en) * 2015-08-03 2020-01-14 Sony Corporation Information processing system, information processing method, and recording medium
CN107911614B (en) * 2017-12-25 2019-09-27 腾讯数码(天津)有限公司 A kind of image capturing method based on gesture, device and storage medium
CN108307116B (en) * 2018-02-07 2022-03-29 腾讯科技(深圳)有限公司 Image shooting method and device, computer equipment and storage medium
CN110337806A (en) * 2018-05-30 2019-10-15 深圳市大疆创新科技有限公司 Group picture image pickup method and device
CN109005336B (en) * 2018-07-04 2021-03-02 维沃移动通信有限公司 Image shooting method and terminal equipment
CN109194879B (en) * 2018-11-19 2021-09-07 Oppo广东移动通信有限公司 Photographing method, photographing device, storage medium and mobile terminal
CN109348135A (en) * 2018-11-21 2019-02-15 Oppo广东移动通信有限公司 Photographic method, device, storage medium and terminal device

Also Published As

Publication number Publication date
CN112752016A (en) 2021-05-04

Similar Documents

Publication Publication Date Title
CN108229369A (en) Image capturing method, device, storage medium and electronic equipment
US9652663B2 (en) Using facial data for device authentication or subject identification
CN104168378B (en) A kind of picture group technology and device based on recognition of face
CN108234870A (en) Image processing method, device, terminal and storage medium
CN108710847A (en) Scene recognition method, device and electronic equipment
CN110012210A (en) Photographic method, device, storage medium and electronic equipment
CN108198177A (en) Image acquiring method, device, terminal and storage medium
CN109508694A (en) A kind of face identification method and identification device
JP2017531950A (en) Method and apparatus for constructing a shooting template database and providing shooting recommendation information
CN112733802B (en) Image occlusion detection method and device, electronic equipment and storage medium
CN106101541A (en) A kind of terminal, photographing device and image pickup method based on personage's emotion thereof
Vazquez-Fernandez et al. Built-in face recognition for smart photo sharing in mobile devices
CN102982064A (en) Method and device for automatic distribution of media files
CN108810406B (en) Portrait light effect processing method, device, terminal and computer readable storage medium
US20190332854A1 (en) Hybrid deep learning method for recognizing facial expressions
US20180225505A1 (en) Processing images from an electronic mirror
CN111652601B (en) Virtual article issuing and receiving method and device
CN106815803B (en) Picture processing method and device
CN108198130A (en) Image processing method, device, storage medium and electronic equipment
TWI752105B (en) Feature image acquisition method, acquisition device, and user authentication method
US11783192B2 (en) Hybrid deep learning method for recognizing facial expressions
WO2020172870A1 (en) Method and apparatus for determining motion trajectory of target object
CN108156385A (en) Image acquiring method and image acquiring device
CN103609098B (en) Method and apparatus for being registered in telepresence system
CN106126067B (en) A kind of method, device and mobile terminal that triggering augmented reality function is opened

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40043508

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant