WO2021057644A1 - Photographing method and apparatus - Google Patents

Photographing method and apparatus

Info

Publication number
WO2021057644A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image template
user account
template
photographed
Prior art date
Application number
PCT/CN2020/116440
Other languages
English (en)
French (fr)
Inventor
罗剑嵘
Original Assignee
上海盛付通电子支付服务有限公司
Priority date
Filing date
Publication date
Application filed by 上海盛付通电子支付服务有限公司
Publication of WO2021057644A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, and in particular to shooting methods and devices.
  • the present disclosure proposes shooting methods and equipment.
  • the embodiments of the present disclosure provide a photographing method, which is applied to a terminal device, and the method includes: photographing a photographed person to obtain a face image of the photographed person; obtaining a target image template from a predetermined set of image templates, where the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image; and generating a photographed image or a photographed video of the photographed person based on the target image template.
  • the embodiments of the present disclosure provide a photographing device, which is provided in a terminal device, and the device includes: a photographing unit configured to photograph a photographed person to obtain a face image of the photographed person;
  • the acquiring unit is configured to acquire a target image template from a set of predetermined image templates, wherein the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image;
  • the generating unit is configured to generate the photographed image or photographed video of the photographed person based on the target image template.
  • the embodiments of the present disclosure provide a terminal device, including: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the foregoing photographing methods.
  • an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processor, a method as in any one of the foregoing shooting methods is implemented.
  • the photographing method and device provided by the embodiments of the present disclosure photograph the photographed person to acquire a face image of the photographed person, then obtain a target image template from a predetermined set of image templates, where the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image, and finally generate a photographed image or a photographed video of the photographed person based on the target image template.
  • in this way, different image templates can be obtained for different face images, so that different photographed images or videos are generated for different photographed persons, which increases the probability that the generated image or video meets the photographed person's own needs and enriches the ways in which images or videos can be generated.
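  • As an illustrative, non-normative sketch of the flow just summarised, the following Python outline chains the three steps (capture a face image, obtain the target image template corresponding to the identity recognition result, generate the capture). All function and variable names are hypothetical placeholders assumed for illustration; they are not part of the disclosure.

      # Minimal, self-contained sketch of the three-step flow (steps 201-203).
      # The dict-based "template set" and all names are illustrative assumptions.

      def recognize_identity(face_image):
          # Placeholder for image recognition; a real system would run a
          # face-recognition model here and return an identity label.
          return "identity:demo-user"

      def lookup_template(template_set, identity_result):
          # Step 202: pick the image template corresponding to the identity
          # recognition result from a predetermined set of image templates.
          return template_set[identity_result]

      def generate_capture(target_template, face_image):
          # Step 203: generate the photographed image based on the target template.
          return {"template": target_template["name"], "image": face_image}

      if __name__ == "__main__":
          face_image = b"...raw camera bytes..."   # step 201: captured face image
          templates = {"identity:demo-user": {"name": "slim-face preset"}}
          identity = recognize_identity(face_image)
          print(generate_capture(lookup_template(templates, identity), face_image))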
  • FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present disclosure can be applied;
  • FIG. 2 is a flowchart of a first embodiment of a photographing method according to the present disclosure;
  • FIGS. 3A-3C are schematic diagrams of an application scenario of the embodiment of FIG. 2;
  • FIG. 4 is a flowchart of a second embodiment of the photographing method according to the present disclosure;
  • FIG. 5 is a flowchart of a third embodiment of the photographing method according to the present disclosure;
  • FIG. 6 is a schematic structural diagram of a computer system suitable for implementing an electronic device of an embodiment of the present disclosure.
  • FIG. 1 shows an exemplary system architecture 100 to which an embodiment of a photographing method or a photographing apparatus of an embodiment of the present disclosure can be applied.
  • the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105.
  • the network 104 is used to provide a medium for communication links between the terminal devices 101, 102, 103 and the server 105.
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, and so on.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send data (for example, a face image of the photographed person obtained by photographing the photographed person) and the like.
  • client applications can be installed on the terminal devices 101, 102, 103, such as beauty cameras, image processing software, video processing software, video playback software, news applications, image processing applications, web browser applications, shopping applications, search applications, instant messaging tools, email clients, social platform software, and so on.
  • the terminal devices 101, 102, 103 may be hardware or software.
  • the terminal devices 101, 102, 103 may be various electronic devices with image capturing devices (such as cameras), including but not limited to smart phones, tablet computers, laptop computers, desktop computers, and so on.
  • when the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above.
  • for example, the terminal device can be a photographing application that calls an image acquisition device to capture images or videos while it is running; the terminal device can be implemented as multiple pieces of software or software modules (for example, software or software modules used to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
  • the server 105 may be a server that provides various services, for example, a back-end server that provides support for shooting applications working on the terminal devices 101, 102, and 103.
  • the background server can perform image recognition on the face image sent by the terminal device to obtain an identity recognition result, obtain the target image template corresponding to that result from the predetermined image template set, and then feed the target image template back to the terminal device.
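  • A hedged sketch of that back-end behaviour follows: the server keeps a mapping from identity recognition results to image templates and returns the matching template for a recognised face image. The class, method, and field names are assumptions for illustration, not the actual implementation.

      # Hypothetical server-side store: identity recognition result -> image template.
      class TemplateStore:
          def __init__(self):
              self._by_identity = {}   # assumed in-memory storage

          def register(self, identity_result, template):
              # Called, for example, when a user account uploads or edits a template.
              self._by_identity[identity_result] = template

          def target_template(self, identity_result):
              # Template to feed back to the terminal device, if one is associated.
              return self._by_identity.get(identity_result)

      store = TemplateStore()
      store.register("identity:alice", {"owner": "second_user_account", "style": "studio"})
      print(store.target_template("identity:alice"))   # -> the registered template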
  • the server 105 may be a cloud server.
  • the server can be hardware or software.
  • the server can be implemented as a distributed server cluster composed of multiple servers, or as a single server.
  • the server can be implemented as multiple software or software modules (for example, software or software modules used to provide distributed services), or can be implemented as a single software or software module. There is no specific limitation here.
  • the shooting method provided by the embodiments of the present disclosure is usually executed by a terminal device.
  • correspondingly, the various parts (for example, the units, sub-units, modules, and sub-modules) included in the photographing device are usually arranged in the server.
  • the shooting method provided by the embodiments of the present disclosure may also be executed by the terminal device and the server in cooperation with each other.
  • correspondingly, the various parts (for example, the units, sub-units, modules, and sub-modules) included in the photographing device can also be arranged respectively in the terminal device and the server.
  • the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. According to implementation needs, there can be any number of terminal devices, networks, and servers.
  • when the electronic device on which the photographing method runs does not need to transmit data to other electronic devices while executing the method, the system architecture may include only the electronic device (such as a terminal device) on which the photographing method runs.
  • a process 200 of the first embodiment of the photographing method according to the present disclosure is shown.
  • the shooting method is applied to a terminal device, and the shooting method includes the following steps:
  • Step 201: photograph the person being photographed to obtain a face image of the person being photographed.
  • the execution subject of the photographing method may photograph the person being photographed to obtain the face image of the person being photographed.
  • the above-mentioned execution subject may be provided with an image acquisition device (for example, a camera).
  • the above-mentioned execution subject can photograph the subject through the image acquisition device to acquire the face image of the subject.
  • the above-mentioned subject may be one or more persons located within the shooting range of the above-mentioned image acquisition device.
  • the aforementioned face image may contain one or more face objects.
  • the face object may be an image of the face presented in the image.
  • the face image can be a single photo or multiple video frames.
  • the above-mentioned execution subject may photograph the subject through a shooting application working on the above-mentioned execution subject.
  • the login account of the photographing application is the first user account.
  • the photographing application may be an application installed on the above-mentioned execution subject that calls the above-mentioned image acquisition device.
  • by running the photographing application, the above-mentioned image acquisition device can be called to photograph the subject.
  • the user can log in to his account so as to associate the account with his related information.
  • an image taken by a user through a photographing application may be associated with the user's user account, so that the user can obtain the image photographed through the photographing application after logging in to his account through the photographing application on any terminal device.
  • it can be understood that, in this optional implementation, the user can photograph the subject through the photographing application whose login account is the first user account, so that the photographed image or photographed video generated in the subsequent step 203 is associated with the first user account; the user can then obtain the photographed image or video through the first user account, which improves the user experience.
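  • That association could be kept in a simple per-account index, as in the hypothetical sketch below; the field names and storage layout are assumptions for illustration only.

      # Hypothetical per-account index of generated captures.
      from collections import defaultdict

      captures_by_account = defaultdict(list)

      def save_capture(first_user_account, capture):
          # Associate the generated image/video with the account that was logged in
          # to the photographing application when the capture was produced.
          captures_by_account[first_user_account].append(capture)

      def captures_for(first_user_account):
          # Retrievable later from any terminal device after logging in to the account.
          return captures_by_account[first_user_account]

      save_capture("first_user_account", {"kind": "image", "path": "capture_0001.jpg"})
      print(captures_for("first_user_account"))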
  • Step 202: obtain a target image template from a predetermined set of image templates.
  • the above-mentioned execution subject may obtain the target image template from a predetermined image template set.
  • the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image obtained in step 201.
  • the identity recognition result can be used to indicate the identity of the person corresponding to the face image.
  • the image templates in the above-mentioned image template set may be images containing a human body object, or may contain a human body contour or the position information of human body key points (for example, the head, hands, legs, and feet).
  • it should be noted that, if the face image obtained in step 201 contains two or more face objects, the execution subject may obtain one target image template from the image template set for each of the corresponding human objects; that is, the number of target image templates can be equal to the number of human objects contained in the face image.
  • the above identification result may be used to indicate the occupation of the person corresponding to the face image (for example, white-collar, student, teacher, lawyer).
  • Each image template in the foregoing image template set may be associated with a profession, and thus, in the foregoing image template set, an image template associated with the profession indicated by the identity recognition result may be used as a target image template.
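  • For the occupation example above, the correspondence between identity recognition results and templates could be as simple as a lookup table keyed by profession, as in the sketch below; the table contents and field names are illustrative assumptions.

      # Illustrative profession -> image template mapping (contents are made up).
      TEMPLATES_BY_PROFESSION = {
          "white-collar": {"name": "office portrait"},
          "student": {"name": "campus style"},
          "teacher": {"name": "lecture hall"},
          "lawyer": {"name": "formal suit"},
      }

      def target_template_for(identity_result):
          # The identity recognition result is assumed to carry a "profession" field.
          return TEMPLATES_BY_PROFESSION.get(identity_result.get("profession"))

      print(target_template_for({"profession": "student"}))   # -> {'name': 'campus style'}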
  • in some optional implementations of this embodiment, when the execution subject photographs the subject through a photographing application working on the execution subject, the execution subject may, for each user of the photographing application, recognize that user's face image to obtain an identity recognition result and associate the result with the user's account (for example, by storing the association in a database).
  • the above-mentioned execution subject, or a server communicatively connected to it, can then perform image recognition on the face image obtained in step 201 to obtain an identity recognition result, and determine, from the predetermined image template set, the image template corresponding to that result (for example, an image template uploaded through the user account stored in association with the identity recognition result in the database), taking it as the target image template.
  • this optional implementation can thus use a target image template that corresponds, through the photographing application, to the person being photographed, further enriching the ways in which images or videos can be generated.
  • step 202 may include the following steps (including step one and step two):
  • Step one: send the face image to the server, so that the server performs image recognition on the face image to obtain an identity recognition result and obtains, from a predetermined image template set, the target image template corresponding to that result.
  • the server provides support for shooting applications.
  • the server communicates with the above-mentioned execution subject.
  • Step two: receive the target image template from the above-mentioned server.
  • the specific execution method of performing image recognition on the face image to obtain the identity recognition result, and obtaining the target image template corresponding to the identity recognition result from the predetermined image template set can refer to the above description, which will not be repeated here.
  • it can be understood that this optional implementation uses the server to perform image recognition on the face image and to determine the target image template in the predetermined image template set. Compared with performing the image recognition and determining the target image template locally on the execution subject, this reduces the amount of computation on the execution subject and can be applied to terminal devices with lower configurations.
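  • Steps one and two above amount to a simple request/response exchange between the terminal device and the server; a hedged sketch using the requests library is shown below. The endpoint URL and the JSON field name are assumptions for illustration, not an API defined by the disclosure.

      # Hypothetical client-side exchange for steps one and two of step 202.
      import requests

      def fetch_target_template(face_image_path):
          # Step one: send the face image to the server supporting the photographing
          # application; the server performs recognition and selects the template.
          with open(face_image_path, "rb") as f:
              response = requests.post(
                  "https://example.com/api/target-template",   # assumed endpoint
                  files={"face_image": f},
                  timeout=10,
              )
          response.raise_for_status()
          # Step two: receive the target image template from the server.
          return response.json()["target_template"]            # assumed field name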
  • the target image template is an image template of the second user account.
  • the second user account is associated with the identity recognition result.
  • the first user account is the same or different from the second user account.
  • the target image template may be an image template of the second user account, that is, the above-mentioned target image template is associated with the second user account (for example, the association is stored in a database).
  • it can be understood that, if the second user account is the same as the first user account, the current login account of the photographing application is the login account of the person being photographed; in this case, a photographed image or photographed video of the photographed person can be generated based on the target image template associated with that person.
  • if the second user account is different from the first user account, that is, the current login account of the photographing application is not the login account of the person being photographed, the target image template corresponding to the other account (the second user account) associated with the identity recognition result of the face image can still be obtained through the current login account (the first user account), so that in the subsequent step 203 the target image template corresponding to the face image of the subject can be used to generate the photographed image or video of the subject.
  • in this way, the target image template corresponding to the face image can be obtained on the basis of the identity recognition result produced by face recognition, and the photographed image or video of the subject can then be generated; that is, the generated image or video is not limited by the user account currently logged in to the photographing application, which increases the probability that the generated image or video meets the needs of the photographed person and enriches the ways in which images or videos can be generated.
  • the target image template may be an image template obtained by any of the following:
  • the first item is an image template uploaded to the server through the second user account.
  • the user can upload the image template to the server through his user account, thereby designating the uploaded image template as the target image template to generate the captured image or video of the user corresponding to the second user account.
  • the second item is the image template last used, created or edited in the shooting application through the second user account.
  • here, the execution subject, or a server communicatively connected to it, may, for each user account (for example, the second user account), take the image template that the user last used, last created, or last edited in the photographing application through that account as the target image template associated with the account.
  • the target image template associated with the user account can be obtained without user designation, which simplifies the user's operation steps and improves the user's experience.
  • the third item is the image template used most often in the shooting application through the second user account within the preset time period.
  • here, the execution subject, or a server communicatively connected to it, may, for each user account, take the image template that has been used most often in the photographing application through that account within a preset time period as the target image template associated with the account. In this way, the target image template associated with the user account can be obtained without the user having to designate it, which simplifies the user's operation steps and improves the user experience.
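  • The "used most often within a preset time period" rule could be implemented over a simple usage log, as in the sketch below; the log format and the length of the window are assumptions made for illustration.

      # Illustrative selection of the template used most often within a time window.
      from collections import Counter
      from datetime import datetime, timedelta

      def most_used_template(usage_log, window=timedelta(days=30)):
          # usage_log: (timestamp, template_id) pairs recorded for one user account.
          cutoff = datetime.now() - window
          recent = [template_id for ts, template_id in usage_log if ts >= cutoff]
          if not recent:
              return None
          return Counter(recent).most_common(1)[0][0]

      log = [(datetime.now(), "tpl_a"), (datetime.now(), "tpl_b"), (datetime.now(), "tpl_a")]
      print(most_used_template(log))   # -> "tpl_a"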
  • step 202 may also include the following steps (including the first step and the second step):
  • the first step is to recognize the face image and get the identity recognition result.
  • the identity recognition result is associated with the first user account.
  • the second step is to obtain a target image template corresponding to the first user account from a set of locally predetermined image templates.
  • this optional implementation can perform image recognition on the face image locally on the above-mentioned execution subject and determine the target image template in the locally predetermined image template set, so there is no need to send the face image to the server, which reduces the occupation of network resources.
  • the target image template may be an image template obtained by any of the following:
  • the first item is the image template set by the first user account.
  • the user can set an image template through the first user account, so as to use the set image template as a target image template, so as to generate a captured image or a captured video of the user corresponding to the first user account.
  • the second item is the image template last used, created or edited in the shooting application through the first user account.
  • here, the execution subject, or a server communicatively connected to it, may, for each user account (for example, the first user account), take the image template that the user last used, last created, or last edited in the photographing application through that account as the target image template associated with the account.
  • the target image template associated with the user account can be obtained without user designation, which simplifies the user's operation steps and improves the user's experience.
  • the third item is the image template used most often in the shooting application through the first user account within the preset time period.
  • here, the execution subject, or a server communicatively connected to it, may, for each user account, take the image template that has been used most often in the photographing application through that account within a preset time period as the target image template associated with the account. In this way, the target image template associated with the user account can be obtained without the user having to designate it, which simplifies the user's operation steps and improves the user experience.
  • in some optional implementations of this embodiment, in the case that step 202 includes the first and second steps above and the user performs a template uploading operation through the first user account, the above-mentioned execution subject may also upload the target image template to the server through the photographing application.
  • the user can upload the image template to the server through the first user account, thereby designating the uploaded image template as the target image template, so as to generate a captured image or a captured video of the user corresponding to the first user account.
  • step 202 may further include the following steps:
  • the face image is recognized, and the identity recognition result is obtained.
  • then, in response to failing to obtain, from the locally predetermined image template set, a target image template corresponding to the user account associated with the obtained identity recognition result, the obtained identity recognition result is sent to the server, so that the server obtains the target image template corresponding to the identity recognition result from the target image template set.
  • the above-mentioned target image template set may be a set of image templates stored locally in each terminal device running a shooting application.
  • it can be understood that this optional implementation can, when the target image template corresponding to the user account associated with the obtained identity recognition result cannot be obtained from the locally predetermined image template set, send the obtained identity recognition result to the server so that the server obtains the corresponding target image template from the target image template set. As a result, when two or more user accounts have been logged in on one terminal device and the logged-in accounts are switched, the target image template can be obtained without sending the identity recognition result to the server, which improves the speed of obtaining the target image template.
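  • The fallback described above can be summarised as "try the local template set first, ask the server only on a miss"; a hypothetical sketch follows, with placeholder function names.

      # Local-first template lookup with a server fallback (illustrative only).
      def resolve_template(identity_result, local_templates, ask_server):
          template = local_templates.get(identity_result)
          if template is not None:
              # Hit: nothing needs to be sent to the server.
              return template
          # Miss: send the identity recognition result to the server, which looks it
          # up in the target image template set on behalf of the terminal device.
          template = ask_server(identity_result)
          local_templates[identity_result] = template   # cache for later account switches
          return template

      local = {"identity:bob": {"style": "daily"}}
      print(resolve_template("identity:bob", local, ask_server=lambda _id: {"style": "default"}))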
  • Step 203: based on the target image template, generate a photographed image or a photographed video of the subject.
  • the above-mentioned execution subject may generate a photographed image or a photographed video of the subject based on the target image template obtained in step 202.
  • the above-mentioned execution subject may adopt the following steps to execute this step 203:
  • an image of the subject is acquired, for example, during the process of performing step 203, the subject is photographed, so as to obtain the image of the subject.
  • the target image template and the image of the subject are input to the pre-trained first model to obtain the captured image of the subject.
  • the above-mentioned first model is used to generate a photographed image of the subject based on the image template and the image of the subject.
  • the above-mentioned first model may be a convolutional neural network trained based on a predetermined training sample set using a machine learning algorithm.
  • the training samples in the above-mentioned training sample set include input data and expected output data corresponding to the input data.
  • the input data includes an image template and an image of the subject (for example, an image that has not been adjusted by the user).
  • the desired output data corresponding to the input data includes a photographed image of the subject (for example, an image obtained after the user adjusts an image that has not been adjusted by the user).
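  • The first model is described only as a convolutional neural network trained on pairs of (image template, unadjusted image of the subject) inputs with the user-adjusted image as the expected output; the PyTorch sketch below illustrates one possible setup under that description. The architecture, channel counts, image size, and loss function are assumptions, not details specified by the disclosure.

      # Illustrative training step for a "first model"-style network: the template and
      # the raw image are concatenated on the channel axis; the target is the image
      # obtained after the user's adjustments.
      import torch
      import torch.nn as nn

      class TemplateGuidedGenerator(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, kernel_size=3, padding=1),
              )

          def forward(self, template, raw_image):
              return self.net(torch.cat([template, raw_image], dim=1))

      model = TemplateGuidedGenerator()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
      loss_fn = nn.MSELoss()

      template = torch.rand(1, 3, 128, 128)    # image template
      raw_image = torch.rand(1, 3, 128, 128)   # image not yet adjusted by the user
      adjusted = torch.rand(1, 3, 128, 128)    # expected output: user-adjusted image

      loss = loss_fn(model(template, raw_image), adjusted)
      optimizer.zero_grad()
      loss.backward()
      optimizer.step()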
  • the above-mentioned execution subject may also adopt the following steps to execute this step 203:
  • the video of the person being photographed is acquired, for example, the person being photographed is photographed in the process of performing step 203, so as to obtain the video of the person being photographed.
  • then, the target image template and the video of the subject are input into the pre-trained second model to obtain the photographed video of the subject.
  • the above-mentioned second model is used to generate a photographed video of the subject based on the image template and the video of the photographed person.
  • the above-mentioned second model may be a convolutional neural network trained based on a predetermined training sample set using a machine learning algorithm.
  • the training samples in the above-mentioned training sample set include input data and expected output data corresponding to the input data.
  • the input data includes an image template and a video of the person being photographed (for example, a video that has not been adjusted by the user).
  • the expected output data corresponding to the input data includes the captured video of the subject (for example, the video obtained after the user adjusts the video that has not been adjusted by the user).
  • it should be noted that the above-mentioned execution subject can obtain a target image template for the face object corresponding to each subject in the face image, adjust the corresponding face object based on each target image template, and thereby generate a photographed image or photographed video containing the adjusted face objects of multiple subjects.
  • in some optional implementations of this embodiment, the image templates in the image template set are face image templates and the target image template is a target face image template; in this case, step 203 may include: adjusting the face image of the photographed person based on the target face image template, so as to generate a photographed image or a photographed video containing the adjusted face image of the photographed person.
  • the above-mentioned execution subject may adopt the following steps to generate a photographed image containing the adjusted face image of the subject:
  • the target face image template and the face image of the subject are input to the third pre-trained model to obtain a photographed image containing the adjusted face image of the subject.
  • the above-mentioned third model is used to generate a photographed image including the adjusted face image of the photographed person based on the face image template and the photographed face image of the photographed person.
  • the above-mentioned third model may be a convolutional neural network trained based on a predetermined training sample set using a machine learning algorithm.
  • the training samples in the above-mentioned training sample set include input data and expected output data corresponding to the input data.
  • the input data includes a face image template and a face image of the subject (for example, a face image that has not been adjusted by the user).
  • the desired output data corresponding to the input data includes a photographed image containing an adjusted face image of the subject (for example, a photographed image containing a face image obtained after a user adjusts a face image that has not been adjusted by the user).
  • the above-mentioned execution subject may also adopt the following steps to generate a shooting video containing the adjusted face image of the subject:
  • the video of the person being photographed is acquired, for example, the person being photographed is photographed during the execution of this step, so as to obtain the video of the person being photographed.
  • the video frame in the video of the subject contains a face object, that is, the video of the subject contains a face image.
  • the target face image template and the video of the subject are input to the pre-trained fourth model to obtain the captured video containing the adjusted facial image of the subject.
  • the above-mentioned fourth model is used to generate a shooting video containing the adjusted face image of the subject based on the face image template and the video of the subject.
  • the foregoing fourth model may be a convolutional neural network trained based on a predetermined set of training samples by using a machine learning algorithm.
  • the training samples in the above-mentioned training sample set include input data and expected output data corresponding to the input data.
  • the input data includes a face image template and a video containing the face image of the subject (for example, a video containing the face image that has not been adjusted by the user).
  • the expected output data corresponding to the input data includes the photographed video containing the adjusted face image of the subject (for example, the video containing the face image obtained after the user adjusts the previously unadjusted video containing the face image).
  • it can be understood that this optional implementation can obtain different face image templates based on different face images and thereby generate photographed images or videos containing the adjusted face image of the subject, which increases the probability that the generated photographed image or video meets the needs of the photographed person.
  • in some optional implementations of this embodiment, the execution subject can adjust the face image of the photographed person based on the target face image template by performing at least one of the following operations on it: adjusting the positions of facial key points, texture (mapping) processing, and image-area degradation processing.
  • the above-mentioned target face image template may be obtained by performing, on an original face image, at least one of facial key-point position adjustment, texture (mapping) processing, and image-area degradation processing; the target face image template may therefore include the operation information of the at least one operation performed on the original face image, and the execution subject may perform, on the face image of the subject, the at least one operation indicated by that operation information.
  • it can be understood that this optional implementation can adjust, by means of the target face image template, the face image of the subject corresponding to that template, automatically performing at least one of facial key-point position adjustment, texture processing, and image-area degradation processing on the face image of the subject so as to generate a photographed image or video containing the adjusted face image. This increases the probability that the generated photographed image or video meets the needs of the photographed person and further enriches the ways in which images or videos can be generated.
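  • If the target face image template carries the operation information described above (facial key-point position adjustment, texture/mapping processing, image-area degradation), applying it can be modelled as replaying a list of recorded operations on the subject's face image. The operation names and data layout in the sketch below are assumptions for illustration only.

      # Illustrative replay of the operations recorded in a target face image template.
      import copy

      def apply_face_template(face_image, template_ops):
          adjusted = copy.deepcopy(face_image)
          for op in template_ops:
              if op["type"] == "keypoint_adjust":
                  # Shift recorded facial key points (e.g. the jaw line for face slimming).
                  for name, (dx, dy) in op["offsets"].items():
                      x, y = adjusted["keypoints"][name]
                      adjusted["keypoints"][name] = (x + dx, y + dy)
              elif op["type"] == "texture":
                  adjusted.setdefault("overlays", []).append(op["asset"])
              elif op["type"] == "degrade_area":
                  adjusted.setdefault("blurred_areas", []).append(op["region"])
          return adjusted

      face = {"keypoints": {"jaw_left": (40, 120)}}
      ops = [{"type": "keypoint_adjust", "offsets": {"jaw_left": (3, 0)}}]
      print(apply_face_template(face, ops)["keypoints"]["jaw_left"])   # -> (43, 120)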
  • in FIG. 3A, the terminal device 31 photographs the subject to obtain a face image 301 of the subject; in FIG. 3B, the terminal device 31 obtains a target image template 3031 from a predetermined image template set 303, where the target image template 3031 corresponds to the identity recognition result 302 obtained by performing image recognition on the face image 301; finally, in FIG. 3C, the terminal device 31 generates a photographed image 305 of the subject 304 based on the target image template 3031.
  • the photographing method provided by the above embodiment of the present disclosure photographs the photographed person to acquire a face image, obtains a target image template from a predetermined set of image templates, where the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image, and finally generates the photographed image or video of the subject based on the target image template. Different image templates can thus be obtained for different face images, so that different photographed images or videos are generated for different photographed persons, which increases the probability that the generated image or video meets the photographed person's own needs and enriches the ways in which images or videos can be generated.
  • the above-mentioned execution subject may also perform the following steps:
  • in the case that the currently logged-in user account of the photographing application meets the preset allowable-adjustment conditions, the above-mentioned execution subject may allow the user to adjust the photographed image or video of the photographed person through the currently logged-in user account; in the case that the currently logged-in user account of the photographing application does not meet the preset allowable-adjustment conditions, the execution subject may not allow the user to adjust the photographed image or video of the photographed person through the currently logged-in user account.
  • it can be understood that this optional implementation can control whether the user can adjust the photographed image or video of the photographed person through the currently logged-in user account.
  • the above-mentioned execution subject may also perform the following steps:
  • in the case that the currently logged-in user account of the photographing application meets the preset allowable-adjustment conditions, the above-mentioned execution subject may allow the user to adjust the target image template through the currently logged-in user account; in the case that the currently logged-in user account does not meet the preset allowable-adjustment conditions, the execution subject may not allow the user to adjust the target image template through the currently logged-in user account.
  • this optional implementation manner can control whether the user can adjust the target image template through the currently logged-in user account.
  • the foregoing preset allowable adjustment conditions include at least one of the following:
  • the first item is that the user has set permission adjustment information for the target image template through the currently logged-in user account.
  • here, each user of the photographing application can, by setting allowable-adjustment information for the target image template, allow adjustment operations on that template, or on the photographed image or video of the photographed person.
  • if the user wants to prohibit adjustment operations on the target image template, or on the photographed image or video of the photographed person, he or she can delete or cancel the allowable-adjustment information.
  • the second item is that the currently logged-in user account is the same as the user account associated with the target image template.
  • here, in the case that the currently logged-in user account is the same as the user account associated with the target image template, adjustment operations on the target image template, or on the photographed image or video of the photographed person, may be allowed; in the case that the currently logged-in user account is different from the user account associated with the target image template, such adjustment operations may be prohibited.
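  • The two allowable-adjustment conditions can be checked with a small predicate such as the hypothetical one below; the field names are assumptions made for illustration.

      # Illustrative check of the preset allowable-adjustment conditions.
      def adjustment_allowed(current_account, target_template):
          # First item: allowable-adjustment information has been set for the template.
          if target_template.get("allow_adjustment"):
              return True
          # Second item: the logged-in account is the account associated with the template.
          return current_account == target_template.get("owner_account")

      template = {"owner_account": "second_user_account", "allow_adjustment": False}
      print(adjustment_allowed("first_user_account", template))    # -> False
      print(adjustment_allowed("second_user_account", template))   # -> True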
  • FIG. 4 shows a process 400 of the second embodiment of the photographing method according to the present disclosure.
  • the shooting method is applied to a terminal device, and the shooting method includes the following steps:
  • Step 401: photograph the person being photographed through a photographing application working on the terminal device to obtain a face image of the person being photographed.
  • in this embodiment, the execution subject of the photographing method can photograph the subject through a photographing application working on the terminal device (that is, the above-mentioned execution subject) to obtain the subject's face image.
  • the login account of the photographing application is the first user account.
  • the above-mentioned execution subject may include an image acquisition device, such as a camera.
  • the above-mentioned subject may be one or more persons located within the shooting range of the above-mentioned image acquisition device.
  • the photographing application may be an application that has a function of invoking the above-mentioned image acquisition device and is installed on the above-mentioned execution subject.
  • Step 402: obtain a target image template from a predetermined set of image templates.
  • the above-mentioned execution subject may obtain the target image template from a predetermined image template set.
  • the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image.
  • the target image template is an image template of the second user account.
  • the target image template may be an image template uploaded to the server through the second user account, or an image template last used, created, or edited in the photographing application through the second user account, or an image template used most frequently in the photographing application through the second user account within a preset time period.
  • the aforementioned first user account is different from the second user account.
  • Step 403: based on the target image template, generate a photographed image or a photographed video of the subject.
  • the above-mentioned execution subject may generate a photographed image or a photographed video of the subject based on the target image template.
  • step 403 may be basically the same as step 203 in the embodiment corresponding to FIG. 2, and will not be repeated here.
  • Step 404: based on whether the currently logged-in user account of the photographing application meets the preset allowable-adjustment conditions, determine whether the user is allowed to adjust the photographed image or video of the photographed person through the currently logged-in user account.
  • the aforementioned preset allowable adjustment conditions include: the second user account has set allowable adjustment information on the target image template.
  • in this embodiment, when the second user account has set allowable-adjustment information for the target image template, the above-mentioned execution subject may allow the user, through the currently logged-in user account (that is, the first user account), to adjust the photographed image or video of the photographed person; when the second user account has not set allowable-adjustment information for the target image template (or has set prohibit-adjustment information for it), the execution subject may not allow the user to adjust the photographed image or video of the photographed person through the currently logged-in user account.
  • this embodiment may also include the same or similar features and effects as the embodiment corresponding to FIG. 2, which will not be repeated here.
  • it can be seen from FIG. 4 that the process 400 of the photographing method in this embodiment can, when the user account currently logged in to the photographing application is different from the user account of the person being photographed, determine whether the photographed person's user account has set allowable-adjustment information for the target image template, and correspondingly allow or prohibit adjustment, through the currently logged-in account (that is, the first user account), of the photographed image or video of the photographed person. This realizes permission or prohibition of one user's adjustment of another user's photographed image or video, and helps to increase the probability that the generated image or video meets the user's needs when other users capture images or video for that user.
  • FIG. 5 shows a process 500 of the third embodiment of the photographing method according to the present disclosure.
  • the shooting method is applied to a terminal device, and the shooting method includes the following steps:
  • Step 501: in response to a template uploading operation performed by the user through the first user account, acquire a target image template through the photographing application.
  • the execution subject of the photographing method may obtain the target image template through the photographing application.
  • the above-mentioned execution subject may include an image acquisition device, such as a camera.
  • the shooting application can run on the above-mentioned execution subject.
  • the target image template may be an image containing a human body object, or it may contain position information of a human body outline or key points of the human body (for example, head, hands, legs, feet, etc.).
  • as an example, the user can adjust an original image obtained by photographing himself or herself (for example, by at least one of whitening, facial key-point adjustment (such as face slimming), texture processing, and image-area degradation processing) to obtain the target image template.
  • Step 502: photograph the person being photographed through a photographing application working on the terminal device to obtain a face image of the person being photographed.
  • the above-mentioned execution subject may photograph the subject through a photographing application working on a terminal device (ie, the above-mentioned execution subject) to obtain a face image of the subject.
  • the login account of the photographing application is the first user account.
  • the above-mentioned subject may be one or more persons located within the shooting range of the above-mentioned image acquisition device.
  • the photographing application may be an application that has a function of invoking the above-mentioned image acquisition device and is installed on the above-mentioned execution subject.
  • Step 503: recognize the face image to obtain an identity recognition result.
  • the above-mentioned execution subject may recognize the face image to obtain the identity recognition result.
  • the identity recognition result is associated with the first user account.
  • Step 504: obtain, from a locally predetermined set of image templates, the target image template corresponding to the first user account.
  • the above-mentioned execution subject may obtain a target image template corresponding to the first user account from a set of locally predetermined image templates.
  • the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image.
  • the above-mentioned execution subject may locally store a collection of image templates.
  • the image template set may include the target image template corresponding to the first user account, that is, the target image template acquired by the execution subject through the photographing application after the user performed the template uploading operation through the first user account in step 501.
  • Step 505: based on the target image template, generate a photographed image or a photographed video of the subject.
  • the above-mentioned execution subject may generate a photographed image or a photographed video of the subject based on the target image template.
  • step 505 may be basically the same as step 203 in the embodiment corresponding to FIG. 2, and will not be repeated here.
  • this embodiment may also include the same or similar features and effects as the embodiment corresponding to FIG. 2, which will not be repeated here.
  • it can be seen from FIG. 5 that the process 500 of the photographing method in this embodiment can perform image recognition on the face image locally on the above-mentioned execution subject and determine the target image template from the locally predetermined image template set; thus, there is no need to send the face image to the server or to obtain the target image template from the server, which reduces the occupation of network resources.
  • FIG. 6 shows a schematic structural diagram of an electronic device (such as the terminal device in FIG. 1) 600 suitable for implementing the embodiments of the present disclosure.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), vehicle-mounted terminals ( For example, mobile terminals such as car navigation terminals) and fixed terminals such as digital TVs and desktop computers.
  • the terminal device shown in FIG. 6 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • as shown in FIG. 6, the electronic device 600 may include a processing device (such as a central processing unit or a graphics processor) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
  • the RAM 603 also stores various programs and data required for the operation of the electronic device 600.
  • the processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604.
  • generally, the following devices can be connected to the I/O interface 605: input devices 606 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 607 such as a liquid crystal display (LCD), a speaker, and a vibrator; and a communication device 609.
  • the communication device 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data.
  • FIG. 6 shows an electronic device 600 having various devices, it should be understood that it is not required to implement or have all the illustrated devices. It may alternatively be implemented or provided with more or fewer devices. Each block shown in FIG. 6 may represent one device, or may represent multiple devices as needed.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602.
  • when the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned terminal device; or it may exist alone without being assembled into the terminal device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the terminal device, the terminal device is caused to: take a picture of the person being photographed to obtain the face image of the person being photographed; Obtain a target image template from a set of predetermined image templates, where the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image; based on the target image template, the captured image or video of the subject is generated .
  • the computer program code for performing the operations of the embodiments of the present disclosure can be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through an Internet service provider). Internet connection).
  • each block in the flowchart or block diagrams may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

Abstract

The embodiments of the present disclosure disclose a photographing method and device. The photographing method is applied to a terminal device. A specific embodiment of the method includes: photographing a photographed person to obtain a face image of the photographed person; obtaining a target image template from a predetermined set of image templates, where the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image; and generating a photographed image or a photographed video of the photographed person based on the target image template. This embodiment can obtain different image templates for different face images and thereby generate different photographed images or videos of the photographed persons, which increases the probability that the generated image or video meets the photographed person's own needs and enriches the ways in which images or videos can be generated.

Description

Photographing method and apparatus
Technical Field
The embodiments of the present disclosure relate to the field of computer technology, and in particular to a photographing method and apparatus.
Background
At present, in more and more scenarios, people need to use images to record their lives, convey emotions, and so on. However, real images often cannot meet these needs. For example, if a user's build is on the heavier side, the build presented in a real image of the user will also appear heavier; when that user wants to present himself or herself through a more flattering image, the real image is usually adjusted (for example, face slimming), and the adjusted image rather than the real image is then used. In other words, the need can only be met after an image-processing application has adjusted the real image automatically or manually.
Existing photographing applications usually apply essentially similar processing to every real image, for example whitening, face slimming, and so on.
Summary
The present disclosure proposes a photographing method and device.
In a first aspect, the embodiments of the present disclosure provide a photographing method applied to a terminal device, the method including: photographing a photographed person to obtain a face image of the photographed person; obtaining a target image template from a predetermined set of image templates, where the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image; and generating a photographed image or a photographed video of the photographed person based on the target image template.
In a second aspect, the embodiments of the present disclosure provide a photographing device arranged in a terminal device, the device including: a photographing unit configured to photograph a photographed person to obtain a face image of the photographed person; an acquiring unit configured to obtain a target image template from a predetermined set of image templates, where the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image; and a generating unit configured to generate a photographed image or a photographed video of the photographed person based on the target image template.
In a third aspect, the embodiments of the present disclosure provide a terminal device, including: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the embodiments of the photographing method described above.
In a fourth aspect, the embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, the program implementing the method of any one of the embodiments of the photographing method described above when executed by a processor.
The photographing method and device provided by the embodiments of the present disclosure photograph the photographed person to acquire a face image of the photographed person, then obtain a target image template from a predetermined set of image templates, where the target image template corresponds to the identity recognition result obtained by performing image recognition on the face image, and finally generate a photographed image or a photographed video of the photographed person based on the target image template. In general, different image templates can be obtained for different face images, so that different photographed images or videos are generated for different photographed persons, which increases the probability that the generated image or video meets the photographed person's own needs and enriches the ways in which images or videos can be generated.
附图说明
通过阅读参照以下附图所作的对非限制性实施例所作的详细描述,本公开的其它特征、目的和优点将会变得更明显:
图1是本公开的一些实施例可以应用于其中的示例性系统架构图;
图2是根据本公开的拍摄方法的第一个实施例的流程图;
图3A-图3C是针对图2的实施例的一个应用场景的示意图;
图4是根据本公开的拍摄方法的第二个实施例的流程图;
图5是根据本公开的拍摄方法的第三个实施例的流程图;
图6是适于用来实现本公开的实施例的电子设备的计算机系统的结构示意图。
具体实施方式
下面结合附图和实施例对本公开作进一步的详细说明。可以理解的是,此处所描述的具体实施例仅仅用于解释相关发明,而非对该发明的限定。另外还 需要说明的是,为了便于描述,附图中仅示出了与有关发明相关的部分。
需要说明的是,在不冲突的情况下,本公开中的实施例及实施例中的特征可以相互组合。下面将参考附图并结合实施例来详细说明本公开。
图1示出了可以应用本公开的实施例的拍摄方法或拍摄装置的实施例的示例性系统架构100。
如图1所示,系统架构100可以包括终端设备101、102、103,网络104和服务器105。网络104用以在终端设备101、102、103和服务器105之间提供通信链路的介质。网络104可以包括各种连接类型,例如有线、无线通信链路或者光纤电缆等等。
用户可以使用终端设备101、102、103通过网络104与服务器105交互,以接收或发送数据(例如对被拍摄者进行拍摄而得到的该被拍摄者的人脸图像)等。终端设备101、102、103上可以安装有各种客户端应用,例如美颜相机、图像处理软件、视频处理软件、视频播放软件、新闻资讯类应用、图像处理类应用、网页浏览器应用、购物类应用、搜索类应用、即时通信工具、邮箱客户端、社交平台软件等。
终端设备101、102、103可以是硬件,也可以是软件。当终端设备101、102、103为硬件时,可以是具有图像拍摄装置(例如摄像头)的各种电子设备,包括但不限于智能手机、平板电脑、膝上型便携计算机和台式计算机等等。当终端设备101、102、103为软件时,可以安装在上述所列举的电子设备中,例如终端设备可以是拍摄应用,其可以在运行过程中调用图像获取装置来拍摄图像或视频,终端设备可以实现成多个软件或软件模块(例如用来提供分布式服务的软件或软件模块),也可以实现成单个软件或软件模块。在此不做具体限定。
服务器105可以是提供各种服务的服务器,例如对工作于终端设备101、102、103上的拍摄应用提供支持的后台服务器。后台服务器可以对终端设备发送的人脸图像进行图像识别,得到身份识别结果,以及在预先确定的图像模板集合中获取与该身份识别结果相对应的目标图像模板,然后将目标图像模板反馈给终端设备。作为示例,服务器105可以是云端服务器。
需要说明的是,服务器可以是硬件,也可以是软件。当服务器为硬件时, 可以实现成多个服务器组成的分布式服务器集群,也可以实现成单个服务器。当服务器为软件时,可以实现成多个软件或软件模块(例如用来提供分布式服务的软件或软件模块),也可以实现成单个软件或软件模块。在此不做具体限定。
还需要说明的是,本公开的实施例所提供的拍摄方法通常由终端设备执行。相应地,拍摄装置包括的各个部分(例如各个单元、子单元、模块、子模块)通常设置于服务器中。此外,在一些情况下,本公开的实施例所提供的拍摄方法也可以由终端设备和服务器彼此配合执行。相应地,拍摄装置包括的各个部分(例如各个单元、子单元、模块、子模块)也可以分别设置于终端设备和服务器中。
应该理解,图1中的终端设备、网络和服务器的数目仅仅是示意性的。根据实现需要,可以具有任意数目的终端设备、网络和服务器。当拍摄方法运行于其上的电子设备在执行该方法的过程中,不需要与其他电子设备进行数据传输时,该系统架构可以仅包括拍摄方法运行于其上的电子设备(例如终端设备)。
继续参考图2,示出了根据本公开的拍摄方法的第一个实施例的流程200。该拍摄方法应用于终端设备,该拍摄方法包括以下步骤:
步骤201,对被拍摄者进行拍摄,以获取被拍摄者的人脸图像。
在本实施例中,拍摄方法的执行主体(例如图1所示的终端设备)可以对被拍摄者进行拍摄,以获取被拍摄者的人脸图像。其中,上述执行主体可以设置有图像获取装置(例如摄像头)。由此,上述执行主体可以通过该图像获取装置对被拍摄者进行拍摄,以获取被拍摄者的人脸图像。
在这里,上述被拍摄者可以是位于上述图像获取装置的拍摄范围之内的一个或多个人。上述人脸图像中可以包含一个或多个人脸对象。人脸对象可以是图像中呈现的人脸的影像。人脸图像可以是单张照片,也可以是多个视频帧。
在本实施例的一些可选的实现方式中,上述执行主体可以通过工作于上述执行主体上的拍摄应用对被拍摄者进行拍摄。其中,在通过拍摄应用对被拍摄者进行拍摄的过程中,拍摄应用的登录账号为第一用户账号。
在这里,拍摄应用可以是具有调用上述图像获取装置的、安装于上述执行主体上的应用。通过运行该拍摄应用,可以调用上述图像获取装置对被拍摄者 进行拍摄。通常,在通过拍摄应用对被拍摄者进行拍摄的过程中,用户可以登录自己的账号,以便将账号与自己的相关信息进行关联。例如可以将用户通过拍摄应用拍摄的图像与该用户的用户账号相关联,以便用户通过任一终端设备上的拍摄应用登录自己的账号后,均可获得其通过该拍摄应用拍摄的图像。
可以理解,在本可选的实现方式中,用户可以通过登录账号为第一用户账号的拍摄应用对被拍摄者进行拍摄,从而使得通过后续步骤203生成的拍摄图像或拍摄视频与该第一用户账号进行关联,以便用户可以通过第一用户账号获得拍摄图像或拍摄视频,由此提升了用户的使用体验。
步骤202,从预先确定的图像模板集合中,获取目标图像模板。
在本实施例中,上述执行主体可以从预先确定的图像模板集合中,获取目标图像模板。其中,目标图像模板与对步骤201所获取到的人脸图像进行图像识别所得到的身份识别结果相对应。身份识别结果可以用于指示人脸图像对应的人的身份。
其中,上述图像模板集合中的图像模板可以是包含人体对象的图像,也可以是包含人体轮廓或人体关键点(例如头部、手部、腿部、脚部等等)的位置信息。
需要说明的是,若步骤201所获取到的人脸图像中包含两个或两个以上人脸对象,上述执行主体可以针对每个人体对象,从上述图像模板集合中,获取一个目标图像模板,即目标图像模板的数量可以与人脸图像中包含的人体对象的数量相等。
作为示例,上述身份识别结果可以用于指示人脸图像对应的人的职业(例如白领、学生、老师、律师)。上述图像模板集合中的每个图像模板可以与一种职业相关联,由此,可以上述图像模板集合中,与身份识别结果指示的职业相关联的图像模板作为目标图像模板。
在本实施例的一些可选的实现方式中，在上述执行主体通过工作于上述执行主体上的拍摄应用对被拍摄者进行拍摄的情况下，上述执行主体可以针对该拍摄应用的每个用户执行如下步骤：对该用户的人脸图像进行识别，得到该用户的人脸图像对应的身份识别结果，将该用户的人脸图像对应的身份识别结果与该用户的用户账号相关联（例如关联存储于数据库中）。由此，上述执行主体可以从上述图像模板集合中，确定与步骤201获取到的人脸图像对应的身份识别结果相关联的图像模板，从而将所确定出的图像模板作为目标图像模板。
在这里，上述执行主体或者与上述执行主体通信连接的服务器可对步骤201所获取到的人脸图像进行图像识别，得到身份识别结果，以及从预先确定的图像模板集合中，确定与该身份识别结果相对应的图像模板（例如通过数据库中与该身份识别结果关联存储的用户账号上传的图像模板），从而将与该身份识别结果相对应的图像模板作为目标图像模板。由此，本可选的实现方式可以使拍摄应用采用与被拍摄者相对应的目标图像模板，从而进一步丰富了图像或视频的生成方式。
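为便于理解“身份识别结果—用户账号—图像模板”之间的关联查询过程，下面给出一段示意性的Python代码草图，其中的数据结构与函数名均为说明所作的假设，并非本公开的实际实现：

# 身份识别结果与用户账号、用户账号与图像模板的关联关系（示意性数据）
identity_to_account = {"identity_123": "user_b"}
account_to_template = {"user_b": {"template_id": "t_01"}}

def find_target_template(identity_result: str):
    # 先由身份识别结果找到关联的用户账号，再取该账号对应的图像模板作为目标图像模板
    account = identity_to_account.get(identity_result)
    if account is None:
        return None
    return account_to_template.get(account)

target_template = find_target_template("identity_123")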
在本实施例的一些可选的实现方式中,该步骤202可以包括如下步骤(包括步骤一和步骤二):
步骤一,将人脸图像发送至服务器,以使服务器:对人脸图像进行图像识别得到身份识别结果,以及在预先确定的图像模板集合中获取与身份识别结果相对应的目标图像模板。其中,服务器为拍摄应用提供支持。服务器与上述执行主体通信连接。
步骤二,从上述服务器接收目标图像模板。
在这里,对人脸图像进行图像识别得到身份识别结果,以及在预先确定的图像模板集合中获取与身份识别结果相对应的目标图像模板的具体执行方式可以参考以上描述,在此不再赘述。
可以理解,本可选的实现方式采用服务器对人脸图像进行图像识别,以及在预先确定的图像模板集合中确定目标图像模板,相对于在上述执行主体本地对人脸图像进行图像识别,以及在预先确定的图像模板集合中确定目标图像模板的方案,可以减少上述执行主体的运算量,可以适用于更低配置的终端设备。
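下面给出客户端侧执行上述步骤一、步骤二的一个示意性Python草图（基于requests库），其中的接口地址、字段名与返回格式均为假设，仅用于说明交互流程，并非本公开的实际实现：

import requests

def fetch_target_template(face_image_path: str) -> dict:
    # 步骤一：将人脸图像发送至服务器，由服务器进行图像识别并获取目标图像模板
    with open(face_image_path, "rb") as f:
        resp = requests.post(
            "https://example.com/api/target_template",  # 假设的服务器接口地址
            files={"face_image": f},
            timeout=10,
        )
    resp.raise_for_status()
    # 步骤二：从服务器接收目标图像模板
    return resp.json()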
在本实施例的一些可选的实现方式中,目标图像模板为第二用户账号的图像模板。第二用户账号与身份识别结果相关联。第一用户账号与第二用户账号相同或不同。
在本可选的实现方式中,对步骤201中获取的人脸图像进行人脸识别所得到的身份识别结果所关联的用户账号为第二用户账号。因此,目标图像模板可以为第二用户账号的图像模板,即上述目标图像模板与第二用户账号相关联 (例如关联存储于数据库中)。
可以理解,若第二用户账号与第一用户账号相同,也即,拍摄应用的当前登录账号为被拍摄者本人的登录账号。这样一来,可以基于与被拍摄者相关联的目标图像模板,生成被拍摄者的拍摄图像或拍摄视频。
若第二用户账号与第一用户账号不同，也即，拍摄应用的当前登录账号不是被拍摄者本人的登录账号，可以通过当前拍摄应用的登录账号（第一用户账号）获取到与人脸图像对应的身份识别结果相关联的另一账号（第二用户账号）对应的目标图像模板，从而通过后续步骤203可以采用与被拍摄者的人脸图像对应的目标图像模板，来生成该被拍摄者的拍摄图像或拍摄视频。
由此，可以基于对人脸图像进行人脸识别而得到的身份识别结果，获得与该人脸图像相对应的目标图像模板，进而生成被拍摄者的拍摄图像或拍摄视频，也即所生成的被拍摄者的拍摄图像或拍摄视频不会受限于拍摄应用当前登录的用户账号，由此，提高了所生成的拍摄图像或拍摄视频符合被拍摄者的自身需求的概率，丰富了图像或视频的生成方式。
在本实施例的一些可选的实现方式中,在步骤202包括上述步骤一和步骤二的情况下,目标图像模板可以为通过以下任一项得到的图像模板:
第一项,通过第二用户账号上传至服务器的图像模板。
在这里,用户可以通过其用户账号将图像模板上传至服务器,从而将所上传的图像模板指定为目标图像模板,用以生成第二用户账号对应的用户的拍摄图像或拍摄视频。
第二项,通过第二用户账号在拍摄应用中最近一次使用、创建或者编辑的图像模板。
在这里,上述执行主体或者与上述执行主体通信连接的服务器可以针对每个用户账号(例如第二用户账号),将用户通过该用户账号在拍摄应用中最近一次使用、最近一次创建或者最近一次编辑的图像模板,作为与该用户账号关联的目标图像模板。由此,无需用户指定,即可获得用户账号关联的目标图像模板,简化了用户的操作步骤,提高了用户的使用体验。
第三项,在预设时间段内通过第二用户账号在拍摄应用中使用次数最多的图像模板。
在这里,上述执行主体或者与上述执行主体通信连接的服务器可以针对每个用户账号(例如第二用户账号),将在预设时间段内通过该用户账号在拍摄应用中使用次数最多的图像模板,作为与该用户账号关联的目标图像模板。由此,无需用户指定,即可获得用户账号关联的目标图像模板,简化了用户的操作步骤,提高了用户的使用体验。
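对于“最近一次使用、创建或者编辑”与“预设时间段内使用次数最多”这两种模板确定方式，下面给出一段示意性的Python草图，其中的使用记录结构为说明所作的假设，并非本公开的实际实现：

from collections import Counter
from datetime import datetime, timedelta

# 每条使用记录为（模板标识, 使用时间），此结构仅为示意
usage_log = [
    ("t_01", datetime(2020, 9, 1)),
    ("t_02", datetime(2020, 9, 15)),
    ("t_02", datetime(2020, 9, 20)),
]

def latest_template(log):
    # 最近一次使用的图像模板
    return max(log, key=lambda item: item[1])[0]

def most_used_template(log, now, days=30):
    # 预设时间段（最近days天）内使用次数最多的图像模板
    since = now - timedelta(days=days)
    counter = Counter(t for t, ts in log if ts >= since)
    return counter.most_common(1)[0][0] if counter else None

print(latest_template(usage_log))
print(most_used_template(usage_log, datetime(2020, 9, 21)))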
在本实施例的一些可选的实现方式中,该步骤202也可以包括如下步骤(包括第一步和第二步):
第一步,对人脸图像进行识别,得到身份识别结果。其中,身份识别结果与第一用户账号相关联。
第二步,从本地预先确定的图像模板集合中,获取与第一用户账号相对应的目标图像模板。
在这里,对人脸图像进行图像识别得到身份识别结果,以及在预先确定的图像模板集合(即本地预先确定的图像模板集合)中获取与身份识别结果相对应的目标图像模板的具体执行方式可以参考以上描述,在此不再赘述。
可以理解,本可选的实现方式可以在上述执行主体本地对人脸图像进行图像识别,以及在本地预先确定的图像模板集合中确定目标图像模板,由此,无需将人脸图像发送至服务器,减少了对网络资源的占用。
在本实施例的一些可选的实现方式中,在步骤202包括上述第一步和第二步的情况下,目标图像模板可以为通过以下任一项得到的图像模板:
第一项,通过第一用户账号设置的图像模板。
在这里,用户可以通过第一用户账号来设置图像模板,从而将所设置的图像模板作为目标图像模板,以便生成第一用户账号对应的用户的拍摄图像或拍摄视频。
第二项,通过第一用户账号在拍摄应用中最近一次使用、创建或者编辑的图像模板。
在这里，上述执行主体或者与上述执行主体通信连接的服务器可以针对每个用户账号（例如第一用户账号），将用户通过该用户账号在拍摄应用中最近一次使用、最近一次创建或者最近一次编辑的图像模板，作为与该用户账号关联的目标图像模板。由此，无需用户指定，即可获得用户账号关联的目标图像模板，简化了用户的操作步骤，提高了用户的使用体验。
第三项,在预设时间段内通过第一用户账号在拍摄应用中使用次数最多的图像模板。
在这里,上述执行主体或者与上述执行主体通信连接的服务器可以针对每个用户账号(例如第一用户账号),将在预设时间段内通过该用户账号在拍摄应用中使用次数最多的图像模板,作为与该用户账号关联的目标图像模板。由此,无需用户指定,即可获得用户账号关联的目标图像模板,简化了用户的操作步骤,提高了用户的使用体验。
在本实施例的一些可选的实现方式中,在步骤202包括上述第一步和第二步且用户通过第一用户账号执行模板上传操作的情况下,上述执行主体还可以通过拍摄应用向服务器上传目标图像模板。
由此,用户可以通过第一用户账号将图像模板上传至服务器,从而将所上传的图像模板指定为目标图像模板,以便生成第一用户账号对应的用户的拍摄图像或拍摄视频。
在本实施例的一些可选的实现方式中,该步骤202还可以包括如下步骤:
首先,对人脸图像进行识别,得到身份识别结果。
然后,响应于无法从本地预先确定的图像模板集合中获取与所得到的身份识别结果关联的用户账号相对应的目标图像模板,将所获得的身份识别结果发送至服务器,以使服务器在目标图像模板集合中获取与身份识别结果相对应的目标图像模板。其中,上述目标图像模板集合可以是运行有拍摄应用的各个终端设备本地存储的各个图像模板的集合。
在这里,对人脸图像进行图像识别得到身份识别结果,以及在图像模板集合(例如本地预先确定的图像模板集合或者目标图像模板集合)中获取与身份识别结果相对应的目标图像模板的具体执行方式可以参考以上描述,在此不再赘述。
可以理解，本可选的实现方式可以在无法从本地预先确定的图像模板集合中获取与所得到的身份识别结果关联的用户账号相对应的目标图像模板的情况下，将所获得的身份识别结果发送至服务器，以使服务器在目标图像模板集合中获取与身份识别结果相对应的目标图像模板，由此，在一个终端设备登录过两个或两个以上用户账号的情况下，所登录的账号间进行切换登录时，无需将身份识别结果发送至服务器即可获得目标图像模板，提高了获得目标图像模板的速度。
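上述“本地优先、服务器兜底”的获取逻辑可以用如下示意性Python草图表达，其中local_templates、request_template_from_server等名称均为假设，并非本公开的实际实现：

# 本地预先确定的图像模板集合：用户账号 -> 图像模板（示意性结构）
local_templates = {}

def request_template_from_server(identity_result):
    # 假设的服务器查询入口：由服务器在目标图像模板集合中查找对应模板
    raise NotImplementedError("此处应调用服务器接口")

def get_target_template(identity_result, account):
    template = local_templates.get(account)
    if template is not None:
        # 本地命中，无需将身份识别结果发送至服务器
        return template
    # 本地未命中时，将身份识别结果发送至服务器以获取目标图像模板
    return request_template_from_server(identity_result)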
步骤203,基于目标图像模板,生成被拍摄者的拍摄图像或拍摄视频。
在本实施例中,上述执行主体可以基于步骤202获得的目标图像模板,生成被拍摄者的拍摄图像或拍摄视频。
作为第一个示例,上述执行主体可以采用如下步骤来执行该步骤203:
首先,获取被拍摄者的图像,例如在执行该步骤203的过程中对被拍摄者进行拍摄,从而得到被拍摄者的图像。
然后,将目标图像模板和被拍摄者的图像输入至预先训练的第一模型,得到被拍摄者的拍摄图像。其中,上述第一模型用于基于图像模板和被拍摄者的图像生成被拍摄者的拍摄图像。
在这里,上述第一模型可以是采用机器学习算法基于预先确定的训练样本集合训练得到的卷积神经网络。其中,上述训练样本集合中的训练样本包括输入数据和与输入数据相对应的期望输出数据。输入数据包括图像模板和被拍摄者的图像(例如未经用户调整的图像)。与输入数据相对应的期望输出数据包括被拍摄者的拍摄图像(例如用户对未经用户调整的图像进行调整后得到的图像)。
作为第二个示例,上述执行主体也可以采用如下步骤来执行该步骤203:
首先,获取被拍摄者的视频,例如在执行该步骤203的过程中对被拍摄者进行拍摄,从而得到被拍摄者的视频。
然后,将目标图像模板和被拍摄者的视频输入至预先训练的第二模型,得到被拍摄者的拍摄视频。其中,上述第二模型用于基于图像模板和被拍摄者的视频生成被拍摄者的拍摄视频。
在这里，上述第二模型可以是采用机器学习算法基于预先确定的训练样本集合训练得到的卷积神经网络。其中，上述训练样本集合中的训练样本包括输入数据和与输入数据相对应的期望输出数据。输入数据包括图像模板和被拍摄者的视频（例如未经用户调整的视频）。与输入数据相对应的期望输出数据包括被拍摄者的拍摄视频（例如用户对未经用户调整的视频进行调整后得到的视频）。
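为说明第一模型、第二模型“图像模板＋被拍摄者图像（或视频帧）→拍摄结果”的输入输出形式，下面给出一个极简的PyTorch代码草图；网络结构与张量尺寸均为示意性假设，并非本公开实际采用的模型：

import torch
import torch.nn as nn

class CaptureGenerator(nn.Module):
    # 示意性骨架：将图像模板与被拍摄者图像在通道维拼接后卷积，输出拍摄图像
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, template, image):
        x = torch.cat([template, image], dim=1)  # (N, 6, H, W)
        return self.net(x)

model = CaptureGenerator()
template = torch.rand(1, 3, 256, 256)  # 图像模板
image = torch.rand(1, 3, 256, 256)     # 被拍摄者的图像（视频可逐帧处理）
captured = model(template, image)      # 生成的拍摄图像，形状为 (1, 3, 256, 256)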
需要说明的是,当被拍摄者的数量为两个或两个以上时,上述执行主体可以针对每个被拍摄者对应的人脸图像中的人脸对象,获取一个目标图像模板,从而基于每个目标图像模板,对人脸图像中相应的人脸对象进行调整,以便生成包含多个被拍摄者的调整后的人脸对象的拍摄图像或拍摄视频。
在本实施例的一些可选的实现方式中,图像模板集合中的图像模板为人脸图像模板,目标图像模板为目标人脸图像模板;以及,该步骤203可以包括:基于目标人脸图像模板对被拍摄者的人脸图像进行调整,以生成包含被拍摄者的调整后的人脸图像的拍摄图像或拍摄视频。
作为第一个示例,上述执行主体可以采用如下步骤来生成包含被拍摄者的调整后的人脸图像的拍摄图像:
将目标人脸图像模板和被拍摄者的人脸图像输入至预先训练的第三模型,得到包含被拍摄者的调整后的人脸图像的拍摄图像。其中,上述第三模型用于基于人脸图像模板和被拍摄者的人脸图像生成包含被拍摄者的调整后的人脸图像的拍摄图像。
在这里,上述第三模型可以是采用机器学习算法基于预先确定的训练样本集合训练得到的卷积神经网络。其中,上述训练样本集合中的训练样本包括输入数据和与输入数据相对应的期望输出数据。输入数据包括人脸图像模板和被拍摄者的人脸图像(例如未经用户调整的人脸图像)。与输入数据相对应的期望输出数据包括包含被拍摄者的调整后的人脸图像的拍摄图像(例如用户对未经用户调整的人脸图像进行调整后得到的包含人脸图像的拍摄图像)。
作为第二个示例,上述执行主体也可以采用如下步骤来生成包含被拍摄者的调整后的人脸图像的拍摄视频:
首先,获取被拍摄者的视频,例如在执行该步骤的过程中对被拍摄者进行拍摄,从而得到被拍摄者的视频。其中,被拍摄者的视频中的视频帧包含人脸对象,即被拍摄者的视频包含人脸图像。
然后，将目标人脸图像模板和被拍摄者的视频输入至预先训练的第四模型，得到包含被拍摄者的调整后的人脸图像的拍摄视频。其中，上述第四模型用于基于人脸图像模板和被拍摄者的视频生成包含被拍摄者的调整后的人脸图像的拍摄视频。
在这里，上述第四模型可以是采用机器学习算法基于预先确定的训练样本集合训练得到的卷积神经网络。其中，上述训练样本集合中的训练样本包括输入数据和与输入数据相对应的期望输出数据。输入数据包括人脸图像模板和包含被拍摄者的人脸图像的视频（例如未经用户调整的包含人脸图像的视频）。与输入数据相对应的期望输出数据包括包含被拍摄者的调整后的人脸图像的拍摄视频（例如用户对未经用户调整的包含人脸图像的视频进行调整后得到的包含人脸图像的视频）。
可以理解,本可选的实现方式可以基于不同的人脸图像得到不同的人脸图像模板,从而生成包含被拍摄者的调整后的人脸图像的拍摄图像或拍摄视频,由此,提高了所生成的包含被拍摄者的调整后的人脸图像的拍摄图像或拍摄视频符合被拍摄者的自身需求的概率,丰富了包含被拍摄者的人脸图像的拍摄图像或拍摄视频的生成方式。
在本实施例的一些可选的实现方式中,对于上述基于目标人脸图像模板对被拍摄者的人脸图像进行调整,上述执行主体可以通过如下步骤来执行:基于目标人脸图像模板对被拍摄者的人脸图像进行以下至少一项操作:人脸关键点位置调整、贴图处理及图像区域劣化处理。
具体地,上述目标人脸图像模板可以通过对原始人脸图像进行人脸关键点位置调整、贴图处理和图像区域劣化处理中的至少一项操作而得到,由此,该目标人脸图像模板可以包含对原始人脸图像进行的以上至少一项操作的操作信息,进而上述执行主体可以对被拍摄者的人脸图像进行目标人脸图像模板包含的操作信息指示的至少一项操作。
可以理解，本可选的实现方式可以通过目标人脸图像模板对被拍摄者的人脸图像进行与目标人脸图像模板相应的调整，从而自动化地为被拍摄者的人脸图像进行人脸关键点位置调整、贴图处理和图像区域劣化处理中的至少一项处理，以生成包含被拍摄者的调整后的人脸图像的拍摄图像或拍摄视频，由此，提高了所生成的包含被拍摄者的调整后的人脸图像的拍摄图像或拍摄视频符合被拍摄者的自身需求的概率，进一步丰富了图像或视频的生成方式。
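下面给出按模板操作信息对人脸图像依次执行人脸关键点位置调整、贴图处理、图像区域劣化处理的一个示意性Python草图（基于Pillow库）；操作字段名与具体处理方式均为说明所作的假设，例如以整体仿射平移近似示意关键点调整、以高斯模糊示意区域劣化，并非本公开的实际实现：

from PIL import Image, ImageFilter

def apply_template_ops(face_img, ops):
    # ops 为目标人脸图像模板中记录的操作信息列表（结构为假设）
    for op in ops:
        if op["type"] == "keypoint_shift":
            # 人脸关键点位置调整：此处仅以整体平移近似示意
            face_img = face_img.transform(
                face_img.size, Image.Transform.AFFINE,
                (1, 0, op["dx"], 0, 1, op["dy"]))
        elif op["type"] == "sticker":
            # 贴图处理：将贴纸粘贴到指定位置
            sticker = Image.open(op["path"]).convert("RGBA")
            face_img.paste(sticker, op["position"], sticker)
        elif op["type"] == "degrade":
            # 图像区域劣化处理：对指定区域做高斯模糊
            box = op["box"]
            region = face_img.crop(box).filter(ImageFilter.GaussianBlur(op["radius"]))
            face_img.paste(region, box)
    return face_img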
继续参见图3A-图3C，图3A-图3C是根据本实施例的拍摄方法的一个应用场景的示意图。在图3A中，终端设备31对被拍摄者进行拍摄，以获取被拍摄者的人脸图像301，然后，请参考图3B，终端设备31从预先确定的图像模板集合303中，获取目标图像模板3031，其中，目标图像模板3031与对人脸图像301进行图像识别所得到的身份识别结果302相对应，最后，请参考图3C，终端设备31基于目标图像模板3031，生成被拍摄者304的拍摄图像305。
本公开的上述实施例提供的拍摄方法,通过对被拍摄者进行拍摄,以获取被拍摄者的人脸图像,然后,从预先确定的图像模板集合中,获取目标图像模板,其中,目标图像模板与对人脸图像进行图像识别所得到的身份识别结果相对应,最后,基于目标图像模板,生成被拍摄者的拍摄图像或拍摄视频,由此,可以基于不同的人脸图像得到不同的图像模板,从而生成不同的被拍摄者的拍摄图像或拍摄视频,由此,提高了所生成的拍摄图像或拍摄视频符合被拍摄者的自身需求的概率,丰富了图像或视频的生成方式。
在本实施例的一些可选的实现方式中,上述执行主体还可以执行如下步骤:
基于拍摄应用当前登录的用户账号是否符合预设允许调整条件,确定是否允许用户通过当前登录的用户账号对所包含被拍摄者的拍摄图像或拍摄视频的调整操作。
具体地,在拍摄应用当前登录的用户账号符合预设允许调整条件的情况下,上述执行主体可以允许用户通过当前登录的用户账号对所包含被拍摄者的拍摄图像或拍摄视频的调整操作;在拍摄应用当前登录的用户账号不符合预设允许调整条件的情况下,上述执行主体可以不允许用户通过当前登录的用户账号对所包含被拍摄者的拍摄图像或拍摄视频的调整操作。
可以理解,本可选的实现方式可以控制用户能否通过当前登录的用户账号对所包含被拍摄者的拍摄图像或拍摄视频的调整操作。
在本实施例的一些可选的实现方式中,上述执行主体还可以执行如下步骤:
基于拍摄应用当前登录的用户账号是否符合预设允许调整条件,确定是否允许用户通过当前登录的用户账号对目标图像模板的调整操作。
具体地，在拍摄应用当前登录的用户账号符合预设允许调整条件的情况下，上述执行主体可以允许用户通过当前登录的用户账号对目标图像模板的调整操作；在拍摄应用当前登录的用户账号不符合预设允许调整条件的情况下，上述执行主体可以不允许用户通过当前登录的用户账号对目标图像模板的调整操作。
可以理解,本可选的实现方式可以控制用户能否通过当前登录的用户账号对目标图像模板的调整操作。
在本实施例的一些可选的实现方式中,上述预设允许调整条件包括以下至少一项:
第一项,用户通过当前登录的用户账号对目标图像模板设置过允许调整信息。
在这里,拍摄应用的每个用户可以通过对目标图像模板设置允许调整信息,来允许对目标图像模板的调整操作,或者,对所包含被拍摄者的拍摄图像或拍摄视频的调整操作。当该用户想要禁止对目标图像模板的调整操作,或者,对所包含被拍摄者的拍摄图像或拍摄视频的调整操作时,其可以删除或取消允许调整信息。
第二项,当前登录的用户账号与目标图像模板关联的用户账号相同。
在这里,在当前登录的用户账号与目标图像模板关联的用户账号相同的情况下,可以允许对目标图像模板的调整操作,或者,对所包含被拍摄者的拍摄图像或拍摄视频的调整操作;在当前登录的用户账号与目标图像模板关联的用户账号不同的情况下,可以禁止对目标图像模板的调整操作,或者,对所包含被拍摄者的拍摄图像或拍摄视频的调整操作。
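上述两项预设允许调整条件的判断可以概括为如下示意性Python草图，其中allow_adjust、owner_account等字段名均为假设，并非本公开的实际实现：

def adjustment_allowed(current_account, template):
    # 满足“对目标图像模板设置过允许调整信息”或“当前登录账号与模板关联账号相同”任一项即允许
    return template.get("allow_adjust", False) or current_account == template.get("owner_account")

print(adjustment_allowed("user_a", {"owner_account": "user_b", "allow_adjust": True}))   # True
print(adjustment_allowed("user_a", {"owner_account": "user_b", "allow_adjust": False}))  # False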
进一步参考图4,其示出了根据本公开的拍摄方法的第二个实施例的流程400。该拍摄方法应用于终端设备,该拍摄方法包括以下步骤:
步骤401,通过工作于终端设备上的拍摄应用对被拍摄者进行拍摄,以获取被拍摄者的人脸图像。
在本实施例中,拍摄方法的执行主体(例如图1所示的终端设备)可以通过工作于终端设备(即上述执行主体)上的拍摄应用对被拍摄者进行拍摄,以获取被拍摄者的人脸图像。其中,在通过拍摄应用对被拍摄者进行拍摄的过程中,拍摄应用的登录账号为第一用户账号。
在这里，上述执行主体可以包括图像获取装置，例如摄像头。上述被拍摄者可以是位于上述图像获取装置的拍摄范围之内的一个或多个人。拍摄应用可以是具有调用上述图像获取装置的功能的、安装于上述执行主体上的应用。
步骤402,从预先确定的图像模板集合中,获取目标图像模板。
在本实施例中,上述执行主体可以从预先确定的图像模板集合中,获取目标图像模板。
其中,目标图像模板与对人脸图像进行图像识别所得到的身份识别结果相对应。
目标图像模板为第二用户账号的图像模板。例如,目标图像模板可以是通过第二用户账号上传至服务器的图像模板,或者,目标图像模板可以是通过第二用户账号在拍摄应用中最近一次使用、创建或者编辑的图像模板,或者,目标图像模板可以是在预设时间段内通过第二用户账号在拍摄应用中使用次数最多的图像模板。
在这里,上述第一用户账号与第二用户账号不同。
步骤403,基于目标图像模板,生成被拍摄者的拍摄图像或拍摄视频。
在本实施例中,上述执行主体可以基于目标图像模板,生成被拍摄者的拍摄图像或拍摄视频。
在本实施例中,步骤403的具体执行方式可以与图2对应实施例中的步骤203基本一致,这里不再赘述。
步骤404,基于拍摄应用当前登录的用户账号是否符合预设允许调整条件,确定是否允许用户通过当前登录的用户账号对所包含被拍摄者的拍摄图像或拍摄视频的调整操作。
在本实施例中，上述执行主体可以基于拍摄应用当前登录的用户账号是否符合预设允许调整条件，确定是否允许用户通过当前登录的用户账号对所包含被拍摄者的拍摄图像或拍摄视频的调整操作。其中，上述预设允许调整条件包括：第二用户账号对目标图像模板设置过允许调整信息。
具体地，当第二用户账号对目标图像模板设置过允许调整信息时，上述执行主体可以允许用户通过当前登录的用户账号（即第一用户账号）对所包含被拍摄者的拍摄图像或拍摄视频的调整操作；当第二用户账号对目标图像模板未设置过允许调整信息（或者第二用户账号对目标图像模板设置过禁止调整信息）时，上述执行主体可以不允许用户通过当前登录的用户账号（即第一用户账号）对所包含被拍摄者的拍摄图像或拍摄视频的调整操作。
需要说明的是,除上面所记载的内容外,本实施例还可以包括与图2对应的实施例相同或类似的特征、效果,在此不再赘述。
从图4中可以看出，本实施例中的拍摄方法的流程400可以在拍摄应用当前登录的用户账号与被拍摄者的用户账号不同的情况下，通过判断被拍摄者的用户账号是否对目标图像模板设置过允许调整信息，来相应地允许或禁止用户通过当前登录的用户账号（即第一用户账号）对所包含被拍摄者的拍摄图像或拍摄视频的调整操作，从而实现了用户针对其他用户对其拍摄图像或拍摄视频的调整操作的允许或禁止，在其他用户为该用户进行图像或视频拍摄时，有助于提高所生成的拍摄图像或拍摄视频符合该用户的需求的概率。
进一步参考图5,其示出了根据本公开的拍摄方法的第三个实施例的流程500。该拍摄方法应用于终端设备,该拍摄方法包括以下步骤:
步骤501,响应于用户通过第一用户账号执行的模板上传操作,通过拍摄应用获取目标图像模板。
在本实施例中,在用户通过第一用户账号执行模板上传操作的情况下,拍摄方法的执行主体(例如图1所示的终端设备)可以通过拍摄应用获取目标图像模板。
其中,上述执行主体可以包括图像获取装置,例如摄像头。拍摄应用可以在上述执行主体上运行。目标图像模板可以是包含人体对象的图像,也可以是包含人体轮廓或人体关键点(例如头部、手部、腿部、脚部等等)的位置信息。
在一些情况下,用户可以对自己进行拍摄得到的原始图像进行调整(例如美白、人脸关键点位置调整(例如瘦脸)、贴图处理及图像区域劣化处理中的至少一项),从而获得目标图像模板。
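对用户调整原始图像从而得到目标图像模板的过程，可以用如下示意性Python草图表达：把各项调整记录为操作信息并与用户账号一起构成模板；其中的字段名与结构均为说明所作的假设，并非本公开的实际实现：

def build_template_from_adjustments(account, ops):
    # 将用户对原始图像所做的调整（操作信息）记录为图像模板（结构为假设）
    return {
        "owner_account": account,   # 模板关联的用户账号（第一用户账号）
        "ops": ops,                 # 例如美白、人脸关键点位置调整、贴图处理、图像区域劣化处理
        "allow_adjust": False,      # 默认未设置允许调整信息
    }

template = build_template_from_adjustments(
    "first_user_account",
    [{"type": "whiten", "strength": 0.3}, {"type": "keypoint_shift", "dx": 2, "dy": 0}],
)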
步骤502,通过工作于终端设备上的拍摄应用对被拍摄者进行拍摄,以获取被拍摄者的人脸图像。
在本实施例中，上述执行主体可以通过工作于终端设备（即上述执行主体）上的拍摄应用对被拍摄者进行拍摄，以获取被拍摄者的人脸图像。其中，在通过拍摄应用对被拍摄者进行拍摄的过程中，拍摄应用的登录账号为第一用户账号。
在这里,上述被拍摄者可以是位于上述图像获取装置的拍摄范围之内的一个或多个人。拍摄应用可以是具有调用上述图像获取装置的功能的、安装于上述执行主体上的应用。
步骤503,对人脸图像进行识别,得到身份识别结果。
在本实施例中,上述执行主体可以对人脸图像进行识别,得到身份识别结果。其中,身份识别结果与第一用户账号相关联。
步骤504,从本地预先确定的图像模板集合中,获取与第一用户账号相对应的目标图像模板。
在本实施例中,上述执行主体可以从本地预先确定的图像模板集合中,获取与第一用户账号相对应的目标图像模板。其中,目标图像模板与对人脸图像进行图像识别所得到的身份识别结果相对应。
在这里,上述执行主体本地可以存储有图像模板集合。该图像模板集合中可以包含与第一用户账号相对应的目标图像模板,即步骤501中用户通过第一用户账号执行模板上传操作后,上述执行主体通过拍摄应用获取到的目标图像模板。
步骤505,基于目标图像模板,生成被拍摄者的拍摄图像或拍摄视频。
在本实施例中,上述执行主体可以基于目标图像模板,生成被拍摄者的拍摄图像或拍摄视频。
在本实施例中,步骤505的具体执行方式可以与图2对应实施例中的步骤203基本一致,这里不再赘述。
需要说明的是,除上面所记载的内容外,本实施例还可以包括与图2对应的实施例相同或类似的特征、效果,在此不再赘述。
从图5中可以看出,本实施例中的拍摄方法的流程500可以在上述执行主体本地对人脸图像进行图像识别,以及在本地预先确定的图像模板集合中确定目标图像模板,由此,无需将人脸图像发送至服务器,也无需从服务器获取目标图像模板,减少了对网络资源的占用。
下面参考图6，其示出了适于用来实现本公开的实施例的电子设备（例如图1中的终端设备）600的结构示意图。本公开的实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA（个人数字助理）、PAD（平板电脑）、PMP（便携式多媒体播放器）、车载终端（例如车载导航终端）等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图6示出的终端设备仅仅是一个示例，不应对本公开的实施例的功能和使用范围带来任何限制。
如图6所示,电子设备600可以包括处理装置(例如中央处理器、图形处理器等)601,其可以根据存储在只读存储器(ROM)602中的程序或者从存储装置608加载到随机访问存储器(RAM)603中的程序而执行各种适当的动作和处理。在RAM 603中,还存储有电子设备600操作所需的各种程序和数据。处理装置601、ROM 602以及RAM 603通过总线604彼此相连。输入/输出(I/O)接口605也连接至总线604。
通常,以下装置可以连接至I/O接口605:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置606;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置607;以及通信装置609。通信装置609可以允许电子设备600与其他设备进行无线或有线通信以交换数据。虽然图6示出了具有各种装置的电子设备600,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。图6中示出的每个方框可以代表一个装置,也可以根据需要代表多个装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置609从网络上被下载和安装,或者从存储装置608被安装,或者从ROM 602被安装。在该计算机程序被处理装置601执行时,执行本公开的实施例的方法中限定的上述功能。
需要说明的是，本公开的实施例所述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件，或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于：具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器（RAM）、只读存储器（ROM）、可擦式可编程只读存储器（EPROM或闪存）、光纤、便携式紧凑磁盘只读存储器（CD-ROM）、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开的实施例中，计算机可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开的实施例中，计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质，该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输，包括但不限于：电线、光缆、RF（射频）等等，或者上述的任意合适的组合。
上述计算机可读介质可以是上述终端设备中所包含的;也可以是单独存在,而未装配入该终端设备中。上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该终端设备执行时,使得该终端设备:对被拍摄者进行拍摄,以获取被拍摄者的人脸图像;从预先确定的图像模板集合中,获取目标图像模板,其中,目标图像模板与对人脸图像进行图像识别所得到的身份识别结果相对应;基于目标图像模板,生成被拍摄者的拍摄图像或拍摄视频。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的实施例的操作的计算机程序代码,所述程序设计语言包括面向对象的程序设计语言-诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言-诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)-连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的发明范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述发明构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。

Claims (15)

  1. 一种拍摄方法,应用于终端设备,其特征在于,所述方法包括:
    对被拍摄者进行拍摄,以获取所述被拍摄者的人脸图像;
    从预先确定的图像模板集合中,获取目标图像模板,其中,所述目标图像模板与对所述人脸图像进行图像识别所得到的身份识别结果相对应;
    基于所述目标图像模板,生成所述被拍摄者的拍摄图像或拍摄视频。
  2. 根据权利要求1所述的方法,其特征在于,所述对被拍摄者进行拍摄,包括:
    通过工作于所述终端设备上的拍摄应用对所述被拍摄者进行拍摄,其中,在通过所述拍摄应用对所述被拍摄者进行拍摄的过程中,所述拍摄应用的登录账号为第一用户账号。
  3. 根据权利要求2所述的方法,其特征在于,所述从预先确定的图像模板集合中,获取目标图像模板,包括:
    将所述人脸图像发送至服务器,以使所述服务器:对所述人脸图像进行图像识别得到所述身份识别结果,以及在预先确定的图像模板集合中获取与所述身份识别结果相对应的目标图像模板,其中,所述服务器为所述拍摄应用提供支持;
    从所述服务器接收所述目标图像模板。
  4. 根据权利要求3所述的方法,其特征在于,所述目标图像模板为第二用户账号的图像模板,其中,所述第二用户账号与所述身份识别结果相关联,所述第一用户账号与所述第二用户账号相同或不同。
  5. 根据权利要求4所述的方法,其特征在于,所述目标图像模板为通过以下任一项得到的图像模板:
    通过所述第二用户账号上传至所述服务器的图像模板;
    通过所述第二用户账号在所述拍摄应用中最近一次使用、创建或者编辑的图像模板；
    在预设时间段内通过所述第二用户账号在所述拍摄应用中使用次数最多的图像模板。
  6. 根据权利要求2所述的方法,其特征在于,所述从预先确定的图像模板集合中,获取目标图像模板,包括:
    对所述人脸图像进行识别,得到所述身份识别结果,其中,所述身份识别结果与所述第一用户账号相关联;
    从本地预先确定的图像模板集合中,获取与所述第一用户账号相对应的目标图像模板。
  7. 根据权利要求6所述的方法,其特征在于,所述目标图像模板为通过以下任一项得到的图像模板:
    通过所述第一用户账号设置的图像模板;
    通过所述第一用户账号在所述拍摄应用中最近一次使用、创建或者编辑的图像模板;
    在预设时间段内通过所述第一用户账号在所述拍摄应用中使用次数最多的图像模板。
  8. 根据权利要求6或7所述的方法,其特征在于,所述方法还包括:
    响应于用户通过所述第一用户账号执行的模板上传操作,通过所述拍摄应用向所述服务器上传所述目标图像模板。
  9. 根据权利要求1至8之一所述的方法,其特征在于,所述方法还包括:
    基于所述拍摄应用当前登录的用户账号是否符合预设允许调整条件,确定是否允许用户通过所述当前登录的用户账号对所包含所述被拍摄者的拍摄图像或拍摄视频的调整操作。
  10. 根据权利要求1至8之一所述的方法,其特征在于,所述方法还包括:
    基于所述拍摄应用当前登录的用户账号是否符合预设允许调整条件,确定是否允许用户通过所述当前登录的用户账号对所述目标图像模板的调整操作。
  11. 根据权利要求9或10所述的方法,其特征在于,所述预设允许调整条件包括以下至少一项:
    用户通过所述当前登录的用户账号对所述目标图像模板设置过允许调整信息;
    所述当前登录的用户账号与所述目标图像模板关联的用户账号相同。
  12. 根据权利要求1至11之一所述的方法,其特征在于,所述图像模板集合中的图像模板为人脸图像模板,所述目标图像模板为目标人脸图像模板;以及
    所述基于所述目标图像模板,生成所述被拍摄者的拍摄图像或拍摄视频,包括:
    基于所述目标人脸图像模板对所述被拍摄者的人脸图像进行调整,以生成包含所述被拍摄者的调整后的人脸图像的拍摄图像或拍摄视频。
  13. 根据权利要求12所述的方法,其特征在于,所述基于所述目标人脸图像模板对所述被拍摄者的人脸图像进行调整,包括:
    基于所述目标人脸图像模板对所述被拍摄者的人脸图像进行以下至少一项操作:人脸关键点位置调整、贴图处理及图像区域劣化处理。
  14. 一种终端设备,包括:
    一个或多个处理器;
    存储装置,其上存储有一个或多个程序,
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1至13中任一所述的方法。
  15. 一种计算机可读介质,其上存储有计算机程序,其中,所述程序被处理器执行时实现如权利要求1至13中任一所述的方法。
PCT/CN2020/116440 2019-09-26 2020-09-21 拍摄方法和装置 WO2021057644A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910918668.7 2019-09-26
CN201910918668.7A CN110602405A (zh) 2019-09-26 2019-09-26 拍摄方法和装置

Publications (1)

Publication Number Publication Date
WO2021057644A1 true WO2021057644A1 (zh) 2021-04-01

Family

ID=68863845

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116440 WO2021057644A1 (zh) 2019-09-26 2020-09-21 拍摄方法和装置

Country Status (2)

Country Link
CN (1) CN110602405A (zh)
WO (1) WO2021057644A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110602405A (zh) * 2019-09-26 2019-12-20 上海盛付通电子支付服务有限公司 拍摄方法和装置
CN111314620B (zh) * 2020-03-26 2022-03-04 上海盛付通电子支付服务有限公司 拍摄方法和设备

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103297697A (zh) * 2013-05-30 2013-09-11 北京小米科技有限责任公司 在拍照过程中显示模板照片的方法及装置
CN104715236A (zh) * 2015-03-06 2015-06-17 广东欧珀移动通信有限公司 一种美颜拍照方法及装置
CN104853092A (zh) * 2015-04-30 2015-08-19 广东欧珀移动通信有限公司 一种拍照方法及装置
CN105447047A (zh) * 2014-09-02 2016-03-30 阿里巴巴集团控股有限公司 建立拍照模板数据库、提供拍照推荐信息的方法及装置
CN105574006A (zh) * 2014-10-10 2016-05-11 阿里巴巴集团控股有限公司 建立拍照模板数据库、提供拍照推荐信息的方法及装置
CN106408510A (zh) * 2016-09-08 2017-02-15 厦门美图之家科技有限公司 一种获取人脸图像的美颜蒙版的方法及系统
CN107566728A (zh) * 2017-09-25 2018-01-09 维沃移动通信有限公司 一种拍摄方法、移动终端及计算机可读存储介质
US20180060690A1 (en) * 2015-03-06 2018-03-01 Matthew Lee Method and device for capturing images using image templates
CN108282611A (zh) * 2018-01-11 2018-07-13 维沃移动通信有限公司 一种图像处理方法及移动终端
CN110602405A (zh) * 2019-09-26 2019-12-20 上海盛付通电子支付服务有限公司 拍摄方法和装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009059073A (ja) * 2007-08-30 2009-03-19 Toshiba Corp 撮影装置、撮影方法、人物認識装置および人物認識方法
JP2012244226A (ja) * 2011-05-16 2012-12-10 Nec Casio Mobile Communications Ltd 撮像装置、画像合成方法、及びプログラム
CN105530435A (zh) * 2016-02-01 2016-04-27 深圳市金立通信设备有限公司 一种拍摄方法及移动终端
CN107404381A (zh) * 2016-05-19 2017-11-28 阿里巴巴集团控股有限公司 一种身份认证方法和装置
CN107018333A (zh) * 2017-05-27 2017-08-04 北京小米移动软件有限公司 拍摄模板推荐方法、装置及拍摄设备
CN109104566B (zh) * 2018-06-28 2020-10-16 维沃移动通信有限公司 一种图像显示方法及终端设备

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103297697A (zh) * 2013-05-30 2013-09-11 北京小米科技有限责任公司 在拍照过程中显示模板照片的方法及装置
CN105447047A (zh) * 2014-09-02 2016-03-30 阿里巴巴集团控股有限公司 建立拍照模板数据库、提供拍照推荐信息的方法及装置
CN105574006A (zh) * 2014-10-10 2016-05-11 阿里巴巴集团控股有限公司 建立拍照模板数据库、提供拍照推荐信息的方法及装置
CN104715236A (zh) * 2015-03-06 2015-06-17 广东欧珀移动通信有限公司 一种美颜拍照方法及装置
US20180060690A1 (en) * 2015-03-06 2018-03-01 Matthew Lee Method and device for capturing images using image templates
CN104853092A (zh) * 2015-04-30 2015-08-19 广东欧珀移动通信有限公司 一种拍照方法及装置
CN106408510A (zh) * 2016-09-08 2017-02-15 厦门美图之家科技有限公司 一种获取人脸图像的美颜蒙版的方法及系统
CN107566728A (zh) * 2017-09-25 2018-01-09 维沃移动通信有限公司 一种拍摄方法、移动终端及计算机可读存储介质
CN108282611A (zh) * 2018-01-11 2018-07-13 维沃移动通信有限公司 一种图像处理方法及移动终端
CN110602405A (zh) * 2019-09-26 2019-12-20 上海盛付通电子支付服务有限公司 拍摄方法和装置

Also Published As

Publication number Publication date
CN110602405A (zh) 2019-12-20

Similar Documents

Publication Publication Date Title
US10938725B2 (en) Load balancing multimedia conferencing system, device, and methods
WO2019242222A1 (zh) 用于生成信息的方法和装置
CN110348419B (zh) 用于拍照的方法和装置
CN111476871B (zh) 用于生成视频的方法和装置
US10541000B1 (en) User input-based video summarization
JP2022523606A (ja) 動画解析のためのゲーティングモデル
WO2021190625A1 (zh) 拍摄方法和设备
US10015385B2 (en) Enhancing video conferences
CN111835531B (zh) 会话处理方法、装置、计算机设备及存储介质
US11196962B2 (en) Method and a device for a video call based on a virtual image
JP2022525272A (ja) 選択的な動きの描画を伴う画像表示
JP6946566B2 (ja) 静的な映像認識
WO2021057644A1 (zh) 拍摄方法和装置
WO2019227429A1 (zh) 多媒体内容生成方法、装置和设备/终端/服务器
CN112805722A (zh) 减少面部识别中的误报的方法和装置
EP3744088A1 (en) Techniques to capture and edit dynamic depth images
CN114630057B (zh) 确定特效视频的方法、装置、电子设备及存储介质
CN110570383A (zh) 一种图像处理方法、装置、电子设备及存储介质
CN109949213B (zh) 用于生成图像的方法和装置
US20200322648A1 (en) Systems and methods of facilitating live streaming of content on multiple social media platforms
CN114666622A (zh) 特效视频确定方法、装置、电子设备及存储介质
CN115002359A (zh) 视频处理方法、装置、电子设备及存储介质
CN114490513A (zh) 一种文件处理方法及装置、电子设备和存储介质
CN110545386B (zh) 用于拍摄图像的方法和设备
CN111447501A (zh) 一种照片管理方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20868012

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20868012

Country of ref document: EP

Kind code of ref document: A1