CN111385514A - Portrait processing method and device and terminal

Info

Publication number
CN111385514A
CN111385514A
Authority
CN
China
Prior art keywords
image
user
terminal
processor
portrait processing
Prior art date
Legal status
Granted
Application number
CN202010100149.2A
Other languages
Chinese (zh)
Other versions
CN111385514B (en)
Inventor
田春长
汪亮
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010100149.2A priority Critical patent/CN111385514B/en
Publication of CN111385514A publication Critical patent/CN111385514A/en
Priority to PCT/CN2020/122767 priority patent/WO2021164289A1/en
Application granted granted Critical
Publication of CN111385514B publication Critical patent/CN111385514B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142 Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The application provides a portrait processing method, apparatus, and terminal that can restore the facial features of a user occluded by a virtual reality (VR) device, thereby improving the user experience at the receiving end. The method includes the following steps: a terminal acquires a first image of a user, where the first image includes the face of the user, a partial region of the face is occluded by a VR device, and the partial region includes the regions where the two eyes of the user are located; the terminal inputs the first image into a portrait processing model to obtain a second image, where the second image includes the complete face of the user; and the terminal sends the second image to a receiving end.

Description

Portrait processing method and device and terminal
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a portrait processing method and apparatus in the field of terminal technologies, and a terminal.
Background
With the continuous development of communication technology, the availability of high-bandwidth services anytime and anywhere provides an ever broader platform for applications of virtual reality (VR) technology.
Among the many VR applications, the immersive video call is a very important service. In an immersive video call, both parties wear VR devices, and a camera is arranged beside each party. During the call, the camera captures a live view of the user and transmits it to the other party's VR device for display, thereby simulating a face-to-face conversation with the other party.
However, when the user wears the VR device during a video call, the VR device blocks a partial area of the user's face; for example, VR glasses cover the user's eyes. As a result, when the live view of the user is captured by the camera, the other party cannot see facial features such as eye movement and expression, and the user experience is poor.
Disclosure of Invention
The application provides a portrait processing method, apparatus, and terminal that can restore the facial features of a user occluded by a VR device, thereby improving the user experience at the receiving end.
In a first aspect, the present application provides a portrait processing method, including: acquiring a first image of a user, wherein the first image comprises a face of the user, and a partial region of the face of the user is occluded by a Virtual Reality (VR) device, and the partial region comprises a region where two eyes of the user are located; inputting the first image into a portrait processing model to obtain a second image, wherein the second image includes a complete face of the user, wherein the portrait processing model is obtained by training a sample training data set, the sample training data set includes a plurality of original images and a plurality of restored images corresponding to the plurality of original images, wherein the plurality of original images are acquired for at least one sample user, a first original image in the plurality of original images includes a first sample user, and the partial region of the face of the first sample user is occluded by a VR device, a first restored image in the plurality of restored images includes a complete face of the first sample user, and the at least one sample user includes the first sample user; and sending the second image to a receiving end.
By adopting the portrait processing method provided in this embodiment of the application, the facial features of the user that are occluded by the VR device can be restored, thereby improving the user experience at the receiving end.
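For orientation only, the following Python sketch outlines the three steps of the first-aspect method (acquire the first image, restore it with the portrait processing model, send the second image). The class and function names are hypothetical placeholders introduced for illustration; the model is represented by a stub rather than a real network.

```python
# Minimal sketch of the first-aspect flow: acquire -> restore -> send.
# `PortraitModel`, `capture_first_image`, and `send_to_receiver` are
# hypothetical placeholders, not names from the patent.
import numpy as np

class PortraitModel:
    """Stand-in for the trained portrait processing model."""
    def restore(self, first_image: np.ndarray) -> np.ndarray:
        # A real model would reconstruct the region occluded by the VR device.
        return first_image.copy()

def capture_first_image() -> np.ndarray:
    # Stand-in for a frame from the first camera device (face partly occluded).
    return np.zeros((256, 256, 3), dtype=np.uint8)

def send_to_receiver(image: np.ndarray) -> None:
    # Stand-in for transmission to the receiving end.
    pass

model = PortraitModel()
first_image = capture_first_image()
second_image = model.restore(first_image)   # second image: complete face
send_to_receiver(second_image)
```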
Optionally, the portrait processing apparatus may further acquire the first image in other manners, which is not limited in this embodiment of the application.
In a first possible implementation manner, the portrait processing apparatus may receive a third image captured by the first imaging apparatus, where the third image includes the user, and the portrait processing apparatus intercepts the first image from the third image.
It should be noted that the first camera device may be the camera device 120 shown in Fig. 1. The first camera device and the portrait processing device may be independent devices, or they may be integrated in the same device, which is not limited in this embodiment of the present application.
It should be noted that the third image may include at least the user. That is to say, the third image may further include an environment or a scene where the user is located, which is not limited in this embodiment of the application.
It should be noted that, in the embodiment of the present application, only a partial region of the face of the user is taken as an example of a region where both eyes are located, and it should be understood that the partial region may also include other regions of the face of the user, which is not limited in the embodiment of the present application.
It is further noted that the first image may include at least the face of the user, wherein a partial region of the face of the user is occluded by the VR device.
Optionally, the first image may further include at least one of other parts of the user and an environment where the user is located, or the first image may further include other users except the user, where a portrait processing method for the other users is similar to a portrait processing method for the user, and is not described herein again to avoid repetition.
It should be further noted that, when the scene image further includes other users and partial regions of faces of the other users are also blocked by VR devices worn by the other users, the portrait processing device may perform portrait processing on the other users by using a method similar to the portrait processing method of the user, and details are not repeated here to avoid repetition.
It should be further noted that the first image described in this embodiment of the present application may be a single image, or may be a frame image in a video stream, which is not limited in this embodiment of the present application.
Optionally, the first image may be an original photographed image, or an image with higher definition obtained after basic image quality processing.
Optionally, the basic image quality processing in the embodiment of the present application may include at least one image processing step for improving image quality, for example: denoising, sharpening, brightness improvement and the like, which are not limited in the embodiments of the present application.
Optionally, the portrait processing apparatus may obtain the portrait processing model in a plurality of ways, which is not limited in this embodiment.
In a first possible implementation, the portrait processing apparatus may pre-configure the portrait processing model at the time of factory shipment.
In a second possible implementation manner, the human image processing apparatus may receive the human image processing model from other devices, that is, the human image processing model is trained by other devices.
For example, the portrait processing model may be stored at a cloud server, and the portrait processing device may request the portrait processing model from the cloud server over a network.
In a third possible implementation, the portrait processing apparatus may train the portrait processing model by itself.
Optionally, taking an example that the area where the two eyes of the user are located is blocked by the VR device, the human image processing device may obtain a sample training data set, train and learn the sample training data set to obtain the human image processing model. Wherein the sample training data set comprises a plurality of original images and a plurality of restored images corresponding to the plurality of original images, wherein the plurality of original images are acquired for at least one sample user, a first original image in the plurality of original images comprises a first sample user, and the partial region of the face of the first sample user is occluded by a VR device, a first restored image in the plurality of restored images comprises a complete face of the first sample user, and the at least one sample user comprises the first sample user.
Optionally, the portrait processing apparatus may train and learn the sample training data set through a plurality of methods to obtain the portrait processing model, which is not limited in this embodiment of the application.
In a possible implementation manner, the human image processing apparatus may train and learn the sample training data set through a neural network model to obtain the human image processing model.
For example, the neural network model may be a generative adversarial network (GAN) model, which may include a conditional GAN (cGAN), a deep convolutional GAN (DCGAN), and the like.
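As a concrete illustration of the sample training data set described above, the following sketch enumerates (original, restored) image pairs, where the originals show sample users with the eye region occluded by a VR device and the restored images show the corresponding complete faces. The directory layout and file naming are assumptions made for illustration; the patent does not specify how the pairs are stored.

```python
# Sketch of one possible organization of the paired training data;
# the "original/" and "restored/" directory names are assumptions.
from pathlib import Path

def load_training_pairs(root: str):
    """Yield (original_path, restored_path) pairs with matching file names."""
    original_dir = Path(root) / "original"   # faces occluded by a VR device
    restored_dir = Path(root) / "restored"   # corresponding complete faces
    for original_path in sorted(original_dir.glob("*.png")):
        restored_path = restored_dir / original_path.name
        if restored_path.exists():
            yield original_path, restored_path
```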
By adopting the portrait processing method provided in this embodiment of the application, the facial features of the user that are occluded by the VR device in the first image can be restored through the portrait processing model, thereby improving the user experience at the receiving end.
Optionally, the portrait processing apparatus may input the first image and the feature reference information into the portrait processing model to obtain the second image. Wherein the feature reference information includes at least one of an eye feature parameter and a head feature parameter, the eye feature parameter includes at least one of position information, size information, and perspective information, the position information is used to indicate a position of each of two eyes of the user, the size information is used to indicate a size of each eye, the perspective information is used to indicate an eyeball gaze angle of each eye, and the head feature parameter includes a three-axis attitude angle and an acceleration of the head of the user.
By adopting the portrait processing method provided in this embodiment of the application, the first image and the feature reference information are jointly input into the portrait processing model; providing more portrait-related features can improve the restoration fidelity and authenticity of the occluded area.
Optionally, the portrait processing apparatus may obtain the feature reference information in various ways, which is not limited in this embodiment.
(1) Method for acquiring eye characteristic parameters
In a first possible implementation manner, the human image processing apparatus may receive a fourth image captured by a second camera apparatus, where the second camera apparatus is a camera apparatus built in the VR apparatus, and the fourth image includes two eyes of the user; the human image processing device may extract the eye feature parameters from the fourth image.
The second imaging device may be an imaging device built in the VR device, for example: an internal Infrared (IR) camera.
Optionally, in the embodiment of the present application, only the second image capturing device is taken as an example of a built-in image capturing device on the VR device, and the second image capturing device may also be an image capturing device that is disposed at another position and is capable of capturing a real image of a facial area of the user that is blocked by the VR device, which is not limited in the embodiment of the present application.
In a second possible implementation manner, the human image processing device may extract the eye feature parameters from a plurality of locally stored images of the user, wherein the plurality of images include the eyes of the user.
It should be noted that the plurality of images may be photographs of the user's face taken by the user in daily life, for example, self-portraits.
In a third possible implementation manner, the portrait processing apparatus may establish a facial feature database of the user through photos taken by the user daily, where the facial feature database includes feature parameters of each organ of the face, such as eye feature parameters, and the portrait processing apparatus may retrieve the eye feature parameters from the facial feature database.
In a fourth possible implementation, the human image processing device may receive the eye feature parameters from other measuring devices, such as an eye tracking sensor.
Optionally, the plurality of images and the facial feature database may be further stored in a cloud, and the human image processing device may request the cloud through a network, which is not limited in this embodiment of the present application.
It should be noted that, when the content that is blocked in the first image is restored through the portrait processing model, since the VR device mainly blocks the eye region of the user, the restoration degree and the authenticity of the blocked content can be improved in combination with the eye feature parameters of the user.
(2) Method for acquiring head characteristic parameters
In one possible implementation, the portrait processing device may receive the head feature parameters measured by the inertial measurement device.
It should be noted that the inertial measurement device may be an inertial measurement unit (IMU), which is a device for measuring the three-axis attitude angle (or angular velocity) and acceleration of an object. Generally, an IMU includes three single-axis accelerometers and three single-axis gyroscopes. The accelerometers detect the acceleration of the object along the three independent axes of the carrier coordinate system, and the gyroscopes detect the angular velocity of the carrier relative to the navigation coordinate system; from the angular velocity and acceleration measured in three-dimensional space, the attitude of the object can be solved.
Alternatively, the inertial measurement unit may be a separate measurement device fixed on the head of the user, or the inertial measurement unit may be integrated in the VR device, which is not limited in this embodiment.
It should be noted that, when the content that is blocked in the first image is restored through the portrait processing model, since the head of the user may be in a moving state, that is, has a moving speed and a rotation angle, in combination with the head characteristic parameters of the user, the degree of restoration and the authenticity of the blocked content can be further improved.
Optionally, the feature reference information may further include other feature parameters that can describe the portrait features of the user, such as a nose feature parameter, a face shape feature parameter, or a hairstyle feature parameter, which is not limited in this embodiment of the present application.
Optionally, the portrait processing apparatus may send the second image to the receiving end, or the portrait processing apparatus may send a target image to the receiving end, where the target image is an image obtained by synthesizing or splicing the second image, or the portrait processing apparatus may send a video stream to the receiving end, where the video stream includes the second image or the target image, which is not limited in this embodiment.
Accordingly, the portrait processing apparatus may send the second image, or a target image or a video stream containing the second image to the receiving end.
Optionally, the portrait processing apparatus may generate the target image in various ways, which is not limited in this embodiment.
In one possible implementation manner, the sending, by the portrait processing apparatus, the second image to a receiving end includes: the portrait processing device splices the second image and a fifth image to obtain a target image, where the fifth image is the image that remains after the first image is cut out of the third image; and the portrait processing device sends the target image to the receiving end.
In one possible implementation manner, the sending, by the portrait processing apparatus, the second image to a receiving end includes: the portrait processing device synthesizes the second image and the third image to obtain a target image, where the second image covers the first image in the third image; and the portrait processing device sends the target image to the receiving end.
Optionally, the portrait processing apparatus may generate the video stream in various ways, which is not limited in this embodiment.
In a possible implementation manner, the portrait processing apparatus may perform video coding on the second image to obtain a video image; and obtaining the video stream according to the video image, wherein the video stream comprises the video image.
It should be noted that, in the restored second image, an eye region of the user, which is blocked by the VR device, is restored, and in an actual situation, the user wears the VR device on the face, so that there is a certain difference between the second image and the actual situation.
Therefore, the portrait processing device may superimpose an eye mask layer on the eye region of the user in the second image, where the eye mask layer is subjected to perspective processing with a first transparency to simulate that the user wears a VR device, so that the reality of restoration can be improved.
It should be further noted that the value of the first transparency needs to be such that a viewer can see that the user is wearing the VR device while the restored eye region underneath remains visible; the specific value of the first transparency is not limited in this embodiment of the present application.
In a second aspect, an embodiment of the present application provides a terminal, including: a processor and a transceiver coupled to the processor,
the processor is configured to control the transceiver to acquire a first image of a user, the first image including a face of the user, and a partial region of the face of the user being occluded by a Virtual Reality (VR) device; inputting the first image into a portrait processing model to obtain a second image, wherein the second image includes a complete face of the user, wherein the portrait processing model is obtained by training a sample training data set, the sample training data set includes a plurality of original images and a plurality of restored images corresponding to the plurality of original images, wherein the plurality of original images are acquired for at least one sample user, a first original image in the plurality of original images includes a first sample user, and the partial region of the face of the first sample user is occluded by a VR device, a first restored image in the plurality of restored images includes a complete face of the first sample user, and the at least one sample user includes the first sample user; and controlling the transceiver to transmit the second image to a receiving end.
In one possible implementation, the processor is specifically configured to: inputting the first image and feature reference information into the portrait processing model to obtain the second image, wherein the feature reference information includes at least one of an eye feature parameter and a head feature parameter, the eye feature parameter includes at least one of position information, size information and angle of view information, the position information is used for indicating a position of each of two eyes of the user, the size information is used for indicating a size of each eye, the angle of view information is used for indicating an eyeball gaze angle of each eye, and the head feature parameter includes a three-axis attitude angle and an acceleration of the head of the user.
In a possible implementation manner, the feature reference information includes the eye feature parameter, and the processor is further configured to control the transceiver to receive a third image of the user captured by a first camera before the first image and the feature reference information are input into the portrait processing model to obtain the second image, where the third image includes the two eyes of the user and the first camera is a built-in camera of the VR device; the processor is configured to extract the eye feature parameters from the third image.
In a possible implementation manner, the feature reference information includes the head feature parameter, and the processor is further configured to control the transceiver to receive the head feature parameter measured by the inertial measurement unit before the first image and the feature reference information are input into the portrait processing model to obtain the second image.
In a possible implementation manner, the processor is specifically configured to control the transceiver to receive the first image captured by the second camera.
In one possible implementation, the processor is specifically configured to: controlling the transceiver to receive a third image shot by a second camera, wherein the third image comprises the user; the first image is cut out of the third image.
In one possible implementation, the processor is further configured to: splicing the second image and a fourth image to obtain a target image, wherein the fourth image is an image obtained by intercepting the first image from the third image; and controlling the transceiver to transmit the target image to the receiving end.
In one possible implementation, the processor is further configured to: synthesizing the second image and the third image to obtain a target image, wherein the second image covers the upper layer of the first image in the third image; and controlling the transceiver to transmit the target image to the receiving end.
In a possible implementation manner, the processor is further configured to superimpose eye-mask layers on regions where the two eyes of the user are located in the second image, where the eye-mask layers are subjected to perspective processing with a first transparency.
In one possible implementation, the portrait processing model is obtained by training the sample training data set with a generative adversarial network (GAN) model.
In a third aspect, the present application further provides a portrait processing apparatus, configured to perform the method in the first aspect or any possible implementation manner of the first aspect. In particular, the portrait processing apparatus may comprise means for performing the method of the first aspect described above or any possible implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a chip apparatus, including: a communication interface and a processor, the communication interface and the processor being in communication with each other via an internal connection path, the processor being configured to implement the method of the first aspect or any possible implementation thereof.
In a fifth aspect, this application further provides a computer-readable storage medium for storing a computer program, where the computer program includes instructions for implementing the method in the first aspect or any possible implementation manner thereof.
In a sixth aspect, the present application further provides a computer program product, where the computer program product includes instructions that, when executed on a computer, cause the computer to implement the method in the first aspect or any possible implementation manner thereof.
Drawings
FIG. 1 provides a schematic diagram of an application scenario 100 of an embodiment of the present application;
FIG. 2 provides a schematic diagram of another application scenario of an embodiment of the present application;
FIG. 3 provides a schematic flow chart diagram of a portrait processing method 200 of an embodiment of the present application;
FIG. 4 provides a schematic block diagram of a VR device 110 of an embodiment of the present application;
FIG. 5 provides a schematic diagram of a display interface of a receiving end according to an embodiment of the present application;
FIG. 6 provides a schematic block diagram of a portrait processing apparatus 300 according to an embodiment of the present application;
FIG. 7 provides a schematic block diagram of a handset 400 of an embodiment of the application;
FIG. 8 provides a schematic block diagram of a portrait processing system 500 of an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Fig. 1 shows a schematic diagram of an application scenario provided in an embodiment of the present application. As shown in fig. 1, the user wears a VR device 110.
The camera 120 is configured to capture a scene image of a user, and send the scene image to the portrait processing apparatus 130, where the scene image includes a face of the user, and a partial region of the face of the user is blocked by the VR apparatus 110.
The portrait processing apparatus 130 is configured to restore, by using a portrait processing method, the area of the scene image in which the user is occluded by the VR device, to obtain a target image, where the target image includes the complete face of the user.
The portrait processing apparatus 130 is further configured to send the target image to the receiving end 140 for display.
It should be noted that the scene image at least includes the face of the user, and fig. 1 only schematically illustrates a case where the scene image includes the face of the user, but the embodiment of the present application is not limited thereto.
Optionally, the scene image may further include at least one of other parts of the user and an environment in which the user is located, or the scene image may further include other users besides the user, which is not limited in this embodiment of the application. For example: fig. 2 shows a case where the scene image includes the whole body of the user.
It should be further noted that, when the scene image further includes other users and partial areas of faces of the other users are also blocked by VR devices worn by the other users, the portrait processing device 130 may perform portrait processing on the other users by using a method similar to the portrait processing method of the user, and details are not repeated here to avoid repetition.
It should be further noted that the scene image described in this embodiment of the present application may be a single image, or may be a frame image in a video stream, which is not limited in this embodiment of the present application.
It should be further noted that the embodiment of the present application is applicable to any situation that the scene image of the user needs to be captured.
For example: when the user makes a video call with an opposite-side user, a scene image of the user is displayed through a display device (receiving end) of the opposite-side user.
It should be noted that the receiving end in the embodiment of the present application may be any display device capable of receiving and displaying the target image sent by the terminal, and the embodiment of the present application does not limit this.
Another example is: and displaying the scene image of the user through a display device (receiving end) under the condition of monitoring the user.
Optionally, the receiving end may also be a terminal with a display screen, such as a VR device, a mobile phone, and a smart television, which is not limited in this application.
Alternatively, the image capturing device 120 and the portrait processing device 130 in fig. 1 may be independent devices, or the image capturing device 120 and the portrait processing device 130 may be integrated in a same apparatus, which is not limited in this embodiment of the present application.
It should be noted that, regardless of whether the image capturing device 120 and the portrait processing device 130 are integrated in one apparatus or are independent devices, they are referred to as the image capturing device and the portrait processing device in the following description.
It should be noted that, the VR device described in this embodiment of the application may be a wearable device capable of implementing a virtual reality function.
It should be noted that a wearable device, also called a wearable smart device, is a general term for devices that apply wearable technology to the intelligent design of everyday wear, such as glasses, helmets, and masks. A wearable device is worn directly on the body, or is a portable device integrated into the user's clothing or accessories. A wearable device is not only a piece of hardware; it also implements powerful functions through software support, data exchange, and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-sized devices that can implement all or some of their functions without relying on a smartphone, such as smart helmets or smart glasses, as well as devices that focus on a specific application function and need to be used together with another device such as a smartphone, for example various smart bands, smart jewelry, or patches capable of processing portraits and/or displaying images; this is not limited in the embodiments of the present application.
Optionally, the portrait processing apparatus 130 and the receiving end 140 may communicate with each other in a wired manner or a wireless manner, which is not limited in this embodiment.
The wired mode may be a mode in which communication is realized by data line connection or internal bus connection.
It should be noted that the above-mentioned wireless manner may be that communication is realized through a communication network, and the communication network may be a local area network, a wide area network switched through a relay device, or a combination of a local area network and a wide area network. When the communication network is a local area network, the communication network may be, for example, a Wi-Fi hotspot network, a Wi-Fi P2P network, a Bluetooth network, a ZigBee network, or a near field communication (NFC) network. When the communication network is a wide area network, the communication network may be, for example, a third-generation mobile communication technology (3G) network, a fourth-generation mobile communication technology (4G) network, a fifth-generation mobile communication technology (5G) network, a future-evolved public land mobile network (PLMN), or the Internet, which is not limited in this embodiment of the present application.
The application scenarios applicable to the present application are introduced above, and the portrait processing method provided in the embodiment of the present application will be described in detail below.
Fig. 3 shows a schematic flowchart of a portrait processing method 200 provided in an embodiment of the present application, where the method 200 may be applied to the application scenario described in fig. 1 and executed by the portrait processing apparatus 130.
S210, a first image of a user is shot by a first camera device, the first image comprises a face of the user, a partial region of the face of the user is shielded by a Virtual Reality (VR) device, and the partial region comprises regions where two eyes of the user are located;
S220, the first camera device sends the first image of the user to the portrait processing device; correspondingly, the portrait processing device receives the first image sent by the first camera device.
Optionally, in S220, the portrait processing apparatus may further acquire the first image in other manners, which is not limited in this embodiment of the application.
In a first possible implementation manner, the portrait processing apparatus may receive a third image captured by the first imaging apparatus, where the third image includes the user, and the portrait processing apparatus intercepts the first image from the third image.
It should be noted that the first camera device may be the camera device 120 shown in Fig. 1. The first camera device and the portrait processing device may be independent devices; for example, the portrait processing device is a terminal and the first camera device is a camera. Alternatively, the first camera device and the portrait processing device may be integrated in the same device, such as a terminal, which is not limited in this embodiment of the application.
It should be noted that the third image may include at least the user. That is to say, the third image may further include an environment or a scene where the user is located, which is not limited in this embodiment of the application.
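For illustration, the following sketch cuts the first image (the head region, VR device included) out of the third image (the full scene). The bounding box is assumed to come from some head or person detector that still works when the eyes are covered; the detector itself is an assumption and not part of the disclosure.

```python
# Sketch of cropping the first image from the third image, assuming the
# head bounding box (x, y, w, h) is already known from a detector.
import numpy as np

def crop_first_image(third_image: np.ndarray, bbox: tuple) -> np.ndarray:
    """Return the head region of the scene image as the first image."""
    x, y, w, h = bbox
    return third_image[y:y + h, x:x + w].copy()

scene = np.zeros((720, 1280, 3), dtype=np.uint8)          # third image (scene)
first_image = crop_first_image(scene, (500, 100, 256, 256))
```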
It should be noted that, in the embodiment of the present application, only a partial region of the face of the user is taken as an example of a region where both eyes are located, and it should be understood that the partial region may also include other regions of the face of the user, which is not limited in the embodiment of the present application.
It is further noted that the first image may include at least the face of the user, wherein a partial region of the face of the user is occluded by the VR device.
Optionally, the first image may further include at least one of other parts of the user and an environment where the user is located, or the first image may further include other users except the user, where a portrait processing method for the other users is similar to a portrait processing method for the user, and is not described herein again to avoid repetition.
It should be further noted that, when the scene image further includes other users and partial regions of faces of the other users are also blocked by VR devices worn by the other users, the portrait processing device may perform portrait processing on the other users by using a method similar to the portrait processing method of the user, and details are not repeated here to avoid repetition.
It should be further noted that the first image described in this embodiment of the present application may be a single image, or may be a frame image in a video stream, which is not limited in this embodiment of the present application.
Optionally, the first image may be an original photographed image, or an image with higher definition obtained after basic image quality processing.
Optionally, the basic image quality processing in the embodiment of the present application may include at least one image processing step for improving image quality, for example: denoising, sharpening, brightness improvement and the like, which are not limited in the embodiments of the present application.
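The basic image quality processing named above (denoising, sharpening, brightness improvement) could, for example, be implemented with OpenCV as sketched below; the parameter values are illustrative assumptions, not values taken from the patent.

```python
# Sketch of a basic image quality pipeline: denoise, sharpen, brighten.
import cv2
import numpy as np

def basic_image_quality(image: np.ndarray) -> np.ndarray:
    # Non-local-means denoising that preserves colour detail.
    out = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)
    # Simple sharpening kernel.
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)
    out = cv2.filter2D(out, -1, kernel)
    # Slight brightness/contrast lift (alpha = gain, beta = offset).
    return cv2.convertScaleAbs(out, alpha=1.1, beta=10)
```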
S230, the portrait processing apparatus inputs the first image into a portrait processing model to obtain a second image, where the second image includes a complete face of the user, where the portrait processing model is obtained by training a sample training data set, the sample training data set includes a plurality of original images and a plurality of restored images corresponding to the plurality of original images, where the plurality of original images are acquired for at least one sample user, a first original image in the plurality of original images includes a first sample user, and the partial region of the face of the first sample user is blocked by a VR apparatus, a first restored image in the plurality of restored images includes the complete face of the first sample user, and the at least one sample user includes the first sample user.
Optionally, before S230, the portrait processing apparatus may obtain the portrait processing model.
Optionally, the portrait processing apparatus may obtain the portrait processing model in a plurality of ways, which is not limited in this embodiment.
In a first possible implementation, the portrait processing apparatus may pre-configure the portrait processing model at the time of factory shipment.
In a second possible implementation manner, the human image processing apparatus may receive the human image processing model from other devices, that is, the human image processing model is trained by other devices.
For example, the portrait processing model may be stored at a cloud server, and the portrait processing device may request the portrait processing model from the cloud server over a network.
In a third possible implementation, the portrait processing apparatus may train the portrait processing model by itself.
In the following, the process in which the portrait processing apparatus trains the portrait processing model is described as an example; the process in which another device trains the portrait processing model is similar and is not repeated here to avoid repetition.
Optionally, taking an example that the area where the two eyes of the user are located is blocked by the VR device, the human image processing device may obtain a sample training data set, train and learn the sample training data set to obtain the human image processing model. Wherein the sample training data set comprises a plurality of original images and a plurality of restored images corresponding to the plurality of original images, wherein the plurality of original images are acquired for at least one sample user, a first original image in the plurality of original images comprises a first sample user, and the partial region of the face of the first sample user is occluded by a VR device, a first restored image in the plurality of restored images comprises a complete face of the first sample user, and the at least one sample user comprises the first sample user.
Optionally, the portrait processing apparatus may train and learn the sample training data set through a plurality of methods to obtain the portrait processing model, which is not limited in this embodiment of the application.
In a possible implementation manner, the human image processing apparatus may train and learn the sample training data set through a neural network model to obtain the human image processing model.
For example, the neural network model may be a generative adversarial network (GAN) model, which may include a conditional GAN (cGAN), a deep convolutional GAN (DCGAN), and the like.
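To make the GAN-based training concrete, the following compressed PyTorch sketch trains a conditional GAN on (occluded, complete) image pairs, in the spirit of pix2pix. The network sizes, losses, and hyper-parameters are assumptions for illustration; the patent only states that a GAN-type model may be used.

```python
# Compressed conditional-GAN training sketch; tiny stand-in networks are
# used so the code runs as-is. A real system would use e.g. a U-Net generator.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh()).to(device)
discriminator = nn.Sequential(
    nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1)).to(device)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(occluded, complete):
    """occluded/complete: float tensors in [-1, 1], shape (N, 3, H, W)."""
    occluded, complete = occluded.to(device), complete.to(device)
    fake = generator(occluded)

    # Discriminator: judge (input, target) pairs, conditioned on the input.
    d_real = discriminator(torch.cat([occluded, complete], dim=1))
    d_fake = discriminator(torch.cat([occluded, fake.detach()], dim=1))
    loss_d = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator and stay close to the ground truth.
    d_fake = discriminator(torch.cat([occluded, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, complete)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```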
By adopting the portrait processing method provided in this embodiment of the application, the facial features of the user that are occluded by the VR device in the first image can be restored through the portrait processing model, thereby improving the user experience at the receiving end.
Optionally, in S230, the portrait processing apparatus may input the first image and the feature reference information into the portrait processing model to obtain the second image. Wherein the feature reference information includes at least one of an eye feature parameter and a head feature parameter, the eye feature parameter includes at least one of position information, size information, and perspective information, the position information is used to indicate a position of each of two eyes of the user, the size information is used to indicate a size of each eye, the perspective information is used to indicate an eyeball gaze angle of each eye, and the head feature parameter includes a three-axis attitude angle and an acceleration of the head of the user.
That is, a sample training data set used in training the avatar processing model may include a plurality of original images, a plurality of restored images corresponding to the plurality of original images, and at least one feature reference information, wherein the plurality of original images are acquired for at least one sample user, a first original image of the plurality of original images includes a first sample user, and the partial region of the first sample user's face is occluded by a VR device, a first restored image of the plurality of restored images includes the first sample user's entire face, the at least one sample user includes the first sample user, and the at least one feature reference information includes the feature reference information of each of the at least one sample user.
By adopting the portrait processing method provided in this embodiment of the application, the first image and the feature reference information are jointly input into the portrait processing model; providing more portrait-related features can improve the restoration fidelity and authenticity of the occluded area.
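One simple way to feed the feature reference information together with the first image is to broadcast the eye and head parameters into extra image channels, as sketched below. This encoding is an assumption made for illustration; the patent does not fix how the parameters are combined with the image.

```python
# Sketch: broadcast scalar eye/head parameters into extra channels and
# concatenate them with the image before it enters the model.
import torch

def build_model_input(image: torch.Tensor, eye_params: torch.Tensor,
                      head_params: torch.Tensor) -> torch.Tensor:
    """image: (N, 3, H, W); eye_params: (N, Ke); head_params: (N, Kh)."""
    n, _, h, w = image.shape
    params = torch.cat([eye_params, head_params], dim=1)        # (N, Ke+Kh)
    param_maps = params[:, :, None, None].expand(n, params.shape[1], h, w)
    return torch.cat([image, param_maps], dim=1)                # (N, 3+Ke+Kh, H, W)
```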
Optionally, the portrait processing apparatus may obtain the feature reference information in various ways, which is not limited in this embodiment.
(1) Method for acquiring eye characteristic parameters
In a first possible implementation manner, the human image processing apparatus may receive a fourth image captured by a second camera apparatus, where the second camera apparatus is a camera apparatus built in the VR apparatus, and the fourth image includes two eyes of the user; the human image processing device may extract the eye feature parameters from the fourth image.
The second imaging device may be an imaging device built in the VR device, for example: an internal Infrared (IR) camera.
For example: as shown in fig. 4, the second image capture device may be an image capture device 150 built into the VR device 110 as shown in fig. 1.
Optionally, in the embodiment of the present application, only the second image capturing device is taken as an example of a built-in image capturing device on the VR device, and the second image capturing device may also be an image capturing device that is disposed at another position and is capable of capturing a real image of a facial area of the user that is blocked by the VR device, which is not limited in the embodiment of the present application.
In a second possible implementation manner, the human image processing device may extract the eye feature parameters from a plurality of locally stored images of the user, wherein the plurality of images include the eyes of the user.
It should be noted that the plurality of images may be photographs of the user's face taken by the user in daily life, for example, self-portraits.
In a third possible implementation manner, the portrait processing apparatus may establish a facial feature database of the user through photos taken by the user daily, where the facial feature database includes feature parameters of each organ of the face, such as eye feature parameters, and the portrait processing apparatus may retrieve the eye feature parameters from the facial feature database.
In a fourth possible implementation, the human image processing device may receive the eye feature parameters from other measuring devices, such as an eye tracking sensor.
Optionally, the plurality of images and the facial feature database may be further stored in a cloud, and the human image processing device may request the cloud through a network, which is not limited in this embodiment of the present application.
It should be noted that, when the content that is blocked in the first image is restored through the portrait processing model, since the VR device mainly blocks the eye region of the user, the restoration degree and the authenticity of the blocked content can be improved in combination with the eye feature parameters of the user.
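As a much simplified illustration of deriving one eye feature parameter from the infrared eye image captured inside the VR device, the sketch below localizes the pupil by thresholding for a dark blob. The threshold value and the assumption of a dark pupil on a brighter background are illustrative; production eye trackers use far more robust methods.

```python
# Sketch: rough pupil-centre estimate from a BGR infrared eye image.
import cv2
import numpy as np

def pupil_center(ir_eye_image: np.ndarray):
    """Return (x, y) of the largest dark blob, or None if nothing is found."""
    gray = cv2.cvtColor(ir_eye_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```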
(2) Method for acquiring head characteristic parameters
In one possible implementation, the portrait processing device may receive the head feature parameters measured by the inertial measurement device.
It should be noted that the inertial measurement device may be an inertial measurement unit (IMU), which is a device for measuring the three-axis attitude angle (or angular velocity) and acceleration of an object. Generally, an IMU includes three single-axis accelerometers and three single-axis gyroscopes. The accelerometers detect the acceleration of the object along the three independent axes of the carrier coordinate system, and the gyroscopes detect the angular velocity of the carrier relative to the navigation coordinate system; from the angular velocity and acceleration measured in three-dimensional space, the attitude of the object can be solved.
Alternatively, the inertial measurement unit may be a separate measurement device fixed on the head of the user, or the inertial measurement unit may be integrated in the VR device, which is not limited in this embodiment.
It should be noted that, when the content that is blocked in the first image is restored through the portrait processing model, since the head of the user may be in a moving state, that is, has a moving speed and a rotation angle, in combination with the head characteristic parameters of the user, the degree of restoration and the authenticity of the blocked content can be further improved.
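To make the head feature parameters more tangible, the sketch below estimates roll and pitch from raw IMU samples with a complementary filter, fusing gyroscope integration with the gravity direction from the accelerometer. The filter coefficient and the two-angle simplification are illustrative assumptions.

```python
# Sketch: complementary filter for head attitude from gyroscope + accelerometer.
import math

def update_attitude(roll, pitch, gyro, accel, dt, alpha=0.98):
    """gyro: (gx, gy, gz) in rad/s; accel: (ax, ay, az) in m/s^2."""
    gx, gy, _ = gyro
    ax, ay, az = accel
    # Short-term: integrate the gyroscope.
    roll_gyro = roll + gx * dt
    pitch_gyro = pitch + gy * dt
    # Long-term: gravity direction from the accelerometer.
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.hypot(ay, az))
    roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
    return roll, pitch
```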
Optionally, the feature reference information may further include other feature parameters that can describe the portrait features of the user, such as a nose feature parameter, a face shape feature parameter, or a hairstyle feature parameter, which is not limited in this embodiment of the present application.
S240, the portrait processing device sends the second image to a receiving end; correspondingly, the receiving end receives the second image sent by the portrait processing device.
Optionally, in S240, the portrait processing apparatus may send the second image to the receiving end, or the portrait processing apparatus may send a target image to the receiving end, where the target image is an image obtained by synthesizing or splicing the second image, or the portrait processing apparatus may send a video stream to the receiving end, where the video stream includes the second image or the target image, which is not limited in this embodiment of the present application.
Accordingly, in S240, the portrait processing apparatus may send the second image, or a target image or a video stream containing the second image to the receiving end.
Optionally, before S240, the portrait processing apparatus may generate the target image in various ways, which is not limited in this embodiment.
In a first possible implementation manner, the portrait processing apparatus may splice the second image and a fifth image to obtain the target image, where the fifth image is the image that remains after the first image is cut out of the third image.
In a second possible implementation manner, the portrait processing apparatus may combine the second image and the third image to obtain the target image, where the second image is overlaid on the first image in the third image.
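For illustration, the sketch below shows the second of the two compositing options: overlaying the restored second image on the original face region of the third image. The bounding box of the first image inside the scene is assumed to be known from the earlier cropping step.

```python
# Sketch: cover the original face region in the scene with the restored face.
import numpy as np

def overlay_restored_face(third_image: np.ndarray, second_image: np.ndarray,
                          bbox: tuple) -> np.ndarray:
    """bbox = (x, y, w, h) of the first image inside the third image."""
    x, y, w, h = bbox
    target = third_image.copy()
    target[y:y + h, x:x + w] = second_image[:h, :w]
    return target
```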
Optionally, before S240, the portrait processing apparatus may generate the video stream in various ways, which is not limited in this embodiment.
In a possible implementation manner, the portrait processing apparatus may perform video coding on the second image to obtain a video image; and obtaining the video stream according to the video image, wherein the video stream comprises the video image.
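The video-stream generation described above could, for example, be sketched with OpenCV's video writer as follows; the codec, frame rate, and output path are illustrative assumptions, and codec availability depends on the OpenCV build.

```python
# Sketch: pack restored frames into a video file/stream.
import cv2

def write_video(frames, path="restored.mp4", fps=30.0):
    """frames: iterable of equally sized BGR images (H, W, 3)."""
    writer = None
    for frame in frames:
        if writer is None:
            h, w = frame.shape[:2]
            fourcc = cv2.VideoWriter_fourcc(*"mp4v")
            writer = cv2.VideoWriter(path, fourcc, fps, (w, h))
        writer.write(frame)
    if writer is not None:
        writer.release()
```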
It should be noted that, in the restored second image, an eye region of the user, which is blocked by the VR device, is restored, and in an actual situation, the user wears the VR device on the face, so that there is a certain difference between the second image and the actual situation.
In one possible implementation manner, as shown in fig. 5, the portrait processing apparatus may superimpose an eye-mask layer on the eye region of the user in the second image, where the eye-mask layer is subjected to a perspective process with a first transparency.
It should be further noted that the value of the first transparency needs to be such that a viewer can see that the user is wearing the VR device while the restored eye region underneath remains visible; the specific value of the first transparency is not limited in this embodiment of the present application.
By adopting the portrait processing method provided in this embodiment of the application, the perspective-processed eye-mask layer is superimposed on the eye region of the user in the second image, which simulates the VR device worn by the user and improves the realism of the restoration.
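The superimposition of the semi-transparent eye-mask layer can be sketched as a simple alpha blend over the eye region, as below; the mask colour and the value used for the first transparency are illustrative assumptions.

```python
# Sketch: blend a semi-transparent "VR lens" layer over the eye region.
import cv2
import numpy as np

def add_eye_mask(second_image: np.ndarray, eye_bbox: tuple,
                 transparency: float = 0.35) -> np.ndarray:
    """eye_bbox = (x, y, w, h); transparency is the weight of the mask layer."""
    x, y, w, h = eye_bbox
    out = second_image.copy()
    roi = out[y:y + h, x:x + w]
    mask_layer = np.full_like(roi, 40)            # dark grey lens colour
    blended = cv2.addWeighted(mask_layer, transparency,
                              roi, 1.0 - transparency, 0)
    out[y:y + h, x:x + w] = blended
    return out
```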
It should be noted that the portrait processing apparatus may be a terminal, and the terminal includes corresponding hardware and/or software modules for executing each of the foregoing functions. With reference to the example algorithm steps described in the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, the terminal may be divided into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of dividing each function module by corresponding functions, fig. 6 shows a schematic diagram of a possible composition of the portrait processing apparatus 300 according to the above embodiment, and as shown in fig. 6, the apparatus 300 may include: a transceiving unit 310 and a processing unit 320.
Wherein the processing unit 320 may control the transceiver unit 310 to implement the methods described in the above-described method 200 embodiments, and/or other processes for the techniques described herein.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The apparatus 300 provided in this embodiment is used for executing the above portrait processing method, and therefore, the same effects as those of the above implementation method can be achieved.
In the case of an integrated unit, the apparatus 300 may include a processing module, a storage module, and a communication module. The processing module may be configured to control and manage the operations of the apparatus 300, for example, to support the apparatus 300 in executing the steps performed by the above units. The storage module may be configured to store the program code and data of the apparatus 300. The communication module may be used for communication between the apparatus 300 and other devices.
The processing module may be a processor or a controller, which may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of this application. The processor may also be a combination of computing functions, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other terminals.
In one possible implementation, the apparatus 300 according to the embodiment of the present application may be a terminal.
It should be noted that the terminal described in the embodiments of the present application may be mobile or fixed; for example, the terminal may be a mobile phone, a camera, a video camera, a tablet personal computer, a smart television, a laptop computer, a personal digital assistant (PDA), a personal computer, or a wearable device (such as a smart watch), which is not limited in the embodiments of the present application.
Taking the terminal as a mobile phone as an example, fig. 7 shows a schematic structural diagram of the mobile phone 400. As shown in fig. 7, the mobile phone 400 may include a processor 410, an external memory interface 420, an internal memory 421, a Universal Serial Bus (USB) interface 430, a charging management module 440, a power management module 441, a battery 442, an antenna 1, an antenna 2, a mobile communication module 450, a wireless communication module 460, an audio module 470, a speaker 470A, a receiver 470B, a microphone 470C, an earphone interface 470D, a sensor module 480, keys 490, a motor 491, an indicator 492, a camera 493, a display 494, a Subscriber Identification Module (SIM) card interface 495, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the mobile phone 400. In other embodiments of the present application, the handset 400 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 410 may include one or more processing units. For example, the processor 410 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. The different processing units may be separate components or may be integrated in one or more processors. In some embodiments, the mobile phone 400 may also include one or more processors 410. The controller can generate an operation control signal according to the instruction operation code and a timing signal, to complete the control of instruction fetching and instruction execution. In other embodiments, a memory may also be disposed in the processor 410 for storing instructions and data. Illustratively, the memory in the processor 410 may be a cache memory. The memory may hold instructions or data that have just been used or cyclically used by the processor 410. If the processor 410 needs to use the instructions or data again, they can be called directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 410, thereby improving the efficiency with which the mobile phone 400 processes data or executes instructions.
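The cache behaviour described above can be pictured with the following minimal Python sketch; the dictionary-based cache and the fetch_from_main_memory() helper are purely illustrative assumptions.

    # Minimal sketch: recently used data is kept close to the processor so a
    # repeated request does not go back to main memory.
    cache = {}

    def fetch_from_main_memory(address):
        return f"data@{address}"          # stand-in for a slow memory access

    def load(address):
        if address in cache:              # hit: reuse the recently used data
            return cache[address]
        value = fetch_from_main_memory(address)   # miss: pay the slow access once
        cache[address] = value
        return value

    load(0x1000)   # miss, goes to main memory
    load(0x1000)   # hit, served from the cache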
In some embodiments, the processor 410 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, etc. The USB interface 430 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 430 may be used to connect a charger to charge the mobile phone 400, and may also be used to transmit data between the mobile phone 400 and peripheral devices. The USB interface 430 may also be used to connect to a headset to play audio through the headset.
It should be understood that the interface connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the mobile phone 400. In other embodiments of the present application, the mobile phone 400 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 440 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 440 may receive charging input from a wired charger via the USB interface 430. In some wireless charging embodiments, the charging management module 440 may receive a wireless charging input through a wireless charging coil of the cell phone 400. The charging management module 440 can also supply power to the mobile phone through the power management module 441 while charging the battery 442.
The power management module 441 is used to connect the battery 442, the charging management module 440, and the processor 410. The power management module 441 receives input from the battery 442 and/or the charging management module 440, and supplies power to the processor 410, the internal memory 421, the external memory, the display screen 494, the camera 493, the wireless communication module 460, and the like. The power management module 441 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In some other embodiments, the power management module 441 may be disposed in the processor 410. In other embodiments, the power management module 441 and the charging management module 440 may be disposed in the same device.
The wireless communication function of the mobile phone 400 can be implemented by the antenna 1, the antenna 2, the mobile communication module 450, the wireless communication module 460, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the mobile phone 400 may be used to cover a single communication band or multiple communication bands. Different antennas may also be multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 450 may provide a solution including 2G/3G/4G/5G wireless communication applied to the handset 400. The mobile communication module 450 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 450 may receive the electromagnetic wave from the antenna 1, and filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 450 can also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 450 may be disposed in the processor 410. In some embodiments, at least some of the functional blocks of the mobile communication module 450 may be disposed in the same device as at least some of the blocks of the processor 410.
The wireless communication module 460 may provide solutions for wireless communication applied to the mobile phone 400, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like.
Optionally, the wireless communication module 460 may be one or more devices integrating at least one communication processing module. One communication processing module may correspond to one network interface, the network interface may be set to different service function modes, and a network interface set to a given mode may establish a network connection corresponding to that mode.
For example: a network connection supporting the P2P function may be established through the network interface in the P2P function mode, a network connection supporting the STA function may be established through the network interface in the STA function mode, and a network connection supporting the AP function may be established through the network interface in the AP mode.
The wireless communication module 460 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 410. The wireless communication module 460 may also receive a signal to be transmitted from the processor 410, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
The cell phone 400 implements a display function through the GPU, the display screen 494, and the application processor. The GPU is an image processing microprocessor connected to a display screen 494 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 410 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 494 is used to display images, videos, and the like. The display screen 494 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the mobile phone 400 may include one or more display screens 494.
In some embodiments of the present application, when the display panel is made of OLED, AMOLED, FLED, or the like, the display screen 494 shown in fig. 7 may be bent. Here, "may be bent" means that the display screen can be bent at any position to any angle and held at that angle; for example, the display screen 494 may be folded left and right from the middle, or folded up and down from the middle. In this application, a display screen that can be folded is referred to as a foldable display screen. The foldable display screen may be a single screen, or a display screen formed by combining multiple screens together, which is not limited herein.
The display screen 494 of the mobile phone 400 may be a flexible screen. Flexible screens are currently attracting attention for their unique characteristics and great potential. Compared with a traditional screen, a flexible screen is highly flexible and bendable, can provide the user with new interaction modes based on this bendable characteristic, and can meet more user requirements for a mobile phone. For a mobile phone configured with a foldable display screen, the foldable display screen can be switched at any time between a small screen in the folded state and a large screen in the unfolded state. Therefore, users use the split-screen function more and more frequently on mobile phones equipped with foldable display screens.
The mobile phone 400 can implement a shooting function through the ISP, the camera 493, the video codec, the GPU, the display screen 494, the application processor, and the like.
The ISP is used to process the data fed back by the camera 493. For example, when a photo is taken, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, to convert it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image. The ISP can also optimize parameters such as the exposure and color temperature of a shooting scene. In some embodiments, the ISP may be disposed in the camera 493.
The camera 493 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the cell phone 400 may include 1 or more cameras 493.
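The patent does not specify which colour conversion the DSP applies; as a hedged illustration, the widely used BT.601 (JPEG-style) RGB-to-YUV matrix is sketched below in Python.

    # Hedged illustration of the RGB-to-YUV step attributed to the DSP above.
    def rgb_to_yuv(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
        v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
        return y, u, v

    print(rgb_to_yuv(255, 0, 0))   # a pure red pixel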
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the cell phone 400 is in frequency bin selection, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
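As a rough illustration of computing "frequency bin energy" with a Fourier transform, the following Python sketch uses NumPy. The sampling rate and test tone are assumptions, and the real computation would run on the DSP rather than in Python.

    # Sketch of frequency-bin energy via an FFT over sampled audio.
    import numpy as np

    fs = 8000                                   # assumed sampling rate in Hz
    t = np.arange(0, 0.1, 1.0 / fs)
    signal = np.sin(2 * np.pi * 440 * t)        # a 440 Hz test tone

    spectrum = np.fft.rfft(signal)
    energy_per_bin = np.abs(spectrum) ** 2      # energy of each frequency bin
    strongest_bin = np.argmax(energy_per_bin)
    print(strongest_bin * fs / len(signal))     # prints approximately 440 Hz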
Video codecs are used to compress or decompress digital video. The mobile phone 400 may support one or more video codecs, so that the mobile phone 400 can play or record videos in multiple encoding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transfer mode between neurons in the human brain, it processes input information quickly and can also learn continuously by itself. Applications such as intelligent cognition of the mobile phone 400, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented by the NPU.
The external memory interface 420 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the mobile phone 400. The external memory card communicates with the processor 410 through the external memory interface 420 to implement data storage functions. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 421 may be used to store one or more computer programs comprising instructions. The processor 410 may execute the instructions stored in the internal memory 421, so that the mobile phone 400 performs the portrait processing method provided in some embodiments of the present application, as well as various applications, data processing, and the like. The internal memory 421 may include a program storage area and a data storage area. The program storage area may store an operating system, and may also store one or more applications (for example, a gallery or contacts), and the like. The data storage area may store data (such as photos and contacts) created during use of the mobile phone 400. Further, the internal memory 421 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage components, flash memory components, or a universal flash storage (UFS). In some embodiments, the processor 410 may cause the mobile phone 400 to perform the portrait processing method provided in the embodiments of the present application, as well as other applications and data processing, by executing instructions stored in the internal memory 421 and/or instructions stored in a memory disposed in the processor 410. The mobile phone 400 may implement audio functions, such as music playing and recording, through the audio module 470, the speaker 470A, the receiver 470B, the microphone 470C, the headset interface 470D, the application processor, and the like.
The sensor module 480 may include a pressure sensor 480A, a gyro sensor 480B, an air pressure sensor 480C, a magnetic sensor 480D, an acceleration sensor 480E, a distance sensor 480F, a proximity light sensor 480G, a fingerprint sensor 480H, a temperature sensor 480J, a touch sensor 480K, an ambient light sensor 480L, a bone conduction sensor 480M, and the like.
The pressure sensor 480A is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 480A may be disposed on the display screen 494. There are many types of pressure sensors 480A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force acts on the pressure sensor 480A, the capacitance between the electrodes changes, and the mobile phone 400 determines the intensity of the pressure from the change in capacitance. When a touch operation is applied to the display screen 494, the mobile phone 400 detects the intensity of the touch operation based on the pressure sensor 480A. The mobile phone 400 may also calculate the touched position based on the detection signal of the pressure sensor 480A. In some embodiments, touch operations that are applied to the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the SMS application icon, an instruction for viewing the message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the SMS application icon, an instruction for creating a new message is executed.
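A minimal Python sketch of the threshold-based dispatch described above is given below; the threshold value and the action names are assumptions of this sketch.

    # Illustrative mapping from touch pressure to different instructions.
    FIRST_PRESSURE_THRESHOLD = 0.5   # normalised pressure, assumed value

    def on_touch_message_icon(pressure):
        if pressure < FIRST_PRESSURE_THRESHOLD:
            return "view_message"        # light press: view the message
        return "create_new_message"      # firm press: create a new message

    print(on_touch_message_icon(0.2))
    print(on_touch_message_icon(0.8))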
The gyro sensor 480B may be used to determine the motion pose of the cell phone 400. In some embodiments, the angular velocity of the cell phone 400 about three axes (i.e., the X, Y, and Z axes) may be determined by the gyro sensor 480B. The gyro sensor 480B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyroscope sensor 480B detects the shake angle of the mobile phone 400, calculates the distance to be compensated for the lens module according to the shake angle, and allows the lens to counteract the shake of the mobile phone 400 through reverse movement, thereby achieving anti-shake. The gyroscope sensor 480B can also be used for navigation and body sensing game scenes.
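The exact anti-shake compensation algorithm is not given above; the following Python sketch illustrates one simple small-angle model (lens shift proportional to focal length times tan of the shake angle), with the focal length value assumed purely for illustration.

    # Rough sketch: shift the lens to offset the image displacement caused by
    # the shake angle measured by the gyro sensor. Model and values are assumptions.
    import math

    def lens_compensation_mm(shake_angle_deg, focal_length_mm=4.0):
        return focal_length_mm * math.tan(math.radians(shake_angle_deg))

    print(lens_compensation_mm(0.5))   # lens shift for a 0.5 degree shake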
The acceleration sensor 480E can detect the magnitude of the acceleration of the mobile phone 400 in various directions (typically along three axes). When the mobile phone 400 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor can also be used to recognize the posture of the mobile phone, and is applied to landscape/portrait switching, pedometers, and other applications.
The ambient light sensor 480L is used to sense the ambient light level. The cell phone 400 can adaptively adjust the brightness of the display screen 494 based on the perceived ambient light level. The ambient light sensor 480L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 480L may also cooperate with the proximity light sensor 480G to detect whether the cell phone 400 is in a pocket to prevent accidental touches.
The fingerprint sensor 480H is used to collect a fingerprint. The mobile phone 400 can use the collected fingerprint characteristics to implement fingerprint-based unlocking, application-lock access, fingerprint-based photographing, fingerprint-based call answering, and the like.
The temperature sensor 480J is used to detect temperature. In some embodiments, the mobile phone 400 implements a temperature processing strategy using the temperature detected by the temperature sensor 480J. For example, when the temperature reported by the temperature sensor 480J exceeds a threshold, the mobile phone 400 reduces the performance of a processor located near the temperature sensor 480J, so as to reduce power consumption and implement thermal protection. In other embodiments, the mobile phone 400 heats the battery 442 when the temperature is below another threshold, to avoid an abnormal shutdown of the mobile phone 400 caused by low temperature. In still other embodiments, the mobile phone 400 boosts the output voltage of the battery 442 when the temperature is below a further threshold, to avoid an abnormal shutdown caused by low temperature.
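The following Python sketch illustrates such a temperature processing strategy; all threshold values and action names are assumptions of this sketch.

    # Sketch of the temperature strategy described above; thresholds are assumed.
    HIGH_TEMP_C = 45.0          # above this, reduce processor performance
    LOW_TEMP_HEAT_C = 0.0       # below this, heat the battery
    LOW_TEMP_BOOST_C = -10.0    # below this, boost the battery output voltage

    def thermal_policy(temp_c):
        actions = []
        if temp_c > HIGH_TEMP_C:
            actions.append("reduce_processor_performance")
        if temp_c < LOW_TEMP_HEAT_C:
            actions.append("heat_battery")
        if temp_c < LOW_TEMP_BOOST_C:
            actions.append("boost_battery_output_voltage")
        return actions

    print(thermal_policy(50.0))
    print(thermal_policy(-15.0))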
The touch sensor 480K is also referred to as a "touch panel". The touch sensor 480K may be disposed on the display screen 494, and the touch sensor 480K and the display screen 494 form a touch screen, which is also referred to as a "touch screen". The touch sensor 480K is used to detect a touch operation applied thereto or thereabout. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 494. In other embodiments, the touch sensor 480K may be disposed on the surface of the mobile phone 400 at a different position than the display screen 494.
The keys 490 include a power key, a volume key, and the like. The keys 490 may be mechanical keys or touch keys. The mobile phone 400 may receive key inputs and generate key signal inputs related to user settings and function control of the mobile phone 400.
It should be noted that the related functions implemented by the processing unit 320 in fig. 6 may be implemented by the processor 410 in fig. 7; the related functions implemented by the transceiving unit 310 in fig. 6 may be implemented by the processor 410 in fig. 7 controlling the mobile communication module 450 or the wireless communication module 460, or may be implemented by the processor 410 in fig. 7 controlling other components of the mobile phone 400 through internal interfaces.
Fig. 8 shows a portrait processing system 500 according to an embodiment of the present application. The system 500 includes a mobile phone 510, a VR device 520, and a receiving end 530, where a communication interface exists between the mobile phone 510 and the VR device 520, and a communication interface exists between the mobile phone 510 and the receiving end 530. The mobile phone 510 includes a first camera 511, a first image processing module 512, a second image processing module 513, a portrait processing module 514, a storage module 515, an image synthesis module 516, and a video coding module 517, and the VR device 520 includes a display screen 521, a second camera 522, and an inertial measurement device 523.
Alternatively, the inertial measurement device 523 may be integrated into the VR device 520, or may be a stand-alone device, which is not limited in this application.
Alternatively, the first camera 511 may be a built-in camera of the mobile phone 510, or may be a stand-alone camera, which is not limited in this embodiment of the application.
The display screen 521 is used to play VR resources obtained from the cell phone 510 through the interface.
The first camera 511 is configured to capture a third image of the user, and send the third image to the first image processing module 512, where the third image includes the user and a scene where the user is located, and a partial region of the face of the user is blocked by the VR device 520, and the partial region includes a region where both eyes of the user are located.
The first image processing module 512 is configured to receive the third image sent by the first camera 511, perform basic image processing such as enhancement on the third image, cut out a first image from the processed third image, where the first image includes the face of the user, and send the first image to the portrait processing module 514.
The second camera 522 is a built-in camera of the VR device 520, and is configured to capture a fourth image of the user, where the fourth image includes two eyes of the user, and send the fourth image to the second image processing module 513.
The second image processing module 513 is configured to receive the fourth image sent by the second camera 522, perform basic image processing such as enhancement on the fourth image, and send the processed fourth image to the portrait processing module 514.
The inertial measurement device 523 is configured to obtain head feature parameters of the user, where the head feature parameters include the three-axis attitude and acceleration of the head of the user, and send the head feature parameters to the portrait processing module 514.
The portrait processing module 514 is configured to extract the eye feature parameters of the user from the received fourth image; input the first image, the eye feature parameters, and the head feature parameters into the portrait processing model stored in the storage module 515 to obtain a second image, where the second image includes the complete face of the user; and send the second image to the image synthesis module 516.
The image synthesis module 516 is configured to synthesize or splice the second image with the third image to obtain a target image, and send the target image to the video coding module 517.
The video coding module 517 is configured to code the target image to obtain a video stream, and send the video stream to the receiving end 530.
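To summarize the data flow through the modules of system 500, the following Python sketch strings the steps together. Every function is a placeholder standing in for the corresponding module of fig. 8; the real processing runs on the ISP, the NPU/DSP/GPU, and a hardware video codec rather than in Python, and the returned values are illustrative stand-ins.

    # End-to-end sketch of the data flow in system 500; all functions are placeholders.
    def first_camera_capture():             # first camera 511
        return "third_image_full_scene"

    def crop_face(third_image):             # first image processing module 512
        return "first_image_face_with_vr_occlusion"

    def second_camera_capture():            # second camera 522 inside the VR device
        return "fourth_image_eyes"

    def extract_eye_features(fourth_image): # portrait processing module 514
        return {"position": (120, 80), "size": 24, "gaze_angle": 5.0}  # illustrative values

    def imu_head_features():                # inertial measurement device 523
        return {"attitude": (0.0, 0.0, 0.0), "acceleration": (0.0, 0.0, 9.8)}

    def portrait_model(first_image, eye_features, head_features):  # model in storage module 515
        return "second_image_complete_face"

    def compose(second_image, third_image): # image synthesis module 516
        return "target_image"

    def encode(target_image):               # video coding module 517
        return b"video_stream"

    third = first_camera_capture()
    second = portrait_model(crop_face(third),
                            extract_eye_features(second_camera_capture()),
                            imu_head_features())
    video_stream = encode(compose(second, third))  # sent to the receiving end 530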
Optionally, the portrait processing module 514 in fig. 8 may be an operation unit such as an NPU, a DSP, a GPU, and the like, which is described in fig. 7, and this is not limited in this embodiment.
It should be noted that the first camera 511 in fig. 8 may be the camera 493 in fig. 7; the first image processing module 512 and the second image processing module 513 may belong to the ISP described in fig. 7, or the first image processing module 512 and the second image processing module 513 may be different ISPs; the portrait processing module 514 may be an operation unit such as the NPU, DSP, or GPU described in fig. 7; the storage module 515 may be the internal memory 421 described in fig. 7, or a memory disposed in the processor 410; the image synthesis module 516 may be the GPU described in fig. 7; and the video coding module 517 may be the video codec described in fig. 7.
Optionally, the modules in the mobile phone 510 may also be implemented by other devices capable of implementing the functions of the modules shown in fig. 8, which is not limited in the embodiments of the present application.
Optionally, the system 500 described above is only one possible implementation provided in the embodiments of the present application, which is not limited thereto. The system 500 should also be capable of implementing the other steps described in the above embodiment of the method 200; to avoid repetition, details are not described here again.
The present embodiment also provides a computer storage medium, where computer instructions are stored, and when the computer instructions are run on an electronic device, the electronic device executes the above related method steps to implement the portrait processing method in the above embodiment.
The present embodiment also provides a computer program product, which when running on a computer, causes the computer to execute the relevant steps described above, so as to implement the portrait processing method in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the portrait processing method in the above-mentioned method embodiments.
The server, the terminal, the computer storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the server, the terminal, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (24)

1. A method of portrait processing, comprising:
a terminal acquires a first image of a user, wherein the first image comprises a face of the user, and a partial region of the face of the user is shielded by a Virtual Reality (VR) device and comprises a region where two eyes of the user are located;
inputting, by the terminal, the first image into a portrait processing model to obtain a second image, where the second image includes a complete face of the user, where the portrait processing model is obtained by training a sample training data set, where the sample training data set includes a plurality of original images and a plurality of restored images corresponding to the plurality of original images, where the plurality of original images are acquired for at least one sample user, a first original image in the plurality of original images includes a first sample user, and the partial region of the face of the first sample user is blocked by a VR device, a first restored image in the plurality of restored images includes the complete face of the first sample user, and the at least one sample user includes the first sample user;
and the terminal sends the second image to a receiving end.
2. The method of claim 1, wherein the terminal inputs the first image into a portrait processing model to obtain a second image, comprising:
the terminal inputs the first image and feature reference information into the portrait processing model to obtain the second image, wherein the feature reference information includes at least one of an eye feature parameter and a head feature parameter, the eye feature parameter includes at least one of position information, size information and view angle information, the position information is used for indicating the position of each of two eyes of the user, the size information is used for indicating the size of each eye, the view angle information is used for indicating the eyeball gazing angle of each eye, and the head feature parameter includes the three-axis attitude angle and the acceleration of the head of the user.
3. The method according to claim 2, wherein the feature reference information includes the eye feature parameters, and before the terminal inputs the first image and the feature reference information into the portrait processing model to obtain the second image, the method further comprises:
the terminal receives a third image of the user, which is shot by a first camera device and comprises the two eyes of the user, wherein the first camera device is a camera device arranged in the VR device;
and the terminal extracts the eye characteristic parameters from the third image.
4. The method of claim 2, wherein the feature reference information comprises the head feature parameters, and before the terminal inputs the first image and the feature reference information into the portrait processing model to obtain the second image, the method further comprises:
and the terminal receives the head characteristic parameters measured by the inertial measurement unit.
5. The method according to any one of claims 1 to 4, wherein the terminal acquires a first image of a user, comprising:
and the terminal shoots the first image through a second camera device.
6. The method according to any one of claims 1 to 4, wherein the terminal acquires a first image of a user, comprising:
the terminal shoots a fourth image through a second camera device, and the fourth image comprises the user;
and the terminal intercepts the first image from the fourth image.
7. The method of claim 6, wherein the terminal sends the second image to a receiving end, and wherein the sending comprises:
the terminal splices the second image and a fifth image to obtain a target image, wherein the fifth image is an image obtained by cutting the first image from the fourth image;
and the terminal sends the target image to the receiving terminal.
8. The method of claim 6, wherein the terminal sends the second image to a receiving end, and wherein the sending comprises:
the terminal synthesizes the second image and the fourth image to obtain a target image, wherein the second image covers the upper layer of the first image in the fourth image;
and the terminal sends the target image to the receiving terminal.
9. The method according to any one of claims 1 to 8, further comprising:
and overlaying eye mask layers on the regions where the two eyes of the user are located in the second image, wherein the eye mask layers are subjected to transparency processing with a first transparency.
10. The method of any one of claims 1 to 9, wherein the portrait processing model is obtained by training a generative adversarial network (GAN) model on the sample training data set.
11. A terminal, comprising: a processor and a transceiver coupled to the processor,
the processor is configured to control the transceiver to acquire a first image of a user, the first image including a face of the user, and a partial region of the face of the user being occluded by a Virtual Reality (VR) device; inputting the first image into a portrait processing model to obtain a second image, wherein the second image includes a complete face of the user, wherein the portrait processing model is obtained by training a sample training data set, the sample training data set includes a plurality of original images and a plurality of restored images corresponding to the plurality of original images, wherein the plurality of original images are acquired for at least one sample user, a first original image in the plurality of original images includes a first sample user, and the partial region of the face of the first sample user is occluded by a VR device, a first restored image in the plurality of restored images includes a complete face of the first sample user, and the at least one sample user includes the first sample user; and controlling the transceiver to transmit the second image to a receiving end.
12. The terminal of claim 11, wherein the processor is specifically configured to:
inputting the first image and feature reference information into the portrait processing model to obtain the second image, wherein the feature reference information includes at least one of an eye feature parameter and a head feature parameter, the eye feature parameter includes at least one of position information, size information and angle of view information, the position information is used for indicating a position of each of two eyes of the user, the size information is used for indicating a size of each eye, the angle of view information is used for indicating an eyeball gaze angle of each eye, and the head feature parameter includes a three-axis attitude angle and an acceleration of the head of the user.
13. The terminal according to claim 12, wherein the feature reference information includes the eye feature parameters,
the processor is further configured to control the transceiver to receive a third image of the user captured by a first camera before the first image and the feature reference information are input into the portrait processing model to obtain the second image, where the third image includes the two eyes of the user, and the first camera is a built-in camera of the VR device;
the processor is configured to extract the eye feature parameters from the third image.
14. The terminal according to claim 12, wherein the feature reference information includes the header feature parameter,
the processor is further configured to control the transceiver to receive the head feature parameters measured by the inertial measurement unit before the first image and the feature reference information are input into the portrait processing model to obtain the second image.
15. The terminal according to any of the claims 11 to 14, wherein the processor is specifically configured to control the transceiver to receive the first image captured by a second camera.
16. The terminal according to any of claims 11 to 14, wherein the processor is specifically configured to:
controlling the transceiver to receive a third image shot by a second camera, wherein the third image comprises the user;
the first image is cut out of the third image.
17. The terminal of claim 16, wherein the processor is further configured to:
splicing the second image and a fourth image to obtain a target image, wherein the fourth image is an image obtained by intercepting the first image from the third image;
and controlling the transceiver to transmit the target image to the receiving end.
18. The terminal of claim 16, wherein the processor is further configured to:
synthesizing the second image and the third image to obtain a target image, wherein the second image covers the upper layer of the first image in the third image;
and controlling the transceiver to transmit the target image to the receiving end.
19. The terminal of any one of claims 11 to 18, wherein the processor is further configured to overlay eye mask layers on the regions where the two eyes of the user are located in the second image, the eye mask layers being subjected to transparency processing with a first transparency.
20. The terminal of any one of claims 11 to 19, wherein the portrait processing model is obtained by training a generative adversarial network (GAN) model on the sample training data set.
21. A portrait processing apparatus, characterized in that the apparatus comprises means for implementing the method of any of the preceding claims 1 to 10.
22. A chip apparatus, comprising: communication interface and processor, which communicate with each other via an internal connection path, characterized in that the processor is adapted to implement the method of any of the preceding claims 1 to 10.
23. A computer-readable storage medium for storing a computer program, characterized in that the computer program comprises instructions for implementing the method of any of the preceding claims 1 to 10.
24. A computer program product comprising instructions which, when run on a computer, cause the computer to carry out the method of any one of claims 1 to 10.
CN202010100149.2A 2020-02-18 2020-02-18 Portrait processing method and device and terminal Active CN111385514B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010100149.2A CN111385514B (en) 2020-02-18 2020-02-18 Portrait processing method and device and terminal
PCT/CN2020/122767 WO2021164289A1 (en) 2020-02-18 2020-10-22 Portrait processing method and apparatus, and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010100149.2A CN111385514B (en) 2020-02-18 2020-02-18 Portrait processing method and device and terminal

Publications (2)

Publication Number Publication Date
CN111385514A true CN111385514A (en) 2020-07-07
CN111385514B CN111385514B (en) 2021-06-29

Family

ID=71219765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010100149.2A Active CN111385514B (en) 2020-02-18 2020-02-18 Portrait processing method and device and terminal

Country Status (2)

Country Link
CN (1) CN111385514B (en)
WO (1) WO2021164289A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111385514B (en) * 2020-02-18 2021-06-29 华为技术有限公司 Portrait processing method and device and terminal

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7257237B1 (en) * 2003-03-07 2007-08-14 Sandia Corporation Real time markerless motion tracking using linked kinematic chains
CN103888710A (en) * 2012-12-21 2014-06-25 深圳市捷视飞通科技有限公司 Video conferencing system and method
US20180063484A1 (en) * 2016-01-20 2018-03-01 Gerard Dirk Smits Holographic video capture and telepresence system
CN106372603A (en) * 2016-08-31 2017-02-01 重庆大学 Shielding face identification method and shielding face identification device
CN106529409A (en) * 2016-10-10 2017-03-22 中山大学 Eye ocular fixation visual angle measuring method based on head posture
CN109785369A (en) * 2017-11-10 2019-05-21 中国移动通信有限公司研究院 A kind of virtual reality portrait acquisition method and device
CN109831622A (en) * 2019-01-03 2019-05-31 华为技术有限公司 A kind of image pickup method and electronic equipment
CN109886216A (en) * 2019-02-26 2019-06-14 华南理工大学 Expression recognition method, equipment and the medium restored based on VR scene facial image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王旭东: "基于深度学习的遮挡人脸检测和还原技术研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021164289A1 (en) * 2020-02-18 2021-08-26 华为技术有限公司 Portrait processing method and apparatus, and terminal
CN114594851A (en) * 2020-11-30 2022-06-07 华为技术有限公司 Image processing method, server and virtual reality equipment
CN112558906A (en) * 2020-12-11 2021-03-26 上海影创信息科技有限公司 Display control method and system with imaging distance, storage medium and VR equipment thereof
CN116503289A (en) * 2023-06-20 2023-07-28 北京天工异彩影视科技有限公司 Visual special effect application processing method and system
CN116503289B (en) * 2023-06-20 2024-01-09 北京天工异彩影视科技有限公司 Visual special effect application processing method and system

Also Published As

Publication number Publication date
CN111385514B (en) 2021-06-29
WO2021164289A1 (en) 2021-08-26

Similar Documents

Publication Publication Date Title
CN109917956B (en) Method for controlling screen display and electronic equipment
WO2020211532A1 (en) Display control method and related apparatus
CN110502954B (en) Video analysis method and device
CN111385514B (en) Portrait processing method and device and terminal
CN110798568B (en) Display control method of electronic equipment with folding screen and electronic equipment
WO2022262313A1 (en) Picture-in-picture-based image processing method, device, storage medium, and program product
CN112085647B (en) Face correction method and electronic equipment
CN108848405B (en) Image processing method and device
CN110807769B (en) Image display control method and device
CN114257920B (en) Audio playing method and system and electronic equipment
CN113850726A (en) Image transformation method and device
WO2020015149A1 (en) Wrinkle detection method and electronic device
CN113850709A (en) Image transformation method and device
CN113518189A (en) Shooting method, shooting system, electronic equipment and storage medium
CN110673694A (en) Application opening method and electronic equipment
WO2022062985A1 (en) Method and apparatus for adding special effect in video, and terminal device
CN112243117A (en) Image processing apparatus, method and camera
CN113596320B (en) Video shooting variable speed recording method, device and storage medium
CN110891181B (en) Live broadcast picture display method and device, storage medium and terminal
CN113852755A (en) Photographing method, photographing apparatus, computer-readable storage medium, and program product
CN115393676A (en) Gesture control optimization method and device, terminal and storage medium
CN108881739B (en) Image generation method, device, terminal and storage medium
CN113923351B (en) Method, device and storage medium for exiting multi-channel video shooting
CN115908221B (en) Image processing method, electronic device and storage medium
CN115150542B (en) Video anti-shake method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant