WO2021139382A1 - Face image processing method and apparatus, readable medium, and electronic device - Google Patents
Face image processing method and apparatus, readable medium, and electronic device
- Publication number
- WO2021139382A1 PCT/CN2020/127260
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- line
- face image
- sight
- coordinates
- distance
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
- G06V10/225—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- This application relates to the field of image processing technology, and more specifically, to a face image processing method and apparatus, a readable medium, and an electronic device.
- A terminal device can add various effects to an image containing a human face; for example, laser, light, or tear effects can be rendered on the human eyes in the image.
- However, such a rendering effect for the human eye is usually displayed at a fixed position near the human eye and cannot reflect the real state of the human eye.
- The purpose of this application is to provide a face image processing method and apparatus, a readable medium, and an electronic device, to solve the technical problem that existing face image processing methods can only display a rendering effect at a fixed position near the human eye and therefore cannot reflect the real state of the human eye.
- In a first aspect, the present disclosure provides a method for processing a face image, the method including:
- obtaining line of sight information in a face image to be processed according to a preset recognition algorithm, where the line of sight information includes first coordinates of a human eye and edge coordinates, and the edge coordinates indicate the intersection of the line of sight of the human eye and the edge of the face image;
- determining a target area in the face image according to the line of sight information, where the target area includes a line of sight segment with the first coordinates and the edge coordinates as endpoints;
- rendering a preset effect material to the target area to obtain a target image.
- In a second aspect, the present disclosure provides a face image processing apparatus, the apparatus including:
- an obtaining module, configured to obtain line of sight information in a face image to be processed according to a preset recognition algorithm, where the line of sight information includes first coordinates of a human eye and edge coordinates, and the edge coordinates indicate the intersection of the line of sight of the human eye and the edge of the face image;
- a first determining module, configured to determine a target area in the face image according to the line of sight information, the target area including a line of sight segment with the first coordinates and the edge coordinates as endpoints;
- a rendering module, configured to render a preset effect material to the target area to obtain a target image.
- In a third aspect, the present disclosure provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processing device, implements the steps of the method described in the first aspect of the present disclosure.
- In a fourth aspect, the present disclosure provides an electronic device, including:
- a storage device on which a computer program is stored; and
- a processing device configured to execute the computer program in the storage device to implement the steps of the method described in the first aspect of the present disclosure.
- Through the above technical solution, the present disclosure first recognizes the face image to be processed according to a preset recognition algorithm to obtain line of sight information including the first coordinates of the human eye and the edge coordinates, where the edge coordinates indicate the intersection of the line of sight of the human eye and the edge of the face image; it then determines, according to the line of sight information, a target area in the face image that includes the line of sight segment with the first coordinates and the edge coordinates as endpoints, and finally renders the preset effect material to the target area to obtain the target image.
- In this way, the present disclosure recognizes the line of sight information contained in the face image to determine the target area that needs to be rendered, and then renders the effect material to that target area, so that the rendering effect can follow the line of sight of the human eye.
- Fig. 1 is a flowchart showing a method for processing a face image according to an exemplary embodiment;
- Fig. 2 shows an effect material according to an exemplary embodiment;
- Fig. 3 shows a face image according to an exemplary embodiment;
- Fig. 4 shows a target image according to an exemplary embodiment;
- Fig. 5 is a flowchart showing another method for processing a face image according to an exemplary embodiment;
- Fig. 6 shows a face image according to an exemplary embodiment;
- Fig. 7 is a flowchart showing another method for processing a face image according to an exemplary embodiment;
- Fig. 8 is a flowchart showing another method for processing a face image according to an exemplary embodiment;
- Fig. 9 shows an additional effect material according to an exemplary embodiment;
- Fig. 10a shows a face image according to an exemplary embodiment;
- Fig. 10b shows a target image according to an exemplary embodiment;
- Fig. 11 is a block diagram showing an apparatus for processing a face image according to an exemplary embodiment;
- Fig. 12 is a block diagram showing another apparatus for processing a face image according to an exemplary embodiment;
- Fig. 13 is a block diagram showing another apparatus for processing a face image according to an exemplary embodiment;
- Fig. 14 is a block diagram showing another apparatus for processing a face image according to an exemplary embodiment;
- Fig. 15 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
- Fig. 1 is a flowchart showing a method for processing a face image according to an exemplary embodiment. As shown in Fig. 1, the method includes:
- Step 101: Obtain the line of sight information in the face image to be processed according to a preset recognition algorithm.
- The line of sight information includes the first coordinates of the human eye and the edge coordinates.
- The edge coordinates are used to indicate the intersection of the line of sight of the human eye and the edge of the face image.
- The face image to be processed may be, for example, a photo containing a face taken by a user through a terminal device, a frame containing a face in a video shot through a terminal device, or an image otherwise obtained by the user on the terminal device.
- First, the face image is recognized according to the preset recognition algorithm to obtain the line of sight information it contains.
- Specifically, the recognition algorithm can first recognize the face in the face image, then further determine the position of the eyes in the face, and finally obtain the line of sight information.
- The line of sight information describes the line of sight of the human eye; for example, it may include the first coordinates of the human eye on the face image and the edge coordinates of the intersection of the line of sight with the edge of the face image. Together, the first coordinates and the edge coordinates describe the direction of the line of sight (that is, the direction in which the human eye is looking).
- the line of sight contained in the face image takes the first coordinate as the starting point and the edge coordinate as the end point.
- the human face image may include one or more human faces, and each human face may include two human eyes, so each human eye corresponds to a line of sight.
- each line of sight corresponds to a set of first coordinates and edge coordinates. It can be understood that when the face image includes N eyes, N pieces of line of sight information are acquired in step 101, and each line of sight information includes a set of first coordinates and edge coordinates, which are used to describe the line of sight of a human eye.
- Step 102: Determine a target area in the face image according to the line of sight information, where the target area includes a line of sight segment with the first coordinates and edge coordinates as endpoints.
- The user can select a specified effect through the terminal device (for example, laser, light, or tears). To render the specified effect for the human eye, a target area must then be determined in the face image based on the line of sight information, that is, the position where the specified effect is to be displayed in the face image.
- The target area includes a line of sight segment with the first coordinates and edge coordinates as endpoints.
- The target area may be a rectangle that includes the line of sight segment, or another shape that includes it.
- For example, the target area may be a rectangle centered on the midpoint of the line of sight segment, with the length of the segment as its length and the width of the human eye as its width.
- The line of sight segment may also be extended first, in which case the rectangle is centered on the midpoint of the extended segment, with the length of the extended segment as its length.
- Target areas correspond one-to-one to line of sight segments.
- When the face image includes multiple human eyes (for example, the two eyes of a single face, or the eyes of multiple faces in one image), each human eye corresponds to a set of first coordinates and edge coordinates.
- Each set of first coordinates and edge coordinates determines a line of sight segment.
- Each line of sight segment in turn determines a target area; that is, multiple target areas are determined in the face image.
- Step 103: Render the preset effect material to the target area to obtain the target image.
- The effect material corresponding to the specified effect can be found in a pre-stored material library according to the effect specified by the user.
- The material library includes the effect material corresponding to each of the various effects.
- The material library can be pre-stored on the terminal device or stored on a server that the terminal device can access; when the terminal device needs the effect material corresponding to a certain effect, it searches for and obtains that material from the server. After the effect material is determined, OpenGL (Open Graphics Library) is used to apply the effect material as a texture map and render it into the target area to obtain the target image. Since the target area is determined based on the line of sight information, rendering the effect material into the target area allows the rendering effect to follow the line of sight of the human eye and reflect its real state.
- In addition, the effect material may be designed with different widths at its two ends.
- When rendering to the target area, the narrower end of the effect material is rendered to the end of the target area close to the first coordinates (that is, the end close to the human eye), and the wider end is rendered to the end of the target area close to the edge coordinates.
- Taking a laser effect material as an example, the laser beam in the resulting target image is thinner at the human eye and thicker at the edge of the image, producing a visual 3D effect (see the sketch below).
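- As a concrete illustration of this geometry, the following is a minimal Python sketch of building such a tapered quad; the function name and the w_near/w_far parameters are illustrative assumptions, and in practice the quad would be texture-mapped with the effect material (for example via OpenGL).

```python
def trapezoid_vertices(eye, edge, w_near, w_far):
    """Quad vertices for an effect that is narrower at the eye (w_near)
    than at the image edge (w_far), giving the described 3D laser look.
    eye and edge are (x, y) tuples; eye != edge is assumed.
    """
    dx, dy = edge[0] - eye[0], edge[1] - eye[1]
    length = (dx * dx + dy * dy) ** 0.5           # length of the sight segment
    nx, ny = -dy / length, dx / length            # unit normal to the sight line
    return [
        (eye[0] + nx * w_near / 2, eye[1] + ny * w_near / 2),
        (eye[0] - nx * w_near / 2, eye[1] - ny * w_near / 2),
        (edge[0] - nx * w_far / 2, edge[1] - ny * w_far / 2),
        (edge[0] + nx * w_far / 2, edge[1] + ny * w_far / 2),
    ]
```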
- After the target image is obtained, it can be displayed on the display interface of the terminal device or stored in a designated storage location.
- The target image can also be sent to a designated server for sharing, which is not specifically limited in the present disclosure.
- In summary, the present disclosure first recognizes the face image to be processed according to a preset recognition algorithm to obtain line of sight information including the first coordinates of the human eye and the edge coordinates.
- The edge coordinates indicate the intersection of the line of sight of the human eye and the edge of the face image; a target area that includes the line of sight segment with the first coordinates and the edge coordinates as endpoints is then determined in the face image according to the line of sight information, and finally the preset effect material is rendered to the target area to obtain the target image.
- In this way, the present disclosure recognizes the line of sight information contained in the face image to determine the target area that needs to be rendered, and then renders the effect material to that target area, so that the rendering effect can follow the line of sight of the human eye.
- Fig. 5 is a flowchart showing another method for processing a face image according to an exemplary embodiment.
- In this embodiment, the line of sight information also includes a depth of field distance, which is the distance between the human eye and the lens that captured the face image.
- The implementation of step 102 may include:
- Step 1021: Determine a first distance according to the depth of field distance, and determine a second distance according to the image size of the face image, where the first distance is negatively correlated with the depth of field distance and the second distance is positively correlated with the image size.
- Step 1022: Take a rectangular area that includes the line of sight segment and has the first distance as its width and the second distance as its length as the target area.
- In a specific application scenario, the target area may also be determined in combination with the depth of field distance included in the line of sight information.
- The depth of field distance can be understood as the distance between the human eye in the face image and the lens that captured the image: the closer the human eye is to the lens, the smaller the depth of field distance; the farther the human eye is from the lens, the greater the depth of field distance.
- First, the first distance is determined according to the depth of field distance.
- Then the second distance is determined according to the image size of the face image, where the first distance is negatively correlated with the depth of field distance and the second distance is positively correlated with the image size.
- Finally, a rectangle including the line of sight segment is determined as the target area.
- Specifically, the first distance can be determined according to Formula 1, in which:
- W represents the first distance;
- Z represents the depth of field distance;
- the remaining symbols are preset adjustment parameters used to adjust the sensitivity of W to Z, and the arctangent function is used to keep W from becoming too large or too small.
- Because the target area may extend beyond the range of the face image, the second distance can be set to the length of the diagonal of the face image, or the diagonal length can be used as the minimum value of the second distance, ensuring that the effect material is not cut off within the face image (the sketch below illustrates both distances).
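- To make the two distances concrete, here is a minimal Python sketch. Formula 1 itself is not reproduced in this text, so the arctangent form below, along with the illustrative parameters alpha and beta, is only an assumption consistent with the stated properties: the first distance shrinks as the depth of field distance grows and is bounded by the arctangent.

```python
import math

def first_distance(z, alpha=200.0, beta=100.0):
    """First distance (target-area width): decreases as the depth-of-field
    distance z grows; arctan keeps it from becoming too large or too small.
    The functional form and parameter values are assumptions, not Formula 1.
    """
    return alpha * math.atan(beta / z)

def second_distance(image_width, image_height):
    """Second distance (target-area length): the image diagonal, used as the
    minimum so the rendered material is not cut off inside the image."""
    return math.hypot(image_width, image_height)
```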
- In this scenario, step 103 may be:
- adjusting the size of the effect material according to the first distance and the second distance, and rendering the adjusted effect material to the target area to obtain the target image.
- That is, the effect material is resized so that its width is the first distance and its length is the second distance, and the adjusted effect material is then applied as a texture map through OpenGL and rendered to the target area to obtain the target image.
- In one implementation, determining the target area in step 1022 may include the following steps:
- Step 1): Determine the first side, where the side length of the first side is the first distance, the midpoint of the first side is the first coordinates, and the first side is perpendicular to the line of sight segment.
- Step 2): Determine the second side, where the side length of the second side is the second distance and the second side is perpendicular to the first side.
- Step 3): Take the rectangle composed of the first side and the second side and including the line of sight segment as the target area.
- In other words, the first side is normal to the line of sight segment: it is the line segment whose side length is the first distance and whose midpoint is the first coordinates.
- Then the second side of the target area is determined.
- The second side is a line segment perpendicular to the first side whose length is the second distance, and the intersection of the second side and the first side is located at an endpoint of the first side.
- Finally, the rectangle formed by the first side and the second side is taken as the target area, with the line of sight segment contained inside it.
- Taking the left eye as an example, with its first coordinates and edge coordinates denoted (P0, P1), the first side determined by steps 1) to 3) is MN and the second side is MO.
- The target area of the left eye is A1, as shown in Fig. 6.
- The target area of the right eye is determined in the same way as that of the left eye, which will not be repeated here.
- Further, the coordinates of the four vertices of A1 can be determined according to Formula 2, in which:
- Vertex_LeftTop represents the coordinates of the upper left corner of A1;
- Vertex_RightTop represents the coordinates of the upper right corner of A1;
- Vertex_LeftBottom represents the coordinates of the lower left corner of A1;
- Vertex_RightBottom represents the coordinates of the lower right corner of A1;
- W represents the first distance (the side length of MN);
- H represents the second distance (the side length of MO);
- dy represents sin θ and dx represents cos θ;
- θ is the angle between the line of sight segment and the lower edge of the face image (a geometric sketch of these vertices follows).
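- Formula 2 itself is not reproduced in this text, but the vertices follow from the geometry described above. The Python sketch below (the function name and vertex ordering are illustrative) places the first side MN through the eye, perpendicular to the sight direction, and shifts it by H along the sight line to close the rectangle.

```python
import math

def target_area_vertices(p0, theta, w, h):
    """Vertices of the rectangular target area A1 (a sketch of Formula 2).

    p0    -- first coordinates of the eye, as (x, y)
    theta -- angle between the sight segment and the lower image edge
    w     -- first distance (side length of MN)
    h     -- second distance (side length of MO)
    """
    dx, dy = math.cos(theta), math.sin(theta)   # unit vector along the sight line
    nx, ny = -dy, dx                            # unit normal to the sight line
    x0, y0 = p0
    m = (x0 + nx * w / 2, y0 + ny * w / 2)      # one endpoint of the first side MN
    n = (x0 - nx * w / 2, y0 - ny * w / 2)      # the other endpoint
    # shift both endpoints by h along the sight direction to close the rectangle
    o = (m[0] + dx * h, m[1] + dy * h)
    q = (n[0] + dx * h, n[1] + dy * h)
    return m, n, q, o
```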
- In this scenario, step 101 may be:
- inputting the face image into a pre-trained line of sight recognition model to obtain the first coordinates, edge coordinates, and depth of field distance output by the model.
- That is, the recognition algorithm may input the face image into a pre-trained line of sight recognition model that outputs the first coordinates, edge coordinates, and depth of field distance for the face image.
- The line of sight recognition model may be a neural network trained on a preset sample input set and sample output set, for example a convolutional neural network (CNN).
- The convolutional neural network is only one example of a neural network usable in the embodiments of the present disclosure; the present disclosure is not limited to it and may also employ various other neural networks.
- The line of sight recognition model may include, for example, an input layer, convolutional layers, feedback layers, a fully connected layer, and an output layer.
- The face image is fed into the input layer, and convolutional features are extracted from the face image through the convolutional layers.
- Each feedback layer combines the features of the previous feedback layer and the next feedback layer with the convolutional features to extract the features of the current feedback layer; the fully connected layer then abstracts the feedback-layer features to generate the first coordinates, edge coordinates, and depth of field distance of the face image.
- Finally, the first coordinates, edge coordinates, and depth of field distance are output through the output layer (an illustrative sketch of such a model follows).
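- For orientation only, a minimal PyTorch-style sketch of such a model is shown below. The layer sizes, the plain convolutional backbone standing in for the described feedback layers, and the five-value regression head (eye x/y, edge x/y, depth Z) are illustrative assumptions, not the architecture of the patent.

```python
import torch.nn as nn

class GazeRecognitionModel(nn.Module):
    """Sketch of a line-of-sight recognition model: a small convolutional
    feature extractor followed by a linear head that regresses five values
    (first coordinates x/y, edge coordinates x/y, depth-of-field distance)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global pooling to 32 features
        )
        self.head = nn.Linear(32, 5)              # x0, y0, x_edge, y_edge, Z

    def forward(self, face_image):                # face_image: (B, 3, H, W)
        x = self.features(face_image).flatten(1)
        return self.head(x)
```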
- Fig. 7 is a flowchart showing another method for processing a face image according to an exemplary embodiment. As shown in Fig. 7, in a scene where there are multiple line of sight segments in the face image, the method further includes, after step 102:
- Step 104: Determine the intersection coordinates of the multiple line of sight segments according to the line of sight information.
- Step 105: Use the edge coordinates and/or the intersection coordinates as additional effect coordinates.
- Step 106: Determine an additional area centered on the additional effect coordinates.
- Correspondingly, step 103 includes: rendering the effect material to the target area, and rendering the preset additional effect material to the additional area, to obtain the target image.
- For example, the two eyes of a single face in a face image correspond to two line of sight segments, and those two segments may intersect.
- When the face image includes multiple faces, the line of sight segments corresponding to different faces may also intersect.
- In addition, the line of sight of a human eye intersects the edge of the face image (at the point indicated by the edge coordinates).
- Additional effects (for example, collisions or sparks) can be rendered at these points.
- The number of intersection points may be zero (the line of sight segments are parallel to each other, or do not intersect within the face image), one, or more.
- Taking line of sight segments ab and cd as an example, the coordinates of their intersection point can be determined by the following steps:
- The coordinates of the four endpoints of ab and cd are: the abscissa of a is ax and the ordinate of a is ay; the abscissa of b is bx and the ordinate of b is by; the abscissa of c is cx and the ordinate of c is cy; the abscissa of d is dx and the ordinate of d is dy.
- area_abd = (a.x − d.x) × (b.y − d.y) − (a.y − d.y) × (b.x − d.x)
- Here t can be understood as the ratio of the area of triangle cda to the area of quadrilateral abcd, and equivalently as the ratio of the distance from point a to the intersection point to the length of ab (a code sketch of this computation follows).
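- A minimal Python sketch of this signed-area intersection test follows; the helper names are illustrative, and area_abd matches the expression given above.

```python
def segment_intersection(a, b, c, d):
    """Intersection point of line segments ab and cd via signed triangle
    areas; returns None when the segments are parallel or do not properly
    cross. Points are (x, y) tuples."""
    def area2(p, q, r):
        # twice the signed area of triangle pqr
        return (p[0] - r[0]) * (q[1] - r[1]) - (p[1] - r[1]) * (q[0] - r[0])

    area_abd = area2(a, b, d)   # the expression given in the description
    area_abc = area2(a, b, c)
    area_cda = area2(c, d, a)
    area_cdb = area2(c, d, b)

    # c and d must lie on opposite sides of ab, and a and b on opposite
    # sides of cd, for the segments to properly intersect
    if area_abd * area_abc >= 0 or area_cda * area_cdb >= 0:
        return None

    # t: ratio of the distance from a to the intersection to the length of ab
    t = area_cda / (area_cda - area_cdb)
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
```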
- If intersection points exist, both the edge coordinates and the intersection coordinates can be used as additional effect coordinates; if the number of intersection points is zero, only the edge coordinates are used as additional effect coordinates. An additional area centered on each additional effect coordinate is then determined.
- The additional area can be a rectangle centered on the additional effect coordinates, or another shape.
- The additional effect material can then be rendered to the additional area to obtain the target image.
- The additional effect can likewise be selected by the user, and the additional effect material corresponding to the selected effect is found in a pre-stored additional material library.
- Fig. 8 is a flowchart showing another method for processing a face image according to an exemplary embodiment.
- In this embodiment, the line of sight information also includes a depth of field distance,
- which is the distance between the human eye and the lens that captured the face image.
- The implementation of step 106 may include:
- Step 1061: Determine, according to the depth of field distance and the additional effect coordinates, the additional depth of field distance corresponding to the additional effect coordinates.
- Step 1062: Determine the additional area centered on the additional effect coordinates, where the size of the additional area is determined according to the additional depth of field distance.
- the additional area may also be determined in combination with the depth of field distance included in the line of sight information.
- the additional depth of field distance corresponding to the additional effect coordinates can be determined according to the depth of field distance and the additional effect coordinates.
- The additional depth of field distance can be determined by Formula 4, in which:
- Z_f represents the additional depth of field distance;
- Z represents the depth of field distance;
- t is the ratio of the area of triangle cda to the area of quadrilateral abcd, as determined in Formula 3.
- The size of the additional area is negatively correlated with the additional depth of field distance: the greater the additional depth of field distance, the farther the additional effect coordinates are from the lens and the smaller the additional area; the smaller the additional depth of field distance, the closer the additional effect coordinates are to the lens and the larger the additional area.
- For example, the side length of the additional area can be determined according to Formula 5, in which:
- W_f represents the side length of the additional area;
- the remaining symbols are preset adjustment parameters for adjusting the sensitivity of W_f to Z_f,
- with the arctangent function used to keep W_f from becoming too large or too small (an illustrative sketch follows).
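- Formulas 4 and 5 are not reproduced in this text; the Python sketch below is only an assumption consistent with the stated properties. Scaling the eye depth by the intersection ratio t is an assumed monotone choice for Formula 4, and the arctangent form with illustrative parameters gamma and delta mirrors the description of Formula 5.

```python
import math

def additional_depth(z, t):
    """Additional depth-of-field distance Z_f (cf. Formula 4); growing the
    eye depth z with the intersection ratio t is an assumption."""
    return z * (1.0 + t)

def additional_area_side(z_f, gamma=200.0, delta=100.0):
    """Side length W_f of the additional area (cf. Formula 5): shrinks as
    Z_f grows, with arctan keeping it from becoming too large or too small.
    gamma and delta stand in for the preset adjustment parameters."""
    return gamma * math.atan(delta / z_f)
```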
- When rendering, the GL_MAX_EXT blending equation in OpenGL can be used to merge the layer holding the effect material with the layer holding the additional effect material.
- The merged layer is then combined with the layer where the face image is located: the effect material, the additional effect material, and the face image are mixed using the filter blending mode to achieve the rendering.
- In the resulting blend, the result color is what is displayed in the target image,
- the color of the face image is the base color,
- and the colors of the effect material and the additional effect material are the blend colors (a sketch of this compositing follows).
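- As a sketch of this compositing, assuming float RGB arrays with values in [0, 1] and reading the "filter" mode as the screen blend (both readings are assumptions here):

```python
import numpy as np

def composite(face, effect, additional_effect):
    """Merge the two effect layers with a per-channel maximum (the result of
    the GL_MAX_EXT blending equation), then screen-blend the merged layer
    over the face image so the effects brighten the base color."""
    merged = np.maximum(effect, additional_effect)   # GL_MAX_EXT equivalent
    return 1.0 - (1.0 - face) * (1.0 - merged)       # screen ("filter") blend
```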
- In summary, the present disclosure first recognizes the face image to be processed according to a preset recognition algorithm to obtain line of sight information including the first coordinates of the human eye and the edge coordinates, where the edge coordinates indicate the intersection of the line of sight of the human eye and the edge of the face image; it then determines, according to the line of sight information, a target area that includes the line of sight segment with the first coordinates and the edge coordinates as endpoints, and finally renders the preset effect material to the target area to obtain the target image.
- In this way, the present disclosure recognizes the line of sight information contained in the face image to determine the target area that needs to be rendered, and then renders the effect material to that target area, so that the rendering effect can follow the line of sight of the human eye.
- Fig. 11 is a block diagram showing an apparatus for processing a face image according to an exemplary embodiment. As shown in Fig. 11, the apparatus 200 includes:
- the obtaining module 201, configured to obtain the line of sight information in the face image to be processed according to a preset recognition algorithm,
- where the line of sight information includes the first coordinates of the human eye and the edge coordinates,
- and the edge coordinates are used to indicate the intersection of the line of sight of the human eye and the edge of the face image;
- the first determining module 202, configured to determine a target area in the face image according to the line of sight information, the target area including a line of sight segment with the first coordinates and edge coordinates as endpoints;
- the rendering module 203, configured to render the preset effect material to the target area to obtain the target image.
- Fig. 12 is a block diagram of another face image processing apparatus according to an exemplary embodiment.
- In this embodiment, the line of sight information also includes a depth of field distance, which is the distance between the human eye and the lens that captured the face image.
- The first determining module 202 includes:
- the first determining sub-module 2021, configured to determine the first distance according to the depth of field distance and determine the second distance according to the image size of the face image, where the first distance is negatively correlated with the depth of field distance and the second distance is positively correlated with the image size;
- the second determining sub-module 2022, configured to take a rectangular area that includes the line of sight segment and has the first distance as its width and the second distance as its length as the target area.
- Optionally, the rendering module 203 is configured to:
- adjust the size of the effect material according to the first distance and the second distance, and render the adjusted effect material to the target area to obtain the target image.
- Optionally, the second determining sub-module 2022 is configured to perform the following steps:
- Step 1): Determine the first side, where the side length of the first side is the first distance, the midpoint of the first side is the first coordinates, and the first side is perpendicular to the line of sight segment.
- Step 2): Determine the second side, where the side length of the second side is the second distance and the second side is perpendicular to the first side.
- Step 3): Take the rectangle composed of the first side and the second side and including the line of sight segment as the target area.
- Optionally, the obtaining module 201 is configured to:
- input the face image into the pre-trained line of sight recognition model to obtain the first coordinates, edge coordinates, and depth of field distance output by the model.
- Fig. 13 is a block diagram showing another apparatus for processing a face image according to an exemplary embodiment. As shown in Fig. 13, when there are multiple line of sight segments in the face image, the apparatus 200 further includes:
- the second determining module 204, configured to determine the intersection coordinates of the multiple line of sight segments according to the line of sight information after the target area in the face image has been determined;
- the second determining module 204 is further configured to use the edge coordinates and/or the intersection coordinates as additional effect coordinates;
- the second determining module 204 is also configured to determine the additional area centered on the additional effect coordinates;
- the rendering module 203 is configured to render the effect material to the target area and render the preset additional effect material to the additional area to obtain the target image.
- Fig. 14 is a block diagram showing another face image processing apparatus according to an exemplary embodiment.
- In this embodiment, the second determining module 204 includes:
- the third determining sub-module 2041, configured to determine the additional depth of field distance corresponding to the additional effect coordinates according to the depth of field distance and the additional effect coordinates;
- the fourth determining sub-module 2042, configured to determine the additional area centered on the additional effect coordinates, where the size of the additional area is determined according to the additional depth of field distance.
- In summary, the present disclosure first recognizes the face image to be processed according to a preset recognition algorithm to obtain line of sight information including the first coordinates of the human eye and the edge coordinates, where the edge coordinates indicate the intersection of the line of sight of the human eye and the edge of the face image; it then determines, according to the line of sight information, a target area that includes the line of sight segment with the first coordinates and the edge coordinates as endpoints, and finally renders the preset effect material to the target area to obtain the target image.
- In this way, the present disclosure recognizes the line of sight information contained in the face image to determine the target area that needs to be rendered, and then renders the effect material to that target area, so that the rendering effect can follow the line of sight of the human eye.
- The electronic device in the embodiments of the present disclosure may be a server, which may be a local server or a cloud server, or a terminal device. Terminal devices may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted navigation terminals, as well as fixed terminals such as digital TVs and desktop computers.
- The user can upload the face image by logging in to the server, upload it directly through the terminal device, or collect it through the terminal device.
- the electronic device shown in FIG. 15 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
- As shown in Fig. 15, the electronic device 300 may include a processing device (such as a central processing unit or a graphics processor) 301, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303.
- In the RAM 303, various programs and data required for the operation of the electronic device 300 are also stored.
- the processing device 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304.
- An input/output (I/O) interface 305 is also connected to the bus 304.
- The following devices can be connected to the I/O interface 305: input devices 306 such as touch screens, touch pads, keyboards, mice, cameras, microphones, accelerometers, and gyroscopes; output devices 307 such as liquid crystal displays (LCD), speakers, and vibrators; and storage devices 308 such as magnetic tapes and hard disks.
- the communication device 309 may allow the electronic device 300 to perform wireless or wired communication with other devices to exchange data.
- Although FIG. 15 shows an electronic device 300 having various devices, it should be understood that it is not required to implement or include all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
- an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from the network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302.
- When the computer program is executed by the processing device 301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
- the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
- the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
- Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein.
- This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
- The computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
- In some embodiments, the terminal device and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can interconnect with digital data communication in any form or medium (for example, a communication network).
- Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device: obtains the line of sight information in the face image to be processed according to a preset recognition algorithm
- the line of sight information includes: first coordinates of the human eye and edge coordinates, and the edge coordinates are used to indicate the intersection of the line of sight of the human eye and the edge of the face image.
- a target area in the face image is determined according to the line of sight information, where the target area includes a line of sight segment with the first coordinates and the edge coordinates as endpoints.
- the preset effect material is rendered to the target area to obtain a target image.
- the computer program code used to perform the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
- The above-mentioned programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
- The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- Each block in the flowchart or block diagram can represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
- It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
- Each block in the block diagram and/or flowchart, and any combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- The modules involved in the embodiments described in the present disclosure can be implemented in software or hardware, and the name of a module does not in some cases constitute a limitation on the module itself.
- For example, the first determining module can also be described as a "module for determining the target area".
- exemplary types of hardware logic components include: Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), Application Specific Standard Product (ASSP), System on Chip (SOC), Complex Programmable Logical device (CPLD) and so on.
- a machine-readable medium may be a tangible medium, which may contain or store a program for use by the instruction execution system, apparatus, or device or in combination with the instruction execution system, apparatus, or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- the machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any suitable combination of the foregoing.
- More specific examples of machine-readable storage media include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fibers, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
- Example 1 provides a method for processing a face image, including: obtaining line of sight information in a face image to be processed according to a preset recognition algorithm, where the line of sight information includes the first coordinates of the human eye and edge coordinates, the edge coordinates being used to indicate the intersection of the line of sight of the human eye and the edge of the face image; determining a target area in the face image according to the line of sight information, where the target area includes a line of sight segment with the first coordinates and the edge coordinates as endpoints; and rendering a preset effect material to the target area to obtain a target image.
- Example 2 provides the method of Example 1, where the line of sight information further includes a depth of field distance, the depth of field distance being the distance between the human eye and the lens that captured the face image; and determining the target area in the face image according to the line of sight information includes: determining a first distance according to the depth of field distance, and determining a second distance according to the image size of the face image, where the first distance is negatively correlated with the depth of field distance and the second distance is positively correlated with the image size; and taking a rectangular area that includes the line of sight segment and has a width of the first distance and a length of the second distance as the target area.
- Example 3 provides the method of Example 2, where rendering the preset effect material to the target area to obtain the target image includes: adjusting the size of the effect material according to the first distance and the second distance, and rendering the adjusted effect material to the target area to obtain the target image.
- Example 4 provides the method of Example 2, where taking a rectangle that includes the line of sight segment and has a width of the first distance and a length of the second distance as the target area includes: determining a first side, the side length of the first side being the first distance, the midpoint of the first side being the first coordinates, and the first side being perpendicular to the line of sight segment; determining a second side, the side length of the second side being the second distance and the second side being perpendicular to the first side; and taking the rectangle composed of the first side and the second side and including the line of sight segment as the target area.
- Example 5 provides the method of any one of Examples 2 to 4, where obtaining the line of sight information in the face image to be processed includes: inputting the face image into a pre-trained line of sight recognition model to obtain the first coordinates, the edge coordinates, and the depth of field distance output by the line of sight recognition model.
- Example 6 provides the method of any one of Examples 1 to 4, where there are multiple line of sight segments in the face image, and after the target area is determined according to the line of sight information, the method further includes: determining the intersection coordinates of the multiple line of sight segments according to the line of sight information; using the edge coordinates and/or the intersection coordinates as additional effect coordinates; and determining an additional area centered on the additional effect coordinates; and rendering the preset effect material to the target area to obtain the target image includes: rendering the effect material to the target area, and rendering the preset additional effect material to the additional area, to obtain the target image.
- Example 7 provides the method of Example 6, where the line of sight information further includes a depth of field distance, the depth of field distance being the distance between the human eye and the lens that captured the face image; and determining the additional area centered on the additional effect coordinates includes: determining the additional depth of field distance corresponding to the additional effect coordinates according to the depth of field distance and the additional effect coordinates; and determining the additional area centered on the additional effect coordinates, the size of the additional area being determined according to the additional depth of field distance.
- Example 8 provides an apparatus for processing a face image, the apparatus including: an acquisition module for acquiring line of sight information in a face image to be processed according to a preset recognition algorithm, the line of sight information including the first coordinates of the human eye and edge coordinates, the edge coordinates being used to indicate the intersection of the line of sight of the human eye and the edge of the face image; a first determining module for determining a target area in the face image according to the line of sight information, the target area including a line of sight segment with the first coordinates and the edge coordinates as endpoints; and a rendering module for rendering a preset effect material to the target area to obtain a target image.
- Example 9 provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processing device, the steps of the methods described in Examples 1 to 7 are implemented.
- Example 10 provides an electronic device, including: a storage device on which a computer program is stored; and a processing device for executing the computer program in the storage device to Implement the steps of the methods described in Example 1 to Example 7.
- Example 11 provides a computer program including program code which, when the computer program is run by a computer, performs the steps of the methods described in Examples 1 to 7.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Geometry (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Ophthalmology & Optometry (AREA)
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Image Generation (AREA)
- User Interface Of Digital Computer (AREA)
Claims (11)
- 1. A method for processing a face image, characterized in that the method comprises: obtaining line of sight information in a face image to be processed according to a preset recognition algorithm, the line of sight information comprising first coordinates of a human eye and edge coordinates, the edge coordinates being used to indicate an intersection of the line of sight of the human eye and an edge of the face image; determining a target area in the face image according to the line of sight information, the target area comprising a line of sight segment with the first coordinates and the edge coordinates as endpoints; and rendering a preset effect material to the target area to obtain a target image.
- 2. The method according to claim 1, characterized in that the line of sight information further comprises a depth of field distance, the depth of field distance being the distance between the human eye and the lens that captured the face image; and determining the target area in the face image according to the line of sight information comprises: determining a first distance according to the depth of field distance, and determining a second distance according to an image size of the face image, the first distance being negatively correlated with the depth of field distance and the second distance being positively correlated with the image size; and taking a rectangular area that includes the line of sight segment and has a width of the first distance and a length of the second distance as the target area.
- 3. The method according to claim 2, characterized in that rendering the preset effect material to the target area to obtain the target image comprises: adjusting the size of the effect material according to the first distance and the second distance, and rendering the adjusted effect material to the target area to obtain the target image.
- 4. The method according to claim 2 or 3, characterized in that taking the rectangle that includes the line of sight segment and has a width of the first distance and a length of the second distance as the target area comprises: determining a first side, the side length of the first side being the first distance, the midpoint of the first side being the first coordinates, and the first side being perpendicular to the line of sight segment; determining a second side, the side length of the second side being the second distance, and the second side being perpendicular to the first side; and taking the rectangle composed of the first side and the second side and including the line of sight segment as the target area.
- 5. The method according to any one of claims 2 to 4, characterized in that obtaining the line of sight information in the face image to be processed according to the preset recognition algorithm comprises: inputting the face image into a pre-trained line of sight recognition model to obtain the first coordinates, the edge coordinates, and the depth of field distance output by the line of sight recognition model.
- 6. The method according to any one of claims 1 to 5, characterized in that there are multiple line of sight segments in the face image, and after determining the target area in the face image according to the line of sight information, the method further comprises: determining intersection coordinates of the multiple line of sight segments according to the line of sight information; taking the edge coordinates and/or the intersection coordinates as additional effect coordinates; and determining an additional area centered on the additional effect coordinates; and rendering the preset effect material to the target area to obtain the target image comprises: rendering the effect material to the target area, and rendering a preset additional effect material to the additional area, to obtain the target image.
- 7. The method according to claim 6, characterized in that the line of sight information further comprises a depth of field distance, the depth of field distance being the distance between the human eye and the lens that captured the face image; and determining the additional area centered on the additional effect coordinates comprises: determining an additional depth of field distance corresponding to the additional effect coordinates according to the depth of field distance and the additional effect coordinates; and determining the additional area centered on the additional effect coordinates, the size of the additional area being determined according to the additional depth of field distance.
- 8. An apparatus for processing a face image, characterized in that the apparatus comprises: an acquisition module configured to obtain line of sight information in a face image to be processed according to a preset recognition algorithm, the line of sight information comprising first coordinates of a human eye and edge coordinates, the edge coordinates being used to indicate an intersection of the line of sight of the human eye and an edge of the face image; a first determining module configured to determine a target area in the face image according to the line of sight information, the target area comprising a line of sight segment with the first coordinates and the edge coordinates as endpoints; and a rendering module configured to render a preset effect material to the target area to obtain a target image.
- 9. A computer-readable medium on which a computer program is stored, characterized in that, when the program is executed by a processing device, the steps of the method according to any one of claims 1 to 7 are implemented.
- 10. An electronic device, characterized by comprising: a storage device on which a computer program is stored; and a processing device configured to execute the computer program in the storage device to implement the steps of the method according to any one of claims 1 to 7.
- 11. A computer program, characterized by comprising program code which, when the computer program is run by a computer, performs the steps of the method according to any one of claims 1 to 7.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2117372.9A GB2599036B (en) | 2020-01-06 | 2020-11-06 | Face image processing method and apparatus, readable medium, and electronic device |
JP2021571584A JP7316387B2 (ja) | 2020-01-06 | 2020-11-06 | 顔画像の処理方法、デバイス、可読媒体及び電子装置 |
US17/616,961 US11887325B2 (en) | 2020-01-06 | 2020-11-06 | Face image processing method and apparatus, readable medium, and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010010716.5 | 2020-01-06 | ||
CN202010010716.5A CN111243049B (zh) | 2020-01-06 | 2020-01-06 | 人脸图像的处理方法、装置、可读介质和电子设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021139382A1 | 2021-07-15 |
Family
ID=70865325
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/CN2020/127260 (WO2021139382A1) | Face image processing method and apparatus, readable medium, and electronic device | |
Country Status (5)
Country | Link |
---|---|
US (1) | US11887325B2 (zh) |
JP (1) | JP7316387B2 (zh) |
CN (1) | CN111243049B (zh) |
GB (1) | GB2599036B (zh) |
WO (1) | WO2021139382A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113838189A (zh) | Eyelash rendering method and apparatus | |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN111243049B (zh) | Face image processing method and apparatus, readable medium, and electronic device | |
- CN111754613A (zh) | Image decoration method and apparatus, computer-readable medium, and electronic device | |
- CN112257598B (zh) | Method and apparatus for recognizing quadrilaterals in an image, readable medium, and electronic device | |
- CN114202617A (zh) | Video image processing method and apparatus, electronic device, and storage medium | |
- CN116934577A (zh) | Style image generation method, apparatus, device, and medium | |
- CN117095108B (zh) | Texture rendering method and apparatus for a virtual digital human, cloud server, and storage medium | |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170262970A1 (en) * | 2015-09-11 | 2017-09-14 | Ke Chen | Real-time face beautification features for video images |
- CN107563353A (zh) | Image processing method and apparatus, and mobile terminal | |
- CN107909058A (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
- CN107909057A (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
- CN108958610A (zh) | Face-based special effect generation method and apparatus, and electronic device | |
- CN109584152A (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
US20190385290A1 (en) * | 2018-06-15 | 2019-12-19 | Beijing Xiaomi Mobile Software Co., Ltd. | Face image processing method, device and apparatus, and computer-readable storage medium |
- CN111243049A (zh) | Face image processing method and apparatus, readable medium, and electronic device | |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP4809869B2 (ja) | Image processing apparatus, image processing method, and program | |
- JP2011049988A (ja) | Image processing apparatus and camera | |
- CN106249413B (zh) | Virtual dynamic depth-of-field variation processing method simulating human eye focusing | |
DK179867B1 (en) * | 2017-05-16 | 2019-08-06 | Apple Inc. | RECORDING AND SENDING EMOJI |
- CN107818305B (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
- JP7146585B2 (ja) | Line of sight detection device, program, and line of sight detection method | |
- CN110378847A (zh) | Face image processing method and apparatus, medium, and electronic device | |
- CN110378839A (zh) | Face image processing method and apparatus, medium, and electronic device | |
- CN110555798B (zh) | Image deformation method and apparatus, electronic device, and computer-readable storage medium | |
- 2020-01-06: CN application CN202010010716.5A filed; issued as CN111243049B (active)
- 2020-11-06: US application US17/616,961 filed; issued as US11887325B2 (active)
- 2020-11-06: GB application GB2117372.9A filed; issued as GB2599036B (active)
- 2020-11-06: PCT application PCT/CN2020/127260 filed (WO2021139382A1, application filing)
- 2020-11-06: JP application JP2021571584 filed; issued as JP7316387B2 (active)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170262970A1 (en) * | 2015-09-11 | 2017-09-14 | Ke Chen | Real-time face beautification features for video images |
- CN107563353A (zh) | Image processing method and apparatus, and mobile terminal | |
- CN107909058A (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
- CN107909057A (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
US20190385290A1 (en) * | 2018-06-15 | 2019-12-19 | Beijing Xiaomi Mobile Software Co., Ltd. | Face image processing method, device and apparatus, and computer-readable storage medium |
- CN108958610A (zh) | Face-based special effect generation method and apparatus, and electronic device | |
- CN109584152A (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
- CN111243049A (zh) | Face image processing method and apparatus, readable medium, and electronic device | |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113838189A (zh) | Eyelash rendering method and apparatus | |
- CN113838189B (zh) | Eyelash rendering method and apparatus | |
Also Published As
Publication number | Publication date |
---|---|
US11887325B2 (en) | 2024-01-30 |
JP7316387B2 (ja) | 2023-07-27 |
GB2599036A9 (en) | 2023-06-07 |
GB2599036B (en) | 2024-09-18 |
GB202117372D0 (en) | 2022-01-12 |
CN111243049B (zh) | 2021-04-02 |
JP2022535524A (ja) | 2022-08-09 |
GB2599036A (en) | 2022-03-23 |
US20220327726A1 (en) | 2022-10-13 |
CN111243049A (zh) | 2020-06-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2021139382A1 (zh) | Face image processing method and apparatus, readable medium, and electronic device | |
- WO2021139408A1 (zh) | Method and apparatus for displaying special effects, storage medium, and electronic device | |
- US11849211B2 (en) | Video processing method, terminal device and storage medium | |
- WO2020248900A1 (zh) | Panoramic video processing method and apparatus, and storage medium | |
- WO2022042290A1 (zh) | Virtual model processing method and apparatus, electronic device, and storage medium | |
- WO2023207379A1 (zh) | Image processing method, apparatus, device, and storage medium | |
- WO2023207963A1 (zh) | Image processing method and apparatus, electronic device, and storage medium | |
- US12019669B2 (en) | Method, apparatus, device, readable storage medium and product for media content processing | |
- US20220139016A1 (en) | Sticker generating method and apparatus, and medium and electronic device | |
- CN114900625A (zh) | Subtitle rendering method, apparatus, device, and medium for a virtual reality space | |
- WO2021244651A1 (zh) | Information display method and apparatus, terminal, and storage medium | |
- CN111862342B (zh) | Augmented reality texture processing method and apparatus, electronic device, and storage medium | |
- WO2023246302A1 (zh) | Subtitle display method, apparatus, device, and medium | |
- WO2023138467A1 (zh) | Virtual object generation method, apparatus, device, and storage medium | |
- CN110047126B (zh) | Image rendering method and apparatus, electronic device, and computer-readable storage medium | |
- WO2023098649A1 (zh) | Video generation method, apparatus, device, and storage medium | |
- US20230284768A1 (en) | Beauty makeup special effect generation method, device, and storage medium | |
- US11651529B2 (en) | Image processing method, apparatus, electronic device and computer readable storage medium | |
- US20240177409A1 (en) | Image processing method and apparatus, electronic device, and readable storage medium | |
- CN112465692A (zh) | Image processing method, apparatus, device, and storage medium | |
- JP2023550970A (ja) | Method, device, storage medium, and program product for changing the background in a screen | |
- CN111223105B (zh) | Image processing method and apparatus | |
- RU2802724C1 (ru) | Image processing method and apparatus, electronic device and machine-readable storage medium | |
- WO2021004171A1 (zh) | Water ripple image implementation method and apparatus | |
- CN118840467A (zh) | Image processing method and apparatus, terminal, and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20912963 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2021571584 Country of ref document: JP Kind code of ref document: A Ref document number: 202117372 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20201106 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20912963 Country of ref document: EP Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.02.2023) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20912963 Country of ref document: EP Kind code of ref document: A1 |