WO2022199102A1 - Image processing method and device (Procédé et dispositif de traitement d'image) - Google Patents

Image processing method and device

Info

Publication number
WO2022199102A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
point
key point
face key
special effect
Application number
PCT/CN2021/134644
Other languages
English (en)
Chinese (zh)
Inventor
孟维遮
Original Assignee
北京达佳互联信息技术有限公司
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2022199102A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/04 - Context-preserving transformations, e.g. by using an importance map
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Description

  • the present disclosure relates to the technical field of image processing, and in particular, to an image processing method, an apparatus, an electronic device, and a storage medium.
  • the current image special effect processing process may include: the user clicks a set special effect template (e.g., a set animal image template, a set decoration template, etc.). After receiving the click input for the set special effect template, the terminal may perform special effect fusion on the selected set special effect template and the user's face image, and display the fused special effect image. Afterwards, the terminal may receive a movement track input by the user on the special effect image, and draw a line pattern indicating the movement track at the fixed position where the movement track was input.
  • the present disclosure provides an image processing method, apparatus, electronic device and storage medium.
  • an image processing method comprising:
  • in response to a special effect display instruction, acquiring a movement trajectory input by a user on a display page including a face image;
  • according to first position information of at least two face key points in the face image displayed on the display page at the current moment and trajectory position information of at least one trajectory point in the movement trajectory, determining the relative position of each trajectory point relative to the face image; and
  • repeatedly performing image special effect processing, where the image special effect processing includes: converting the relative positions according to second position information of the face key points in the face image displayed on the display page after the current moment to obtain a first absolute position of each trajectory point on the display screen; connecting the trajectory points located at the first absolute positions to generate a first special effect line; and displaying the first special effect line on the display page.
  • In some embodiments, the at least two face key points include: a first face key point, a second face key point and a target face key point; the first face key point and the second face key point are symmetrical with respect to the target face key point, and the target face key point is any face key point on the symmetry axis of the face image.
  • In some embodiments, determining the relative position of each trajectory point relative to the face image according to the first position information of the at least two face key points and the trajectory position information of the at least one trajectory point in the movement trajectory includes: for each trajectory point, determining a translation vector of the trajectory point pointing to the target face key point according to the first position information of the target face key point and the trajectory position information of the trajectory point; determining a first rotation matrix and a first scaling matrix of the current face posture in the face image according to the first position information of the first face key point and the second face key point; and obtaining the relative position of the trajectory point relative to the face image according to the first rotation matrix, the first scaling matrix and the translation vector.
  • In some embodiments, both the first position information and the trajectory position information include absolute coordinates on the display screen, and determining the first rotation matrix and the first scaling matrix of the current face posture in the face image according to the first position information of the first face key point and the second face key point includes: obtaining the first rotation matrix according to the first position information of the first face key point and the second face key point and a first length, where the first length is the length of a first vector pointing from the first face key point to the second face key point; and obtaining the first scaling matrix according to a reference length of the line connecting the first face key point and the second face key point and the first length, where the reference length is the first length set for the face in the front-facing posture in the face image.
  • In some embodiments, obtaining the relative position of the trajectory point relative to the face image according to the first rotation matrix, the first scaling matrix and the translation vector includes: obtaining the relative position according to the first scaling matrix, the first rotation matrix, the translation vector and a first formula, where the first formula includes: Q = M_s1 · M_r1 · (x_t, y_t)^T, in which Q represents the relative position, M_s1 represents the first scaling matrix, M_r1 represents the first rotation matrix, (x_t, y_t) represents the translation vector, and T represents the transposition process.
  • In some embodiments, converting the relative positions according to the second position information of the face key points in the face image displayed on the display page after the current moment to obtain the first absolute position of each trajectory point on the display screen includes: for each trajectory point, determining a second rotation matrix and a second scaling matrix of the current face posture in the face image according to the second position information of the first face key point and the second face key point; and obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.
  • In some embodiments, the second position information includes absolute coordinates on the display screen, and determining the second rotation matrix and the second scaling matrix of the current face posture in the face image according to the second position information of the first face key point and the second face key point includes: obtaining the second rotation matrix according to the second position information of the first face key point and the second face key point and a second length, where the second length is the length of a second vector pointing from the first face key point to the second face key point; and obtaining the second scaling matrix according to the reference length of the line connecting the first face key point and the second face key point and the second length, where the reference length is the second length set for the face in the front-facing posture in the face image.
  • In some embodiments, obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position includes: obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position and a second formula, where the second formula includes: R = M_r2 · M_s2 · (x_q, y_q)^T + (x_c, y_c)^T, in which R represents the first absolute position, M_r2 represents the second rotation matrix, M_s2 represents the second scaling matrix, (x_q, y_q) represents the relative position, (x_c, y_c) represents the second position information of the target face key point, and T represents the transposition process.
  • In some embodiments, the image special effect processing further includes: generating, according to the first special effect line, a second special effect line symmetrical to the first special effect line, where the second special effect line and the first special effect line are left-right symmetrical with the face in the face image as the reference; and displaying the second special effect line on the display page.
  • In some embodiments, the method further includes: determining the relative position of a symmetry point of each trajectory point relative to the face image according to the relative position of each trajectory point relative to the face image, where the symmetry point and the trajectory point are left-right symmetrical with the face as the reference; and generating the second special effect line symmetrical to the first special effect line according to the first special effect line includes: determining a second absolute position of each symmetry point on the display screen according to the second position information of the face key points and the relative position of each symmetry point, and connecting the symmetry points located at the second absolute positions to generate the second special effect line.
  • In some embodiments, the relative position includes relative coordinates relative to the face image, and determining the relative position of the symmetry point of each trajectory point relative to the face image according to the relative position of each trajectory point relative to the face image includes: performing sign inversion processing on the coordinate value in a first direction in the relative coordinates of the trajectory point to obtain a processed coordinate value, where the first direction is perpendicular to the symmetry axis of the face image; updating the relative position of the trajectory point so that the coordinate value in the first direction in the updated relative position is the processed coordinate value; and determining the updated relative position as the relative position of the symmetry point.
  • an image processing apparatus comprising:
  • an acquisition module configured to acquire, in response to the special effect display instruction, the movement track input by the user on the display page including the face image;
  • a determining module configured to determine the relative position of each trajectory point relative to the face image according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory; and
  • an image special effect processing module configured to repeatedly perform image special effect processing, where the image special effect processing includes: converting the relative positions according to the second position information of the face key points in the face image displayed on the display page after the current moment to obtain the first absolute position of each trajectory point on the display screen; connecting the trajectory points located at the first absolute positions to generate a first special effect line; and displaying the first special effect line on the display page.
  • In some embodiments, the at least two face key points include: a first face key point, a second face key point and a target face key point; the first face key point and the second face key point are symmetrical with respect to the target face key point, and the target face key point is any face key point on the symmetry axis of the face image.
  • the determining module is further configured to:
  • for each trajectory point, determine a translation vector of the trajectory point pointing to the target face key point according to the first position information of the target face key point and the trajectory position information of the trajectory point; determine the first rotation matrix and the first scaling matrix of the current face posture in the face image according to the first position information of the first face key point and the second face key point; and obtain the relative position of the trajectory point relative to the face image according to the first rotation matrix, the first scaling matrix and the translation vector.
  • both the first position information and the track position information include absolute coordinates on the display screen, and the determining module is further configured to:
  • obtain the first rotation matrix according to the first position information of the first face key point and the second face key point and the first length, where the first length is the length of the first vector pointing from the first face key point to the second face key point; and obtain the first scaling matrix according to the reference length of the line connecting the first face key point and the second face key point and the first length, where the reference length is the first length set for the face in the front-facing posture in the face image.
  • the determining module is further configured to:
  • obtain the relative position according to the first scaling matrix, the first rotation matrix, the translation vector and the first formula, where the first formula includes: Q = M_s1 · M_r1 · (x_t, y_t)^T, in which Q represents the relative position, M_s1 represents the first scaling matrix, M_r1 represents the first rotation matrix, (x_t, y_t) represents the translation vector, and T represents the transposition process.
  • the image special effect processing module is further configured to:
  • for each trajectory point, determine the second rotation matrix and the second scaling matrix of the current face posture in the face image according to the second position information of the first face key point and the second face key point; and obtain the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.
  • the second position information includes absolute coordinates on the display screen
  • the image special effect processing module is further configured to:
  • obtain the second rotation matrix according to the second position information of the first face key point and the second face key point and the second length, where the second length is the length of the second vector pointing from the first face key point to the second face key point; and obtain the second scaling matrix according to the reference length of the line connecting the first face key point and the second face key point and the second length, where the reference length is the second length set for the face in the front-facing posture in the face image.
  • the image special effect processing module is further configured to:
  • obtain the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position and the second formula, where the second formula includes: R = M_r2 · M_s2 · (x_q, y_q)^T + (x_c, y_c)^T, in which R represents the first absolute position, M_r2 represents the second rotation matrix, M_s2 represents the second scaling matrix, (x_q, y_q) represents the relative position, (x_c, y_c) represents the second position information of the target face key point, and T represents the transposition process.
  • In some embodiments, the image special effect processing further includes: generating, according to the first special effect line, a second special effect line symmetrical to the first special effect line, where the second special effect line and the first special effect line are left-right symmetrical with the face in the face image as the reference; and displaying the second special effect line on the display page.
  • the determining module is further configured to:
  • determine the relative position of the symmetry point of each track point relative to the face image according to the relative position of each track point relative to the face image, where the symmetry point and the track point are left-right symmetrical with the face as the reference.
  • In some embodiments, the relative position includes relative coordinates relative to the face image, and the determining module is further configured to: perform sign inversion processing on the coordinate value in the first direction in the relative coordinates of the trajectory point to obtain a processed coordinate value, where the first direction is perpendicular to the symmetry axis of the face image; update the relative position of the trajectory point so that the coordinate value in the first direction in the updated relative position is the processed coordinate value; and determine the updated relative position as the relative position of the symmetry point.
  • an electronic device including:
  • one or more processors; and
  • one or more memories for storing instructions executable by the one or more processors;
  • the one or more processors are configured to execute the image processing method described in the first aspect or any possible implementation manner of the first aspect.
  • a non-volatile computer-readable storage medium, where, when the instructions in the non-volatile computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method described in the first aspect or any possible implementation manner of the first aspect.
  • a computer program product including a computer program, where, when the computer program is executed by a processor, the image processing method described in the first aspect or any possible implementation manner of the first aspect is implemented.
  • In the embodiments of the present disclosure, the relative position of each track point relative to the face image can be determined according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory input by the user on the display page.
  • The relative positions are converted according to the second position information of the face key points in the face image displayed on the display page after the current moment, so that the first absolute position of each track point on the display screen is obtained.
  • In this way, the first special effect line is drawn according to the movement trajectory input by the user, which realizes the function of letting the user draw special effects independently. Moreover, after the relative positions of the track points in the movement trajectory relative to the current face image are obtained, the position information of the face key points in the face image displayed in real time on the display page and the relative position of each track point can be used to determine the first absolute position of each track point on the display screen, so that the first special effect line is generated and displayed after connecting the track points located at the first absolute positions. In this way, the display position of the generated first special effect line changes with the display position of the face image displayed in real time on the display page, realizing the effect that the first special effect line moves with the face and enriching the special effect display effect.
  • Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment.
  • FIG. 2 is a schematic diagram of a human face image according to an exemplary embodiment.
  • Fig. 3 is a flow chart of a method for determining relative positions of track points according to an exemplary embodiment.
  • Fig. 4 is a schematic diagram of a display page of a face image according to an exemplary embodiment.
  • Fig. 5 is a schematic diagram of a display page of a face image according to an exemplary embodiment.
  • Fig. 6 is a flow chart of a method for determining the first absolute position of a track point according to an exemplary embodiment.
  • Fig. 7 is a flowchart of a method for generating a second special effect line according to an exemplary embodiment.
  • Fig. 8 is a flowchart of another method for generating a second special effect line according to an exemplary embodiment.
  • Fig. 9 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • Fig. 10 is a block diagram of an electronic device according to an exemplary embodiment.
  • Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment.
  • the image processing method can be applied to electronic equipment.
  • the electronic device may be a terminal with a display screen, and the terminal may be installed with an application program for performing image special effect processing on the face image.
  • the embodiments of the present disclosure are described by taking a terminal as an example of the electronic device.
  • the image processing method may include the following steps 101-103:
  • Step 101: in response to the special effect display instruction, acquire the movement track input by the user on the display page including the face image.
  • the user may perform image special effect processing on the face image when using the terminal to take photos, capture videos, webcast live, or in other processes of capturing faces.
  • the face image may include not only the face but also the background.
  • the background may be a building or a landscape or the like.
  • the user may operate the terminal to open an application program with an image special effect processing function, and display a display page including a face image in the application program on the terminal. After receiving the special effect display instruction, the terminal may acquire the movement track input by the user in the display page including the face image in response to the special effect display instruction.
  • the special effect display instruction may be triggered after the terminal receives and executes the setting operation on the display page.
  • the special effect display instruction may be triggered after the user performs a setting operation on the self-drawn control.
  • the setting operation may include input in the form of click, long press, swipe, or voice for the self-drawn control.
  • the display page including the face image may be a shooting interface, a live broadcast interface, or a short (long) video shooting interface, and the like.
  • the movement trajectory input by the user may be the trajectory along which the user moves an input member.
  • the input member may be a user's finger or a stylus or the like.
  • the movement track may include at least one track point arranged in a movement order.
  • the at least one trajectory point refers to one or more trajectory points.
  • the terminal acquiring the movement trajectory input by the user may refer to: the terminal acquiring the trajectory position information of at least one trajectory point input by the user.
  • the track position information of the track point refers to the absolute position of the track point on the display screen of the terminal.
  • the position information of the track point may be the absolute coordinates of the track point, where the absolute coordinates refer to position coordinates with a specific point (e.g., the center point) of the display screen as the origin.
  • the user wants to add a rabbit ear special effect to his face during the webcast.
  • the user can operate the terminal to open the webcast application, and display a display page including the user's face image on the terminal.
  • the user performs a click operation on the self-drawn icon on the display page, and can use a finger to slide and draw a line in the shape of a left rabbit ear at the upper left position of the face of the face image displayed on the display page.
  • the terminal can generate a special effect display instruction, and respond to the special effect display instruction.
  • the human face in the human face image included in the display page can have a rabbit ear special effect.
  • Step 102: according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory, determine the relative position of each trajectory point relative to the face image.
  • the terminal may acquire the first position information of at least two face key points in the face image displayed at the current moment, and the track position information of each track point in the movement track.
  • the relative position of each track point relative to the face image is determined according to the first position information of the at least two face key points and the track position information of at least one track point.
  • the relative position of the track point relative to the face image may be represented by a vector that points to the track point from the target face key point.
  • alternatively, the relative position may be represented by relative coordinates of the track point relative to the face image.
  • the embodiments of the present disclosure use the relative coordinates of the track points to the face image to represent the relative positions of the track points to the face image.
  • the terminal may obtain at least two face key points in the face image by performing face key point detection processing on the face image displayed on the display page.
  • the terminal may use an artificial intelligence (Artificial Intelligence, AI) technology to implement face key point detection processing on a face image.
  • the at least two face key points may include: a first face key point, a second face key point, and a target face key point.
  • the target face key point can be any face key point on the symmetry axis of the face image.
  • the first face key point and the second face key point may be symmetrical according to the target face key point.
  • the target face key point is the anchor point of the line connecting the first face key point and the second face key point. The connection line between the first face key point and the second face key point can follow the movement of the target face key point.
  • Because the first face key point and the second face key point are two face key points symmetrical about the target face key point, and the target face key point is a key point on the symmetry axis of the face image, the inclination angle of the line connecting the first face key point and the second face key point can better reflect the rotation angle of the face in the face image displayed on the display page.
  • Meanwhile, the first position information of the midpoint of this connection line, which lies on the symmetry axis of the face image, can reflect the position information of the face image. In this way, the position and posture information of the current face in the face image are taken into account, and the relative position or the first absolute position of each track point has higher accuracy.
  • the first position information of the face key point may be absolute position information of the face key point on the display screen.
  • the first position information of the face key points may be absolute coordinates of the face key points.
  • the absolute coordinates refer to the position coordinates relative to the specific point on the display screen with a specific point (eg, a center point) of the display screen as the origin.
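The following is a minimal sketch (not part of the patent text) of the coordinate convention just described: absolute coordinates measured from a chosen origin on the display screen, here its center point. All names and values are illustrative.

```python
import numpy as np

def to_absolute(pixel_xy, screen_w, screen_h):
    """Convert raw pixel coordinates (origin at the top-left corner) to
    absolute coordinates with the screen center as the origin."""
    cx, cy = screen_w / 2.0, screen_h / 2.0
    return np.array([pixel_xy[0] - cx, pixel_xy[1] - cy])

# On a hypothetical 1080 x 1920 screen, pixel (540, 960) is the center:
print(to_absolute((540, 960), 1080, 1920))  # -> [0. 0.]
```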
  • Fig. 2 shows a schematic diagram of a face image according to an exemplary embodiment.
  • the key point C of the target face can be a point at the tip of the nose of the face, and is located on the symmetry axis of the face image.
  • the first face key point A and the second face key point B may be two symmetrical points located on both sides of the edge of the face.
  • the inclination angle of the line between the first face key point A and the second face key point B can be used to reflect the rotation angle of the face.
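As a hedged illustration of this point, the in-plane rotation (roll) of the face can be read off the inclination of the A-B line; the sketch below follows the patent's example vector (x_a - x_b, y_a - y_b). The sample coordinates are invented for the example.

```python
import numpy as np

def face_roll_degrees(a, b):
    """Inclination angle of the line between the first face key point A and
    the second face key point B, measured from the horizontal axis."""
    v = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return np.degrees(np.arctan2(v[1], v[0]))

A = (120.0, -40.0)   # first face key point (one side of the face edge)
B = (-120.0, -40.0)  # second face key point (the symmetric side)
print(face_roll_degrees(A, B))  # 0.0 for an upright, front-facing face
```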
  • the terminal may determine the relative position of each trajectory point relative to the face image by performing spatial transformation processing on the absolute position of each trajectory point on the display screen.
  • In some embodiments, the process in which the terminal determines the relative position of each trajectory point relative to the face image according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory may include the following steps 1021 to 1023.
  • Step 1021: for each track point, determine a translation vector of the track point pointing to the target face key point according to the first position information of the target face key point and the track position information of the track point.
  • the terminal may calculate the translation vector of the track point pointing to the target face key point according to the first position information of the target face key point and the track position information of the track point; the translation vector is used to perform a translation operation on the track point.
  • the translation vector represents the translation gesture information from the track point to the target face key point, that is, the relative position of the track point and the target face key point.
  • For example, assume the absolute coordinates of the target face key point C in the face image displayed on the display page at the current moment are (x_c1, y_c1), and the absolute coordinates of a track point P in the finger movement track are (x_p, y_p); the translation vector is then (x_p - x_c1, y_p - y_c1), as illustrated in the sketch below.
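A small sketch of this step, with the same symbols as the example above (the coordinate values are invented):

```python
import numpy as np

def translation_vector(p, c1):
    """Translation vector (x_p - x_c1, y_p - y_c1) relating track point P to
    the target face key point C at the current moment."""
    return np.asarray(p, dtype=float) - np.asarray(c1, dtype=float)

C1 = np.array([0.0, 20.0])     # target face key point C (e.g. nose tip)
P = np.array([-80.0, -150.0])  # one track point of the drawn trajectory
print(translation_vector(P, C1))  # -> [-80. -170.]
```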
  • Step 1022: determine a first rotation matrix and a first scaling matrix of the current face posture in the face image according to the first position information of the first face key point and the second face key point.
  • the first rotation matrix may represent the rotation attitude information of the current face attitude in the face image.
  • the first scaling matrix may represent scaling pose information of the current face pose in the face image.
  • In some embodiments, in the case that the first position information of the face key points and the trajectory position information of the track points are both absolute coordinates on the display screen of the terminal, the process in which the terminal determines the first rotation matrix and the first scaling matrix of the current face posture in the face image according to the first position information of the first face key point and the second face key point may include the following steps:
  • obtaining the first rotation matrix according to the first position information of the first face key point and the second face key point and a first length, where the first length is the length of the first vector pointing from the first face key point to the second face key point; and
  • obtaining the first scaling matrix according to a reference length of the line connecting the first face key point and the second face key point and the first length, where the reference length is the first length set for the face in the front-facing posture in the face image.
  • the terminal may obtain a first vector pointing from the first face key point to the second face key point according to the first position information of the first face key point and the second face key point.
  • a first length of the first vector is determined.
  • the first rotation matrix may be obtained according to the first position information of the first face key point, the first position information of the second face key point, and the first length.
  • For example, if the first position information of the first face key point A is (x_a1, y_a1) and the first position information of the second face key point B is (x_b1, y_b1), the terminal calculates the first vector as (x_a1 - x_b1, y_a1 - y_b1), and then calculates the first length of the first vector as L1 = sqrt((x_a1 - x_b1)^2 + (y_a1 - y_b1)^2). According to the first vector and the first length, the first rotation matrix M_r1 is obtained.
  • the first rotation matrix M_r1 can be used to perform rotation processing on the translation vector, that is, to rotate the translation vector according to the rotation attitude information of the current face posture in the face image.
  • Continuing with the face key points and the trajectory point P assumed in step 1021 as an example, the first length of the first vector is L1. According to the reference length of the line connecting the first face key point and the second face key point and the first length L1, the first scaling matrix M_s1 is obtained.
  • the first scaling matrix M_s1 can be used to perform scaling processing on the translation vector, that is, to scale the translation vector at the set ratio according to the scaling posture information of the current face posture in the face image.
  • the set ratio may be D:1. In some embodiments, D may be 100.
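The patent text above does not reproduce the matrix entries, so the following is a hedged sketch of one standard construction consistent with the description: M_r1 is the rotation that aligns the first vector with the horizontal axis (undoing the face roll), and M_s1 is an isotropic scale mapping the measured first length L1 to the reference length D of the D:1 set ratio. All names and values are illustrative.

```python
import numpy as np

def first_rotation_and_scaling(a1, b1, reference_length=100.0):
    """One plausible reading of M_r1 and M_s1 from key points A and B."""
    v1 = np.asarray(a1, dtype=float) - np.asarray(b1, dtype=float)  # first vector
    L1 = np.linalg.norm(v1)                                         # first length
    cos_t, sin_t = v1[0] / L1, v1[1] / L1
    # Rotation that maps the first vector onto the +x axis (undoes the roll).
    Mr1 = np.array([[cos_t, sin_t],
                    [-sin_t, cos_t]])
    # Isotropic scaling from the measured length to the reference length D.
    Ms1 = np.eye(2) * (reference_length / L1)
    return Mr1, Ms1

Mr1, Ms1 = first_rotation_and_scaling((120.0, -40.0), (-120.0, -40.0))
print(Mr1)  # identity matrix for an upright face
print(Ms1)  # 100/240 scaling on both axes
```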
  • As described above, the inclination angle of the line connecting the first face key point and the second face key point can better reflect the rotation angle of the face in the face image displayed on the display page. Therefore, the first rotation matrix determined according to the first vector pointing from the first face key point to the second face key point and its first length indicates the rotation attitude information of the current face in the face image with high accuracy. Meanwhile, the first scaling matrix indicating the scaling posture information of the current face in the face image is determined from the length set for this connection line when the face is in the front-facing posture (the reference length) and the real first length of the connection line in the current face image; the relevant information of the connection line between the first face key point and the second face key point is thus reused, which reduces the amount of calculation of the terminal.
  • Step 1023: obtain the relative position of the trajectory point relative to the face image according to the first rotation matrix, the first scaling matrix and the translation vector.
  • In some embodiments, the process in which the terminal obtains the relative position of the trajectory point relative to the face image according to the first rotation matrix, the first scaling matrix and the translation vector may include: obtaining the relative position according to the first scaling matrix, the first rotation matrix, the translation vector and the first formula.
  • the first formula includes: Q = M_s1 · M_r1 · (x_t, y_t)^T, where Q represents the relative position of the trajectory point relative to the face image, M_s1 represents the first scaling matrix, M_r1 represents the first rotation matrix, (x_t, y_t) represents the translation vector, and T represents the transposition process.
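Putting the pieces together, here is a runnable sketch of the first formula as reconstructed above; the matrix construction is the hedged one from the previous sketch, not a verbatim patent formula.

```python
import numpy as np

def relative_position(p, c1, a1, b1, reference_length=100.0):
    """Q = Ms1 @ Mr1 @ t, with t the translation vector from C to P."""
    v1 = np.asarray(a1, dtype=float) - np.asarray(b1, dtype=float)
    L1 = np.linalg.norm(v1)
    cos_t, sin_t = v1 / L1
    Mr1 = np.array([[cos_t, sin_t], [-sin_t, cos_t]])
    Ms1 = np.eye(2) * (reference_length / L1)
    t = np.asarray(p, dtype=float) - np.asarray(c1, dtype=float)
    return Ms1 @ Mr1 @ t

Q = relative_position(p=(-80.0, -150.0), c1=(0.0, 20.0),
                      a1=(120.0, -40.0), b1=(-120.0, -40.0))
print(Q)  # face-normalized coordinates, stable as the face later moves
```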
  • Since the translation vector of the track point pointing to the target face key point can reflect the relative distance of the track point relative to the face image, the first rotation matrix can reflect the rotation attitude information of the current face in the face image, and the first scaling matrix can reflect the scaling posture information of the current face in the face image, the formula factors of the first formula cover the various kinds of posture information of the current face in the face image. Therefore, the relative position of the track point relative to the face image calculated according to the first rotation matrix, the first scaling matrix and the translation vector reflects the real relationship between the track point and the face, and has high accuracy.
  • Step 103: image special effect processing is repeatedly performed.
  • the image special effect processing includes: converting the relative positions according to the second position information of the face key points in the face image displayed on the display page after the current moment, so as to obtain the first absolute position of each track point on the display screen; connecting the trajectory points located at the first absolute positions to generate a first special effect line; and displaying the first special effect line on the display page.
  • after the terminal acquires the movement trajectory input by the user and before the terminal draws the first special effect line, the user's face may undergo posture changes such as tilting or turning the head. Therefore, the face posture in the face image displayed on the display interface may differ between the moment when the terminal calculates the relative positions of the trajectory points relative to the face image and the moment when the terminal performs the image special effect processing.
  • the terminal needs to calculate the absolute position of the trajectory point on the display screen according to the face image displayed in real time after acquiring the movement trajectory input by the user, so as to generate and display the first special effect line.
  • the special effect line (the first special effect line and the subsequent second special effect line collectively) displayed on the terminal has a refresh rate.
  • the image special effect processing needs to be performed repeatedly, so that the terminal always displays the most recently drawn special effect line, and the special effect line keeps the same position relative to the face in each face image; visually, it can therefore be regarded as the same line that moves with the face.
  • the user's face may also change in posture during each refresh interval. Therefore, the image special effect processing is performed by using the face key points of the face image displayed by the terminal in real time.
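Schematically, the repeated processing can be pictured as the per-frame loop below. The key-point source and the renderer are placeholders for the terminal's own face detector and graphics layer, and the inversion step is one hedged reading of how the stored relative positions are mapped back to the screen on each refresh.

```python
import numpy as np

def normalizing_transform(a, b, reference_length=100.0):
    """Rotation/scaling pair that normalizes the face pose (see above)."""
    v = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    L = np.linalg.norm(v)
    cos_t, sin_t = v / L
    Mr = np.array([[cos_t, sin_t], [-sin_t, cos_t]])
    Ms = np.eye(2) * (reference_length / L)
    return Mr, Ms

def render_frame(relative_points, frame_keypoints):
    """One iteration of the image special effect processing."""
    a2, b2, c2 = frame_keypoints["A"], frame_keypoints["B"], frame_keypoints["C"]
    Mr2, Ms2 = normalizing_transform(a2, b2)
    # Invert the normalization so relative coordinates follow the new pose.
    inv = np.linalg.inv(Ms2 @ Mr2)
    absolute = [inv @ np.asarray(q, dtype=float) + np.asarray(c2, dtype=float)
                for q in relative_points]
    return absolute  # connect these in drawing order and display the line
```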
  • FIG. 4 shows a schematic diagram of a face image of a display page provided by an embodiment of the present disclosure.
  • the shown face image is the face image when the user inputs the movement track.
  • the broken line drawn by the dotted line in the shape of an ear is the movement trajectory L0 input by the user.
  • P is a point on the moving trajectory L0.
  • FIG. 5 shows a schematic diagram of a face image of a display page provided by an embodiment of the present disclosure.
  • the shown face image is the face image displayed by the terminal on the display page after the current moment, in the process of executing the image special effect processing.
  • in this face image, the head has undergone a tilting posture change.
  • the broken line L1 shown by the dotted line at the upper left of the head in the face image in FIG. 5 is the special effect line corresponding to the movement track input by the user shown in FIG. 4 .
  • the P1 point on the special effect line corresponds to the P point on the movement track.
  • Figs. 4 and 5 are face images of the same user at different times.
  • the face key points are the same target face key point C, the same first face key point A, and the same second face key point B.
  • the terminal may repeatedly perform image special effect processing until receiving the special effect closing instruction.
  • during this period, the display position of the first special effect line, that is, its absolute position on the display screen, follows the face image displayed in real time.
  • the special effect closing instruction may be triggered after performing a setting operation on the display page.
  • the special effect closing instruction may be triggered by the user performing a setting input for the special effect triggering control.
  • the special effect triggering control may also be a special effect button in the display page.
  • the setting input may include input in the form of click, long press, swipe, or voice for the special effect trigger control.
  • In some embodiments, the image special effect processing process includes the following steps A to C.
  • Step A: the relative positions are converted according to the second position information of the face key points in the face image displayed on the display page after the current moment, so as to obtain the first absolute position of each track point on the display screen.
  • the terminal may acquire the second position information of the key points of the face in the face image displayed on the display page after the current moment. After acquiring the second position information, the terminal can convert the relative position of each track point relative to the face image according to the second position information of the face key point to obtain the first absolute position corresponding to each track point on the display screen.
  • the acquired face key points in the face image are the same as at least two face key points in step 102 .
  • the first absolute position of each track point on the display screen may be represented by coordinates of the track point in the pixel coordinate system of the display screen.
  • In some embodiments, the process in which the terminal converts the relative positions according to the second position information of the face key points in the face image displayed on the display page after the current moment to obtain the first absolute position of each track point on the display screen may include the following steps 1031 to 1032.
  • Step 1031: for each track point, determine a second rotation matrix and a second scaling matrix of the current face posture in the face image according to the second position information of the first face key point and the second face key point.
  • the second rotation matrix may represent the rotation posture information of the current face posture in the face image.
  • the second scaling matrix may represent the scaling pose information of the current face pose in the face image.
  • In some embodiments, for each trajectory point in the at least one trajectory point, the terminal may perform the following steps:
  • obtaining the second rotation matrix according to the second position information of the first face key point and the second face key point and a second length, where the second length is the length of the second vector pointing from the first face key point to the second face key point;
  • obtaining the second scaling matrix according to the reference length of the line connecting the first face key point and the second face key point and the second length; and
  • obtaining the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.
  • the second rotation matrix is obtained according to the second position information of the first face key point and the second face key point and the second length, where the second length is the length of the second vector pointing from the first face key point to the second face key point.
  • the terminal may obtain a second vector pointing from the first face key point to the second face key point according to the second position information of the first face key point and the second face key point.
  • a second length of the second vector is determined.
  • a second rotation matrix may be obtained according to the second position information of the first face key point, the second position information of the second face key point, and the second length.
  • For example, after the terminal acquires the movement trajectory input by the user, and before the terminal performs the image special effect processing to generate and display the first special effect line, the posture of the user's head changes; at this time, the face position in the face image displayed by the terminal on the display page changes accordingly. For example, the user's head changes from the posture shown in Fig. 4 to the posture shown in Fig. 5.
  • the terminal obtains the second vector as (x_a2 - x_b2, y_a2 - y_b2) according to the second position information (x_a2, y_a2) of the first face key point A and the second position information (x_b2, y_b2) of the second face key point B, and determines the second length of the second vector as L2 = sqrt((x_a2 - x_b2)^2 + (y_a2 - y_b2)^2). According to the second vector and the second length, the second rotation matrix M_r2 is obtained.
  • the second rotation matrix M_r2 can be used to perform rotation processing on the relative positions of the trajectory points, that is, to rotate the relative positions of the trajectory points according to the rotation attitude information of the current face posture in the face image.
  • the second scaling matrix is obtained according to the reference length of the line connecting the first face key point and the second face key point, and the second length.
  • the reference length is the second length set for the face in the front-facing posture in the face image.
  • In some embodiments, the first length and the second length set for the face in the front-facing posture in the face image are equal.
  • According to the reference length and the second length L2 of the second vector, the second scaling matrix M_s2 is obtained.
  • the second scaling matrix M_s2 can be used to perform scaling processing on the relative positions of the trajectory points, that is, to scale the relative positions of the trajectory points according to the scaling posture information of the current face posture in the face image.
  • the set ratio may be D:1. In some embodiments, D may be 100.
  • As described above, the inclination angle of the line connecting the first face key point and the second face key point can better reflect the rotation angle of the face in the face image displayed on the display page. Therefore, the second rotation matrix determined according to the second vector pointing from the first face key point to the second face key point and its second length indicates the rotation attitude information of the current face in the face image with high accuracy.
  • Meanwhile, the second scaling matrix indicating the scaling posture information of the current face in the face image is determined from the length set for this connection line when the face is in the front-facing posture (the reference length) and the real second length of the connection line in the current face image; the relevant information of the connection line between the first face key point and the second face key point is thus reused, which reduces the amount of calculation of the terminal.
  • Step 1032: obtain the first absolute position of the trajectory point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.
  • In some embodiments, the process in which the terminal obtains the first absolute position of the trajectory point may include: the terminal obtains the first absolute position of the track point according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position, and the second formula.
  • the second formula includes: R = M_r2 · M_s2 · (x_q, y_q)^T + (x_c, y_c)^T, where R represents the first absolute position of the trajectory point on the display screen, M_r2 represents the second rotation matrix, M_s2 represents the second scaling matrix, (x_q, y_q) represents the relative position of the trajectory point relative to the face image, (x_c, y_c) represents the second position information of the target face key point, and T represents the transposition process.
  • the second rotation matrix can reflect the rotation posture information of the current face in the face image.
  • the second scaling matrix can reflect the scaling posture information of the current face in the face image. Therefore, using the second formula to determine the first absolute position of the track point makes the first absolute position change not only with the display position of the face image, but also with the rotation posture information and the scaling posture information of the current face in the face image. In this way, the first special effect line generated by connecting the trajectory points at the first absolute positions can not only follow the movement of the face in the face image, but also rotate and zoom following the face, thereby enriching the special effect display effect.
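A hedged sketch of the second formula as reconstructed above: M_r2 re-applies the current face roll (the inverse of the normalizing rotation used when Q was computed) and M_s2 re-applies the current scale L2/D. With an unchanged pose, R reproduces the originally drawn point, matching the coincidence case discussed below.

```python
import numpy as np

def first_absolute_position(q, c2, a2, b2, reference_length=100.0):
    """R = Mr2 @ Ms2 @ (x_q, y_q)^T + (x_c, y_c)^T (reconstructed reading)."""
    v2 = np.asarray(a2, dtype=float) - np.asarray(b2, dtype=float)  # second vector
    L2 = np.linalg.norm(v2)                                         # second length
    cos_t, sin_t = v2 / L2
    Mr2 = np.array([[cos_t, -sin_t], [sin_t, cos_t]])  # re-applies the roll
    Ms2 = np.eye(2) * (L2 / reference_length)          # re-applies the scale
    return Mr2 @ Ms2 @ np.asarray(q, dtype=float) + np.asarray(c2, dtype=float)

R = first_absolute_position(q=(-33.33, -70.83), c2=(0.0, 20.0),
                            a2=(120.0, -40.0), b2=(-120.0, -40.0))
print(R)  # ~(-80, -150): the track point P from the earlier sketches
```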
  • If the face posture in the face image displayed by the terminal when performing the aforementioned step 102 differs from the face posture in the face image displayed by the terminal when performing step 103, then for the same trajectory point, the relative position of the trajectory point relative to the face image obtained in step 102 differs from the absolute position of the trajectory point on the display screen obtained in step 103. If the face posture does not change between step 102 and step 103, then for the same trajectory point, the relative position obtained in step 102 and the absolute position obtained in step 103 are the same; that is, the two positions coincide.
  • In the above process, the first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position. Because the second rotation matrix can reflect the rotation posture information of the current face in the face image and the second scaling matrix can reflect the scaling posture information of the current face in the face image, converting the relative position of the trajectory point into the first absolute position by using the second rotation matrix, the second scaling matrix and the second position information of the target face key point makes the first absolute position change not only with the display position of the face image, but also with the rotation posture information and the scaling posture information of the current face in the face image. In this way, the generated first special effect line can not only follow the movement of the face in the face image, but also rotate and zoom following the face, thereby enriching the special effect display effect.
  • Step B: the trajectory points located at the first absolute positions are connected to generate the first special effect line.
  • the terminal may generate the first special effect line by connecting the trajectory points of the first absolute positions corresponding to the trajectory points according to the arrangement order of the trajectory points in the movement trajectory input by the user.
  • the movement track includes track points X1 , track points X2 and track points X3 arranged in sequence.
  • the first absolute position of the trajectory point X1 is Y1.
  • the first absolute position of the trajectory point X2 is Y2.
  • the first absolute position of the trajectory point X3 is Y3.
  • the terminal sequentially connects the trajectory point located at the first absolute position Y1, the trajectory point located at the first absolute position Y2 and the trajectory point located at the first absolute position Y3 according to the sorting order of the trajectory points X1, X2 and X3, to generate the first special effect line, as in the sketch below.
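A minimal sketch of this connecting step; the segment list is what a renderer would stroke, with drawing itself left to the terminal's graphics layer (the coordinates are invented):

```python
def special_effect_segments(absolute_positions):
    """Given first absolute positions [Y1, Y2, Y3, ...] ordered like the
    input track points X1, X2, X3, ..., return consecutive line segments."""
    return list(zip(absolute_positions, absolute_positions[1:]))

Y = [(-80.0, -150.0), (-60.0, -190.0), (-30.0, -160.0)]
for start, end in special_effect_segments(Y):
    print(start, "->", end)  # stroke each segment to display the line
```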
  • Step C: the first special effect line is displayed.
  • the terminal may display the generated first special effect line on the display page currently displayed by the terminal.
  • With the method provided by the embodiments of the present disclosure, the relative position of each track point relative to the face image can be determined according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory input by the user on the display page.
  • The relative positions are converted according to the second position information of the face key points in the face image displayed on the display page after the current moment, so that the first absolute position of each track point on the display screen is obtained.
  • In this way, the first special effect line is drawn according to the movement trajectory input by the user, so that the user can draw special effects independently.
  • Moreover, after the relative positions of the track points in the movement trajectory relative to the current face image are obtained, the position information of the face key points in the face image displayed in real time on the display page and the relative position of each track point can be used to determine the first absolute position of each track point on the display screen, so that the first special effect line is generated and displayed after connecting the track points located at the first absolute positions.
  • In this way, the display position of the generated first special effect line changes with the display position of the face image displayed in real time on the display page, realizing the effect that the first special effect line moves with the face and enriching the special effect display effect.
  • the terminal may not only draw, according to the movement trajectory input by the user, the first special effect line corresponding to the independently drawn movement trajectory.
  • a second special effect line that is symmetrical to the first special effect line may also be drawn according to the first special effect line.
  • the second special effect line and the first special effect line are left-right symmetrical on the basis of the face in the face image.
  • For example, the terminal can draw not only the ear-shaped special effect line L1 shown by the dotted line at the upper left of the head in the face image in Fig. 5, but also the ear-shaped special effect line L2 shown by the dotted line at the upper right of the head in the face image in Fig. 5.
  • the special effect line L1 and the special effect line L2 are left-right symmetrical on the basis of the face in the face image.
  • the point P1 on the special effect line L1 and the point P2 on the special effect line L2 are left-right symmetrical on the basis of the face in the face image.
  • the image special effect processing may further include the following:
  • a second special effect line symmetrical with the first special effect line is generated, and the second special effect line and the first special effect line are left-right symmetrical based on the face in the face image;
  • a second special effect line symmetrical to the first special effect line is generated according to the first special effect line.
  • the second special effect line and the first special effect line are left and right symmetrical on the basis of the face in the face image.
  • the terminal may generate, according to the first special effect line, a left-right symmetrical second special effect line based on the face in the face image currently displayed by the terminal.
  • the terminal may generate a second special effect line symmetrical to the first special effect line according to the first special effect line.
  • the embodiments of the present disclosure are described by taking the following two implementation manners as examples.
  • the process for the terminal to generate a second special effect line symmetrical to the first special effect line according to the first special effect line may include the following steps 701 to 702 .
  • Step 701: determine the second absolute position of each symmetry point on the display screen according to the second position information of the face key points and the relative position of each symmetry point.
  • After the terminal performs the above step 102, that is, determines the relative position of each track point relative to the face image according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory, the terminal can also determine the relative position of the symmetry point of each track point relative to the face image according to the relative position of each track point relative to the face image.
  • the symmetry point and the track point are left-right symmetrical with the face as the reference.
  • In some embodiments, the process in which the terminal determines the relative position of the symmetry point of each trajectory point relative to the face image may include: the terminal performs sign inversion processing on the coordinate value in the first direction in the relative coordinates of the trajectory point to obtain a processed coordinate value, where the first direction is perpendicular to the symmetry axis of the face image.
  • the relative position of the track point is updated, so that the coordinate value of the first direction in the updated relative position is the processed coordinate value.
  • the updated relative position is determined as the relative position of the symmetry point.
  • the first direction may be a direction perpendicular to the symmetry axis of the left-right symmetry of the face image.
  • the relative coordinates of the trajectory point P1 determined by the terminal relative to the face image are (x q , y q ).
  • the first direction is a direction perpendicular to the symmetry axis of the left-right symmetry of the face image, that is, the x-axis direction.
  • the terminal performs sign inversion processing on the coordinate value in the first direction in the relative coordinates of the trajectory point, and obtains the processed coordinate value -x_q.
  • the relative position of the symmetrical point is (-x q , y q ).
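Because the relative coordinates live in a face-aligned frame whose y axis runs along the face's symmetry axis, this symmetry step reduces to negating the x coordinate, as in this small sketch:

```python
def symmetry_point_relative(q):
    """Mirror a relative position (x_q, y_q) across the face symmetry axis."""
    x_q, y_q = q
    return (-x_q, y_q)

print(symmetry_point_relative((-33.33, -70.83)))  # -> (33.33, -70.83)
```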
  • the terminal determines the second absolute position of each symmetry point on the display screen according to the second position information of the face key points and the relative position of each symmetry point. That is, the terminal converts the relative position of each symmetry point according to the second position information of the face key points in the face image displayed on the display page after the current moment, and obtains the second absolute position of each symmetry point on the display screen.
  • For the process in which the terminal determines the second absolute position of each symmetry point on the display screen according to the second position information of the face key points and the relative position of each symmetry point, reference may be made to step A of the aforementioned image special effect processing process, in which the relative positions are converted according to the second position information of the face key points in the face image displayed on the display page after the current moment to obtain the first absolute position of each track point on the display screen; details are not repeated in this embodiment of the present disclosure.
  • Step 702: the symmetry points located at the second absolute positions are connected to generate the second special effect line.
  • the terminal may, according to the arrangement order of the trajectory points in the movement trajectory input by the user, connect the symmetry points located at the second absolute positions corresponding to the trajectory points, and generate the second special effect line.
  • the movement track includes track points X1 , track points X2 and track points X3 arranged in sequence.
  • the second absolute position of the symmetrical point X4 corresponding to the trajectory point X1 is Y4.
  • the second absolute position of the symmetrical point X5 corresponding to the trajectory point X2 is Y5.
  • the second absolute position of the symmetrical point X6 corresponding to the trajectory point X3 is Y6.
  • the terminal sequentially connects the symmetry point located at the second absolute position Y4, the symmetry point located at the second absolute position Y5 and the symmetry point located at the second absolute position Y6 according to the sorting order of the trajectory points X1, X2 and X3, to generate the second special effect line.
  • the process of generating the second special effect line symmetrical to the first special effect line by the terminal according to the first special effect line may include the following steps 801 to 802 .
  • in step 801, the second absolute position of the symmetrical point of each trajectory point on the display screen is determined according to the second position information of the face key points and the first absolute position of each trajectory point.
  • the trajectory point and the symmetry point are left and right symmetrical based on the face.
  • the process that the terminal determines the second absolute position of the symmetrical point of each trajectory point on the display screen according to the second position information of the face key point and the first absolute position of each trajectory point may include the following steps:
  • a second vector pointing from the first face key point to the second face key point is obtained, and a third vector perpendicular to the second vector is obtained;
  • a fourth vector pointing from the target face key point to the trajectory point is obtained;
  • the second absolute position of the symmetrical point is obtained according to these vectors and the second position information of the target face key point.
  • the terminal obtains, according to the second position information (x_a2, y_a2) of the first face key point A and the second position information (x_b2, y_b2) of the second face key point B, the second vector (x_a2 - x_b2, y_a2 - y_b2); the third vector perpendicular to the second vector is (y_b2 - y_a2, x_a2 - x_b2).
  • a fourth vector pointing from the target face key point to the track point is obtained.
  • continuing with the face key points assumed in step 1021 as an example, a trajectory point P is assumed for schematic illustration.
  • assume that the first absolute position of the trajectory point P is (x_r, y_r).
  • the terminal obtains the fourth vector pointing from the target face key point to the trajectory point as (x_r - x_c, y_r - y_c).
  • the second absolute position of the symmetrical point is obtained according to the second vector, the third vector, the fourth vector and the second position information of the target face key point.
  • the terminal obtains the second absolute position of the symmetry point according to the second vector, the third vector, the fourth vector, the second position information of the target face key point, and the third formula.
  • the third formula includes:
  • M represents the second absolute position of the symmetry point;
  • (x_c, y_c) represents the second position information of the target face key point; a sketch of this reflection follows.
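The third formula itself is not reproduced in this text. The following is a minimal sketch of a standard point reflection consistent with the quantities defined above (the second vector perpendicular to the symmetry axis, the third vector along it, and the fourth vector from the target key point to the trajectory point); this is an assumed form, not necessarily the patent's exact formula:

```python
import numpy as np

def mirror_across_face_axis(a2, b2, c2, p):
    """Reflect trajectory point p across the face symmetry axis.

    a2, b2: second position information of face key points A and B,
            which are symmetric about the axis.
    c2:     second position information of the target key point C (on the axis).
    p:      first absolute position of the trajectory point.
    """
    a2, b2, c2, p = (np.asarray(v, dtype=float) for v in (a2, b2, c2, p))
    second = a2 - b2                            # (x_a2-x_b2, y_a2-y_b2)
    third = np.array([-second[1], second[0]])   # perpendicular: along the axis
    axis = third / np.linalg.norm(third)        # unit axis direction
    fourth = p - c2                             # (x_r-x_c, y_r-y_c)
    reflected = 2.0 * np.dot(fourth, axis) * axis - fourth
    return c2 + reflected                       # M, the second absolute position

# Axis is vertical through C=(0, 0); P=(1, 2) mirrors to (-1, 2).
print(mirror_across_face_axis((1, 0), (-1, 0), (0, 0), (1, 2)))
```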
  • in the first implementation manner, because the symmetrical point and the trajectory point are directly left-right symmetrical with the face as the reference, sign inversion can be performed directly on the coordinate value, in the relative coordinates of the trajectory point, of the first direction perpendicular to the symmetry axis of the face image;
  • the obtained relative coordinate is determined as the relative coordinate of the symmetrical point of the trajectory point;
  • the relative positions of the symmetrical points are then converted according to the second position information of the face key points in the face image displayed on the display page after the current moment, to obtain the second absolute positions of the symmetrical points on the display screen;
  • compared with the second implementation manner, this simplifies the process of determining the second absolute position of the symmetrical point on the display screen and improves the calculation efficiency of the second absolute position of the symmetrical point.
  • in step 802, the symmetrical points located at the second absolute positions are connected to generate a second special effect line.
  • the terminal may, according to the arrangement order of the trajectory points in the movement trajectory input by the user, connect the symmetrical points located at the second absolute positions corresponding to those trajectory points, to generate the second special effect line.
  • the movement track includes track points X1 , track points X2 and track points X3 arranged in sequence.
  • the second absolute position of the symmetrical point X4 corresponding to the trajectory point X1 is Y4.
  • the second absolute position of the symmetrical point X5 corresponding to the trajectory point X2 is Y5.
  • the second absolute position of the symmetrical point X6 corresponding to the trajectory point X3 is Y6.
  • the terminal sequentially connects the symmetrical point located at the second absolute position Y4, the symmetrical point located at the second absolute position Y5, and the symmetrical point located at the second absolute position Y6 according to the sorting order of the trajectory point X1, the trajectory point X2, and the trajectory point X3, to generate the second special effect line.
  • the second effect line is displayed in the display page.
  • the terminal may display the generated second special effect line on the display page currently displayed by the terminal.
  • the terminal may generate, according to the first special effect line, a second special effect line that is left-right symmetrical with the first special effect line based on the human face in the face image, and display the second special effect line in the display page; this realizes the function of allowing the user to independently draw left-right symmetrical special effect lines based on the face in the face image.
  • after acquiring the relative position of the symmetrical point of each trajectory point in the movement trajectory input by the user, the terminal determines the second absolute position of the symmetrical point of each trajectory point on the display screen by using the second position information of the face key points in the face image displayed on the page in real time and the relative position of each symmetrical point, and then connects the symmetrical points located at the second absolute positions to generate the second special effect line. Therefore, the display position of the generated second special effect line changes with the display position of the face image displayed on the display page in real time, realizing the special effect of the second special effect line following the movement of the face. In this way, the left-right symmetrical special effect lines drawn on the basis of the human face can move with the human face, enriching the special effect display effect.
  • in this embodiment, the relative position of each trajectory point relative to the face image can be determined according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory input by the user on the display page;
  • the relative positions are then converted to obtain the first absolute position of each trajectory point on the display screen.
  • the first special effect line is drawn according to the movement trajectory input by the user, which realizes the function of the user drawing the special effect independently. Moreover, after the relative position of each trajectory point in the movement trajectory relative to the current face image is obtained, the position information of the face key points in the face image displayed in real time on the display page and the relative position of each trajectory point can be used to determine the first absolute position of each trajectory point on the display screen, so that the first special effect line is generated and displayed after the trajectory points located at the first absolute positions are connected. In this way, the display position of the generated first special effect line changes with the display position of the face image displayed in real time on the display page, realizing the special effect that the first special effect line moves with the face and enriching the special effect display effect.
  • Fig. 9 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • the image processing apparatus 900 includes: an acquisition module 901 , a determination module 902 and an image special effect processing module 903 .
  • an acquisition module 901 configured to acquire the movement track input by the user in the display page including the face image in response to the special effect display instruction
  • the determining module 902 is configured to determine the relative position of each trajectory point relative to the face image according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory;
  • the image special effect processing module 903 is used to repeatedly perform image special effect processing, and the image special effect processing includes:
  • the first special effect line is displayed.
  • the at least two face key points include: a first face key point, a second face key point, and a target face key point; the first face key point and the second face key point are symmetrical with respect to the target face key point, and the target face key point is any face key point on the symmetry axis of the face image.
  • the determining module 902 is further configured to:
  • the relative position of the trajectory point relative to the face image is obtained.
  • both the first position information and the track position information include absolute coordinates on the display screen, and the determining module 902 is further configured to:
  • a first rotation matrix is obtained according to the first vector, where the first length is the length of the first vector pointing from the first face key point to the second face key point;
  • a first scaling matrix is obtained according to the first length and a reference length, where the reference length is the length set for the face in the face-up posture in the face image.
  • the determining module 902 is further configured to:
  • the relative position is obtained according to the first scaling matrix, the first rotation matrix, the translation vector, and the first formula, where the first formula includes:
  • Q represents the relative position;
  • Ms_1 represents the first scaling matrix;
  • Mr_1 represents the first rotation matrix; a sketch of this conversion follows.
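The first formula is not spelled out in this text. The following sketch assumes the natural composition of the listed factors, Q = Ms1 · Mr1 · (translation vector from the target face key point to the trajectory point), with the rotation angle taken from the first vector and the scale from the ratio of the reference length to the first length; the function and parameter names, and these modeling choices, are assumptions:

```python
import numpy as np

def relative_position(track_point, target_kp, first_kp, second_kp, reference_length):
    """Sketch: map a screen-space trajectory point into face-relative coordinates.

    Assumed form: Q = Ms1 @ Mr1 @ (track_point - target_kp).
    """
    track_point, target_kp, first_kp, second_kp = (
        np.asarray(v, dtype=float) for v in (track_point, target_kp, first_kp, second_kp)
    )
    first_vec = second_kp - first_kp               # first vector, key point 1 -> 2
    angle = np.arctan2(first_vec[1], first_vec[0])
    # Rotate by -angle so the face's own axes align with the coordinate axes.
    mr1 = np.array([[np.cos(-angle), -np.sin(-angle)],
                    [np.sin(-angle),  np.cos(-angle)]])
    scale = reference_length / np.linalg.norm(first_vec)  # normalize face size
    ms1 = np.diag([scale, scale])
    translation = track_point - target_kp          # translation vector
    return ms1 @ mr1 @ translation                 # Q, the relative position

print(relative_position((130, 90), (100, 100), (80, 100), (120, 100), 1.0))
```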
  • the image special effect processing module 903 is further configured to:
  • the first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, and the relative position.
  • the second position information includes absolute coordinates on the display screen
  • the image special effect processing module 903 is further configured to:
  • a second rotation matrix is obtained according to the second vector, where the second length is the length of the second vector pointing from the first face key point to the second face key point;
  • a second scaling matrix is obtained according to the second length and the reference length, where the reference length is the length set for the face in the face-up posture in the face image.
  • the image special effect processing module 903 is further configured to:
  • the first absolute position of the trajectory point is obtained according to the second rotation matrix, the second scaling matrix, the second position information of the target face key point, the relative position, and the second formula, where the second formula includes:
  • R represents the first absolute position;
  • Mr_2 represents the second rotation matrix;
  • Ms_2 represents the second scaling matrix;
  • (x_q, y_q) represents the relative position;
  • (x_c, y_c) represents the second position information of the target face key point;
  • T represents the transposition operation; a sketch of this conversion follows.
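A matching sketch of the second formula, reading the listed symbols as R = Mr2 · Ms2 · (x_q, y_q)^T + (x_c, y_c)^T; the construction of Mr2 and Ms2 mirrors the previous sketch and is its inverse, so the same assumptions apply:

```python
import numpy as np

def first_absolute_position(q, target_kp, first_kp, second_kp, reference_length):
    """Sketch: map a face-relative position Q back to screen space, using the
    face key points displayed after the current moment.

    Assumed form: R = Mr2 @ Ms2 @ Q^T + (x_c, y_c)^T.
    """
    q, target_kp, first_kp, second_kp = (
        np.asarray(v, dtype=float) for v in (q, target_kp, first_kp, second_kp)
    )
    second_vec = second_kp - first_kp             # second vector, key point 1 -> 2
    angle = np.arctan2(second_vec[1], second_vec[0])
    mr2 = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    scale = np.linalg.norm(second_vec) / reference_length  # restore face size
    ms2 = np.diag([scale, scale])
    return mr2 @ ms2 @ q + target_kp              # R, the first absolute position

# Inverse of the previous sketch: (0.75, -0.25) maps back to (130, 90).
print(first_absolute_position((0.75, -0.25), (100, 100), (80, 100), (120, 100), 1.0))
```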
  • the image special effect processing further includes:
  • a second special effect line symmetrical with the first special effect line is generated, and the second special effect line and the first special effect line are left-right symmetrical based on the face in the face image;
  • the determining module 902 is further configured to:
  • determine, for each trajectory point, the relative position of the symmetry point of the trajectory point relative to the face image, where the symmetry point and the trajectory point are left-right symmetrical with the face as the reference;
  • the image special effect processing module 903 is also used for:
  • the relative position includes relative coordinates relative to the face image
  • the determining module 902 is further configured to:
  • the determining module can be used to determine the relative position of each trajectory point relative to the face image according to the first position information of at least two face key points in the face image displayed on the display page at the current moment and the trajectory position information of at least one trajectory point in the movement trajectory input by the user on the display page.
  • the image special effect processing module repeatedly performs the process of converting the relative positions according to the second position information of the face key points in the face image displayed on the display page after the current moment to obtain the first absolute position of each trajectory point on the display screen, and displaying, on the display page, the first special effect line formed by connecting the trajectory points located at the first absolute positions.
  • the first special effect line is drawn according to the movement trajectory input by the user, which realizes the function of the user drawing the special effect independently. Moreover, after the relative position of each trajectory point in the movement trajectory relative to the current face image is obtained, the position information of the face key points in the face image displayed in real time on the display page and the relative position of each trajectory point can be used to determine the first absolute position of each trajectory point on the display screen, so that the first special effect line is generated and displayed after the trajectory points located at the first absolute positions are connected. In this way, the display position of the generated first special effect line changes with the display position of the face image displayed in real time on the display page, realizing the special effect that the first special effect line moves with the face and enriching the special effect display effect.
  • Fig. 10 is a block diagram of an electronic device according to an exemplary embodiment.
  • the electronic device may be a terminal.
  • the electronic device 1000 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • Electronic device 1000 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and the like by other names.
  • the electronic device 1000 includes: a processor 1001 and a memory 1002 .
  • the processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 1001 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array).
  • the processor 1001 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state.
  • the processor 1001 may be integrated with a GPU (Graphics Processing Unit), and the GPU is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 1001 may further include an AI (Artificial Intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
  • the memory 1002 may include one or more non-volatile computer-readable storage media, which may be non-transitory. The memory 1002 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1002 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 1001 to implement the image processing method provided by the method embodiments of the present disclosure.
  • the electronic device 1000 may further include: a peripheral device interface 1003 and at least one peripheral device.
  • the processor 1001, the memory 1002 and the peripheral device interface 1003 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1003 through a bus, a signal line or a circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 1004 , a display screen 1005 , a camera 1006 , an audio circuit 1007 , a positioning component 1008 and a power supply 1009 .
  • the peripheral device interface 1003 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1001 and the memory 1002 .
  • in some embodiments, the processor 1001, the memory 1002, and the peripheral device interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral device interface 1003 may be implemented on a separate chip or circuit board, which is not limited by the embodiments of the present disclosure.
  • the radio frequency circuit 1004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1004 communicates with the communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1004 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • radio frequency circuitry 1004 includes: an antenna system, an RF transceiver, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and the like.
  • the radio frequency circuit 1004 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocols include, but are not limited to, metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks.
  • the radio frequency circuit 1004 may further include a circuit related to NFC (Near Field Communication), which is not limited in this application.
  • the display screen 1005 is used for displaying a UI (User Interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • the display screen 1005 also has the ability to acquire touch signals on or above the surface of the display screen 1005 .
  • the touch signal can be input to the processor 1001 as a control signal for processing.
  • the display screen 1005 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • in some embodiments, there may be one display screen 1005, which is arranged on the front panel of the electronic device 1000; in other embodiments, there may be at least two display screens 1005, which are respectively arranged on different surfaces of the electronic device 1000 or adopt a folded design; in still other embodiments, the display screen 1005 may be a flexible display screen, disposed on a curved surface or a folding surface of the electronic device 1000. The display screen 1005 may even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
  • the display screen 1005 can be prepared by using materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
  • the camera assembly 1006 is used to capture images or video.
  • camera assembly 1006 includes a front-facing camera and a rear-facing camera.
  • the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal.
  • in some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blur function through fusion of the main camera and the depth-of-field camera, or panoramic shooting and VR (Virtual Reality) shooting functions and other fused shooting functions through fusion of the main camera and the wide-angle camera.
  • the camera assembly 1006 may also include a flash.
  • the flash can be a single color temperature flash or a dual color temperature flash. Dual color temperature flash refers to the combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
  • Audio circuitry 1007 may include a microphone and speakers.
  • the microphone is used to collect the sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1001 for processing, or to the radio frequency circuit 1004 to realize voice communication.
  • the microphone may also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 1001 or the radio frequency circuit 1004 into sound waves.
  • the loudspeaker can be a traditional thin-film loudspeaker or a piezoelectric ceramic loudspeaker.
  • when the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 1007 may also include a headphone jack.
  • the positioning component 1008 is used to locate the current geographic location of the electronic device 1000 to implement navigation or LBS (Location Based Service).
  • the positioning component 1008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • Power supply 1009 is used to power various components in electronic device 1000 .
  • the power source 1009 may be alternating current, direct current, disposable batteries or rechargeable batteries.
  • the rechargeable battery can support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the electronic device 1000 also includes one or more sensors 1010 .
  • the one or more sensors 1010 include, but are not limited to, an acceleration sensor 1011, a gyro sensor 1012, a pressure sensor 1013, a fingerprint sensor 1014, an optical sensor 1015, and a proximity sensor 1016.
  • the acceleration sensor 1011 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the electronic device 1000 .
  • the acceleration sensor 1011 can be used to detect the components of the gravitational acceleration on the three coordinate axes.
  • the processor 1001 can control the display screen 1005 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011 .
  • the acceleration sensor 1011 can also be used for game or user movement data collection.
  • the gyroscope sensor 1012 can detect the body direction and rotation angle of the electronic device 1000 , and the gyroscope sensor 1012 can cooperate with the acceleration sensor 1011 to collect the 3D actions of the user on the electronic device 1000 .
  • the processor 1001 can implement the following functions according to the data collected by the gyro sensor 1012: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1013 may be disposed on the side frame of the electronic device 1000 and/or the lower layer of the display screen 1005 .
  • the processor 1001 can perform left and right hand recognition or quick operation according to the holding signal collected by the pressure sensor 1013 .
  • the processor 1001 controls the operability controls on the UI interface according to the user's pressure operation on the display screen 1005.
  • the operability controls include at least one of button controls, scroll bar controls, icon controls, and menu controls.
  • the fingerprint sensor 1014 is used to collect the user's fingerprint, and the processor 1001 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
  • the fingerprint sensor 1014 may be provided on the front, back, or side of the electronic device 1000 . When the electronic device 1000 is provided with physical buttons or a manufacturer's logo, the fingerprint sensor 1014 can be integrated with the physical buttons or the manufacturer's logo.
  • the optical sensor 1015 is used to collect ambient light intensity.
  • the processor 1001 can control the display brightness of the display screen 1005 according to the ambient light intensity collected by the optical sensor 1015 . In some embodiments, when the ambient light intensity is high, the display brightness of the display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the display screen 1005 is decreased. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the ambient light intensity collected by the optical sensor 1015 .
  • a proximity sensor 1016 also called a distance sensor, is usually provided on the front panel of the electronic device 1000 .
  • the proximity sensor 1016 is used to collect the distance between the user and the front of the electronic device 1000 .
  • when the proximity sensor 1016 detects that the distance between the user and the front of the electronic device 1000 gradually decreases, the processor 1001 controls the display screen 1005 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1016 detects that the distance between the user and the front of the electronic device 1000 gradually increases, the processor 1001 controls the display screen 1005 to switch from the off-screen state to the bright-screen state.
  • those skilled in the art can understand that the structure shown in FIG. 10 does not constitute a limitation on the electronic device 1000, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
  • in an exemplary embodiment, a non-volatile computer-readable storage medium is also provided; when the instructions in the storage medium are executed by the processor of the electronic device, the electronic device can execute the image processing method provided by the above method embodiments.
  • the computer-readable storage medium can be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • in an exemplary embodiment, a computer program product including a computer program is also provided; when the computer program is executed by the processor, the image processing method provided by the above method embodiments can be executed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to the technical field of image processing, and in particular to an image processing method and device. The method comprises: in response to a special effect display instruction, acquiring a movement trajectory input by a user on a display page comprising a face image; determining the relative position of each trajectory point relative to the face image according to first position information of at least two face key points in the face image displayed at the current moment and trajectory position information of at least one trajectory point in the movement trajectory; and repeatedly performing image special effect processing, the image special effect processing comprising: converting the relative positions according to second position information of the face key points in the face image displayed on the display page after the current moment to obtain a first absolute position of each trajectory point on a display screen; and connecting the trajectory points located at the first absolute positions, and generating and displaying a first special effect line.
PCT/CN2021/134644 2021-03-26 2021-11-30 Procédé et dispositif de traitement d'image WO2022199102A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110328694.1 2021-03-26
CN202110328694.1A CN113160031B (zh) 2021-03-26 2021-03-26 图像处理方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022199102A1 true WO2022199102A1 (fr) 2022-09-29

Family

ID=76885649

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134644 WO2022199102A1 (fr) 2021-03-26 2021-11-30 Procédé et dispositif de traitement d'image

Country Status (2)

Country Link
CN (1) CN113160031B (fr)
WO (1) WO2022199102A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160031B (zh) * 2021-03-26 2024-05-14 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN113744135A (zh) * 2021-09-16 2021-12-03 北京字跳网络技术有限公司 图像处理方法、装置、电子设备及存储介质

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060093309A1 (en) * 2004-10-05 2006-05-04 Magix Ag System and method for creating a photo movie
CN106231434A (zh) * 2016-07-25 2016-12-14 武汉斗鱼网络科技有限公司 一种基于人脸检测的直播互动特效实现方法及系统
CN107888845A (zh) * 2017-11-14 2018-04-06 腾讯数码(天津)有限公司 一种视频图像处理方法、装置及终端
CN107948667A (zh) * 2017-12-05 2018-04-20 广州酷狗计算机科技有限公司 在直播视频中添加显示特效的方法和装置
CN111242881A (zh) * 2020-01-07 2020-06-05 北京字节跳动网络技术有限公司 显示特效的方法、装置、存储介质及电子设备
CN111753784A (zh) * 2020-06-30 2020-10-09 广州酷狗计算机科技有限公司 视频的特效处理方法、装置、终端及存储介质
CN111954055A (zh) * 2020-07-01 2020-11-17 北京达佳互联信息技术有限公司 视频特效的展示方法、装置、电子设备及存储介质
CN112035041A (zh) * 2020-08-31 2020-12-04 北京字节跳动网络技术有限公司 一种图像处理方法、装置、电子设备和存储介质
CN113160031A (zh) * 2021-03-26 2021-07-23 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1895393A1 (fr) * 2006-09-01 2008-03-05 Research In Motion Limited Méthode pour faciliter la navigation et la sélection avec une boule de commande
CN110809089B (zh) * 2019-10-30 2021-11-16 联想(北京)有限公司 处理方法和处理装置
CN112017254B (zh) * 2020-06-29 2023-12-15 浙江大学 一种混合式光线跟踪绘制方法及系统

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060093309A1 (en) * 2004-10-05 2006-05-04 Magix Ag System and method for creating a photo movie
CN106231434A (zh) * 2016-07-25 2016-12-14 武汉斗鱼网络科技有限公司 一种基于人脸检测的直播互动特效实现方法及系统
CN107888845A (zh) * 2017-11-14 2018-04-06 腾讯数码(天津)有限公司 一种视频图像处理方法、装置及终端
CN107948667A (zh) * 2017-12-05 2018-04-20 广州酷狗计算机科技有限公司 在直播视频中添加显示特效的方法和装置
CN111242881A (zh) * 2020-01-07 2020-06-05 北京字节跳动网络技术有限公司 显示特效的方法、装置、存储介质及电子设备
CN111753784A (zh) * 2020-06-30 2020-10-09 广州酷狗计算机科技有限公司 视频的特效处理方法、装置、终端及存储介质
CN111954055A (zh) * 2020-07-01 2020-11-17 北京达佳互联信息技术有限公司 视频特效的展示方法、装置、电子设备及存储介质
CN112035041A (zh) * 2020-08-31 2020-12-04 北京字节跳动网络技术有限公司 一种图像处理方法、装置、电子设备和存储介质
CN113160031A (zh) * 2021-03-26 2021-07-23 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN113160031B (zh) 2024-05-14
CN113160031A (zh) 2021-07-23

Similar Documents

Publication Publication Date Title
CN110502954B (zh) 视频分析的方法和装置
CN110992493B (zh) 图像处理方法、装置、电子设备及存储介质
CN110427110B (zh) 一种直播方法、装置以及直播服务器
CN111464749B (zh) 进行图像合成的方法、装置、设备及存储介质
CN110971930A (zh) 虚拟形象直播的方法、装置、终端及存储介质
CN110841285B (zh) 界面元素的显示方法、装置、计算机设备及存储介质
CN112907725B (zh) 图像生成、图像处理模型的训练、图像处理方法和装置
CN109166150B (zh) 获取位姿的方法、装置存储介质
CN110134744B (zh) 对地磁信息进行更新的方法、装置和系统
WO2022134632A1 (fr) Procédé et appareil de traitement de travail
CN109886208B (zh) 物体检测的方法、装置、计算机设备及存储介质
WO2022052620A1 (fr) Procédé de génération d'image et dispositif électronique
CN111768454A (zh) 位姿确定方法、装置、设备及存储介质
CN111897429A (zh) 图像显示方法、装置、计算机设备及存储介质
WO2022199102A1 (fr) Procédé et dispositif de traitement d'image
CN110288689B (zh) 对电子地图进行渲染的方法和装置
CN111897465B (zh) 弹窗显示方法、装置、设备及存储介质
CN110839174A (zh) 图像处理的方法、装置、计算机设备以及存储介质
CN113384880A (zh) 虚拟场景显示方法、装置、计算机设备及存储介质
CN110837300B (zh) 虚拟交互的方法、装置、电子设备及存储介质
CN111385525B (zh) 视频监控方法、装置、终端及系统
CN112396076A (zh) 车牌图像生成方法、装置及计算机存储介质
CN110349527B (zh) 虚拟现实显示方法、装置及系统、存储介质
CN112967261B (zh) 图像融合方法、装置、设备及存储介质
CN108881715B (zh) 拍摄模式的启用方法、装置、终端及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21932710

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.01.2024)