WO2020035001A1 - Methods and devices for replacing expression, and computer readable storage media - Google Patents
Methods and devices for replacing expression, and computer readable storage media
- Publication number
- WO2020035001A1 (PCT Application No. PCT/CN2019/100601)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- expression
- key points
- coordinates
- acquiring
- face model
- Prior art date
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
        - G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
          - G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
            - G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
              - G06F3/0482—Interaction with lists of selectable items, e.g. menus
            - G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
              - G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations for image manipulation, e.g. dragging, rotation, expansion or change of colour
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T13/00—Animation
        - G06T13/20—3D [Three Dimensional] animation
          - G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
      - G06T15/00—3D [Three Dimensional] image rendering
        - G06T15/005—General purpose rendering architectures
      - G06T19/00—Manipulating 3D models or images for computer graphics
        - G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
      - G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
        - G06T2219/20—Indexing scheme for editing of 3D models
          - G06T2219/2021—Shape modification
    - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
        - G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
          - G06V40/16—Human faces, e.g. facial parts, sketches or expressions
            - G06V40/168—Feature extraction; Face representation
              - G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
            - G06V40/174—Facial expression recognition
Definitions
- the present disclosure relates to a field of portrait processing technologies, and more particularly, to a method and a device for replacing an expression, and a computer readable storage medium.
- Embodiments of a first aspect of the present disclosure provide a method for replacing an expression.
- the method includes: acquiring a current expression represented by a currently-reconstructed three-dimensional (3D) face model; acquiring a target expression from a user; acquiring, based on the current expression and the target expression, values for adjusting coordinates of a first set of key points on the currently-reconstructed 3D face model; and adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model based on the values, to generate a 3D face model representing the target expression.
- acquiring the target expression from the user includes: displaying a list of expressions to the user; and acquiring an expression selected by the user on the list as the target expression.
- acquiring the target expression from the user includes: capturing an expression of the user by a camera; matching the expression captured by the camera with a preset list of expressions; and in response to the expression captured by the camera matching one expression in the preset list, using the expression captured by the camera as the target expression.
- acquiring, based on the current expression and the target expression, values for adjusting coordinates of a first set of key points on the currently-reconstructed 3D face model includes: acquiring a second set of key points of the current expression, and coordinates of the second set of key points; acquiring a third set of key points of the target expression, and coordinates of the third set of key points; acquiring the first set of key points based on the second set of key points and the third set of key points, and acquiring the values for adjusting the coordinates of the first set of key points based on the coordinates of the second set of key points and the coordinates of the third set of key points; the second set of key points of the current expression, and the coordinates of the second set of key points being preset; and the third set of key points of the target expression, and the coordinates of the third set of key points being preset.
- acquiring, based on the current expression and the target expression, values for adjusting coordinates of a first set of key points on the currently-reconstructed 3D face model includes: querying, based on the current expression and the target expression, a preset database to acquire the values for adjusting the coordinates of the first set of key points, the preset database comprises a plurality of expressions, and values for adjusting coordinates of a corresponding set of key points from one of the plurality of expressions to another of the plurality of expressions.
- the method further includes: displaying one or more adjustable widgets, each of the one or more adjustable widgets being configured to adjust a corresponding key portion on the 3D face model representing the target expression within a preset range; acquiring an operation on one of the one or more adjustable widgets; acquiring an adjustment angle based on the operation; and adjusting the corresponding key portion based on the adjustment angle.
- the method further includes: acquiring a preset state feature of a key portion corresponding to the target expression; and adjusting a state of the key portion in the 3D face model representing the target expression based on the preset state feature.
- Embodiments of a second aspect of the present disclosure provide a device for replacing an expression including: a first acquiring module configured to acquire a current expression represented by a currently-reconstructed three-dimensional (3D) face model; a second acquiring module configured to acquire a target expression from a user; a third acquiring module configured to acquire, based on the current expression and the target expression, values for adjusting coordinates of a first set of key points on the currently-reconstructed 3D face model; and a generating module configured to adjust the coordinates of the first set of key points on the currently-reconstructed 3D face model based on the values, to generate a 3D face model representing the target expression.
- the second acquiring module is configured to: display a list of expressions to the user; and acquire an expression selected by the user on the list as the target expression.
- the second acquiring module is configured to: capture an expression of the user by a camera; match the expression captured by the camera with a preset list of expressions; and in response to the expression captured by the camera matching one expression in the preset list, use the expression captured by the camera as the target expression.
- the third acquiring module is configured to: acquire a second set of key points of the current expression, and coordinates of the second set of key points; acquire a third set of key points of the target expression, and coordinates of the third set of key points; acquire the first set of key points based on the second set of key points and the third set of key points, and acquire the values for adjusting the coordinates of the first set of key points based on the coordinates of the second set of key points and the coordinates of the third set of key points; the second set of key points of the current expression, and the coordinates of the second set of key points being preset; and the third set of key points of the target expression, and the coordinates of the third set of key points being preset.
- the third acquiring module is configured to: query, based on the current expression and the target expression, a preset database to acquire the values for adjusting the coordinates of the first set of key points, the preset database comprises a plurality of expressions, and values for adjusting coordinates of a corresponding set of key points from one of the plurality of expressions to another of the plurality of expressions.
- the device further includes: a first adjusting module configured to: display one or more adjustable widgets, each of the one or more adjustable widgets being configured to adjust a corresponding key portion on the 3D face model representing the target expression within a preset range; acquire an operation on one of the one or more adjustable widgets; acquire an adjustment angle based on the operation; and adjust the corresponding key portion based on the adjustment angle.
- the device further includes: a second adjusting module configured to: acquire a preset state feature of a key portion corresponding to the target expression; and adjust a state of the key portion in the 3D face model representing the target expression based on the preset state feature.
- Embodiments of a third aspect of the present disclosure provide a computer readable storage medium having a computer program stored thereon.
- when the computer program is executed by a processor, the method for replacing the expression as described in the above embodiments of the first aspect is implemented.
- FIG. 1 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
- FIG. 2 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
- FIG. 3 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
- FIG. 4 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
- FIG. 5 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
- FIG. 6 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
- FIG. 7 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
- FIG. 8 is a schematic diagram of a scenario of a method for replacing an expression according to an embodiment of the present disclosure.
- FIG. 9 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
- FIG. 10 is a block diagram of a device for replacing an expression according to embodiments of the present disclosure.
- FIG. 11 is a block diagram of a device for replacing an expression according to embodiments of the present disclosure.
- FIG. 12 is a block diagram of a device for replacing an expression according to embodiments of the present disclosure.
- FIG. 13 is a block diagram of a device for replacing an expression according to embodiments of the present disclosure.
- FIG. 14 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
- FIG. 15 is a block diagram of an image processing circuit in an embodiment.
- FIG. 16 is a schematic diagram of an image processing circuit as one possible implementation.
- the present disclosure provides a method, a device, and a computer readable storage medium for replacing an expression.
- a difference between a satisfactory 3D face model and a currently-reconstructed 3D face model may be found, and the currently-reconstructed 3D face model may be adjusted based on the difference to acquire the satisfactory 3D face model, thereby improving the efficiency of modeling the 3D face model.
- the method provided in the embodiments of the present disclosure may be applicable to computer devices having an apparatus for acquiring depth information and color information (i.e., 2D information).
- the computer devices may be hardware devices having various operating systems, touch screens, and/or display screens, such as mobile phones, tablet computers, personal digital assistants, wearable devices, or the like.
- FIG. 1 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure. As illustrated in FIG. 1, the method includes acts in the following blocks.
- a current expression represented by a currently-reconstructed 3D face model is acquired.
- the 3D face model may actually be represented by points and a triangular mesh formed by connecting the points. Points corresponding to the portions that mainly influence the shape of the entire 3D face model (i.e., key portions) may be referred to as key points.
- the expression may be represented by a set of key points. Different sets of key points may distinguish different expressions.
- the set of key points may correspond to the key portions (such as mouth and eyes) representing differentiation of the expression.
- the currently-reconstructed 3D face model is scanned to acquire a plurality of key portions and key points of the plurality of key portions.
- a feature vector of the plurality of key portions is extracted based on coordinates of the key points of the plurality of key portions, and distances among the plurality of key portions.
- the feature vector is analyzed by a pre-trained neural network model to determine the current expression.
- the neural network model is trained in advance based on a large amount of experimental data.
- Inputs of the neural network model may be the feature vector corresponding to the coordinates of the key points of the plurality of key portions and the distances among the plurality of key portions.
- An output of the neural network model is the expression.
- the key points of the plurality of key portions in the currently-reconstructed 3D face model are determined.
- the key points of the key portions (such as the mouth) are determined by image recognition technologies.
- the feature vector of the plurality of key portions is extracted, based on the coordinates of the key points of the plurality of key portions and the distances among the plurality of key portions.
- the feature vector of the plurality of key portions is analyzed through the pre-trained neural network model to determine the current expression of the 3D face model.
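- By way of illustration, the recognition acts above may be sketched in code. The key-portion names, the use of centroid distances as the inter-portion distances, and the `classifier` object below are assumptions made for this sketch, not requirements of the present disclosure.

```python
import numpy as np

def extract_feature_vector(key_points: dict) -> np.ndarray:
    """Concatenate the key point coordinates of each key portion with the
    pairwise distances among the portions (here: distances between centroids)."""
    coords, centroids = [], []
    for portion in sorted(key_points):  # fixed order gives a stable feature layout
        pts = np.asarray(key_points[portion], dtype=float)  # shape (n, 3)
        coords.append(pts.ravel())
        centroids.append(pts.mean(axis=0))
    centroids = np.asarray(centroids)
    dists = [np.linalg.norm(centroids[i] - centroids[j])
             for i in range(len(centroids)) for j in range(i + 1, len(centroids))]
    return np.concatenate(coords + [np.asarray(dists)])

# A pre-trained network then maps this vector to an expression label, e.g.:
# current_expression = classifier.predict(extract_feature_vector(key_points))
```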
- a target expression from a user is acquired.
- the act in block 102 may include acts at block 1021 and block 1022.
- a list of expressions is displayed to the user.
- an expression selected by the user on the list is acquired as the target expression.
- the act in block 102 may include acts at block 1023, block 1024, and block 1025.
- an expression of the user is captured by a camera.
- the camera may capture 2D face images of the same scene, and acquire the expression of the user from the 2D face images through image processing technologies.
- the expression captured by the camera is matched with a preset list of expressions.
- the expression captured by the camera is used as the target expression.
- the list of expressions may be preset in advance, and may basically cover all requirements of the user for changing expressions.
- the list may include four commonly-used expressions such as happy, sad, distressed and mourning.
- the list may further include other expressions, which is not limited herein.
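- A minimal sketch of the matching acts above, assuming the expression captured by the camera has already been recognized as a text label, and using the example list of expressions given above:

```python
PRESET_EXPRESSIONS = {"happy", "sad", "distressed", "mourning"}  # example list

def acquire_target_expression(captured_label: str):
    """Use the expression captured by the camera as the target expression only
    when it matches one expression in the preset list; otherwise return None."""
    return captured_label if captured_label in PRESET_EXPRESSIONS else None
```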
- values for adjusting coordinates of a first set of key points on the currently-reconstructed 3D face model are acquired.
- the coordinates of the first set of key points on the currently-reconstructed 3D face model are adjusted based on the values, to generate a 3D face model representing the target expression.
- the 3D face model is actually reconstructed from points, so changing and reconstructing the face model amounts to changing the coordinate values of these points. Therefore, in the embodiments of the present disclosure, to reconstruct the 3D face model corresponding to the target expression, it is necessary to acquire the values for adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model, and then to adjust the coordinates of the first set of key points accordingly based on the values, to generate the 3D face model representing the target expression.
- Manners of acquiring the values for adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model may vary with scenarios.
- the examples are as follows.
- the values for adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model may be acquired by the following acts.
- a second set of key points of the current expression, and coordinates of the second set of key points are acquired.
- the second set of key points of the current expression, and the coordinates of the second set of key points may be preset in advance.
- a third set of key points of the target expression, and coordinates of the third set of key points are acquired.
- the third set of key points of the target expression, and the coordinates of the third set of key points being preset may be preset in advance.
- the first set of key points is acquired based on the second set of key points and the third set of key points.
- the values for adjusting the coordinates of the first set of key points are acquired based on the coordinates of the second set of key points and the coordinates of the third set of key points.
- the preset database may include a plurality of expressions.
- the plurality of expressions may be acquired in advance.
- the corresponding set of key points, and coordinates of the corresponding set of key points, may also be acquired in advance and stored in the database. Once the current expression and the target expression are acquired, the database may be searched to acquire the second set of key points of the current expression, and the coordinates of the second set of key points, and the third set of key points of the target expression, and the coordinates of the third set of key points.
- the first set of key points may be acquired.
- the first set of key points may include the key points in the second set and in the third set. Then, the values for adjusting the coordinates of the first set of key points may be acquired based on the coordinates of the second set of key points and the coordinates of the third set of key points.
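- A minimal sketch of this manner, under the assumption that each adjustment value is simply the coordinate difference from a key point of the current expression to the corresponding key point of the target expression:

```python
import numpy as np

def adjustment_values(current_kp: dict, target_kp: dict) -> dict:
    """First set of key points = union of the second set (current expression)
    and the third set (target expression); the value for each key point is the
    per-axis coordinate offset from current to target."""
    values = {}
    for kid in set(current_kp) | set(target_kp):
        cur = np.asarray(current_kp.get(kid, target_kp[kid]), dtype=float)
        tgt = np.asarray(target_kp.get(kid, current_kp[kid]), dtype=float)
        values[kid] = tgt - cur
    return values

def apply_adjustments(model_kp: dict, values: dict) -> dict:
    """Adjust the coordinates of the first set of key points on the current
    model to generate the model representing the target expression."""
    return {kid: np.asarray(pt, dtype=float) + values.get(kid, 0.0)
            for kid, pt in model_kp.items()}
```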
- the preset database may include a plurality of expressions.
- the plurality of expressions may be acquired in advance.
- the corresponding set of key points, and coordinates of the corresponding set of key points may also be acquired in advance and stored in the database.
- the values for adjusting the coordinates of the corresponding set of key points from one of the plurality of expressions to another of the plurality of expressions may be calculated in advance and stored in the preset database.
- the values for adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model may be acquired by the following acts.
- a preset database is queried based on the current expression and the target expression, to acquire the values for adjusting the coordinates of the first set of key points; the preset database comprises a plurality of expressions, and values for adjusting coordinates of a corresponding set of key points from one of the plurality of expressions to another of the plurality of expressions.
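- A minimal sketch of this manner; the database schema, the key point identifiers, and the numeric values are purely illustrative:

```python
# Hypothetical precomputed table: (current expression, target expression)
# -> values for adjusting the coordinates of the corresponding set of key points.
ADJUSTMENT_DB = {
    ("sad", "happy"): {"mouth_corner_left": (0.0, 0.4, 0.1),
                       "mouth_corner_right": (0.0, 0.4, 0.1)},
    # ... one entry per ordered pair of expressions in the preset list
}

def query_adjustment_values(current: str, target: str) -> dict:
    """Query the preset database for the adjustment values from the current
    expression to the target expression."""
    return ADJUSTMENT_DB[(current, target)]
```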
- after generating the 3D face model representing the target expression, the user is further provided with room for adjustment.
- the method further includes acts in the following blocks.
- one or more adjustable widgets are displayed to the user.
- Each of the one or more adjustable widgets is configured to adjust a corresponding key portion on the 3D face model representing the target expression within a preset range.
- the adjustment strength of each adjustable widget is limited to the preset range, so as to ensure that the adjusted expression still belongs to the same category as the target expression.
- for example, the adjusted expression and the target expression are both sad.
- the preset ranges may be different due to different 3D face models.
- the adjustable widget corresponding to each key portion is generated.
- the implementation manners of the adjustable widget may be different in different scenarios.
- the adjustable widget may be an adjustable progress bar. As illustrated in FIG. 8, an adjustable progress bar corresponding to each key portion is generated, and the user's movement operation on the adjustable progress bar corresponding to the key portion may be detected. Different progress locations of the progress bar may correspond to an adjustment angle of the key portion in a certain direction; for example, for eyes, different progress locations of the progress bar may correspond to different degrees of curvature of the eyes.
- an operation on one of the one or more adjustable widgets from the user is acquired.
- an adjustment angle is acquired based on the operation.
- the corresponding key portion is adjusted based on the adjustment angle.
- an identifier of the adjustable widget may be acquired.
- the identifier may be a name of a key portion or the like.
- the operation from the user on the adjustable widget is acquired, and the identifier and the adjustment angle are acquired based on the operation.
- when the adjustable widget is a progress bar, the key portion corresponding to the progress bar dragged by the user and the corresponding drag distance are acquired, and then the key portion corresponding to the identifier is adjusted based on the adjustment angle.
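- A minimal sketch of mapping a progress-bar operation to an adjustment angle; the portion names and preset ranges below are assumptions for illustration, since the actual ranges depend on the 3D face model:

```python
# Hypothetical per-portion preset ranges, e.g. degrees of curvature for the eyes.
PRESET_RANGES = {"eyes": (-10.0, 10.0), "mouth": (-15.0, 15.0)}

def widget_to_adjustment(identifier: str, progress: float):
    """Map a progress-bar position in [0, 1] to an adjustment angle bounded by
    the preset range of the key portion named by the widget identifier, so the
    adjusted expression stays in the same category as the target expression."""
    lo, hi = PRESET_RANGES[identifier]
    progress = min(max(progress, 0.0), 1.0)  # clamp the drag position
    return identifier, lo + progress * (hi - lo)
```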
- fine adjustment according to the user's personal preference may be performed while preserving the target expression, which satisfies the personalized requirements of the user.
- the 3D face model representing the target expression may also be adjusted based on the personal preference of the user.
- the method further includes acts in the following blocks.
- a preset state feature of a key portion corresponding to the target expression is acquired.
- the key portion corresponding to the target expression may be a relevant portion adapted to the target expression.
- the corresponding key portion may include a mouth, a cheek, and an eyebrow.
- Preset state features corresponding to key portions may include the states of the relevant portions.
- for example, the corresponding state feature may include the opening and closing of the eyes.
- the state features of the key portion may be preset by the user based on personal preferences.
- a state of the key portion in the 3D face model representing the target expression is adjusted based on the preset state feature.
- the state of the corresponding key portion in the 3D face model is adjusted based on the state feature, so that the adjusted 3D face model is more in line with the user's personal preference. It should be noted that adjusting the state of the key portion in this embodiment renders emotional effects consistent with the target expression, rather than changing the emotion expressed by the target expression.
- for example, when the acquired state feature of the key portion corresponding to the target expression is a sunken dimple on the cheek, the cheek position and the dimple position in the 3D face model are adjusted based on the state feature to create a dimple effect, making the happy mood rendered by laughter more prominent.
- when the state feature of the key portion corresponding to the target expression is a slight narrowing of the right eye, the position of the right eye in the 3D face model is adjusted based on the state feature to create a blinking effect, making the happy mood rendered by the smile more prominent.
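- A minimal sketch of applying preset state features; `set_portion_state` is an assumed model interface used only for illustration:

```python
# Hypothetical preset state features per target expression, set by the user.
STATE_FEATURES = {
    "happy": [("cheek", "sunken_dimple"), ("right_eye", "slightly_narrowed")],
}

def apply_state_features(model, target_expression: str):
    """Adjust the states of key portions tied to the target expression; this
    reinforces the rendered mood without changing the expressed emotion."""
    for portion, state in STATE_FEATURES.get(target_expression, []):
        model.set_portion_state(portion, state)  # assumed model API
    return model
```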
- the current expression represented by the currently-reconstructed 3D face model is acquired; the target expression from the user is acquired; based on the current expression and the target expression, the values for adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model are acquired; and the coordinates of the first set of key points on the currently-reconstructed 3D face model are adjusted based on the values, to generate the 3D face model representing the target expression. Therefore, the speed of modeling the 3D face model based on expression replacement is improved.
- FIG. 10 is a block diagram of a device for replacing an expression according to an embodiment of the present disclosure. As illustrated in FIG. 10, the device includes a first acquiring module 10, a second acquiring module 20, a third acquiring module 30, and a generating module 40.
- the first acquiring module 10 is configured to acquire a current expression represented by a currently-reconstructed 3D face model.
- the second acquiring module 20 is configured to acquire a target expression from a user.
- the third acquiring module 30 is configured to acquire, based on the current expression and the target expression, values for adjusting coordinates of a first set of key points on the currently-reconstructed 3D face model.
- the generating module 40 is configured to adjust the coordinates of the first set of key points on the currently-reconstructed 3D face model based on the values, to generate a 3D face model representing the target expression.
- the first acquiring module 10 includes a first determining unit 11, an extracting unit 12, and a second determining unit 13.
- the first determining unit 11 is configured to determine key points of a plurality of key portions in the currently-reconstructed 3D face model.
- the extracting unit 12 is configured to extract a feature vector of the plurality of key portions, based on coordinate information of the key points of the plurality of key portions and distances among the plurality of key portions.
- the second determining unit 13 is configured to determine the current expression of the 3D face model, by analyzing the feature vector of the plurality of key portions through a pre-trained neural network.
- the second acquiring module 20 is configured to display a list of expressions to the user and acquire an expression selected by the user on the list as the target expression.
- the second acquiring module 20 is configured to capture an expression of the user by a camera; match the expression captured by the camera with a preset list of expressions; and in response to the expression captured by the camera matching one expression in the preset list, use the expression captured by the camera as the target expression.
- the third acquiring module 30 is configured to: acquire a second set of key points of the current expression, and coordinates of the second set of key points; acquire a third set of key points of the target expression, and coordinates of the third set of key points; acquire the first set of key points based on the second set of key points and the third set of key points, and acquire the values for adjusting the coordinates of the first set of key points based on the coordinates of the second set of key points and the coordinates of the third set of key points.
- the second set of key points of the current expression, and the coordinates of the second set of key points are preset.
- the third set of key points of the target expression, and the coordinates of the third set of key points are preset.
- the third acquiring module 30 is configured to: query, based on the current expression and the target expression, a preset database to acquire the values for adjusting the coordinates of the first set of key points.
- the preset database includes a plurality of expressions, and values for adjusting coordinates of a corresponding set of key points from one of the plurality of expressions to another of the plurality of expressions.
- the device further includes a first adjusting module 50.
- the first adjusting module 50 is configured to display one or more adjustable widgets to the user, each of the one or more adjustable widgets being configured to adjust a corresponding key portion on the 3D face model representing the target expression within a preset range; acquire an operation on one of the one or more adjustable widgets from the user; acquire an adjustment angle based on the operation; and adjust the corresponding key portion based on the adjustment angle.
- the device further includes a second adjusting module 60.
- the second adjusting module 60 is configured to acquire a preset state feature of a key portion corresponding to the target expression; and adjust a state of the key portion in the 3D face model representing the target expression based on the preset state feature.
- the present disclosure further provides a computer readable storage medium having a computer program stored thereon.
- the computer program is executed by a processor of a mobile terminal to implement the method for replacing the expression as described in the above embodiments.
- the present disclosure also provides an electronic device.
- FIG. 14 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
- the electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 that are coupled by a system bus 210.
- the memory 230 of the electronic device 200 stores an operating system and computer readable instructions.
- the computer readable instructions are executable by the processor 220 to implement the method for replacing the expression provided in the embodiments of the present disclosure.
- the processor 220 is configured to provide computing and control capabilities to support the operation of the entire electronic device 200.
- the display 240 of the electronic device 200 may be a liquid crystal display or an electronic ink display or the like.
- the input device 250 may be a touch layer covered on the display 240, or may be a button, a trackball or a touchpad disposed on the housing of the electronic device 200, or an external keyboard, a trackpad or a mouse.
- the electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, smart glasses) .
- FIG. 14 is only a schematic diagram of a portion of the structure related to the solution of the present disclosure, and does not constitute a limitation of the electronic device 200 to which the solution of the present disclosure is applied.
- the specific electronic device 200 may include more or fewer components than illustrated in the figures, combine some components, or have a different arrangement of components.
- the currently-reconstructed 3D face model may be implemented by an image processing circuit in the terminal device.
- the image processing circuit includes an image unit 310, a depth information unit 320, and a processing unit 330.
- the image unit 310 is configured to output one or more current original 2D face images of the user.
- the depth information unit 320 is configured to output depth information corresponding to the one or more original 2D face images.
- the processing unit 330 is electrically coupled to the image unit 310 and the depth information unit 320, and configured to perform 3D reconstruction based on the depth information and the one or more original 2D face images to acquire a 3D face model that displays the current expression.
- the image unit 310 may include: an image sensor 311 and an image signal processing (ISP) processor 312 that are electrically coupled with each other.
- ISP image signal processing
- the image sensor 311 is configured to output original image data.
- the ISP processor 312 is configured to output the original 2D face image according to the original image data.
- the original image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the original image data to capture image statistics information that may be used to determine one or more control parameters of the image sensor 311, and outputs face images in YUV (Luma and Chroma) format or RGB format.
- the image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units.
- the image sensor 311 may acquire light intensity and wavelength information captured by each photosensitive unit and provide a set of original image data that may be processed by the ISP processor 312.
- the ISP processor 312 acquires a face image in the YUV format or the RGB format and sends it to the processing unit 330.
- the ISP processor 312 may process the original image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the original image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit depth precision.
- the depth information unit 320 includes a structured-light sensor 321 and a depth map generation chip 322 that are electrically coupled with each other.
- the structured-light sensor 321 is configured to generate an infrared speckle pattern.
- the depth map generation chip 322 is configured to output depth information corresponding to the original 2D face image based on the infrared speckle pattern.
- the structured-light sensor 321 projects speckle structured light onto the subject, acquires the structured light reflected by the subject, and acquires an infrared speckle pattern by imaging the reflected structured light.
- the structured-light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light according to the infrared speckle pattern, and then determines the depth of the subject and acquires a depth map.
- the depth map indicates the depth of each pixel in the infrared speckle pattern.
- the depth map generation chip 322 transmits the depth map to the processing unit 330.
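- The disclosure does not specify the chip's algorithm. Structured-light systems commonly convert the morphological change of the speckle pattern (its disparity) into depth by triangulation; the following is a hedged sketch of that conversion, not a description of the depth map generation chip 322:

```python
import numpy as np

def disparity_to_depth(disparity: np.ndarray, focal_px: float,
                       baseline_m: float) -> np.ndarray:
    """Triangulation relation depth = focal_length * baseline / disparity.
    Pixels with no valid speckle match are mapped to NaN."""
    d = np.where(disparity > 0, disparity, np.nan)
    return focal_px * baseline_m / d
```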
- the processing unit 330 includes: a CPU (Central Processing Unit) 331 and a GPU (Graphics Processing Unit) 332 that are electrically coupled with each other.
- a CPU Central Processing Unit
- GPU Graphics Processing Unit
- the CPU 331 is configured to align the face image and the depth map according to the calibration data, and output the 3D face model according to the aligned face image and depth map.
- the GPU 332 is configured to adjust coordinate information of reference key points according to coordinate differences to generate the 3D face model corresponding to the target expression.
- the CPU 331 acquires a face image from the ISP processor 312, and acquires a depth map from the depth map generation chip 322, aligns the face image with the depth map by combining the previously acquired calibration data, to determine the depth information corresponding to each pixel in the face image. Further, the CPU 331 performs 3D reconstruction based on the depth information and the face image to acquire a 3D face model.
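- One possible (assumed) form of this reconstruction step: once the calibration data has aligned the face image and the depth map pixel to pixel, each pixel can be back-projected into a 3D point using pinhole camera intrinsics, and the face mesh with its key points is then derived from the resulting point cloud:

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project an aligned depth map to one 3D point per pixel using
    pinhole intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)
```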
- the CPU 331 transmits the 3D face model to the GPU 332 so that the GPU 332 executes the method for replacing the expression as described in the above embodiment.
- the image processing circuit may further include: a first display unit 341.
- the first display unit 341 is electrically coupled to the processing unit 330 for displaying an adjustable widget of the key portion to be adjusted.
- the image processing circuit may further include: a second display unit 342.
- the second display unit 342 is electrically coupled to the processing unit 330 for displaying the adjusted 3D face model.
- the image processing circuit may further include: an encoder 350 and a memory 360.
- the face image processed by the GPU 332 may also be encoded by the encoder 350 and stored in the memory 360.
- the encoder 350 may be implemented by a coprocessor.
- there may be a plurality of memories 360, or the memory 360 may be divided into a plurality of storage spaces.
- the image data processed by the GPU 332 may be stored in a dedicated memory or a dedicated storage space, which may include a DMA (Direct Memory Access) feature.
- the memory 360 may be configured to implement one or more frame buffers.
- FIG. 16 is a schematic diagram of an image processing circuit as a possible implementation. For ease of explanation, only the various aspects related to the embodiments of the present disclosure are illustrated.
- the original image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the original image data to capture image statistics information that may be used to determine one or more control parameters of the image sensor 311, and outputs face images in YUV format or RGB format.
- the image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units.
- the image sensor 311 may acquire light intensity and wavelength information captured by each photosensitive unit and provide a set of original image data that may be processed by the ISP processor 312.
- the ISP processor 312 processes the original image data to acquire a face image in the YUV format or the RGB format, and transmits the face image to the CPU 331.
- the ISP processor 312 may process the original image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the original image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit depth precision.
- the structured-light sensor 321 projects speckle structured light toward the subject, acquires the structured light reflected by the subject, and acquires an infrared speckle pattern according to the reflected structured light.
- the structured-light sensor 321 transmits the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light according to the infrared speckle pattern, and then determines the depth of the subject to acquire a depth map.
- the depth map indicates the depth of each pixel in the infrared speckle pattern.
- the depth map generation chip 322 transmits the depth map to the CPU 331.
- the CPU 331 acquires a face image from the ISP processor 312, and acquires a depth map from the depth map generation chip 322, and aligns the face image with the depth map by combining the previously acquired calibration data, to determine the depth information corresponding to each pixel in the face image. Further, the CPU 331 performs 3D reconstruction based on the depth information and the face image to acquire a 3D face model.
- the CPU 331 transmits the 3D face model to the GPU 332, so that the GPU 332 performs the method described in the above embodiment based on the 3D face model to generate the 3D face model corresponding to the target expression.
- the 3D face model corresponding to the target expression, processed by the GPU 332, may be displayed by the display 340 (including the first display unit 341 and the second display unit 342 described above), and/or encoded by the encoder 350 and stored in the memory 360.
- the encoder 350 is implemented by a coprocessor.
- there may be a plurality of memories 360, or the memory 360 may be divided into a plurality of storage spaces.
- the image data processed by the GPU 332 may be stored in a dedicated memory or a dedicated storage space, which may include a DMA (Direct Memory Access) feature.
- the memory 360 may be configured to implement one or more frame buffers.
- the following acts are implemented by using the processor 220 in FIG. 14 or using the imaging processing circuits (the CPU 331 and the GPU 332) in FIG. 16.
- the CPU 331 acquires a 2D face image and depth information corresponding to the face image.
- the CPU 331 performs 3D reconstruction according to the depth information and the face image to acquire a 3D face model.
- the GPU 332 acquires adjusting parameters of the 3D face model from the user, and adjusts key points on the original 3D face model based on the adjusting parameters, to acquire a 3D face model corresponding to the target expression.
- terms such as "first" and "second" are used herein for purposes of description and are not intended to indicate or imply relative importance or significance.
- the feature defined with "first" and "second" may explicitly or implicitly comprise one or more of this feature.
- āa plurality ofā means two or more than two, unless specified otherwise.
- any process or method description in the flow chart, or described herein in other manners, may be understood to represent a module, segment, or portion of code comprising one or more executable instructions for implementing the specified logical function(s) or steps of the process.
- although the flow chart shows a specific order of execution, it is understood that the order of execution may differ from what is depicted. For example, the order of execution of two or more boxes may be changed relative to the order shown.
- the logic and/or step described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function may be specifically achieved in any computer readable medium to be used by the instruction execution system, device or equipment (such as the system based on computers, the system comprising processors or other systems capable of acquiring the instruction from the instruction execution system, device and equipment and executing the instruction) , or to be used in combination with the instruction execution system, device and equipment.
- the computer readable medium may be any device adaptive for including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment.
- examples of the computer readable medium include, but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device), a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber device, and a portable compact disk read-only memory (CDROM).
- the computer readable medium may even be a paper or other appropriate medium capable of printing programs thereon, this is because, for example, the paper or other appropriate medium may be optically scanned and then edited, decrypted or processed with other appropriate methods when necessary to acquire the programs in an electric manner, and then the programs may be stored in the computer memories.
- each part of the present disclosure may be realized by hardware, software, firmware, or a combination thereof.
- a plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instruction execution system.
- the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combination logic gate circuit, a programmable gate array (PGA) , a field programmable gate array (FPGA) , etc.
- each function cell of the embodiments of the present disclosure may be integrated in a processing module, or the cells may exist separately and physically, or two or more cells may be integrated in a processing module.
- the integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
- the storage medium mentioned above may be a read-only memory, a magnetic disk, a CD, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Graphics (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Architecture (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure provides a method and a device for replacing an expression. The method includes: acquiring a current expression represented by a currently-reconstructed 3D face model; acquiring a target expression from a user; acquiring, based on the current expression and the target expression, values for adjusting coordinates of a first set of key points on the currently-reconstructed 3D face model; and adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model based on the values, to generate a 3D face model representing the target expression.
Description
The present disclosure relates to a field of portrait processing technologies, and more particularly, to a method and a device for replacing an expression, and a computer readable storage medium.
As computer technologies progress, face-based image processing technologies develop from two-dimensional (2D) to three-dimensional (3D). The 3D image processing technologies have received wide attention due to their sense of reality.
In the related art, after reconstructing the 3D face model, if the user is not satisfied with the reconstructed 3D face model, it is required to reconstruct the 3D face model again, which results in a large amount of calculation and low modeling efficiency.
SUMMARY
EmbodimentsĀ ofĀ aĀ firstĀ aspectĀ ofĀ theĀ presentĀ disclosureĀ provideĀ aĀ methodĀ forĀ replacingĀ anĀ expression.Ā TheĀ methodĀ includes:Ā acquiringĀ aĀ currentĀ expressionĀ representedĀ byĀ aĀ currently-reconstructedĀ three-dimensionalĀ (3D)Ā faceĀ model;Ā acquiringĀ aĀ targetĀ expressionĀ fromĀ aĀ user;Ā acquiring,Ā basedĀ onĀ theĀ currentĀ expressionĀ andĀ theĀ targetĀ expression,Ā valuesĀ forĀ adjustingĀ coordinatesĀ ofĀ aĀ firstĀ setĀ ofĀ keyĀ pointsĀ onĀ theĀ currently-reconstructedĀ 3DĀ faceĀ model;Ā andĀ adjustingĀ theĀ coordinatesĀ ofĀ theĀ firstĀ setĀ ofĀ keyĀ pointsĀ onĀ theĀ currently-reconstructedĀ 3DĀ faceĀ modelĀ basedĀ onĀ theĀ values,Ā toĀ generateĀ aĀ 3DĀ faceĀ modelĀ representingĀ theĀ targetĀ expression.
InĀ anĀ embodiment,Ā acquiringĀ theĀ targetĀ expressionĀ fromĀ theĀ userĀ includes:Ā displayingĀ aĀ listĀ ofĀ expressionsĀ toĀ theĀ user;Ā andĀ acquiringĀ anĀ expressionĀ selectedĀ byĀ theĀ userĀ onĀ theĀ listĀ asĀ theĀ targetĀ expression.
InĀ anĀ embodiment,Ā acquiringĀ theĀ targetĀ expressionĀ fromĀ theĀ userĀ includes:Ā capturingĀ anĀ expressionĀ ofĀ theĀ userĀ byĀ aĀ camera;Ā matchingĀ theĀ expressionĀ capturedĀ byĀ theĀ cameraĀ withĀ aĀ presetĀ listĀ ofĀ expressions;Ā andĀ inĀ responseĀ toĀ theĀ expressionĀ capturedĀ byĀ theĀ cameraĀ matchingĀ oneĀ expressionĀ inĀ theĀ presetĀ list,Ā usingĀ theĀ expressionĀ capturedĀ byĀ theĀ cameraĀ asĀ theĀ targetĀ expression.
InĀ anĀ embodiment,Ā acquiring,Ā basedĀ onĀ theĀ currentĀ expressionĀ andĀ theĀ targetĀ expression,Ā valuesĀ forĀ adjustingĀ coordinatesĀ ofĀ aĀ firstĀ setĀ ofĀ keyĀ pointsĀ onĀ theĀ currently-reconstructedĀ 3DĀ faceĀ model,Ā includes:Ā acquiringĀ aĀ secondĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ currentĀ expression,Ā andĀ coordinatesĀ ofĀ theĀ secondĀ setĀ ofĀ keyĀ points;Ā acquiringĀ aĀ thirdĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ targetĀ expression,Ā andĀ coordinatesĀ ofĀ theĀ thirdĀ setĀ ofĀ keyĀ points;Ā acquiringĀ theĀ firstĀ setĀ ofĀ keyĀ pointsĀ basedĀ onĀ theĀ secondĀ setĀ ofĀ keyĀ pointsĀ andĀ theĀ thirdĀ setĀ ofĀ keyĀ points,Ā andĀ acquiringĀ theĀ valuesĀ forĀ adjustingĀ theĀ coordinatesĀ ofĀ theĀ firstĀ setĀ ofĀ keyĀ pointsĀ basedĀ onĀ theĀ coordinatesĀ ofĀ theĀ secondĀ setĀ ofĀ keyĀ pointsĀ andĀ theĀ coordinatesĀ ofĀ theĀ thirdĀ setĀ ofĀ keyĀ points;Ā theĀ secondĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ currentĀ expression,Ā andĀ theĀ coordinatesĀ ofĀ theĀ secondĀ setĀ ofĀ keyĀ pointsĀ beingĀ preset;Ā andĀ theĀ thirdĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ targetĀ expression,Ā andĀ theĀ coordinatesĀ ofĀ theĀ thirdĀ setĀ ofĀ keyĀ pointsĀ beingĀ preset.
InĀ anĀ embodiment,Ā acquiring,Ā basedĀ onĀ theĀ currentĀ expressionĀ andĀ theĀ targetĀ expression,Ā valuesĀ forĀ adjustingĀ coordinatesĀ ofĀ aĀ firstĀ setĀ ofĀ keyĀ pointsĀ onĀ theĀ currently-reconstructedĀ 3DĀ faceĀ model,Ā includes:Ā querying,Ā basedĀ onĀ theĀ currentĀ expressionĀ andĀ theĀ targetĀ expression,Ā aĀ presetĀ databaseĀ toĀ acquireĀ theĀ valuesĀ forĀ adjustingĀ theĀ coordinatesĀ ofĀ theĀ firstĀ setĀ ofĀ keyĀ points,Ā theĀ presetĀ databaseĀ comprisesĀ aĀ pluralityĀ ofĀ expressions,Ā andĀ valuesĀ forĀ adjustingĀ coordinatesĀ ofĀ aĀ correspondingĀ setĀ ofĀ keyĀ pointsĀ fromĀ oneĀ ofĀ theĀ pluralityĀ ofĀ expressionsĀ toĀ anotherĀ ofĀ theĀ pluralityĀ ofĀ expressions.
InĀ anĀ embodiment,Ā theĀ methodĀ furtherĀ includes:Ā displayingĀ oneĀ orĀ moreĀ adjustableĀ widgets,Ā eachĀ ofĀ theĀ oneĀ orĀ moreĀ adjustableĀ widgetsĀ beingĀ configuredĀ toĀ adjustĀ aĀ correspondingĀ keyĀ portionĀ onĀ theĀ 3DĀ faceĀ modelĀ representingĀ theĀ targetĀ expressionĀ withinĀ aĀ presetĀ range;Ā acquiringĀ anĀ operationĀ onĀ oneĀ ofĀ theĀ oneĀ orĀ moreĀ adjustableĀ widgets;Ā acquiringĀ anĀ adjustmentĀ angleĀ basedĀ onĀ theĀ operation;Ā andĀ adjustingĀ theĀ correspondingĀ keyĀ portionĀ basedĀ onĀ theĀ adjustmentĀ angle.
InĀ anĀ embodiment,Ā theĀ methodĀ furtherĀ includes:Ā acquiringĀ aĀ presetĀ stateĀ featureĀ ofĀ aĀ keyĀ portionĀ correspondingĀ toĀ theĀ targetĀ expression;Ā andĀ adjustingĀ aĀ stateĀ ofĀ theĀ keyĀ portionĀ inĀ theĀ 3DĀ faceĀ modelĀ representingĀ theĀ targetĀ expressionĀ basedĀ onĀ theĀ presetĀ stateĀ feature.
EmbodimentsĀ ofĀ aĀ secondĀ aspectĀ ofĀ theĀ presentĀ disclosureĀ providesĀ aĀ deviceĀ forĀ replacingĀ anĀ expressionĀ including:Ā aĀ firstĀ acquiringĀ moduleĀ configuredĀ to,Ā acquireĀ aĀ currentĀ expressionĀ representedĀ byĀ aĀ currently-reconstructedĀ three-dimensionalĀ (3D)Ā faceĀ model;Ā aĀ secondĀ acquiringĀ moduleĀ configuredĀ to,Ā acquireĀ aĀ targetĀ expressionĀ fromĀ aĀ user;Ā aĀ thirdĀ acquiringĀ moduleĀ configuredĀ to,Ā acquire,Ā basedĀ onĀ theĀ currentĀ expressionĀ andĀ theĀ targetĀ expression,Ā valuesĀ forĀ adjustingĀ coordinatesĀ ofĀ aĀ firstĀ setĀ ofĀ keyĀ pointsĀ onĀ theĀ currently-reconstructedĀ 3DĀ faceĀ model;Ā andĀ aĀ generatingĀ moduleĀ configuredĀ to,Ā adjustĀ theĀ coordinatesĀ ofĀ theĀ firstĀ setĀ ofĀ keyĀ pointsĀ onĀ theĀ currently-reconstructedĀ 3DĀ faceĀ modelĀ basedĀ onĀ theĀ values,Ā toĀ generateĀ aĀ 3DĀ faceĀ modelĀ representingĀ theĀ targetĀ expression.
InĀ anĀ embodiment,Ā theĀ secondĀ acquiringĀ moduleĀ isĀ configuredĀ to:Ā displayĀ aĀ listĀ ofĀ expressionsĀ toĀ theĀ user;Ā andĀ acquireĀ anĀ expressionĀ selectedĀ byĀ theĀ userĀ onĀ theĀ listĀ asĀ theĀ targetĀ expression.
InĀ anĀ embodiment,Ā theĀ secondĀ acquiringĀ moduleĀ isĀ configuredĀ to:Ā captureĀ anĀ expressionĀ ofĀ theĀ userĀ byĀ aĀ camera;Ā matchĀ theĀ expressionĀ capturedĀ byĀ theĀ cameraĀ withĀ aĀ presetĀ listĀ ofĀ expressions;Ā andĀ inĀ responseĀ toĀ theĀ expressionĀ capturedĀ byĀ theĀ cameraĀ matchingĀ oneĀ expressionĀ inĀ theĀ presetĀ list,Ā useĀ theĀ expressionĀ capturedĀ byĀ theĀ cameraĀ asĀ theĀ targetĀ expression.
InĀ anĀ embodiment,Ā theĀ thirdĀ acquiringĀ moduleĀ isĀ configuredĀ to:Ā acquireĀ aĀ secondĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ currentĀ expression,Ā andĀ coordinatesĀ ofĀ theĀ secondĀ setĀ ofĀ keyĀ points;Ā acquireĀ aĀ thirdĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ targetĀ expression,Ā andĀ coordinatesĀ ofĀ theĀ thirdĀ setĀ ofĀ keyĀ points;Ā acquireĀ theĀ firstĀ setĀ ofĀ keyĀ pointsĀ basedĀ onĀ theĀ secondĀ setĀ ofĀ keyĀ pointsĀ andĀ theĀ thirdĀ setĀ ofĀ keyĀ points,Ā andĀ acquireĀ theĀ valuesĀ forĀ adjustingĀ theĀ coordinatesĀ ofĀ theĀ firstĀ setĀ ofĀ keyĀ pointsĀ basedĀ onĀ theĀ coordinatesĀ ofĀ theĀ secondĀ setĀ ofĀ keyĀ pointsĀ andĀ theĀ coordinatesĀ ofĀ theĀ thirdĀ setĀ ofĀ keyĀ points;Ā theĀ secondĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ currentĀ expression,Ā andĀ theĀ coordinatesĀ ofĀ theĀ secondĀ setĀ ofĀ keyĀ pointsĀ beingĀ preset;Ā andĀ theĀ thirdĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ targetĀ expression,Ā andĀ theĀ coordinatesĀ ofĀ theĀ thirdĀ setĀ ofĀ keyĀ pointsĀ beingĀ preset.
InĀ anĀ embodiment,Ā theĀ thirdĀ acquiringĀ moduleĀ isĀ configuredĀ to:Ā query,Ā basedĀ onĀ theĀ currentĀ expressionĀ andĀ theĀ targetĀ expression,Ā aĀ presetĀ databaseĀ toĀ acquireĀ theĀ valuesĀ forĀ adjustingĀ theĀ coordinatesĀ ofĀ theĀ firstĀ setĀ ofĀ keyĀ points,Ā theĀ presetĀ databaseĀ comprisesĀ aĀ pluralityĀ ofĀ expressions,Ā andĀ valuesĀ forĀ adjustingĀ coordinatesĀ ofĀ aĀ correspondingĀ setĀ ofĀ keyĀ pointsĀ fromĀ oneĀ ofĀ theĀ pluralityĀ ofĀ expressionsĀ toĀ anotherĀ ofĀ theĀ pluralityĀ ofĀ expressions.
In an embodiment, the device further includes: a first adjusting module configured to: display one or more adjustable widgets, each of the one or more adjustable widgets being configured to adjust a corresponding key portion on the 3D face model representing the target expression within a preset range; acquire an operation on one of the one or more adjustable widgets; acquire an adjustment angle based on the operation; and adjust the corresponding key portion based on the adjustment angle.
In an embodiment, the device further includes: a second adjusting module configured to: acquire a preset state feature of a key portion corresponding to the target expression; and adjust a state of the key portion in the 3D face model representing the target expression based on the preset state feature.
Embodiments of a third aspect of the present disclosure provide a computer readable storage medium having a computer program stored thereon. When the computer program is executed by a processor, the method for replacing the expression as described in the above embodiments of the first aspect is implemented.
Additional aspects and advantages of the present disclosure will be given in part in the following descriptions, become apparent in part from the following descriptions, or be learned from the practice of the embodiments of the present disclosure.
These and other aspects and/or advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following descriptions made with reference to the accompanying drawings, in which:
FIG. 1 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
FIG. 2 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
FIG. 3 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
FIG. 4 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
FIG. 5 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
FIG. 6 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
FIG. 7 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
FIG. 8 is a schematic diagram of a scenario of a method for replacing an expression according to an embodiment of the present disclosure.
FIG. 9 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure.
FIG. 10 is a block diagram of a device for replacing an expression according to embodiments of the present disclosure.
FIG. 11 is a block diagram of a device for replacing an expression according to embodiments of the present disclosure.
FIG. 12 is a block diagram of a device for replacing an expression according to embodiments of the present disclosure.
FIG. 13 is a block diagram of a device for replacing an expression according to embodiments of the present disclosure.
FIG. 14 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
FIG. 15 is a block diagram of an image processing circuit in an embodiment.
FIG. 16 is a schematic diagram of an image processing circuit as one possible implementation.
Embodiments of the present disclosure will be described in detail and examples of embodiments are illustrated in the drawings. The same or similar elements and the elements having the same or similar functions are denoted by like reference numerals throughout the descriptions. Embodiments described herein with reference to the drawings are explanatory, serve to explain the present disclosure, and are not construed to limit embodiments of the present disclosure.
In view of the problem of low modeling efficiency due to reconstructing the 3D face model again when the user is not satisfied with the reconstructed 3D face model in the related art, the present disclosure provides a method, a device, and a computer readable storage medium for replacing an expression. In the present disclosure, a difference between a satisfactory 3D face model and a currently-reconstructed 3D face model may be found, and the currently-reconstructed 3D face model may be adjusted based on the difference to acquire the satisfactory 3D face model, thereby improving the modeling efficiency of the 3D face model.
A method, a device, and a computer readable storage medium for replacing an expression provided in embodiments of the present disclosure will be described below with reference to the drawings. The method provided in the embodiments of the present disclosure may be applicable to computer devices having an apparatus for acquiring depth information and color information. The apparatus for acquiring depth information and color information (i.e., 2D information) may be a dual-camera system or the like. The computer devices may be hardware devices having various operating systems, touch screens, and/or display screens, such as mobile phones, tablet computers, personal digital assistants, wearable devices, or the like.
FIG. 1 is a flowchart of a method for replacing an expression according to embodiments of the present disclosure. As illustrated in FIG. 1, the method includes acts in the following blocks.
At block 101, a current expression represented by a currently-reconstructed 3D face model is acquired.
The 3D face model may actually be represented by points and a triangular mesh formed by connecting the points. Points corresponding to the portions that mainly influence the shape of the entire 3D face model (i.e., key portions) may be referred to as key points. An expression may be represented by a set of key points, and different sets of key points may distinguish different expressions. The set of key points may correspond to the key portions (such as the mouth and eyes) that differentiate the expression.
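As an illustration only (not part of the disclosure), the representation described above may be sketched in Python as follows; the class name, field layout, and portion names are assumptions.

```python
import numpy as np

# Hypothetical sketch of the representation described above: a 3D face model is
# a set of points plus a triangular mesh connecting them, and each key portion
# (mouth, eyes, ...) names the indices of its key points.
class FaceModel:
    def __init__(self, vertices, triangles, key_portions):
        self.vertices = np.asarray(vertices, dtype=float)  # (N, 3) point coordinates
        self.triangles = np.asarray(triangles, dtype=int)  # (M, 3) vertex indices per triangle
        self.key_portions = key_portions                   # e.g. {"mouth": [3, 7, 9], ...}

    def key_points(self, portion):
        # Coordinates of the key points belonging to one key portion.
        return self.vertices[self.key_portions[portion]]
```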
Based on different scenarios, manners of acquiring the current expression may be different. As a possible manner, as illustrated in FIG. 2, the current expression is acquired through the following acts.
At block 201, the currently-reconstructed 3D face model is scanned to acquire a plurality of key portions and key points of the plurality of key portions.
At block 202, a feature vector of the plurality of key portions is extracted based on coordinates of the key points of the plurality of key portions, and distances among the plurality of key portions.
At block 203, the feature vector is analyzed by a pre-trained neural network model to determine the current expression.
In this example, the neural network model is trained in advance based on a large amount of experimental data. Inputs of the neural network model may be the feature vector corresponding to the coordinates of the key points of the plurality of key portions and the distances among the plurality of key portions. An output of the neural network model is the expression.
In detail, the key points of the plurality of key portions in the currently-reconstructed 3D face model are determined, for example, by image recognition technologies. The feature vector of the plurality of key portions is extracted based on the coordinates of these key points and the distances among the key portions, and is then analyzed by the pre-trained neural network model to determine the current expression of the 3D face model.
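A minimal sketch of the feature extraction in blocks 202 and 203, reusing the hypothetical FaceModel above; the portion names and feature layout are assumptions, and the pre-trained network is treated as an opaque classifier.

```python
import numpy as np

def expression_feature_vector(model, portions=("mouth", "left_eye", "right_eye")):
    # Concatenate the key-point coordinates of each key portion with the
    # pairwise distances among the portions (measured between their centroids).
    coords = [model.key_points(p).ravel() for p in portions]
    centers = np.stack([model.key_points(p).mean(axis=0) for p in portions])
    dists = [np.linalg.norm(centers[i] - centers[j])
             for i in range(len(portions)) for j in range(i + 1, len(portions))]
    return np.concatenate(coords + [np.asarray(dists)])

# Block 203 then reduces to a single call on the pre-trained model, e.g.:
# current_expression = classifier.predict([expression_feature_vector(model)])[0]
```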
At block 102, a target expression from a user is acquired.
In an embodiment, as illustrated in FIG. 3, the act in block 102 may include acts at block 1021 and block 1022. At block 1021, a list of expressions is displayed to the user. At block 1022, an expression selected by the user on the list is acquired as the target expression.
In another embodiment, as illustrated in FIG. 4, the act in block 102 may include acts at block 1023, block 1024, and block 1025. At block 1023, an expression of the user is captured by a camera. For example, the camera may capture 2D face images of the same scene, and acquire the expression of the user from the 2D face images through image processing technologies. At block 1024, the expression captured by the camera is matched with a preset list of expressions. At block 1025, in response to the expression captured by the camera matching one expression in the preset list, the expression captured by the camera is used as the target expression.
The list of expressions may be preset in advance, and may basically cover all requirements of the user for changing expressions. For example, the list may include four commonly-used expressions such as happy, sad, distressed, and mourning. Also, the list may further include other expressions, which is not limited herein.
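For illustration, the matching at blocks 1024 and 1025 can be as simple as a membership test on the recognized label; the list contents and the fallback behavior below are assumptions.

```python
PRESET_EXPRESSIONS = {"happy", "sad", "distressed", "mourning"}  # example list from above

def target_from_camera(captured_label):
    # Block 1025: accept the camera-recognized expression as the target only
    # if it matches an entry of the preset list; otherwise report no match.
    return captured_label if captured_label in PRESET_EXPRESSIONS else None
```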
At block 103, based on the current expression and the target expression, values for adjusting coordinates of a first set of key points on the currently-reconstructed 3D face model are acquired.
At block 104, the coordinates of the first set of key points on the currently-reconstructed 3D face model are adjusted based on the values, to generate a 3D face model representing the target expression.
As analyzed above, the 3D face model is actually reconstructed from points, so changing and reconstructing the face model is realized by changing the coordinate values of the points. Therefore, in the embodiments of the present disclosure, in order to reconstruct the 3D face model corresponding to the target expression, it is necessary to acquire the values for adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model, so as to correspondingly adjust the coordinates of the first set of key points based on the values, to generate the 3D face model representing the target expression.
Manners of acquiring the values for adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model may vary with scenarios. Examples are as follows.
First Way
In this embodiment, as illustrated in FIG. 5, the values for adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model may be acquired by the following acts.
At block 1031, a second set of key points of the current expression, and coordinates of the second set of key points, are acquired.
The second set of key points of the current expression, and the coordinates of the second set of key points, may be preset in advance.
At block 1032, a third set of key points of the target expression, and coordinates of the third set of key points, are acquired.
The third set of key points of the target expression, and the coordinates of the third set of key points, may be preset in advance.
At block 1033, the first set of key points is acquired based on the second set of key points and the third set of key points.
At block 1034, the values for adjusting the coordinates of the first set of key points are acquired based on the coordinates of the second set of key points and the coordinates of the third set of key points.
A preset database may include a plurality of expressions acquired in advance. For each of the plurality of expressions, the corresponding set of key points and the coordinates of the corresponding set of key points may also be acquired in advance and stored in the database. Once the current expression and the target expression are acquired, the database may be searched to acquire the second set of key points of the current expression and their coordinates, and the third set of key points of the target expression and their coordinates. The first set of key points may then be acquired; it may include the key points in the second set and in the third set. Finally, the values for adjusting the coordinates of the first set of key points may be acquired based on the coordinates of the second set of key points and the coordinates of the third set of key points.
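The first way may be sketched as follows, assuming key points are identified by shared vertex indices and the adjustment value of each key point is the coordinate difference from the current expression to the target expression; this is one plausible reading, not the only possible implementation.

```python
import numpy as np

def adjustment_values(second_coords, third_coords):
    # Blocks 1033-1034: the first set is the union of the preset key points of
    # the current expression (second set) and the target expression (third set);
    # each adjustment value is the difference target - current. Inputs are
    # dicts mapping key-point index -> (x, y, z).
    first_set = set(second_coords) | set(third_coords)
    values = {}
    for k in first_set:
        cur = np.asarray(second_coords.get(k, third_coords[k]), dtype=float)
        tgt = np.asarray(third_coords.get(k, cur), dtype=float)
        values[k] = tgt - cur  # zero where only one of the two sets defines the point
    return values

def apply_adjustment(model, values):
    # Block 104: shift each key point of the first set by its adjustment value.
    for k, delta in values.items():
        model.vertices[k] += delta
```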
Second Way
In this embodiment, the preset database may include a plurality of expressions acquired in advance. For each of the plurality of expressions, the corresponding set of key points and the coordinates of the corresponding set of key points may also be acquired in advance and stored in the database. The values for adjusting the coordinates of the corresponding set of key points from one of the plurality of expressions to another of the plurality of expressions may be calculated in advance and stored in the preset database. As illustrated in FIG. 6, the values for adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model may be acquired by the following act.
At block 1035, based on the current expression and the target expression, the preset database is queried to acquire the values for adjusting the coordinates of the first set of key points.
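A sketch of the second way, with the database reduced to a lookup table keyed by the ordered pair of expressions; the entries shown are illustrative values only.

```python
# Hypothetical preset database: adjustment values precomputed offline for each
# ordered pair of expressions; each entry maps key-point index -> (dx, dy, dz).
PRESET_DB = {
    ("sad", "happy"): {12: (0.0, 1.5, 0.2), 47: (0.3, 1.1, 0.0)},  # illustrative numbers
    # ... one entry per ordered pair of expressions stored in the database
}

def query_adjustment_values(current_expression, target_expression):
    # Block 1035: a single query replaces the per-key-point computation above.
    return PRESET_DB[(current_expression, target_expression)]
```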
In actual execution, a generated model may sometimes fail to meet the personalized requirements of the user. Therefore, in an embodiment of the present disclosure, after generating the 3D face model representing the target expression, the user is also provided with an adjustable space.
As illustrated in FIG. 7, after the act at block 104, the method further includes acts in the following blocks.
At block 401, one or more adjustable widgets are displayed to the user. Each of the one or more adjustable widgets is configured to adjust a corresponding key portion on the 3D face model representing the target expression within a preset range.
In order to switch the current expression to the target expression, the adjustment strength of the adjustable widget may be limited to the preset range, so as to ensure that the adjusted expression still belongs to the same category as the target expression. For example, the adjusted expression and the target expression are both sad. The preset ranges may differ for different 3D face models.
In detail, the adjustable widget corresponding to each key portion is generated. The implementation manners of the adjustable widget may be different in different scenarios. As a possible implementation manner, the adjustable widget may be an adjustable progress bar. As illustrated in FIG. 8, an adjustable progress bar corresponding to each key portion is generated, and the user's movement operation on the adjustable progress bar corresponding to the key portion may be detected. Different progress locations of the progress bar may correspond to an adjustment angle of the key portion in a certain direction; for example, for eyes, different progress locations of the progress bar may correspond to different degrees of curvature of the eyes.
At block 402, an operation on one of the one or more adjustable widgets from the user is acquired.
At block 403, an adjustment angle is acquired based on the operation.
At block 404, the corresponding key portion is adjusted based on the adjustment angle.
Also, an identifier of the adjustable widget may be acquired. The identifier may be a name of a key portion or the like.
In detail, the operation from the user on the adjustable widget is acquired, and the identifier and the adjustment angle are acquired based on the operation. For example, when the adjustable widget is a progress bar, the key portion corresponding to the progress bar dragged by the user and the corresponding drag distance (the drag distance corresponds to the adjustment angle) are acquired, and then the key portion corresponding to the identifier is adjusted based on the adjustment angle.
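The mapping from the widget operation to the adjustment angle may be sketched as a linear interpolation over the preset range; the normalization of the drag distance and the example range are assumptions.

```python
def angle_from_progress(progress, preset_range):
    # Block 403: map the widget's progress (normalized drag distance in [0, 1])
    # linearly onto the preset adjustment range of its key portion.
    lo, hi = preset_range  # e.g. (-10.0, 10.0) degrees of eye curvature
    progress = min(max(progress, 0.0), 1.0)  # clamp so the expression category is kept
    return lo + progress * (hi - lo)

# Block 404 would then apply the angle to the key portion named by the widget's
# identifier, e.g. adjust("eyes", angle_from_progress(0.7, (-10.0, 10.0))).
```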
Therefore, in this embodiment, after the 3D face model representing the target expression is reconstructed, fine adjustment according to the user's personal preference may be performed while preserving the target expression, which satisfies the personalized requirements of the user.
In an embodiment of the present disclosure, in order to further make the 3D face model representing the target expression satisfy the personalized requirements of the user, the 3D face model representing the target expression may also be adjusted based on the personal preference of the user.
As illustrated in FIG. 9, after the above act at block 104, the method further includes acts in the following blocks.
At block 501, a preset state feature of a key portion corresponding to the target expression is acquired.
The key portion corresponding to the target expression may be a relevant portion adapted to the target expression. For example, when the target expression is a smile, the corresponding key portions may include the mouth, the cheeks, and the eyebrows. Preset state features corresponding to key portions may include the states of the relevant portions. For example, when the key portion is the eyes, the corresponding state feature may indicate whether the eyes are open or closed. The state features of the key portion may be preset by the user based on personal preferences.
At block 502, a state of the key portion in the 3D face model representing the target expression is adjusted based on the preset state feature.
In detail, the state of the corresponding key portion in the 3D face model is adjusted based on the state feature, so that the adjusted 3D face model is more in line with the user's personal preference. It should be noted that adjusting the state of the key portion in this embodiment renders emotional effects consistent with the target expression, rather than changing the emotion expressed by the target expression.
For example, when the target expression is a big laugh, the acquired state feature of the key portion corresponding to the target expression is a sunken dimple on the cheek; thus the cheek position and the dimple position in the 3D face model are adjusted based on the state feature to create a dimple effect, making the happy mood rendered by the laughter more prominent.
For example, when the target expression is a smile, the state feature of the key portion corresponding to the target expression is a slight narrowing of the right eye; thus the position of the right eye in the 3D face model is adjusted based on the state feature to create a blinking effect, making the happy mood rendered by the smile more prominent.
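The two examples above suggest a table keyed by target expression and key portion; the sketch below assumes such a table, with illustrative entries and a caller-supplied adjustment routine.

```python
# Hypothetical table of preset state features per (target expression, key portion);
# the entries mirror the two examples above and are illustrative only.
STATE_FEATURES = {
    ("big laugh", "cheek"): {"dimple_depth": -0.8},  # sink the dimple region
    ("smile", "right_eye"): {"openness": 0.6},       # slightly narrowed right eye
}

def apply_state_feature(model, target_expression, portion, adjust_fn):
    # Blocks 501-502: fetch the preset state feature, if any, and let the
    # caller-supplied routine move the portion's key points accordingly.
    feature = STATE_FEATURES.get((target_expression, portion))
    if feature is not None:
        adjust_fn(model, portion, feature)
```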
In conclusion, with the method for replacing the expression provided in the embodiments of the present disclosure, the current expression represented by the currently-reconstructed 3D face model is acquired; the target expression from the user is acquired; based on the current expression and the target expression, the values for adjusting the coordinates of the first set of key points on the currently-reconstructed 3D face model are acquired; and the coordinates of the first set of key points on the currently-reconstructed 3D face model are adjusted based on the values, to generate the 3D face model representing the target expression. Therefore, the speed of modeling the 3D face model based on expression replacement is improved.
In order to implement the above embodiments, the present disclosure also provides a device for replacing an expression. FIG. 10 is a block diagram of a device for replacing an expression according to an embodiment of the present disclosure. As illustrated in FIG. 10, the device includes a first acquiring module 10, a second acquiring module 20, a third acquiring module 30, and a generating module 40.
The first acquiring module 10 is configured to acquire a current expression represented by a currently-reconstructed 3D face model.
The second acquiring module 20 is configured to acquire a target expression from a user.
The third acquiring module 30 is configured to acquire, based on the current expression and the target expression, values for adjusting coordinates of a first set of key points on the currently-reconstructed 3D face model.
The generating module 40 is configured to adjust the coordinates of the first set of key points on the currently-reconstructed 3D face model based on the values, to generate a 3D face model representing the target expression.
In an embodiment of the present disclosure, as illustrated in FIG. 11, on the basis of FIG. 10, the first acquiring module 10 includes a first determining unit 11, an extracting unit 12, and a second determining unit 13.
The first determining unit 11 is configured to determine key points of a plurality of key portions in the currently-reconstructed 3D face model.
The extracting unit 12 is configured to extract a feature vector of the plurality of key portions, based on coordinate information of the key points of the plurality of key portions and distances among the plurality of key portions.
The second determining unit 13 is configured to determine the current expression of the 3D face model by analyzing the feature vector of the plurality of key portions through a pre-trained neural network.
In an embodiment of the present disclosure, the second acquiring module 20 is configured to display a list of expressions to the user and acquire an expression selected by the user on the list as the target expression.
In an embodiment of the present disclosure, the second acquiring module 20 is configured to: capture an expression of the user by a camera; match the expression captured by the camera with a preset list of expressions; and in response to the expression captured by the camera matching one expression in the preset list, use the expression captured by the camera as the target expression.
In an embodiment of the present disclosure, the third acquiring module 30 is configured to: acquire a second set of key points of the current expression, and coordinates of the second set of key points; acquire a third set of key points of the target expression, and coordinates of the third set of key points; acquire the first set of key points based on the second set of key points and the third set of key points, and acquire the values for adjusting the coordinates of the first set of key points based on the coordinates of the second set of key points and the coordinates of the third set of key points. The second set of key points of the current expression, and the coordinates of the second set of key points, are preset. The third set of key points of the target expression, and the coordinates of the third set of key points, are preset.
In an embodiment of the present disclosure, the third acquiring module 30 is configured to: query, based on the current expression and the target expression, a preset database to acquire the values for adjusting the coordinates of the first set of key points. The preset database includes a plurality of expressions, and values for adjusting coordinates of a corresponding set of key points from one of the plurality of expressions to another of the plurality of expressions.
In an embodiment of the present disclosure, as illustrated in FIG. 12, the device further includes a first adjusting module 50. The first adjusting module 50 is configured to: display one or more adjustable widgets to the user, each of the one or more adjustable widgets being configured to adjust a corresponding key portion on the 3D face model representing the target expression within a preset range; acquire an operation on one of the one or more adjustable widgets from the user; acquire an adjustment angle based on the operation; and adjust the corresponding key portion based on the adjustment angle.
In an embodiment of the present disclosure, as illustrated in FIG. 13, the device further includes a second adjusting module 60. The second adjusting module 60 is configured to acquire a preset state feature of a key portion corresponding to the target expression, and adjust a state of the key portion in the 3D face model representing the target expression based on the preset state feature.
It should be noted that the above explanation of the method for replacing the expression is also applicable to the device for replacing the expression, and details are not described herein again.
In order to implement the above embodiments, the present disclosure further provides a computer readable storage medium having a computer program stored thereon. The computer program is executed by a processor of a mobile terminal to implement the method for replacing the expression as described in the above embodiments.
In order to implement the above embodiments, the present disclosure also provides an electronic device.
FIG. 14 is a schematic diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 that are coupled by a system bus 210. The memory 230 of the electronic device 200 stores an operating system and computer readable instructions. The computer readable instructions are executable by the processor 220 to implement the method for replacing the expression provided in the embodiments of the present disclosure. The processor 220 is configured to provide computing and control capabilities to support the operation of the entire electronic device 200. The display 240 of the electronic device 200 may be a liquid crystal display, an electronic ink display, or the like. The input device 250 may be a touch layer covering the display 240, or may be a button, a trackball or a touchpad disposed on the housing of the electronic device 200, or an external keyboard, trackpad or mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, smart glasses).
It will be understood by those skilled in the art that the structure illustrated in FIG. 14 is only a schematic diagram of a portion of the structure related to the solution of the present disclosure, and does not constitute a limitation of the electronic device 200 to which the solution of the present disclosure is applied. The specific electronic device 200 may include more or fewer components than illustrated in the figures, combine some components, or have a different component arrangement.
Based on the above embodiments, in the embodiments of the present disclosure, the currently-reconstructed 3D face model may be implemented by an image processing circuit in the terminal device. In order to make the process of reconstructing the 3D face model clear for those skilled in the art, the following description refers to a possible image processing circuit.
As illustrated in FIG. 15, the image processing circuit includes an image unit 310, a depth information unit 320, and a processing unit 330.
The image unit 310 is configured to output one or more current original 2D face images of the user.
The depth information unit 320 is configured to output depth information corresponding to the one or more original 2D face images.
The processing unit 330 is electrically coupled to the image unit 310 and the depth information unit 320, and configured to perform 3D reconstruction based on the depth information and the one or more original 2D face images to acquire a 3D face model that displays the current expression.
In the embodiments of the present disclosure, the image unit 310 may include an image sensor 311 and an image signal processing (ISP) processor 312 that are electrically coupled with each other.
The image sensor 311 is configured to output original image data.
The ISP processor 312 is configured to output the original 2D face image according to the original image data.
In the embodiments of the present disclosure, the original image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the original image data to capture image statistics information that may be used to determine one or more control parameters of the image sensor 311; the output includes face images in YUV (Luma and Chroma) format or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units. The image sensor 311 may acquire light intensity and wavelength information captured by each photosensitive unit and provide a set of original image data that may be processed by the ISP processor 312. After processing the original image data, the ISP processor 312 acquires a face image in the YUV format or the RGB format and sends it to the processing unit 330.
The ISP processor 312 may process the original image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the original image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit depth precision.
As a possible implementation manner, the depth information unit 320 includes a structured-light sensor 321 and a depth map generation chip 322 that are electrically coupled with each other.
The structured-light sensor 321 is configured to generate an infrared speckle pattern.
The depth map generation chip 322 is configured to output depth information corresponding to the original 2D face image based on the infrared speckle pattern.
In the embodiments of the present disclosure, the structured-light sensor 321 projects speckle structured light onto the subject, acquires the structured light reflected by the subject, and acquires an infrared speckle pattern by imaging the reflected structured light. The structured-light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light according to the infrared speckle pattern, and then determines the depth of the subject to acquire a depth map. The depth map indicates the depth of each pixel in the infrared speckle pattern. The depth map generation chip 322 transmits the depth map to the processing unit 330.
As a possible implementation, the processing unit 330 includes a CPU (Central Processing Unit) 331 and a GPU (Graphics Processing Unit) 332 that are electrically coupled with each other.
The CPU 331 is configured to align the face image and the depth map according to the calibration data, and output the 3D face model according to the aligned face image and depth map.
The GPU 332 is configured to adjust coordinate information of reference key points according to coordinate differences to generate the 3D face model corresponding to the target expression.
In the embodiments of the present disclosure, the CPU 331 acquires a face image from the ISP processor 312 and a depth map from the depth map generation chip 322, and aligns the face image with the depth map by combining the previously acquired calibration data, to determine the depth information corresponding to each pixel in the face image. Further, the CPU 331 performs 3D reconstruction based on the depth information and the face image to acquire a 3D face model.
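As an illustration of the reconstruction step, once the face image and depth map are aligned, each pixel can be back-projected to a 3D point; the pinhole intrinsics (fx, fy, cx, cy) below stand in for the calibration data, which the disclosure does not detail.

```python
import numpy as np

def back_project(depth_map, fx, fy, cx, cy):
    # With image and depth aligned, a pixel (u, v) at depth z maps to the 3D
    # point ((u - cx) * z / fx, (v - cy) * z / fy, z) under a pinhole model.
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack([x, y, z])  # (h, w, 3) points feeding the 3D face model
```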
The CPU 331 transmits the 3D face model to the GPU 332 so that the GPU 332 executes the method for replacing the expression as described in the above embodiments.
Further, the image processing circuit may further include a first display unit 341.
The first display unit 341 is electrically coupled to the processing unit 330 for displaying the adjustable widget of the key portion to be adjusted.
Further, the image processing circuit may further include a second display unit 342.
The second display unit 342 is electrically coupled to the processing unit 330 for displaying the adjusted 3D face model.
Alternatively, the image processing circuit may further include an encoder 350 and a memory 360.
In the embodiments of the present disclosure, the adjusted image processed by the GPU 332 may also be encoded by the encoder 350 and stored in the memory 360. The encoder 350 may be implemented by a coprocessor.
In an embodiment, there may be a plurality of memories 360, or the memory 360 may be divided into a plurality of storage spaces. The image data processed by the GPU 332 may be stored in a dedicated memory or a dedicated storage space, which may include a DMA (Direct Memory Access) feature. The memory 360 may be configured to implement one or more frame buffers.
The above process will be described in detail below with reference to FIG. 16.
It should be noted that FIG. 16 is a schematic diagram of an image processing circuit as a possible implementation. For ease of explanation, only the various aspects related to the embodiments of the present disclosure are illustrated.
As illustrated in FIG. 16, the original image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the original image data to capture image statistics information that may be used to determine one or more control parameters of the image sensor 311; the output includes face images in YUV format or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units. The image sensor 311 may acquire light intensity and wavelength information captured by each photosensitive unit and provide a set of original image data that may be processed by the ISP processor 312. The ISP processor 312 processes the original image data to acquire a face image in the YUV format or the RGB format, and transmits the face image to the CPU 331.
The ISP processor 312 may process the original image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the original image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit depth precision.
As illustrated in FIG. 16, the structured-light sensor 321 projects speckle structured light toward the subject, acquires the structured light reflected by the subject, and acquires an infrared speckle pattern from the reflected structured light. The structured-light sensor 321 transmits the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light according to the infrared speckle pattern, and then determines the depth of the subject to acquire a depth map. The depth map indicates the depth of each pixel in the infrared speckle pattern. The depth map generation chip 322 transmits the depth map to the CPU 331.
The CPU 331 acquires a face image from the ISP processor 312 and a depth map from the depth map generation chip 322, and aligns the face image with the depth map by combining the previously acquired calibration data, to determine the depth information corresponding to each pixel in the face image. Further, the CPU 331 performs 3D reconstruction based on the depth information and the face image to acquire a 3D face model.
The CPU 331 transmits the 3D face model to the GPU 332, so that the GPU 332 performs the method described in the above embodiments based on the 3D face model to generate the 3D face model corresponding to the target expression. The 3D face model corresponding to the target expression, processed by the GPU 332, may be displayed on the display (including the first display unit 341 and the second display unit 342 described above), and/or encoded by the encoder 350 and stored in the memory 360. The encoder 350 may be implemented by a coprocessor.
In an embodiment, there may be a plurality of memories 360, or the memory 360 may be divided into a plurality of storage spaces. The image data processed by the GPU 332 may be stored in a dedicated memory or a dedicated storage space, which may include a DMA (Direct Memory Access) feature. The memory 360 may be configured to implement one or more frame buffers.
For example, the following acts may be implemented by using the processor 220 in FIG. 14 or the image processing circuits (the CPU 331 and the GPU 332) in FIG. 16.
The CPU 331 acquires a 2D face image and depth information corresponding to the face image, and performs 3D reconstruction according to the depth information and the face image to acquire a 3D face model. The GPU 332 acquires the user's adjusting parameters for the 3D face model, and adjusts key points on the original 3D face model based on the adjusting parameters, to acquire a 3D face model corresponding to the target expression.
In the description of the present disclosure, reference throughout this specification to "an embodiment," "some embodiments," "an example," "a specific example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of these phrases in various places throughout this specification are not necessarily referring to the same embodiment or example of the present disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples. Without a contradiction, the different embodiments or examples and the features of the different embodiments or examples may be combined by those skilled in the art.
In addition, terms such as "first" and "second" are used herein for purposes of description and are not intended to indicate or imply relative importance or significance. Furthermore, the feature defined with "first" and "second" may comprise one or more of this feature, distinctly or implicitly. In the description of the present disclosure, "a plurality of" means two or more than two, unless specified otherwise.
The flow chart or any process or method described herein in other manners may represent a module, segment, or portion of code that comprises one or more executable instructions to implement the specified logic function(s) or steps of the process. Although the flow chart shows a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of two or more boxes may be scrambled relative to the order shown.
The logic and/or steps described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function, may be specifically achieved in any computer readable medium to be used by the instruction execution system, device or equipment (such as a system based on computers, a system comprising processors, or other systems capable of acquiring the instruction from the instruction execution system, device or equipment and executing the instruction), or to be used in combination with the instruction execution system, device or equipment. As to this specification, "the computer readable medium" may be any device adapted to include, store, communicate, propagate or transfer programs to be used by or in combination with the instruction execution system, device or equipment. More specific examples of the computer readable medium comprise but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device), a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber device, and a portable compact disk read-only memory (CDROM). In addition, the computer readable medium may even be a paper or other appropriate medium capable of printing programs thereon; this is because, for example, the paper or other appropriate medium may be optically scanned and then edited, decrypted or processed with other appropriate methods when necessary to acquire the programs in an electric manner, and then the programs may be stored in the computer memories.
It should be understood that each part of the present disclosure may be realized by hardware, software, firmware, or their combination. In the above embodiments, a plurality of steps or methods may be realized by software or firmware stored in the memory and executed by an appropriate instruction execution system. For example, if realized by hardware, as in another embodiment, the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combinational logic gate circuit, a programmable gate array (PGA), a field programmable gate array (FPGA), etc.
Those skilled in the art shall understand that all or portions of the steps in the above exemplifying method of the present disclosure may be achieved by commanding the related hardware with programs. The programs may be stored in a computer readable storage medium, and the programs comprise one or a combination of the steps in the method embodiments of the present disclosure when run on a computer.
In addition, each function cell of the embodiments of the present disclosure may be integrated in a processing module, or these cells may exist physically separately, or two or more cells may be integrated in a processing module. The integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of a software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
The storage medium mentioned above may be read-only memories, magnetic disks, CDs, etc. Although explanatory embodiments have been shown and described, it would be appreciated by those skilled in the art that the above embodiments cannot be construed to limit the present disclosure, and changes, alternatives, and modifications may be made in the embodiments without departing from the principles and scope of the present disclosure.
Claims (15)
- AĀ methodĀ forĀ replacingĀ anĀ expression,Ā comprising:acquiringĀ (101)Ā aĀ currentĀ expressionĀ representedĀ byĀ aĀ currently-reconstructedĀ three-dimensionalĀ (3D)Ā faceĀ model;acquiringĀ (102)Ā aĀ targetĀ expressionĀ fromĀ aĀ user;acquiringĀ (103)Ā ,Ā basedĀ onĀ theĀ currentĀ expressionĀ andĀ theĀ targetĀ expression,Ā valuesĀ forĀ adjustingĀ coordinatesĀ ofĀ aĀ firstĀ setĀ ofĀ keyĀ pointsĀ onĀ theĀ currently-reconstructedĀ 3DĀ faceĀ model;Ā andadjustingĀ (104)Ā theĀ coordinatesĀ ofĀ theĀ firstĀ setĀ ofĀ keyĀ pointsĀ onĀ theĀ currently-reconstructedĀ 3DĀ faceĀ modelĀ basedĀ onĀ theĀ values,Ā toĀ generateĀ aĀ 3DĀ faceĀ modelĀ representingĀ theĀ targetĀ expression.
- TheĀ methodĀ ofĀ claimĀ 1,Ā whereinĀ acquiringĀ (102)Ā theĀ targetĀ expressionĀ fromĀ theĀ user,Ā comprises:displayingĀ (1021)Ā aĀ listĀ ofĀ expressionsĀ toĀ theĀ user;Ā andacquiringĀ (1022)Ā anĀ expressionĀ selectedĀ byĀ theĀ userĀ onĀ theĀ listĀ asĀ theĀ targetĀ expression.
- TheĀ methodĀ ofĀ claimĀ 1,Ā whereinĀ acquiringĀ (102)Ā theĀ targetĀ expressionĀ fromĀ theĀ user,Ā comprises:capturingĀ (1023)Ā anĀ expressionĀ ofĀ theĀ userĀ byĀ aĀ camera;matchingĀ (1024)Ā theĀ expressionĀ capturedĀ byĀ theĀ cameraĀ withĀ aĀ presetĀ listĀ ofĀ expressions;Ā andinĀ responseĀ toĀ theĀ expressionĀ capturedĀ byĀ theĀ cameraĀ matchingĀ oneĀ expressionĀ inĀ theĀ presetĀ list,Ā usingĀ (1025)Ā theĀ expressionĀ capturedĀ byĀ theĀ cameraĀ asĀ theĀ targetĀ expression.
- TheĀ methodĀ ofĀ anyĀ oneĀ ofĀ claimsĀ 1Ā toĀ 3,Ā whereinĀ acquiringĀ (103)Ā ,Ā basedĀ onĀ theĀ currentĀ expressionĀ andĀ theĀ targetĀ expression,Ā valuesĀ forĀ adjustingĀ coordinatesĀ ofĀ aĀ firstĀ setĀ ofĀ keyĀ pointsĀ onĀ theĀ currently-reconstructedĀ 3DĀ faceĀ model,Ā comprises:acquiringĀ (1031)Ā aĀ secondĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ currentĀ expression,Ā andĀ coordinatesĀ ofĀ theĀ secondĀ setĀ ofĀ keyĀ points;acquiringĀ (1032)Ā aĀ thirdĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ targetĀ expression,Ā andĀ coordinatesĀ ofĀ theĀ thirdĀ setĀ ofĀ keyĀ points;acquiringĀ (1033)Ā theĀ firstĀ setĀ ofĀ keyĀ pointsĀ basedĀ onĀ theĀ secondĀ setĀ ofĀ keyĀ pointsĀ andĀ theĀ thirdĀ setĀ ofĀ keyĀ points,Ā andĀ acquiringĀ (1034)Ā theĀ valuesĀ forĀ adjustingĀ theĀ coordinatesĀ ofĀ theĀ firstĀ setĀ ofĀ keyĀ pointsĀ basedĀ onĀ theĀ coordinatesĀ ofĀ theĀ secondĀ setĀ ofĀ keyĀ pointsĀ andĀ theĀ coordinatesĀ ofĀ theĀ thirdĀ setĀ ofĀ keyĀ points;theĀ secondĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ currentĀ expression,Ā andĀ theĀ coordinatesĀ ofĀ theĀ secondĀ setĀ ofĀ keyĀ pointsĀ beingĀ preset;Ā andtheĀ thirdĀ setĀ ofĀ keyĀ pointsĀ ofĀ theĀ targetĀ expression,Ā andĀ theĀ coordinatesĀ ofĀ theĀ thirdĀ setĀ ofĀ keyĀ pointsĀ beingĀ preset.
- TheĀ methodĀ ofĀ anyĀ oneĀ ofĀ claimsĀ 1Ā toĀ 4,Ā whereinĀ acquiringĀ (103)Ā ,Ā basedĀ onĀ theĀ currentĀ expressionĀ andĀ theĀ targetĀ expression,Ā valuesĀ forĀ adjustingĀ coordinatesĀ ofĀ aĀ firstĀ setĀ ofĀ keyĀ pointsĀ onĀ theĀ currently-reconstructedĀ 3DĀ faceĀ model,Ā comprises:queryingĀ (1035)Ā ,Ā basedĀ onĀ theĀ currentĀ expressionĀ andĀ theĀ targetĀ expression,Ā aĀ presetĀ databaseĀ toĀ acquireĀ theĀ valuesĀ forĀ adjustingĀ theĀ coordinatesĀ ofĀ theĀ firstĀ setĀ ofĀ keyĀ points,Ā theĀ presetĀ databaseĀ comprisesĀ aĀ pluralityĀ ofĀ expressions,Ā andĀ valuesĀ forĀ adjustingĀ coordinatesĀ ofĀ aĀ correspondingĀ setĀ ofĀ keyĀ pointsĀ fromĀ oneĀ ofĀ theĀ pluralityĀ ofĀ expressionsĀ toĀ anotherĀ ofĀ theĀ pluralityĀ ofĀ expressions.
- The method of any one of claims 1 to 5, further comprising: displaying (401) one or more adjustable widgets, each of the one or more adjustable widgets being configured to adjust a corresponding key portion on the 3D face model representing the target expression within a preset range; acquiring (402) an operation on one of the one or more adjustable widgets; acquiring (403) an adjustment angle based on the operation; and adjusting (404) the corresponding key portion based on the adjustment angle.
- The method of any one of claims 1 to 6, further comprising: acquiring (501) a preset state feature of a key portion corresponding to the target expression; and adjusting (502) a state of the key portion in the 3D face model representing the target expression based on the preset state feature.
- A device for replacing an expression, comprising: a first acquiring module (10) configured to acquire a current expression represented by a currently-reconstructed three-dimensional (3D) face model; a second acquiring module (20) configured to acquire a target expression from a user; a third acquiring module (30) configured to acquire, based on the current expression and the target expression, values for adjusting coordinates of a first set of key points on the currently-reconstructed 3D face model; and a generating module (40) configured to adjust the coordinates of the first set of key points on the currently-reconstructed 3D face model based on the values, to generate a 3D face model representing the target expression.
- The device of claim 8, wherein the second acquiring module (20) is configured to: display a list of expressions to the user; and acquire an expression selected by the user from the list as the target expression.
- The device of claim 8, wherein the second acquiring module (20) is configured to: capture an expression of the user with a camera; match the expression captured by the camera against a preset list of expressions; and in response to the expression captured by the camera matching one expression in the preset list, use the expression captured by the camera as the target expression.
- The device of any one of claims 8 to 10, wherein the third acquiring module (30) is configured to: acquire a second set of key points of the current expression, and coordinates of the second set of key points; acquire a third set of key points of the target expression, and coordinates of the third set of key points; and acquire the first set of key points based on the second set of key points and the third set of key points, and acquire the values for adjusting the coordinates of the first set of key points based on the coordinates of the second set of key points and the coordinates of the third set of key points; the second set of key points of the current expression, and the coordinates of the second set of key points, being preset; and the third set of key points of the target expression, and the coordinates of the third set of key points, being preset.
- The device of any one of claims 8 to 11, wherein the third acquiring module (30) is configured to: query, based on the current expression and the target expression, a preset database to acquire the values for adjusting the coordinates of the first set of key points, the preset database comprising a plurality of expressions, and values for adjusting coordinates of a corresponding set of key points from one of the plurality of expressions to another of the plurality of expressions.
- The device of any one of claims 8 to 12, further comprising: a first adjusting module (50) configured to: display one or more adjustable widgets, each of the one or more adjustable widgets being configured to adjust a corresponding key portion on the 3D face model representing the target expression within a preset range; acquire an operation on one of the one or more adjustable widgets; acquire an adjustment angle based on the operation; and adjust the corresponding key portion based on the adjustment angle.
- The device of any one of claims 8 to 12, further comprising: a second adjusting module (60) configured to: acquire a preset state feature of a key portion corresponding to the target expression; and adjust a state of the key portion in the 3D face model representing the target expression based on the preset state feature.
- A computer readable storage medium having a computer program stored thereon, wherein the computer program causes an electronic device to carry out the method of any one of claims 1 to 7.
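The sketches below are editorial illustrations of the steps recited in the claims above; every identifier, value, and API in them is an assumption, not something the patent specifies. First, the widget flow of claims 6 and 13: an operation on an adjustable widget is mapped to an adjustment angle within a preset range, and the corresponding key portion is moved by that angle. A minimal Python sketch, assuming a slider-style widget and a z-axis rotation as the stand-in adjustment:

```python
import numpy as np

PRESET_RANGE = (-30.0, 30.0)  # assumed per-widget limit, in degrees

def angle_from_operation(slider_value, lo=PRESET_RANGE[0], hi=PRESET_RANGE[1]):
    """Map a widget operation (slider position in [0, 1]) to an
    adjustment angle clamped to the preset range."""
    return lo + float(np.clip(slider_value, 0.0, 1.0)) * (hi - lo)

def adjust_key_portion(points, pivot, angle_deg):
    """Rotate the key portion's points about the z-axis through `pivot`
    by the adjustment angle (a simple stand-in for the model update)."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    return (points - pivot) @ rot.T + pivot

# Example: the user drags the slider to 75% of its travel -> 15 degrees.
angle = angle_from_operation(0.75)
corner = np.array([[2.0, -3.0, 0.5]])
adjusted = adjust_key_portion(corner, pivot=np.zeros(3), angle_deg=angle)
```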
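Claims 7 and 14 read as a lookup of preset state features per target expression, applied to the matching key portions. A sketch with an invented feature table and a stand-in model class:

```python
class FaceModel:
    """Minimal stand-in for the claimed 3D face model (illustrative only)."""
    def __init__(self):
        self.portion_states = {}

    def set_portion_state(self, portion, state):
        self.portion_states[portion] = state

# Hypothetical preset state features per target expression.
PRESET_STATE_FEATURES = {
    "smile":    {"mouth": "corners_raised", "eyes": "slightly_closed"},
    "surprise": {"mouth": "open",           "eyes": "wide_open"},
}

def adjust_states(model, target_expression):
    """Apply each preset state feature of the target expression to the
    corresponding key portion of the model."""
    for portion, state in PRESET_STATE_FEATURES.get(target_expression, {}).items():
        model.set_portion_state(portion, state)
    return model

model = adjust_states(FaceModel(), "smile")
```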
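The device of claim 8 decomposes into four modules that compose as a pipeline. The wiring below is illustrative; each callable is a stub for logic the claims leave open:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ExpressionReplacingDevice:
    """Illustrative wiring of the claimed modules (10) to (40)."""
    first_acquiring: Callable   # (10) current expression of the 3D face model
    second_acquiring: Callable  # (20) target expression from the user
    third_acquiring: Callable   # (30) values for adjusting key point coordinates
    generating: Callable        # (40) applies the values, returns the new model

    def replace_expression(self, model):
        current = self.first_acquiring(model)
        target = self.second_acquiring()
        values = self.third_acquiring(current, target)
        return self.generating(model, values)

# Example with trivial stubs standing in for the real logic:
device = ExpressionReplacingDevice(
    first_acquiring=lambda m: "neutral",
    second_acquiring=lambda: "smile",
    third_acquiring=lambda cur, tgt: {"mouth_left": (0.4, 0.4, 0.1)},
    generating=lambda m, v: {**m, "adjusted_with": v},
)
new_model = device.replace_expression({"expression": "neutral"})
```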
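For claim 10, the captured expression becomes the target expression only when it matches an entry in the preset list. A sketch in which the classifier is a placeholder, not the patent's method:

```python
PRESET_EXPRESSIONS = {"neutral", "smile", "surprise", "anger"}

def classify_expression(frame):
    """Stand-in for a real facial-expression classifier run on the camera
    frame; it returns a fixed label purely for illustration."""
    return "smile"

def acquire_target_expression(frame):
    """Use the captured expression as the target expression only when it
    matches one expression in the preset list; otherwise return None."""
    captured = classify_expression(frame)
    return captured if captured in PRESET_EXPRESSIONS else None

target = acquire_target_expression(frame=None)  # -> "smile"
```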
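Claim 11's adjustment values can be pictured as per-key-point offsets between the preset target coordinates and the preset current coordinates. A sketch with hypothetical landmark names and positions:

```python
import numpy as np

# Hypothetical preset key point coordinates (landmark name -> (x, y, z)).
current_kps = {   # second set: key points of the current expression
    "mouth_left":  np.array([-2.0, -3.0, 0.5]),
    "mouth_right": np.array([ 2.0, -3.0, 0.5]),
}
target_kps = {    # third set: key points of the target expression
    "mouth_left":  np.array([-2.4, -2.6, 0.6]),
    "mouth_right": np.array([ 2.4, -2.6, 0.6]),
}

def adjustment_values(current, target):
    """First set = key points common to both presets; the value for each
    is the offset from its current coordinates to its target coordinates."""
    first_set = current.keys() & target.keys()
    return {name: target[name] - current[name] for name in first_set}

def apply_adjustments(model_kps, values):
    """Shift the first set of key points on the current model by the
    offsets, yielding key points for the target-expression model."""
    return {name: pos + values.get(name, 0) for name, pos in model_kps.items()}

values = adjustment_values(current_kps, target_kps)
new_kps = apply_adjustments(current_kps, values)  # mouth corners move up and out
```

Under this reading, the first set of key points falls out as the intersection of the second and third sets, and each adjustment value is simply the target coordinate minus the current coordinate.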
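Claim 12's preset database can be modelled as a mapping from (current expression, target expression) pairs to per-key-point adjustment values. A sketch with invented entries:

```python
import numpy as np

# Hypothetical preset database: (current, target) -> values for adjusting
# the coordinates of the corresponding set of key points.
ADJUSTMENT_DB = {
    ("neutral", "smile"): {
        "mouth_left":  np.array([-0.4, 0.4, 0.1]),
        "mouth_right": np.array([ 0.4, 0.4, 0.1]),
    },
}

def query_adjustment_values(current, target):
    """Query the preset database for the values that move the key points
    from the current expression to the target expression."""
    try:
        return ADJUSTMENT_DB[(current, target)]
    except KeyError:
        raise KeyError(f"no preset adjustment from {current!r} to {target!r}") from None

values = query_adjustment_values("neutral", "smile")
```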
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810934577.8A CN109147024A (en) | 2018-08-16 | 2018-08-16 | Expression replacing method and device based on three-dimensional model |
CN201810934577.8 | 2018-08-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020035001A1 (en) | 2020-02-20 |
Family
ID=64789719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/100601 WO2020035001A1 (en) | 2018-08-16 | 2019-08-14 | Methods and devices for replacing expression, and computer readable storage media |
Country Status (4)
Country | Link |
---|---|
US (1) | US11069151B2 (en) |
EP (1) | EP3621038A3 (en) |
CN (1) | CN109147024A (en) |
WO (1) | WO2020035001A1 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109147024A (en) | 2018-08-16 | 2019-01-04 | Oppo广äøē§»åØéäæ”ęéå ¬åø | Expression replacing method and device based on three-dimensional model |
CN111447379B (en) * | 2019-01-17 | 2022-08-23 | ē¾åŗ¦åØēŗæē½ē»ęęÆļ¼åäŗ¬ļ¼ęéå ¬åø | Method and device for generating information |
CN111695383A (en) * | 2019-03-14 | 2020-09-22 | åäŗ¬å„čē§ęęéå ¬åø | Image processing method and device for expression and electronic equipment |
CN110458121B (en) * | 2019-08-15 | 2023-03-14 | äŗ¬äøę¹ē§ęéå¢č”份ęéå ¬åø | Method and device for generating face image |
CN110609282B (en) * | 2019-09-19 | 2020-11-17 | äøå½äŗŗę°č§£ę¾ååäŗē§å¦é¢å½é²ē§ęåę°ē ē©¶é¢ | Terahertz aperture coding three-dimensional imaging method and device based on back projection |
CN110941332A (en) * | 2019-11-06 | 2020-03-31 | åäŗ¬ē¾åŗ¦ē½č®Æē§ęęéå ¬åø | Expression driving method and device, electronic equipment and storage medium |
CN113570634B (en) * | 2020-04-28 | 2024-07-12 | å京达佳äŗčäæ”ęÆęęÆęéå ¬åø | Object three-dimensional reconstruction method, device, electronic equipment and storage medium |
CN112927328B (en) * | 2020-12-28 | 2023-09-01 | åäŗ¬ē¾åŗ¦ē½č®Æē§ęęéå ¬åø | Expression migration method and device, electronic equipment and storage medium |
CN113674385B (en) * | 2021-08-05 | 2023-07-18 | åäŗ¬å„čŗäøēŗŖē§ęęéå ¬åø | Virtual expression generation method and device, electronic equipment and storage medium |
CN115376141A (en) * | 2021-10-14 | 2022-11-22 | äøęµ·ååøęŗč½ē§ęęéå ¬åø | A Printed Character Recognition System |
CN114299563A (en) * | 2021-11-16 | 2022-04-08 | äøęļ¼äøå½ļ¼å导ä½ęéå ¬åø | Method and device for predicting key point coordinates of face image |
KR20230072851A (en) * | 2021-11-18 | 2023-05-25 | ģ”°ģ ėķźµģ°ķķė „ėØ | A landmark-based ensemble network creation method for facial expression classification and a facial expression classification method using the generated ensemble network |
CN113870401B (en) * | 2021-12-06 | 2022-02-25 | č ¾č®Æē§ęļ¼ę·±å³ļ¼ęéå ¬åø | Expression generation method, device, equipment, medium and computer program product |
CN115816480A (en) * | 2022-08-01 | 2023-03-21 | åäŗ¬åÆä»„ē§ęęéå ¬åø | Robot and emotion expression method thereof |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090153552A1 (en) * | 2007-11-20 | 2009-06-18 | Big Stage Entertainment, Inc. | Systems and methods for generating individualized 3d head models |
US8207971B1 (en) * | 2008-12-31 | 2012-06-26 | Lucasfilm Entertainment Company Ltd. | Controlling animated character expressions |
US8970656B2 (en) * | 2012-12-20 | 2015-03-03 | Verizon Patent And Licensing Inc. | Static and dynamic video calling avatars |
US9378576B2 (en) * | 2013-06-07 | 2016-06-28 | Faceshift Ag | Online modeling for real-time facial animation |
US20160070952A1 (en) * | 2014-09-05 | 2016-03-10 | Samsung Electronics Co., Ltd. | Method and apparatus for facial recognition |
CN104616347A (en) * | 2015-01-05 | 2015-05-13 | ę赢俔ęÆē§ęļ¼äøęµ·ļ¼ęéå ¬åø | Expression migration method, electronic equipment and system |
CN108229239B (en) * | 2016-12-09 | 2020-07-10 | ę¦ę±ęé±¼ē½ē»ē§ęęéå ¬åø | Image processing method and device |
CN106920277A (en) * | 2017-03-01 | 2017-07-04 | ęµę±ē„é ē§ęęéå ¬åø | Simulation beauty and shaping effect visualizes the method and system of online scope of freedom carving |
CN107123160A (en) * | 2017-05-02 | 2017-09-01 | ęé½éē²ä¼åē§ęęéč“£ä»»å ¬åø | Simulation lift face system, method and mobile terminal based on three-dimensional image |
- 2018
- 2018-08-16: CN CN201810934577.8A patent/CN109147024A/en active Pending
- 2019
- 2019-08-13: EP EP19191508.1A patent/EP3621038A3/en not_active Withdrawn
- 2019-08-14: WO PCT/CN2019/100601 patent/WO2020035001A1/en active Application Filing
- 2019-08-15: US US16/542,025 patent/US11069151B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180204052A1 (en) * | 2015-08-28 | 2018-07-19 | Baidu Online Network Technology (Beijing) Co., Ltd. | A method and apparatus for human face image processing |
CN108230252A (en) * | 2017-01-24 | 2018-06-29 | ę·±å³åøå걤ē§ęęéå ¬åø | Image processing method, device and electronic equipment |
CN107479801A (en) * | 2017-07-31 | 2017-12-15 | 广äøę¬§ēē§»åØéäæ”ęéå ¬åø | Displaying method of terminal, device and terminal based on user's expression |
CN107481317A (en) * | 2017-07-31 | 2017-12-15 | 广äøę¬§ēē§»åØéäæ”ęéå ¬åø | Face adjustment method and device for 3D model of human face |
CN108022206A (en) * | 2017-11-30 | 2018-05-11 | 广äøę¬§ēē§»åØéäæ”ęéå ¬åø | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN109147024A (en) * | 2018-08-16 | 2019-01-04 | Oppo广äøē§»åØéäæ”ęéå ¬åø | Expression replacing method and device based on three-dimensional model |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113673287A (en) * | 2020-05-15 | 2021-11-19 | ę·±å³åøå é“ē§ęęéå ¬åø | Depth reconstruction method, system, device and medium based on target time node |
CN113673287B (en) * | 2020-05-15 | 2023-09-12 | ę·±å³åøå é“ē§ęęéå ¬åø | Depth reconstruction method, system, equipment and medium based on target time node |
Also Published As
Publication number | Publication date |
---|---|
US11069151B2 (en) | 2021-07-20 |
CN109147024A (en) | 2019-01-04 |
US20200058171A1 (en) | 2020-02-20 |
EP3621038A3 (en) | 2020-06-24 |
EP3621038A2 (en) | 2020-03-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11069151B2 (en) | Methods and devices for replacing expression, and computer readable storage media | |
EP3614340B1 (en) | Methods and devices for acquiring 3d face, and computer readable storage media | |
Kartynnik et al. | Real-time facial surface geometry from monocular video on mobile GPUs | |
CN111415422B (en) | Virtual object adjustment method and device, storage medium and augmented reality equipment | |
CN110675487B (en) | Three-dimensional face modeling and recognition method and device based on multi-angle two-dimensional face | |
CN109359538B (en) | Training method of convolutional neural network, gesture recognition method, device and equipment | |
US11900557B2 (en) | Three-dimensional face model generation method and apparatus, device, and medium | |
US11403819B2 (en) | Three-dimensional model processing method, electronic device, and readable storage medium | |
WO2021213067A1 (en) | Object display method and apparatus, device and storage medium | |
CN119206817A (en) | 3D face capture and modification using image and time tracking neural networks | |
CN113822977A (en) | Image rendering method, device, equipment and storage medium | |
CN115699114A (en) | Image augmentation for analysis | |
CN114972632A (en) | Image processing method and device based on nerve radiation field | |
CN109147037B (en) | Special effect processing method, device and electronic device based on 3D model | |
CN109242760B (en) | Face image processing method and device and electronic equipment | |
CN109102559A (en) | three-dimensional model processing method and device | |
US20220277512A1 (en) | Generation apparatus, generation method, system, and storage medium | |
TW202109359A (en) | Face image processing method, image equipment and storage medium | |
CN111047509B (en) | Image special effect processing method, device and terminal | |
CN108701355B (en) | GPU optimization and online single Gaussian-based skin likelihood estimation | |
CN113221847A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN113570052B (en) | Image processing method, device, electronic equipment and storage medium | |
WO2024021742A9 (en) | Fixation point estimation method and related device | |
WO2024077791A1 (en) | Video generation method and apparatus, device, and computer readable storage medium | |
US20190371039A1 (en) | Method and smart terminal for switching expression of smart terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19849141; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19849141; Country of ref document: EP; Kind code of ref document: A1 |