TWI780919B - Method and apparatus for processing face image, electronic device and storage medium - Google Patents


Info

Publication number
TWI780919B
TWI780919B (application TW110135050A)
Authority
TW
Taiwan
Prior art keywords
face
point cloud
dense point cloud data
target
Prior art date
Application number
TW110135050A
Other languages
Chinese (zh)
Other versions
TW202221638A (en)
Inventor
陳祖凱
徐勝偉
朴鏡潭
王權
錢晨
Original Assignee
大陸商上海商湯智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 大陸商上海商湯智能科技有限公司
Publication of TW202221638A
Application granted granted Critical
Publication of TWI780919B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2021Shape modification

Abstract

The present disclosure provides a method and apparatus for processing a face image, an electronic device, and a storage medium. The method includes: obtaining initial dense point cloud data of a target face, and generating an initial virtual face image of the target face based on the initial dense point cloud data; determining a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; adjusting the deformation coefficient in response to an adjustment operation on the initial virtual face image to obtain a target deformation coefficient; and generating a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.

Description

Face image processing method and apparatus, electronic device, and storage medium

The present disclosure relates to the technical field of face reconstruction, and in particular to a face image processing method and apparatus, an electronic device, and a storage medium.

In the three-dimensional world, the shape of an object can be represented by a 3D point cloud; for example, the shape of a face can be represented by a dense face point cloud. However, a dense point cloud representing a face consists of tens of thousands of points, so when the corresponding face shape needs to be adjusted, the points must be adjusted one by one, which is a cumbersome and inefficient process.

Embodiments of the present disclosure provide at least one solution for processing a face image.

In a first aspect, embodiments of the present disclosure provide a method for processing a face image, including: acquiring initial dense point cloud data of a target face, and generating an initial virtual face image of the target face based on the initial dense point cloud data; determining a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; adjusting the deformation coefficient in response to an adjustment operation on the initial virtual face image to obtain a target deformation coefficient; and generating a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.

In the embodiments of the present disclosure, the deformation coefficient used to adjust the virtual face image of the target face is determined from the dense point cloud data, which establishes a correspondence between the dense point cloud data and the deformation coefficient. The virtual face image can then be adjusted directly via the deformation coefficient; compared with adjusting the points in the dense point cloud data one by one, this improves adjustment efficiency and quickly generates the adjusted target virtual face image.

On the other hand, since the deformation coefficient is determined from the dense point cloud data, the dense point cloud can be adjusted directly via the deformation coefficient when adjusting the initial virtual face image. The adjustment is thus precise down to the individual points of the dense point cloud that constitutes the virtual face image, improving adjustment accuracy while also improving efficiency.

In a possible implementation, the deformation coefficient includes at least one of: at least one bone coefficient and at least one blend shape coefficient. Each bone coefficient is used to adjust the initial pose of a bone formed by the first dense point cloud associated with that bone coefficient; each blend shape coefficient is used to adjust the initial position corresponding to the second dense point cloud associated with that blend shape coefficient.

In the embodiments of the present disclosure, the positions of different types of dense point clouds can be adjusted separately based on the bone coefficients and/or blend shape coefficients among the deformation coefficients, enabling precise adjustment of the dense point cloud.
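As a rough illustration of how a blend shape coefficient could move its associated dense points, the sketch below assumes the common linear blend model (base points plus weighted per-shape deltas); the function name, data layout, and example values are illustrative assumptions, not the patent's actual scheme.

```python
def apply_blend_shapes(base_points, blend_deltas, coefficients):
    """Offset each point of a dense cloud by a weighted sum of per-shape deltas.

    base_points: list of (x, y, z) tuples for the standard dense point cloud.
    blend_deltas: one delta cloud per blend shape, same length as base_points.
    coefficients: one weight per blend shape.
    """
    result = []
    for i, (x, y, z) in enumerate(base_points):
        dx = dy = dz = 0.0
        for coef, deltas in zip(coefficients, blend_deltas):
            ddx, ddy, ddz = deltas[i]
            dx += coef * ddx
            dy += coef * ddy
            dz += coef * ddz
        result.append((x + dx, y + dy, z + dz))
    return result

# Two points and one blend shape that lifts the second point along y;
# a coefficient of 0.5 applies half of that shape's displacement.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
deltas = [[(0.0, 0.0, 0.0), (0.0, 2.0, 0.0)]]
adjusted = apply_blend_shapes(base, deltas, [0.5])
```

In a full system each bone coefficient would analogously transform the pose of the points bound to its bone, but a pose transform (rotation/translation) is omitted here for brevity.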

In a possible implementation, determining the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data corresponding to the standard virtual face image includes: adjusting the standard dense point cloud data based on a current deformation coefficient to obtain adjusted dense point cloud data, where the initial value of the current deformation coefficient is preset; determining a first loss value based on the adjusted dense point cloud data and the initial dense point cloud data; adjusting the current deformation coefficient based on the first loss value and a preset constraint range of the deformation coefficient; and returning, based on the adjusted current deformation coefficient, to the step of adjusting the standard dense point cloud data, until the adjustment of the current deformation coefficient meets a first adjustment cut-off condition, at which point the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data is obtained from the current deformation coefficient.
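The iterative fitting loop described above can be sketched as follows. This is a minimal illustration only: it assumes a toy one-parameter deformation model (uniform scaling of the standard cloud), a squared-error first loss, finite-difference gradient descent, and a fixed step budget as the cut-off condition; none of these specifics are prescribed by the patent.

```python
def fit_deformation_coefficient(standard, target, bounds, steps=200, lr=0.05):
    """Find a scalar coefficient c so that standard * (1 + c) approximates target.

    standard, target: flat lists of coordinate values.
    bounds: (lo, hi) preset constraint range for the coefficient.
    """
    def loss(c):
        # First loss value: squared error between the adjusted standard
        # cloud and the initial cloud of the target face.
        return sum((s * (1.0 + c) - t) ** 2 for s, t in zip(standard, target))

    c, eps = 0.0, 1e-4  # the initial current coefficient is preset (here: 0)
    for _ in range(steps):  # cut-off condition: fixed iteration budget
        grad = (loss(c + eps) - loss(c - eps)) / (2 * eps)
        c -= lr * grad
        c = min(max(c, bounds[0]), bounds[1])  # keep within the preset range
    return c

standard = [1.0, 2.0, -1.0]
target = [1.2, 2.4, -1.2]   # exactly the standard cloud scaled by 1.2
coef = fit_deformation_coefficient(standard, target, bounds=(-0.5, 0.5))
```

The clamping step is what prevents the coefficient from drifting into values that could not represent a normal face, mirroring the constraint range described above.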

In the embodiments of the present disclosure, the deformation coefficient is determined by adjusting multiple points in the standard dense point cloud data, so the obtained coefficient can represent how much the initial dense point cloud of the target face deviates from the standard dense point cloud. When adjusting the initial virtual face image of the target face, the associated points in the dense point cloud data can therefore be adjusted based on the deformation coefficient, improving adjustment accuracy.

On the other hand, in determining the deformation coefficient, the current coefficient is optimized using a loss value computed, after all dense points have been adjusted, from the adjusted dense point cloud data and the initial dense point cloud data of the target face; this fully accounts for the relationship between the deformation coefficient and the overall dense point cloud, improving optimization efficiency. In addition, constraining the adjustment with a preset range for the deformation coefficient effectively prevents the coefficient from degenerating into values that cannot represent a normal target face.

In a possible implementation, adjusting the deformation coefficient in response to an adjustment operation on the initial virtual face image to obtain the target deformation coefficient includes: in response to the adjustment operation on the initial virtual face image, determining a target adjustment position on the initial virtual face image and an adjustment amplitude for the target adjustment position; and adjusting the deformation coefficient associated with the target adjustment position according to the adjustment amplitude to obtain the target deformation coefficient.

In the embodiments of the present disclosure, the target deformation coefficient can be determined according to the adjustment operation, so that the adjusted target virtual face image can later be generated based on that coefficient; in this way, the deformation coefficient can be adjusted to suit individual user needs.

In a possible implementation, generating the target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data includes: adjusting the standard dense point cloud data based on the target deformation coefficient to obtain target dense point cloud data; and generating the target virtual face image based on the target dense point cloud data.

In the embodiments of the present disclosure, once the target deformation coefficient is determined, the standard dense point cloud data can be adjusted directly according to it to obtain the target dense point cloud data, from which the target virtual face image corresponding to the target face can be generated quickly.

In a possible implementation, generating the target virtual face image based on the target dense point cloud data includes: determining a virtual face model corresponding to the target dense point cloud data; and generating the target virtual face image based on pre-selected face attribute features and the virtual face model.

In the embodiments of the present disclosure, when adjusting the initial virtual face image, the adjustment can also be personalized in combination with face attribute features selected by the user, so that the target virtual face image better matches the user's actual needs.

In a possible implementation, acquiring the initial dense point cloud data of the target face and generating the initial virtual face image of the target face based on the initial dense point cloud data includes: acquiring a first face image corresponding to the target face, and dense point cloud data respectively corresponding to multiple second face images of a preset style; determining initial dense point cloud data of the target face in the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style; and generating an initial virtual face image of the target face in the preset style based on the initial dense point cloud data of the target face in the preset style.

In the embodiments of the present disclosure, the dense point cloud data of the first face image in the preset style can be determined from the dense point cloud data corresponding, in the preset style, to multiple pre-stored base images, so that a virtual face image of the target face in the preset style can be presented quickly.

In a possible implementation, determining the initial dense point cloud data of the target face in the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style includes: extracting face parameter values of the first face image, and face parameter values respectively corresponding to the multiple second face images of the preset style, where the face parameter values include parameter values characterizing face shape and parameter values characterizing facial expression; and determining the initial dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style.

In the embodiments of the present disclosure, it is proposed that the dense point cloud data of the first face image in the preset style be determined by combining the face parameter values of the first face image and of the multiple second face images of the preset style; because representing a face with parameter values requires far fewer values, the dense point cloud data of the target face in the preset style can be determined much more quickly.

In one implementation, determining the initial dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style includes: determining linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style; and determining the initial dense point cloud data of the target face in the preset style according to the dense point cloud data respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients.

In the embodiments of the present disclosure, linear fitting coefficients expressing the relationship between the first face image and the multiple second face images can be obtained quickly from a small number of face parameter values; the dense point cloud data of the multiple second face images in the preset style can then be combined according to these coefficients, quickly yielding the dense point cloud data of the target face in the preset style.

In a possible implementation, determining the linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style includes: obtaining current linear fitting coefficients, where the initial current linear fitting coefficients are preset; predicting current face parameter values of the first face image based on the current linear fitting coefficients and the face parameter values respectively corresponding to the multiple second face images of the preset style; determining a second loss value based on the predicted current face parameter values and the face parameter values of the first face image; adjusting the current linear fitting coefficients based on the second loss value and a preset constraint range corresponding to the linear fitting coefficients; and returning, based on the adjusted current linear fitting coefficients, to the step of predicting the current face parameter values, until the adjustment of the current linear fitting coefficients meets a second adjustment cut-off condition, at which point the linear fitting coefficients between the first face image and the multiple second face images of the preset style are obtained from the current linear fitting coefficients.
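The coefficient-fitting loop above can be sketched as follows, under illustrative assumptions: a squared-error second loss, per-coefficient gradient descent, a fixed iteration budget as the cut-off condition, and a [0, 1] constraint range. The patent does not fix any of these choices.

```python
def fit_linear_coefficients(first_params, second_params, steps=500, lr=0.1):
    """Find weights w so that sum_i w[i] * second_params[i] approximates first_params.

    first_params: face parameter values of the first face image.
    second_params: one parameter vector per second (base) face image.
    """
    n = len(second_params)
    w = [1.0 / n] * n  # preset initial current linear fitting coefficients

    for _ in range(steps):  # cut-off condition: fixed iteration budget
        # Predict the current face parameter values of the first image.
        pred = [sum(w[i] * second_params[i][k] for i in range(n))
                for k in range(len(first_params))]
        # Second loss is squared error; update each coefficient by its gradient.
        for i in range(n):
            grad = sum(2 * (pred[k] - first_params[k]) * second_params[i][k]
                       for k in range(len(first_params)))
            w[i] -= lr * grad
            w[i] = min(max(w[i], 0.0), 1.0)  # preset constraint range
    return w

second = [[1.0, 0.0], [0.0, 1.0]]  # parameter vectors of two base images
first = [0.3, 0.7]                 # face parameter values of the first image
weights = fit_linear_coefficients(first, second)
```

With orthogonal base vectors as above, the fitted weights directly recover the target's mixture of the two base faces.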

In the embodiments of the present disclosure, while adjusting the linear fitting coefficients between the first face image and the multiple second face images of the preset style, the coefficients can be adjusted repeatedly according to the second loss value and/or the number of adjustments, improving their accuracy; in addition, constraining the adjustment with a preset range for the linear fitting coefficients yields coefficients that determine the dense point cloud data corresponding to the target face more reasonably.

In a possible implementation, the dense point cloud data includes the coordinate values of each point in the dense point cloud, and determining the initial dense point cloud data of the target face in the preset style according to the dense point cloud data respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients includes: determining the coordinate values of the corresponding points of average dense point cloud data based on the coordinate values of the points in the dense point clouds respectively corresponding to the multiple second face images of the preset style; determining coordinate difference values respectively corresponding to the multiple second face images of the preset style based on the coordinate values of the points in their respective dense point cloud data and the coordinate values of the corresponding points in the average dense point cloud data; determining coordinate difference values corresponding to the first face image based on the coordinate difference values respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients; and determining the initial dense point cloud data of the target face in the preset style based on the coordinate difference values corresponding to the first face image and the coordinate values of the corresponding points in the average dense point cloud data.
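The mean-plus-difference reconstruction above can be sketched as follows. The function name, tuple layout, and the tiny one-point clouds are illustrative assumptions used only to make each step (mean cloud, per-image differences, weighted difference for the first image, add back the mean) concrete.

```python
def reconstruct_cloud(second_clouds, fit_coefs):
    """Rebuild a target cloud as the mean cloud plus weighted per-image differences.

    second_clouds: one list of (x, y, z) tuples per second face image.
    fit_coefs: linear fitting coefficients, one per second face image.
    """
    n_imgs = len(second_clouds)
    n_pts = len(second_clouds[0])
    # Coordinate values of the corresponding points of the average cloud.
    mean = [tuple(sum(c[p][d] for c in second_clouds) / n_imgs for d in range(3))
            for p in range(n_pts)]
    # Coordinate difference values of each second image relative to the mean.
    diffs = [[tuple(c[p][d] - mean[p][d] for d in range(3)) for p in range(n_pts)]
             for c in second_clouds]
    # Difference values for the first face image, then add back the mean.
    out = []
    for p in range(n_pts):
        delta = [sum(a * diffs[i][p][d] for i, a in enumerate(fit_coefs))
                 for d in range(3)]
        out.append(tuple(mean[p][d] + delta[d] for d in range(3)))
    return out

cloud_a = [(0.0, 0.0, 0.0)]
cloud_b = [(2.0, 2.0, 2.0)]
# Coefficients fully weighted toward cloud_a recover cloud_a exactly.
target = reconstruct_cloud([cloud_a, cloud_b], [1.0, 0.0])
```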

In the embodiments of the present disclosure, even when there are few second face images, the dense point cloud data of a diverse set of second face images can accurately represent the dense point cloud data of different target faces in the preset style.

In a possible implementation, the face parameter values are extracted by a pre-trained neural network, and the neural network is trained on sample images annotated in advance with face parameter values.

In the embodiments of the present disclosure, extracting the face parameter values of a face image with a pre-trained neural network improves both the extraction accuracy and the extraction efficiency of the face parameter values.

In a possible implementation, the neural network is pre-trained as follows: obtaining a sample image set containing multiple sample images and the annotated face parameter values corresponding to each sample image; inputting the multiple sample images into the neural network to obtain predicted face parameter values corresponding to each sample image; and adjusting the network parameter values of the neural network based on the predicted face parameter values and annotated face parameter values corresponding to each sample image, to obtain the trained neural network.
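The supervised training recipe above can be illustrated with a deliberately tiny stand-in: a single-weight linear model in place of the real face-parameter network, scalar "images", and squared-error loss with gradient descent. All of these are assumptions for demonstration only; the patent does not specify the architecture, loss, or optimizer.

```python
def train(samples, labels, steps=300, lr=0.05):
    """Fit y ≈ w * x by gradient descent on mean squared error."""
    w = 0.0  # the network parameter value to be adjusted during training
    for _ in range(steps):
        # Predicted face parameter values for every sample image.
        preds = [w * x for x in samples]
        # Adjust the network parameter from predicted vs annotated values.
        grad = sum(2 * (p - y) * x for p, y, x in zip(preds, labels, samples))
        w -= lr * grad / len(samples)
    return w

samples = [1.0, 2.0, 3.0]  # stand-ins for the sample image set
labels = [2.0, 4.0, 6.0]   # annotated face parameter values (here y = 2x)
weight = train(samples, labels)
```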

In the embodiments of the present disclosure, during the training of the neural network used to extract face parameter values, the network parameter values are adjusted continuously according to the annotated face parameter values of each sample image, so that a neural network with high accuracy can be obtained.

In a second aspect, embodiments of the present disclosure provide an apparatus for processing a face image, including: an acquisition module configured to acquire initial dense point cloud data of a target face and generate an initial virtual face image of the target face based on the initial dense point cloud data; a determination module configured to determine a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; an adjustment module configured to adjust the deformation coefficient in response to an adjustment operation on the initial virtual face image to obtain a target deformation coefficient; and a generation module configured to generate a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.

In a third aspect, embodiments of the present disclosure provide an electronic device, including a processor, a memory, and a bus; the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory via the bus; and when the machine-readable instructions are executed by the processor, they perform the steps of the processing method described in the first aspect.

In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it performs the steps of the processing method described in the first aspect.

To make the above objects, features, and advantages of the present disclosure more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.

To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, may be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.

It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.

The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of multiple items, or any combination of at least two of them; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.

In the field of 3D modeling, a face can be represented by a dense point cloud collected for that face. A dense point cloud representing a face generally contains tens of thousands of points; when the shape of the face's virtual face image needs to be adjusted, the positions of those tens of thousands of points must be adjusted one by one, which is a cumbersome and inefficient process.

Based on the above research, the present disclosure provides a face image processing method: after the initial dense point cloud data of the target face is obtained, the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data corresponding to a standard face image can be determined. In this way, a correspondence between the dense point cloud data and the deformation coefficient is established, so that when an adjustment operation on the initial virtual face image is detected, the deformation coefficient can be adjusted directly to complete the adjustment of the initial virtual face image. This avoids adjusting the points of the dense point cloud data one by one and thus improves adjustment efficiency; moreover, because the deformation coefficient is determined from the dense point cloud data, the adjustment of the initial virtual face image is also more precise.

To facilitate understanding of the present embodiments, a face image processing method disclosed in the embodiments of the present disclosure is first introduced in detail. The processing method provided by the embodiments of the present disclosure is generally executed by a computer device with certain computing capability. The computer device includes, for example, a terminal device, a server, or another processing device, and the terminal device may be user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a handheld device, a computing device, a wearable device, or the like. In some possible implementations, the processing method may be implemented by a processor invoking computer-readable instructions stored in a memory.

Referring to FIG. 1, an embodiment of the present disclosure provides a face image processing method, which includes the following steps S101 to S104.

S101: Acquire original dense point cloud data of a target face, and generate an initial virtual face image of the target face based on the original dense point cloud data.

Exemplarily, the dense point cloud data may represent a three-dimensional model of a face. Specifically, the dense point cloud data may contain the coordinate values, in a pre-built three-dimensional coordinate system, of multiple points on the surface of the face; the three-dimensional mesh (3D mesh) formed by connecting these points, together with the coordinate values of the points, can be used to represent the three-dimensional model of the face. FIG. 2 is a schematic diagram of three-dimensional face models represented by different dense point cloud data; the more points a dense point cloud contains, the finer the three-dimensional face model it represents.
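As a rough illustration of the data layout described above, the following sketch shows a dense point cloud stored as per-point coordinates plus triangle connectivity forming a 3D mesh; all names and sizes here are hypothetical and not taken from the disclosure:

```python
import numpy as np

# Hypothetical layout: a dense point cloud is per-point 3D coordinates,
# and a 3D mesh is those points plus triangle connectivity.
num_points = 20000                  # dense clouds often contain tens of thousands of points
points = np.zeros((num_points, 3))  # (x, y, z) per point, in a pre-built coordinate system

# Each row indexes three points that form one triangle face of the mesh.
triangles = np.array([[0, 1, 2],
                      [1, 2, 3]])   # toy connectivity, for illustration only

assert points.shape == (num_points, 3)
assert triangles.max() < num_points
```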

Exemplarily, the initial virtual face image may be a three-dimensional face image or a two-dimensional face image, depending on the specific application scenario. Correspondingly, when the initial virtual face image is a three-dimensional face image, the face images mentioned below are also three-dimensional face images; when the initial virtual face image is a two-dimensional face image, the face images mentioned below are also two-dimensional face images. The embodiments of the present disclosure are described taking a three-dimensional virtual face image as an example.

Exemplarily, when the acquired initial dense point cloud data of the target face is the dense point cloud data corresponding to the target face in a preset style, for example, the dense point cloud data corresponding to the target face in a classical style, the initial virtual face image of the target face presented based on this initial dense point cloud data is also a face image in the classical style. How to obtain the dense point cloud data corresponding to the target face in a preset style will be described later.

S102: Determine deformation coefficients of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image.

Exemplarily, the standard dense point cloud data corresponding to the standard virtual face image here may be dense point cloud data corresponding to a preset virtual face image, the preset virtual face image having a preset face shape and preset facial features. On the basis of this standard virtual face image, the deformation coefficients of the initial dense point cloud data of the target face relative to the standard dense point cloud data can then be determined.

Exemplarily, the deformation coefficients are associated with the dense point cloud data and may represent the deformation of the dense point cloud data relative to the standard dense point cloud data. Thus, the deformation coefficients corresponding to the target face may represent deformations of the target face relative to the standard face, for example, a higher nose bridge, larger eyes, raised mouth corners, or smaller cheeks.

Specifically, the deformation coefficients include at least one bone coefficient and/or at least one blend shape coefficient.

Here, each bone coefficient is used to adjust the initial pose of a bone formed by a first dense point cloud associated with that bone coefficient, and each blend shape coefficient is used to adjust the initial positions corresponding to a second dense point cloud associated with that blend shape coefficient.

Exemplarily, there may be multiple bone coefficients, which can be used to adjust the bones of the face. In a specific adjustment, the initial pose of a bone in a pre-built three-dimensional coordinate system (which may be a world coordinate system built in advance with one of the points of the face as the coordinate origin, as described later) is adjusted. Taking the bone coefficient corresponding to the nose bridge of the face as an example, by adjusting this bone coefficient, the initial pose of the first dense point cloud forming the nose bridge can be adjusted, thereby completing the adjustment of the initial pose of the nose bridge, for example, making the nose bridge straighter.

There may also be multiple blend shape coefficients, which are used to adjust the initial positions, in the pre-built three-dimensional coordinate system, of the associated second dense point clouds, so as to adjust the size and shape of the face contour and facial features. Taking the blend shape coefficient corresponding to the face contour as an example, by adjusting this blend shape coefficient, the initial positions of the second dense point cloud forming the face contour can be adjusted, thereby adjusting the size and/or shape of the face contour, for example, making a large round face smaller or reshaping it into an oval face.
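The position adjustment performed by the coefficients above can be sketched as a weighted sum of per-coefficient displacement fields added to a base cloud. This is a minimal illustration under assumed data shapes, not the disclosure's implementation:

```python
import numpy as np

def apply_blend_shapes(standard_points, deltas, coefficients):
    """Offset a base dense point cloud by a weighted sum of displacement
    fields: one delta array per coefficient, each covering all points and
    zero outside the region that coefficient is associated with."""
    adjusted = standard_points.copy()
    for delta, c in zip(deltas, coefficients):
        adjusted += c * delta
    return adjusted

standard = np.zeros((5, 3))                 # tiny stand-in for a dense cloud
nose_delta = np.zeros((5, 3))
nose_delta[0, 2] = 1.0                      # hypothetical field moving point 0 along z
adjusted = apply_blend_shapes(standard, [nose_delta], [0.5])
print(adjusted[0])                          # point 0 moved by 0.5 along z
```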

Exemplarily, depending on the adjustment requirements, the first dense point cloud associated with a bone coefficient and the second dense point cloud associated with a blend shape coefficient may overlap in at least some points. For example, take a bone coefficient used to adjust the pose of the nose tip of the face: by adjusting the first dense point cloud associated with this bone coefficient, the pose of the nose tip can be adjusted; when the size of the nose tip needs to be adjusted, the second dense point cloud associated with the blend shape coefficient corresponding to the nose tip may be the same as the first dense point cloud associated with the bone coefficient used to adjust the pose of the nose tip. Of course, the first dense point cloud associated with a bone coefficient and the second dense point cloud associated with a blend shape coefficient may also be different dense point clouds, for example, the first dense point cloud associated with the bone coefficient used to adjust the pose of the nose tip versus the second dense point cloud associated with the blend shape coefficient used to adjust the cheek size.

Exemplarily, in order to represent the deformation coefficients of the initial dense point cloud data of the target face relative to the standard dense point cloud data, a world coordinate system may be built in advance by taking one of the points in the dense point cloud of the target face as the coordinate origin and selecting three mutually perpendicular directions as the three coordinate axes. In this world coordinate system, the deformation coefficients of the initial dense point cloud data of the target face relative to the standard dense point cloud data can be determined. The specific determination process may rely on a machine learning algorithm and will be described in detail later.

The embodiments of the present disclosure propose that the deformation coefficients include bone coefficients for adjusting the initial pose of bones and blend shape coefficients for adjusting the initial positions of dense point clouds, so that the target face can be comprehensively adjusted based on the deformation coefficients.

S103: In response to an adjustment operation on the initial virtual face image, adjust the deformation coefficients to obtain target deformation coefficients.

Exemplarily, when the initial virtual face image of the target face is presented, operation buttons for adjusting the initial virtual face image may also be presented, allowing the user to adjust the appearance of the presented initial virtual face image through the operation buttons. During the adjustment, to allow the user to adjust the initial virtual face image intuitively, correspondences between multiple positions to be adjusted and the deformation coefficients may be established in advance, for example, correspondences between positions to be adjusted such as the mouth, eyes, nose wings, eyebrows, and face shape and their respective deformation coefficients. This makes it convenient for the user to adjust the positions to be adjusted directly based on the presented initial virtual face image, thereby achieving the purpose of adjusting the deformation coefficients.

S104: Generate a target virtual face image corresponding to the target face based on the target deformation coefficients and the standard dense point cloud data.

After the target deformation coefficients are obtained, the standard dense point cloud data can be further adjusted based on the target deformation coefficients to obtain target dense point cloud data corresponding to the target face, and the target virtual face image corresponding to the target face is then generated according to the target dense point cloud data.

In the embodiments of the present disclosure, it is proposed to determine, from the dense point cloud data, the deformation coefficients used to adjust the virtual face image of the target face. In this way, a correspondence between the dense point cloud data and the deformation coefficients can be established, so that the virtual face image can be adjusted directly based on the deformation coefficients, which improves adjustment efficiency compared with adjusting the points in the dense point cloud data one by one.

On the other hand, because the deformation coefficients here are determined from the dense point cloud data, when the initial virtual face image is adjusted based on the deformation coefficients, the points in the dense point cloud data can be adjusted directly based on the deformation coefficients. The adjustment is thereby precise down to the individual points forming the virtual face image, which improves adjustment accuracy while also improving adjustment efficiency.

The above steps S101 to S104 will be described in detail below with reference to specific embodiments.

For the above S101, acquiring the initial dense point cloud data of the target face and presenting the initial virtual face image of the target face based on the initial dense point cloud data may include the following steps S201 to S203, as shown in FIG. 3.

S201: Acquire a first face image corresponding to the target face, and dense point cloud data respectively corresponding to multiple second face images of a preset style.

Exemplarily, the first face image corresponding to the target face may be a color face image of the target face collected by an image collection device, or a grayscale face image of the target face, which is not specifically limited herein.

Exemplarily, the multiple second face images are images pre-selected for certain features, through which different first face images can be represented; for example, if n second face images are selected, each first face image can be represented by these n second face images and linear fitting coefficients. Exemplarily, so that the multiple second face images can fit and represent most first face images, images of faces with features that stand out relative to an average face may be selected as the second face images, for example, a face image of a face whose face size is smaller than the average face, a face image of a face whose mouth is larger than the average face, or a face image of a face whose eyes are larger than the average face. By selecting face images of faces with specific features as the second face images, a first face image can be represented by adjusting the linear fitting coefficients.

Exemplarily, the dense point cloud data corresponding to each second face image in multiple styles may be acquired and stored in advance, for example, the dense point cloud data corresponding to a classical style, a modern style, a Western style, and a Chinese style, which facilitates subsequently determining the virtual face models corresponding to the first face image in different styles.

Exemplarily, for each second face image, the dense point cloud data corresponding to that second face image and the face parameter values of that second face image may be extracted in advance; for example, the 3D Morphable Face Model (3DMM) parameter values of the second face image may be extracted. The coordinate values of multiple points in the dense point cloud data are then adjusted according to the face parameter values to obtain the dense point cloud data corresponding to each second face image in multiple styles, for example, the dense point cloud data corresponding to each second face image in a classical style and in a cartoon style, and the dense point cloud data of each second face image in the different styles is then stored.

Exemplarily, the face parameter values include parameter values representing the face shape and parameter values representing the facial expression. For example, the face parameter values may include K dimensions of parameter values representing the face shape and M dimensions of parameter values representing the facial expression, where the K dimensions of parameter values together reflect the face shape of the second face image, and the M dimensions of parameter values together reflect the facial expression of the second face image.

Exemplarily, the value of K generally ranges from 150 to 400: the smaller K is, the simpler the face shapes that can be represented; the larger K is, the more complex the face shapes that can be represented. The value of M generally ranges from 10 to 40: the fewer dimensions M has, the simpler the facial expressions that can be represented; the more dimensions M has, the more complex the facial expressions that can be represented. It can thus be seen that the embodiments of the present disclosure propose representing a face with a relatively small number of face parameter values, which facilitates subsequently determining the initial virtual face model corresponding to the target face.
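A minimal sketch of such a parameter vector, with hypothetical K and M chosen inside the stated ranges:

```python
import numpy as np

K, M = 200, 20                        # assumed values; K in [150, 400], M in [10, 40]
face_params = np.random.randn(K + M)  # one face's 3DMM-style parameter values

shape_params = face_params[:K]        # K dims jointly describing the face shape
expression_params = face_params[K:]   # M dims jointly describing the facial expression

assert shape_params.shape == (K,) and expression_params.shape == (M,)
```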

Exemplarily, in view of the meaning of the face parameter values, the above-mentioned adjustment of the coordinate values of multiple points in the dense point cloud according to the face parameter values, to obtain the dense point cloud data corresponding to each second face image in multiple styles, can be understood as adjusting the coordinate values, in the pre-built three-dimensional coordinate system, of the points in the dense point cloud according to the face parameter values and the feature attributes respectively corresponding to the multiple styles (for example, the feature attributes of a cartoon style, the feature attributes of a classical style, and so on), thereby obtaining the dense point cloud data corresponding to the second face image in the multiple styles.

S202: Determine dense point cloud data of the target face in the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style.

Exemplarily, an association between the first face image and the multiple second face images may be found; for example, linear fitting coefficients between the multiple second face images and the first face image may be determined by linear fitting, and the dense point cloud data of the target face in the preset style may then be determined according to the linear fitting coefficients and the dense point cloud data respectively corresponding to the multiple second face images of the preset style.

S203: Generate and present an initial virtual face image of the target face in the preset style based on the dense point cloud data of the target face in the preset style.

After the dense point cloud data of the target face in the preset style is acquired, the initial virtual face image of the target face in the preset style can be generated and presented according to the dense point cloud data corresponding to the target face; for example, the initial virtual face image of the target face can be presented in a default style or in a style set by the user.

In the embodiments of the present disclosure, the dense point cloud data of the first face image in the preset style can be determined according to the dense point cloud data corresponding, in the preset style, to each base image in a pre-stored base image library, so that the virtual face image of the target face in the preset style can be presented quickly.

For the above S202, the dense point cloud data contains the coordinate values of multiple points in the dense point cloud. Determining the dense point cloud data of the target face in the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style may include the following steps S301 to S302, as shown in FIG. 4.

S301: Extract face parameter values of the first face image and face parameter values respectively corresponding to the multiple second face images of the preset style, where the face parameter values include parameter values representing the face shape and parameter values representing the facial expression.

Exemplarily, a pre-trained neural network may be used here to separately extract the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style; for example, the first face image and each second face image may be input into the pre-trained neural network to obtain the respective corresponding face parameter values.

S302: Determine the dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style.

Considering that face parameter values and dense point cloud data correspond to each other when representing the same face, the association between the first face image and the multiple second face images of the preset style can be determined from their respective face parameter values, and the dense point cloud data of the target face in the preset style can then be determined according to this association and the dense point cloud data respectively corresponding to the multiple second face images of the preset style.

In the embodiments of the present disclosure, it is proposed that in the process of determining the dense point cloud data of the target face image in the preset style, the determination can be made by combining the face parameter values of the first face image and the multiple second face images. Because the number of parameter values used to represent a face is relatively small, the dense point cloud data of the target face in the preset style can be determined more quickly.

Exemplarily, the above-mentioned face parameter values are extracted by a pre-trained neural network, and the neural network is trained on sample images pre-annotated with face parameter values.

In the embodiments of the present disclosure, it is proposed to extract the face parameter values of a face image through a pre-trained neural network, which can improve the efficiency of extracting the face parameter values.

Specifically, the neural network may be pre-trained in the following manner, which, as shown in FIG. 5, may include the following steps S401 to S403.

S401: Acquire a sample image set, where the sample image set contains multiple sample images and annotated face parameter values corresponding to each sample image.

S402: Input the multiple sample images into the neural network to obtain predicted face parameter values corresponding to each sample image.

S403: Adjust network parameter values of the neural network based on the predicted face parameter values and the annotated face parameter values corresponding to each sample image, to obtain a trained neural network.

Exemplarily, a large number of face images and the annotated face parameter values corresponding to each face image may be collected as the sample image set here. Each sample image is input into the neural network to obtain the predicted face parameter values output by the neural network for that sample image. A third loss value corresponding to the neural network can then be determined based on the annotated face parameter values and the predicted face parameter values corresponding to the sample image, and the network parameter values of the neural network are adjusted according to the third loss value until the number of adjustments reaches a preset number and/or the third loss value is less than a third preset threshold, yielding the trained neural network.
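The training procedure above can be sketched with a toy linear regressor standing in for the neural network; all data, shapes, and hyperparameters below are invented for illustration, while the stopping rule mirrors the preset iteration count and loss threshold described in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: a linear "network" regresses annotated face
# parameter values from per-image features.
features = rng.normal(size=(64, 8))          # assumed per-image features
labels = features @ rng.normal(size=(8, 3))  # annotated face parameter values

W = np.zeros((8, 3))                         # network parameter values
preset_iters, loss_threshold, lr = 500, 1e-4, 0.05

for step in range(preset_iters):
    preds = features @ W                              # predicted face parameter values
    loss = np.mean((preds - labels) ** 2)             # the "third loss value"
    if loss < loss_threshold:                         # stop once below the threshold
        break
    grad = 2 * features.T @ (preds - labels) / len(features)
    W -= lr * grad                                    # adjust network parameter values
```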

In the embodiments of the present disclosure, in the process of training the neural network used to extract face parameter values, it is proposed to continuously adjust the network parameter values of the neural network according to the annotated face parameter values of each sample image, so that a neural network with higher accuracy can be obtained.

Specifically, for the above S302, determining the dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style may include the following S3021 to S3022, as shown in FIG. 6:

S3021: Determine linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style.

S3022: Determine the dense point cloud data of the target face in the preset style according to the dense point cloud data respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients.

Exemplarily, take 3DMM parameter values as the face parameter values. Considering that the 3DMM parameter values of the first face image can represent the face shape and expression of the first face image, and likewise the 3DMM parameter values of each second face image can represent the face shape and expression of that second face image, the association between the first face image and the multiple second face images can be determined through the 3DMM parameter values. Specifically, assuming that the multiple second face images contain n second face images, the linear fitting coefficients between the first face image and the multiple second face images also contain n linear fitting coefficient values, and the relationship between the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images can be expressed by the following formula (1):

$$S = \sum_{x=1}^{L} \alpha_x B_x \tag{1}$$

where $S$ denotes the 3DMM parameter values corresponding to the first face image; $\alpha_x$ denotes the linear fitting coefficient value between the first face image and the x-th second face image; $B_x$ denotes the face parameter values corresponding to the x-th second face image; $L$ denotes the number of second face images used when determining the face parameter values corresponding to the first face image; and $x$ indicates the x-th second face image, with $1 \le x \le L$.

本公開實施例中,可以提出通過數量較少的人臉參數值快速得到表示第一人臉影像和多張第二人臉影像之間的關聯關係的線性擬合係數,進一步可以根據該線性擬合係數對預設風格的多張第二人臉影像的稠密點雲數據進行調整,可以快速得到目標人臉在預設風格下的稠密點雲數據。In the embodiment of the present disclosure, it can be proposed that the linear fitting coefficient representing the correlation between the first face image and multiple second face images can be quickly obtained through a small number of face parameter values, and further can be based on the linear fitting coefficient The combination coefficient adjusts the dense point cloud data of multiple second face images in the preset style, and can quickly obtain the dense point cloud data of the target face in the preset style.

具體地,針對上述S3021,在基於第一人臉影像的人臉參數值,以及預設風格的多張第二人臉影像分別對應的人臉參數值,確定第一人臉影像和預設風格的多張第二人臉影像之間的線性擬合係數時,包括以下步驟S30211至S30214。Specifically, for the above S3021, based on the face parameter values of the first face image and the face parameter values corresponding to the plurality of second face images of the preset style, the first face image and the preset style are determined. When performing linear fitting coefficients between multiple second human face images, the following steps S30211 to S30214 are included.

S30211,獲取當前線性擬合係數;其中,初始的當前線性擬合係數為預先設置。S30211. Obtain the current linear fitting coefficient; wherein, the initial current linear fitting coefficient is preset.

當前線性擬合係數可以為按照以下步驟S30212至S30214調整過至少一次的線性擬合係數,也可以為初始的線性擬合係數,在該當前線性擬合係數為初始的線性擬合係數的情況下,該初始的線性擬合係數可以為預先根據經驗設置的。The current linear fitting coefficient can be the linear fitting coefficient adjusted at least once according to the following steps S30212 to S30214, and can also be the initial linear fitting coefficient. In the case where the current linear fitting coefficient is the initial linear fitting coefficient , the initial linear fitting coefficient can be set in advance based on experience.

S30212,基於當前線性擬合係數和多張第二人臉影像分別對應的人臉參數值,預測第一人臉影像的當前人臉參數值。S30212. Predict the current face parameter value of the first face image based on the current linear fitting coefficient and the face parameter values respectively corresponding to the multiple second face images.

示例性地,多張第二人臉影像分別對應的人臉參數值可以由上述提到的預先訓練的神經網路提取得到,然後可以將當前線性擬合係數和多張第二人臉影像分別對應的人臉參數值輸入上述公式(1)中,預測得到第一人臉影像的當前人臉參數值。Exemplarily, the face parameter values corresponding to the multiple second face images can be extracted by the above-mentioned pre-trained neural network, and then the current linear fitting coefficient and the multiple second face images can be respectively The corresponding face parameter value is input into the above formula (1), and the current face parameter value of the first face image is obtained by prediction.

S30213,基於預測的當前人臉參數值和第一人臉影像的人臉參數值,確定第二損失值。S30213. Determine a second loss value based on the predicted current face parameter value and the face parameter value of the first face image.

在調整線性擬合係數的過程中,預測得到的第一人臉影像的當前人臉參數值和通過上述提到的預先訓練的神經網路提取的第一人臉影像的人臉參數值之間具有一定的差距,可以基於該差距,確定提取的第一人臉影像的人臉參數值和預測的第一人臉影像的人臉參數值之間的第二損失值。In the process of adjusting the linear fitting coefficient, the difference between the current face parameter value of the predicted first face image and the face parameter value of the first face image extracted by the above-mentioned pre-trained neural network There is a certain gap, and based on the gap, a second loss value between the extracted face parameter value of the first face image and the predicted face parameter value of the first face image can be determined.

S30214,基於第二損失值以及預設的線性擬合係數對應的約束範圍,調整當前線性擬合係數,基於調整後的當前線性擬合係數,返回執行預測當前人臉參數值的步驟,直至對當前線性擬合係數的調整操作符合第二調整截止條件的情況下,基於當前線性擬合係數得到第一人臉影像和預設風格的多張第二人臉影像之間的線性擬合係數。S30214, based on the second loss value and the constraint range corresponding to the preset linear fitting coefficient, adjust the current linear fitting coefficient, and return to the step of predicting the current face parameter value based on the adjusted current linear fitting coefficient until the When the adjustment operation of the current linear fitting coefficient meets the second adjustment cut-off condition, the linear fitting coefficient between the first human face image and multiple second human face images of the preset style is obtained based on the current linear fitting coefficient.

示例性地,考慮到人臉參數值是用來表示臉部形狀和尺寸的,為了避免後期通過線性擬合係數確定出的第一人臉影像的稠密點雲數據在表徵人臉臉部時發生失真,這裡提出在基於第二損失值調整當前線性擬合係數的過程中,需要結合預設的線性擬合係數的約束範圍,一同對當前線性擬合係數進行調整,例如,這裡可以通過大量數據統計,確定預設的線性擬合係數對應的約束範圍設置為-0.5到0.5之間,這樣在基於第二損失值調整當前線性擬合係數的過程中,可以使得每個調整後的線性擬合係數在-0.5到0.5之間。For example, considering that the face parameter values are used to represent the shape and size of the face, in order to avoid the occurrence of dense point cloud data of the first face image determined by the linear fitting coefficient in the later stage when characterizing the face Distortion, it is proposed here that in the process of adjusting the current linear fitting coefficient based on the second loss value, it is necessary to adjust the current linear fitting coefficient together with the preset constraint range of the linear fitting coefficient. For example, a large amount of data can be used here Statistics, determine that the constraint range corresponding to the preset linear fitting coefficient is set between -0.5 and 0.5, so that in the process of adjusting the current linear fitting coefficient based on the second loss value, each adjusted linear fitting can be made The coefficient is between -0.5 and 0.5.

示例性地,在基於第二損失值以及預設的線性擬合係數對應的約束範圍,對當前線性擬合係數進行調整,以使得預測的第一人臉影像的當前人臉參數值和基於神經網路提取的第一人臉影像的人臉參數值之間更加接近,然後基於調整後的當前線性擬合係數,返回S30212,直至對當前線性擬合係數的調整操作符合第二調整截止條件的情況下,例如在第二損失值小於第二預設閾值和/或針對當前線性擬合係數的調整次數達到預設次數後,得到線性擬合係數。Exemplarily, within the constraint range corresponding to the second loss value and the preset linear fitting coefficient, the current linear fitting coefficient is adjusted so that the predicted current face parameter value of the first face image and the neural The face parameter values of the first face image extracted by the network are closer, and then return to S30212 based on the adjusted current linear fitting coefficient until the adjustment operation of the current linear fitting coefficient meets the second adjustment cut-off condition. In some cases, for example, the linear fitting coefficient is obtained after the second loss value is smaller than the second preset threshold and/or the number of adjustments for the current linear fitting coefficient reaches the preset number.
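The iteration of steps S30211 to S30214 can be sketched as follows. This is a simplified illustration only: it assumes a plain squared-error loss optimized by gradient descent, uses the example constraint range of -0.5 to 0.5 mentioned above, and all function and variable names are ours rather than from the disclosure.

```python
# Fit coefficients alpha so that sum_x alpha[x] * P2[x] approximates P1,
# clamping each coefficient to the preset range [-0.5, 0.5] (S30214).
def fit_linear_coefficients(P1, P2, lr=0.01, max_iters=2000, tol=1e-8):
    L = len(P2)                      # number of second face images
    alpha = [0.0] * L                # initial coefficients, preset (S30211)
    for _ in range(max_iters):
        # S30212: predict the current face parameters of the first image
        pred = [sum(alpha[x] * P2[x][k] for x in range(L))
                for k in range(len(P1))]
        # S30213: second loss value = squared error between the predicted
        # and the extracted parameter values
        loss = sum((p - t) ** 2 for p, t in zip(pred, P1))
        if loss < tol:               # second adjustment cut-off condition
            break
        # S30214: gradient step, then clamp to the constraint range
        for x in range(L):
            grad = sum(2 * (pred[k] - P1[k]) * P2[x][k]
                       for k in range(len(P1)))
            alpha[x] = min(0.5, max(-0.5, alpha[x] - lr * grad))
    return alpha
```

With two orthogonal parameter vectors, for example `fit_linear_coefficients([0.2, -0.1], [[1.0, 0.0], [0.0, 1.0]])`, the fitted coefficients converge to roughly `[0.2, -0.1]`, and a target outside the constraint range is clamped at the boundary.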

本公開實施例中,在調整第一人臉影像和預設風格的多張第二人臉影像之間的線性擬合係數的過程中,通過第二損失值和/或調整次數對線性擬合係數進行多次調整,可以提高線性擬合係數的準確度;另一方面在調整過程中通過預設的線性擬合係數的約束範圍進行調整約束,這樣得到線性擬合係數,能夠更加合理的確定目標人臉對應的稠密點雲數據。In the embodiment of the present disclosure, in the process of adjusting the linear fitting coefficients between the first face image and multiple second face images of the preset style, the linear fitting is performed through the second loss value and/or the number of adjustments Adjusting the coefficients multiple times can improve the accuracy of the linear fitting coefficients; on the other hand, during the adjustment process, the constraints are adjusted through the preset constraints of the linear fitting coefficients, so that the linear fitting coefficients can be obtained, which can be more reasonably determined The dense point cloud data corresponding to the target face.

具體地,稠密點雲數據包含稠密點雲中各個點的坐標值,針對上述S3022,在根據預設風格的多張第二人臉影像分別對應的稠密點雲數據和線性擬合係數,確定目標人臉在預設風格下的稠密點雲數據時,包括以下步驟S30221至S30224。Specifically, the dense point cloud data includes the coordinate values of each point in the dense point cloud. For the above S3022, determine the target When the dense point cloud data of the face is in the preset style, the following steps S30221 to S30224 are included.

S30221,基於預設風格的多張第二人臉影像分別對應的稠密點雲中各個點的坐標值,確定平均稠密點雲數據中對應點的坐標值;S30221, based on the coordinate values of each point in the dense point cloud corresponding to the multiple second face images of the preset style, determine the coordinate value of the corresponding point in the average dense point cloud data;

示例性地，在確定預設風格的多張第二人臉影像對應的平均稠密點雲數據中各個點的坐標值時，可以基於多張第二人臉影像分別對應的各個點的坐標值，以及多張第二人臉影像的張數進行確定。例如多張第二人臉影像包含20張，每張第二人臉影像對應的稠密點雲數據包含100個點的三維坐標值，針對第一個點，可以將第一個點在20張第二人臉影像中對應的三維坐標值進行求和，然後將求和結果除以20得到的值作為平均稠密點雲數據中對應的第一個點的坐標值。按照同樣的方式，可以得到多張第二人臉影像對應的平均稠密點雲數據中每個點在三維坐標系下的坐標值。換言之，多張第二人臉影像各自的稠密點雲數據中相互對應的多個點的坐標均值構成這裡的平均稠密點雲數據中對應點的坐標值。Exemplarily, the coordinate values of each point in the average dense point cloud data corresponding to the plurality of second face images of the preset style can be determined based on the coordinate values of the corresponding points in each second face image and on the number of second face images. For example, suppose there are 20 second face images and the dense point cloud data of each second face image contains the three-dimensional coordinate values of 100 points. For the first point, the three-dimensional coordinate values corresponding to the first point in the 20 second face images are summed, and the sum divided by 20 is taken as the coordinate value of the corresponding first point in the average dense point cloud data. In the same manner, the coordinate value, in the three-dimensional coordinate system, of every point in the average dense point cloud data corresponding to the plurality of second face images can be obtained. In other words, the per-point means of the coordinates of the mutually corresponding points in the dense point cloud data of the plurality of second face images constitute the coordinate values of the corresponding points in the average dense point cloud data here.
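The per-point averaging of S30221 can be sketched as follows (an illustrative sketch with point clouds given as nested Python lists; the function name is ours, not from the disclosure):

```python
# Average dense point cloud: per-point mean of the mutually corresponding
# points across all second face images (S30221).
def average_point_cloud(clouds):
    num = len(clouds)                          # number of second face images
    num_points = len(clouds[0])                # points per dense point cloud
    return [[sum(cloud[p][d] for cloud in clouds) / num
             for d in range(3)]                # x, y, z coordinates
            for p in range(num_points)]
```

For two clouds `[[0, 0, 0], [2, 2, 2]]` and `[[2, 2, 2], [4, 4, 4]]`, each averaged point is the midpoint of its two counterparts.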

S30222,基於預設風格的多張第二人臉影像分別對應的稠密點雲中各個點的坐標值、和平均稠密點雲數據中對應點的坐標值,確定預設風格的多張第二人臉影像分別對應的坐標差異值。S30222. Based on the coordinate values of each point in the dense point cloud corresponding to the plurality of second human face images of the preset style and the coordinate values of the corresponding points in the average dense point cloud data, determine the plurality of second human face images of the preset style The coordinate difference values corresponding to the face images respectively.

示例性地,平均稠密點雲數據中各點的坐標值可以表示多張第二人臉影像對應的平均虛擬人臉模型,例如平均稠密點雲數據中各點的坐標值表示的五官尺寸可以為多張第二人臉影像對應的平均五官尺寸,平均稠密點雲數據中各點的坐標值表示的臉部尺寸可以為多張第二人臉影像對應的平均臉部尺寸等。Exemplarily, the coordinate value of each point in the average dense point cloud data can represent the average virtual face model corresponding to multiple second face images, for example, the facial features size represented by the coordinate value of each point in the average dense point cloud data can be The average size of facial features corresponding to the plurality of second facial images, and the facial size represented by the coordinate values of each point in the average dense point cloud data may be the average facial size corresponding to the plurality of second facial images.

示例性地,通過多張第二人臉影像分別對應的稠密點雲的坐標值和平均稠密點雲數據中對應點的坐標值進行作差,可以得到多張第二人臉影像分別對應的稠密點雲中各點的坐標值相對於平均稠密點雲數據中對應點的坐標值的坐標差異值(本文中也可簡稱為“第二人臉影像對應的坐標差異值”),從而可以表徵該第二人臉影像對應的虛擬人臉模型相比上述提到的平均人臉模型的差異性。Exemplarily, by making a difference between the coordinate values of the dense point clouds corresponding to the multiple second face images and the coordinate values of the corresponding points in the average dense point cloud data, the dense point cloud values corresponding to the multiple second face images can be obtained respectively. The coordinate difference value of the coordinate value of each point in the point cloud relative to the coordinate value of the corresponding point in the average dense point cloud data (this article can also be referred to as "the coordinate difference value corresponding to the second face image"), which can characterize the The difference between the virtual human face model corresponding to the second human face image and the aforementioned average human face model.

S30223,基於預設風格的多張第二人臉影像分別對應的坐標差異值和線性擬合係數,確定第一人臉影像對應的坐標差異值。S30223. Determine the coordinate difference value corresponding to the first human face image based on the coordinate difference values and linear fitting coefficients respectively corresponding to the plurality of second human face images in the preset style.

示例性地,線性擬合係數可以表示第一人臉影像對應的人臉參數值與多張第二人臉影像分別對應的人臉參數值之間的關聯關係,而人臉影像對應的人臉參數值和該人臉影像對應的稠密點雲數據之間具有對應關係,因此線性擬合係數也可以表示第一人臉影像對應的稠密點雲數據與多張第二人臉影像分別對應的稠密點雲數據之間的關聯關係。Exemplarily, the linear fitting coefficient can represent the relationship between the face parameter values corresponding to the first face image and the face parameter values corresponding to multiple second face images respectively, and the face values corresponding to the face images There is a corresponding relationship between the parameter value and the dense point cloud data corresponding to the face image, so the linear fitting coefficient can also represent the dense point cloud data corresponding to the first face image and the dense point cloud data corresponding to multiple second face images. The relationship between point cloud data.

在對應相同的平均稠密點雲數據的情況下,該線性擬合係數還可以表示第一人臉影像對應的坐標差異值與多張第二人臉影像分別對應的坐標差異值之間的關聯關係,因此,這裡可以基於多張第二人臉影像分別對應的坐標差異值和線性擬合係數,確定第一人臉影像對應的稠密點雲數據相對於平均稠密點雲數據的坐標差異值。In the case of corresponding to the same average dense point cloud data, the linear fitting coefficient can also represent the relationship between the coordinate difference values corresponding to the first face image and the coordinate difference values corresponding to multiple second face images , therefore, based on the coordinate difference values and linear fitting coefficients corresponding to the multiple second face images, the coordinate difference value of the dense point cloud data corresponding to the first face image relative to the average dense point cloud data can be determined.

S30224,基於第一人臉影像對應的坐標差異值和平均稠密點雲數據中對應點的坐標值,確定目標人臉在預設風格下的稠密點雲數據。S30224. Based on the coordinate difference value corresponding to the first face image and the coordinate value of the corresponding point in the average dense point cloud data, determine the dense point cloud data of the target face in a preset style.

將第一人臉影像對應的坐標差異值和平均稠密點雲數據中對應點的坐標值進行求和,可以得到第一人臉影像對應的稠密點雲數據,基於該稠密點雲數據可以表示該第一人臉影像對應的虛擬人臉模型。Summing the coordinate difference value corresponding to the first face image and the coordinate value of the corresponding point in the average dense point cloud data, the dense point cloud data corresponding to the first face image can be obtained, based on the dense point cloud data can represent the A virtual face model corresponding to the first face image.

Specifically, when determining the dense point cloud data corresponding to the target face here, considering the relationship between the dense point cloud data and the 3DMM, the dense point cloud data corresponding to the target face (the first face image) can be denoted by $V_{t}$ and determined according to the following formula (2):

$$V_{t}=\bar{V}+\Delta V,\qquad \Delta V=\sum_{x=1}^{L}\alpha_{x}\left(V_{x}-\bar{V}\right) \tag{2}$$

where $V_{x}$ denotes the coordinate values of the dense point cloud corresponding to the x-th second face image; $\bar{V}$ denotes the coordinate values of the corresponding points in the average dense point cloud data determined from the plurality of second face images; and $\Delta V$ may represent the coordinate difference values of the coordinate values of the points corresponding to the first face image relative to the coordinate values of the corresponding points in the average dense point cloud data.
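The computation of steps S30221 to S30224, i.e. formula (2), can be sketched as follows (illustrative only; it assumes the linear fitting coefficients `alpha` have already been determined, and all names are ours):

```python
# Formula (2): target dense point cloud = average cloud plus the linearly
# fitted sum of the per-image offsets from the average (S30221-S30224).
def target_point_cloud(clouds, alpha):
    num = len(clouds)
    num_points = len(clouds[0])
    # S30221: per-point average of all second-image clouds
    mean = [[sum(c[p][d] for c in clouds) / num for d in range(3)]
            for p in range(num_points)]
    target = []
    for p in range(num_points):
        point = []
        for d in range(3):
            # S30222/S30223: linearly fit the coordinate difference values
            diff = sum(alpha[x] * (clouds[x][p][d] - mean[p][d])
                       for x in range(num))
            # S30224: add the fitted difference back onto the average
            point.append(mean[p][d] + diff)
        target.append(point)
    return target
```

Note that nothing here requires the coefficients to sum to 1: with all-zero coefficients the result is exactly the average cloud, and nonzero coefficients only shift points away from that average, which matches the discussion below about not constraining the coefficient sum.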

這裡在確定第一人臉影像的稠密點雲數據時,採用步驟S30221至 S30224的方式進行確定,即通過上述公式(2)的方式進行確定,相比通過多張第二人臉影像分別對應的稠密點雲數據和線性擬合係數來確定目標人臉對應的稠密點雲數據的方式,可以包含以下好處。Here, when determining the dense point cloud data of the first face image, the method of steps S30221 to S30224 is used for determination, that is, the determination is made by the method of the above formula (2), compared with the method of multiple second face images corresponding to The method of using dense point cloud data and linear fitting coefficients to determine the dense point cloud data corresponding to the target face can include the following benefits.

本公開實施例中,考慮到線性擬合係數是用於對多張第二人臉影像分別對應的坐標差異值進行線性擬合,這樣得到的是第一人臉影像對應的點的坐標值相對於平均稠密點雲數據中對應點的坐標值的坐標差異值(本文中也可簡稱為“第一人臉影像對應的坐標差異值”),因此無需對這些線性擬合係數之和等於1進行限定,第一人臉影像對應的坐標差異值和平均稠密點雲數據中對應點的坐標值相加後,得到的稠密點雲數據也能夠表示一張正常的人臉影像。In the embodiment of the present disclosure, it is considered that the linear fitting coefficient is used to linearly fit the coordinate difference values corresponding to multiple second face images, so that the coordinate values of the points corresponding to the first face images are relatively The coordinate difference value of the coordinate value of the corresponding point in the average dense point cloud data (this article can also be referred to as "the coordinate difference value corresponding to the first face image"), so it is not necessary to make the sum of these linear fitting coefficients equal to 1. It is limited that after the coordinate difference value corresponding to the first face image is added to the coordinate value of the corresponding point in the average dense point cloud data, the obtained dense point cloud data can also represent a normal face image.

另外，在第二人臉影像較少的情況下，按照本公開實施例提供的方式可以通過對線性擬合係數進行合理的調整，達到使用較少數量的第二人臉影像確定出目標人臉在預設風格下對應的稠密點雲數據的目的，例如，第一人臉影像的眼睛尺寸為小眼睛，通過上述方式無需對多張第二人臉影像的眼睛尺寸進行限定，而可以通過線性擬合係數對坐標差異值進行調整，使得調整後的坐標差異值和平均稠密點雲數據中對應點的坐標值疊加後，可以得到表示小眼睛的稠密點雲數據。具體地，即使在多張第二人臉影像均為大眼睛時，對應的平均稠密點雲數據表示的眼睛也為大眼睛，仍然可以調整線性擬合係數，使得通過將調整後的坐標差異值與平均稠密點雲數據中對應點的坐標值求和可以得到表示小眼睛的稠密點雲數據。In addition, when there are few second face images, the approach provided by the embodiments of the present disclosure can, by reasonably adjusting the linear fitting coefficients, determine the dense point cloud data corresponding to the target face in the preset style using only a small number of second face images. For example, if the eyes in the first face image are small, there is no need under the above approach to restrict the eye sizes of the plurality of second face images; instead, the coordinate difference values can be adjusted through the linear fitting coefficients, so that after the adjusted coordinate difference values are superimposed on the coordinate values of the corresponding points in the average dense point cloud data, dense point cloud data representing small eyes is obtained. Specifically, even when the plurality of second face images all have large eyes, so that the eyes represented by the corresponding average dense point cloud data are also large, the linear fitting coefficients can still be adjusted such that summing the adjusted coordinate difference values with the coordinate values of the corresponding points in the average dense point cloud data yields dense point cloud data representing small eyes.

可見,本公開實施例針對不同的第一人臉影像,無需挑選與該第一人臉影像的五官特徵相似的第二人臉影像來確定該第一人臉影像對應的稠密點雲數據,該方式在第二人臉影像較少的情況下,可以通過多樣性的第二人臉影像的稠密點雲數據準確地表示出不同的目標人臉在預設風格下的稠密點雲數據。It can be seen that, for different first facial images, the embodiment of the present disclosure does not need to select a second facial image similar to the facial features of the first facial image to determine the dense point cloud data corresponding to the first facial image. The method can accurately represent the dense point cloud data of different target faces in the preset style through the dense point cloud data of the diverse second face images when there are few second human face images.

按照上述方式,可以得到目標人臉在預設風格下的稠密點雲數據,例如得到目標人臉在古典風格下的稠密點雲數據,進一步基於該稠密點雲數據展示目標人臉在古典風格下的初始虛擬人臉影像。According to the above method, the dense point cloud data of the target face in the preset style can be obtained, for example, the dense point cloud data of the target face in the classical style can be obtained, and further based on the dense point cloud data to show the target face in the classical style initial virtual face image.

針對上述S102,在確定初始稠密點雲數據相對於標準虛擬人臉影像對應的標準稠密點雲數據的形變係數時,如圖7所示,包括以下步驟S501至S504。Regarding the above S102, when determining the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data corresponding to the standard virtual face image, as shown in FIG. 7 , the following steps S501 to S504 are included.

S501,基於當前形變係數對標準稠密點雲數據進行調整,得到當前稠密點雲數據,初始的當前形變係數為預先設置的。S501. Adjust the standard dense point cloud data based on the current deformation coefficient to obtain the current dense point cloud data. The initial current deformation coefficient is preset.

示例性地,在形變係數包含骨骼係數的情況下,可以基於當前骨骼係數和初始骨骼變化矩陣來共同確定針對該骨骼係數關聯的第一稠密點雲進行調整時的變化矩陣;在形變係數包含混合形變係數的情況下,可以基於當前混合形變係數和單位混合形變量共同確定針對該混合形變係數關聯的第二稠密點雲進行調整時的變化量,具體情況詳見下文。Exemplarily, when the deformation coefficient includes a bone coefficient, the change matrix when adjusting the first dense point cloud associated with the bone coefficient can be jointly determined based on the current bone coefficient and the initial bone change matrix; In the case of the deformation coefficient, the change amount when adjusting the second dense point cloud associated with the mixed deformation coefficient can be determined based on the current mixed deformation coefficient and the unit mixed deformation amount. See below for details.

示例性地,為了對標準稠密點雲數據進行調整的過程進行解釋,這裡可以引入骨骼坐標系,以及骨骼坐標系和世界坐標系之間的轉換關係,其中骨骼坐標系是針對每個骨骼建立三維坐標系,即每個骨骼對應的局部坐標系,世界坐標系為針對整張人臉建立的三維坐標系,每個骨骼對應的局部坐標系與世界坐標系之間具有轉換關係,按照該轉換關係,可以將稠密點雲在骨骼坐標系下的位置轉換至世界坐標系下的位置。Exemplarily, in order to explain the process of adjusting standard dense point cloud data, the bone coordinate system and the conversion relationship between the bone coordinate system and the world coordinate system can be introduced here, where the bone coordinate system is established for each bone in three dimensions The coordinate system is the local coordinate system corresponding to each bone. The world coordinate system is a three-dimensional coordinate system established for the entire face. There is a conversion relationship between the local coordinate system corresponding to each bone and the world coordinate system. According to the conversion relationship , which can convert the position of the dense point cloud in the bone coordinate system to the position in the world coordinate system.
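The bone-to-world conversion described above can be illustrated with a 4x4 homogeneous transformation (values and names are illustrative; a real bone transform would also carry rotation and scale):

```python
# Convert a point from a bone's local coordinate system to the world
# coordinate system using a 4x4 homogeneous transformation matrix.
def transform_point(matrix, point):
    x, y, z = point
    vec = (x, y, z, 1.0)                       # homogeneous coordinates
    out = [sum(matrix[r][c] * vec[c] for c in range(4)) for r in range(4)]
    return out[:3]
```

For example, a bone whose local origin sits at world position (5, 0, 0) maps the local point (1, 2, 3) to the world point (6, 2, 3).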

特別地,在基於當前形變係數對標準稠密點雲數據進行調整的過程中,可以分為兩種情況,第一種情況為在基於混合形變係數對標準稠密點雲數據中對應的點進行調整時,會受到骨骼係數的影響的情況,下文將結合公式(3)進行說明;第二種情況為在基於混合形變係數對標準稠密點雲數據中對應的點進行調整時,不會受到骨骼係數的影響的情況,下文將結合公式(4)說明。In particular, in the process of adjusting the standard dense point cloud data based on the current deformation coefficient, it can be divided into two cases. The first case is when adjusting the corresponding points in the standard dense point cloud data based on the mixed deformation coefficient , will be affected by the bone coefficient, which will be explained in conjunction with formula (3) below; the second case is that when adjusting the corresponding points in the standard dense point cloud data based on the mixed deformation coefficient, it will not be affected by the bone coefficient The situation of the influence will be described in conjunction with formula (4) below.

Specifically, for the first case, the adjusted dense point cloud data can be determined according to the following formula (3):

$$v_{m}=\sum_{i=1}^{n}G_{i}\,B_{i}(\alpha_{i})\left(p_{m,i}+\beta_{m}\,d_{m,i}\right) \tag{3}$$

where $v_{m}$ denotes, during adjustment of the m-th point in the standard dense point cloud data, its coordinate value in the world coordinate system established in advance for the face; $G_{i}$ denotes the transformation matrix from the bone coordinate system corresponding to the i-th bone to the world coordinate system; $B_{i}$ denotes the preset initial bone transformation matrix of the i-th bone in that bone's bone coordinate system; $\alpha_{i}$ denotes the value of the i-th bone coefficient in the bone coordinate system corresponding to the i-th bone, so that $B_{i}(\alpha_{i})$ is the i-th bone's transformation under the current bone coefficient; $p_{m,i}$ denotes the initial coordinate value of the m-th point in the standard dense point cloud data in the bone coordinate system corresponding to the i-th bone (when the m-th point is not in the i-th bone, this initial coordinate value is 0); $d_{m,i}$ denotes the preset unit deformation, in the bone coordinate system corresponding to the i-th bone, of the blend-shape coefficient associated with the m-th point; $\beta_{m}$ denotes the value, in the bone coordinate system corresponding to the i-th bone, of the blend-shape coefficient associated with the m-th point; $i$ indicates the i-th bone, with $1\le i\le n$; $n$ denotes the number of bones corresponding to the standard virtual face image; and $m$ denotes the m-th point in the dense point cloud data.

可見針對上述第一種情況,在基於混合形變係數對標準稠密點雲數據中的點在骨骼坐標系下的坐標值進行調整後,還需要結合骨骼形變係數才能最終確定標準稠密點雲數據中的稠密點雲在世界坐標系下的坐標值,即上述提到的在基於混合形變係數對標準稠密點雲數據中的稠密點雲進行調整時,會受到骨骼係數的影響。It can be seen that for the first case above, after adjusting the coordinate values of the points in the standard dense point cloud data in the bone coordinate system based on the mixed deformation coefficient, it is necessary to combine the bone deformation coefficient to finally determine the standard dense point cloud data. The coordinate value of the dense point cloud in the world coordinate system, which is mentioned above, will be affected by the bone coefficient when adjusting the dense point cloud in the standard dense point cloud data based on the mixed deformation coefficient.

For the second case, the adjusted dense point cloud data can be determined according to the following formula (4):

$$v_{m}=\sum_{i=1}^{n}G_{i}\,B_{i}(\alpha_{i})\,p_{m,i}+\beta_{m}\,d_{m} \tag{4}$$

where $v_{m}$ denotes, during adjustment of the m-th point in the standard dense point cloud data, its coordinate value in the world coordinate system; $G_{i}$ denotes the transformation matrix from the bone coordinate system corresponding to the i-th bone to the world coordinate system; $B_{i}$ denotes the preset initial bone transformation matrix of the i-th bone in that bone's bone coordinate system; $\alpha_{i}$ denotes the value of the i-th bone coefficient in the bone coordinate system corresponding to the i-th bone; $p_{m,i}$ denotes the initial position of the m-th point in the standard dense point cloud data in the bone coordinate system corresponding to the i-th bone (when the m-th point is not in the i-th bone, this initial position is 0); $d_{m}$ denotes the preset unit deformation, in the world coordinate system, of the blend-shape coefficient associated with the m-th point; $\beta_{m}$ denotes the value, in the world coordinate system, of the blend-shape coefficient associated with the m-th point; $i$ indicates the i-th bone, with $1\le i\le n$; and $n$ denotes the number of bones to be adjusted.

可見針對上述第二種情況,可以直接基於混合形變係數對標準稠密點雲數據中的點在世界坐標系下的坐標值進行調整,即上述提到的在基於混合形變係數對標準稠密點雲數據中的點進行調整時,不會受到骨骼係數的影響。It can be seen that for the second case above, the coordinate values of the points in the standard dense point cloud data in the world coordinate system can be adjusted directly based on the mixed deformation coefficient, that is, the above-mentioned standard dense point cloud data based on the mixed deformation coefficient When adjusting the points in , it will not be affected by the bone coefficient.

上述公式(3)或者公式(4)均為針對標準稠密點雲數據中的其中一個點進行調整的過程,按照同樣的方式,可以依次針對標準稠密點雲數據中的其它點進行調整,從而完成基於當前形變係數對標準稠密點雲數據的一次調整。The above formula (3) or formula (4) is the process of adjusting one of the points in the standard dense point cloud data. In the same way, it can be adjusted for other points in the standard dense point cloud data in turn, so as to complete An adjustment to standard dense point cloud data based on the current deformation coefficient.
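The per-point adjustment of the first case can be sketched as follows. This is a minimal illustration: the homogeneous-vector convention (rest positions carry w = 1 only for bones containing the point, unit offsets carry w = 0), the way the bone coefficient is folded into each bone's transform, and all names are our assumptions rather than the disclosure's exact formulation.

```python
# First case: the blend-shape offset is applied in each bone's local
# coordinate system, so the result also passes through the bone transforms.
def matmul_vec(M, v):
    # multiply a 4x4 matrix by a homogeneous 4-vector
    return [sum(M[r][c] * v[c] for c in range(4)) for r in range(4)]

def adjust_point(G, B, rest, unit_offset, beta):
    # G[i]: bone-i -> world transform; B[i]: bone i's transform under the
    # current bone coefficient; rest[i]: the point's initial coordinates in
    # bone i's coordinate system (all-zero if the point is not in bone i);
    # unit_offset[i]: unit blend-shape deformation in bone i's space;
    # beta: blend-shape coefficient associated with the point.
    world = [0.0, 0.0, 0.0, 0.0]
    for i in range(len(G)):
        local = [rest[i][d] + beta * unit_offset[i][d] for d in range(4)]
        v = matmul_vec(G[i], matmul_vec(B[i], local))
        world = [world[d] + v[d] for d in range(4)]
    return world[:3]
```

Repeating this per point yields one full adjustment of the standard dense point cloud data under the current deformation coefficients; bones that do not contain a point contribute nothing, because their rest and offset entries are all-zero.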

S502,基於調整後的稠密點雲數據和目標人臉的初始稠密點雲數據,確定第一損失值。S502. Determine a first loss value based on the adjusted dense point cloud data and the initial dense point cloud data of the target face.

示例性地,第一損失值可以通過目標人臉的初始稠密點雲數據和調整後的稠密點雲數據之間的差值進行表示。Exemplarily, the first loss value may be represented by a difference between the initial dense point cloud data of the target face and the adjusted dense point cloud data.

Specifically, the first loss value can be expressed by the following formula (5):

$$\mathcal{L}_{1}=\sum_{m=1}^{M}\left\|t_{m}-v_{m}\right\|^{2} \tag{5}$$

where $\mathcal{L}_{1}$ denotes the first loss value of the adjusted dense point cloud data relative to the initial dense point cloud data of the target face; $t_{m}$ denotes the coordinate value, in the world coordinate system, of the m-th point in the initial dense point cloud data of the target face; $v_{m}$ denotes the coordinate value, in the world coordinate system, of the m-th point in the adjusted dense point cloud data; $m$ denotes the m-th point in the dense point cloud data; and $M$ denotes the number of points in the dense point cloud data.

S503,基於第一損失值以及預設的形變係數的約束範圍,調整當前形變係數,基於調整後的當前形變係數,返回執行對標準稠密點雲數據進行調整的步驟,直至對當前形變係數的調整操作符合第一調整截止條件,得到初始稠密點雲數據相對於標準稠密點雲數據的形變係數。S503. Adjust the current deformation coefficient based on the first loss value and the preset constraint range of the deformation coefficient. Based on the adjusted current deformation coefficient, return to the step of adjusting the standard dense point cloud data until the current deformation coefficient is adjusted. The operation meets the first adjustment cut-off condition, and the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data is obtained.

示例性地，考慮到當前形變係數是目標人臉相對於標準人臉的形變係數，即當前形變係數是用來表示正常的臉部形貌的，為了避免對當前形變係數的調整幅度過大，使得其表示的臉部形貌失真，這裡提出在基於損失函數值調整當前形變係數的過程中，需要結合預設的形變係數的約束範圍，一同對當前形變係數進行調整，具體地，這裡預設約束範圍的形變係數為混合形變係數，例如約束混合形變係數的取值為0至1之間。Exemplarily, considering that the current deformation coefficient is the deformation coefficient of the target face relative to the standard face, i.e., it is used to represent a normal facial appearance, in order to avoid adjusting the current deformation coefficient by too large a margin and thereby distorting the facial appearance it represents, it is proposed here that in the process of adjusting the current deformation coefficient based on the loss function value, the current deformation coefficient needs to be adjusted in combination with the preset constraint range of the deformation coefficient. Specifically, the deformation coefficient constrained here is the blend-shape coefficient; for example, the value of the blend-shape coefficient is constrained to be between 0 and 1.

示例性地,在基於第一損失值以及預設的形變係數對應的約束範圍,對當前形變係數進行調整,以使得目標人臉的初始稠密點雲數據和調整後的稠密點雲數據之間更加接近,然後基於調整後的當前形變係數,返回S501,直至在對當前形變係數的調整操作符合第一調整截止條件,例如在第一損失值小於第一預設閾值和/或針對當前形變係數的調整次數達到預設次數後,得到目標人臉對應的形變係數。Exemplarily, the current deformation coefficient is adjusted based on the first loss value and the constraint range corresponding to the preset deformation coefficient, so that the initial dense point cloud data of the target face and the adjusted dense point cloud data are closer. approach, and then return to S501 based on the adjusted current deformation coefficient, until the adjustment operation on the current deformation coefficient meets the first adjustment cut-off condition, for example, when the first loss value is less than the first preset threshold and/or the current deformation coefficient After the number of adjustments reaches the preset number of times, the deformation coefficient corresponding to the target face is obtained.
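The optimization loop of steps S501 to S503 can be sketched as follows. This is a simplified illustration: it assumes numerical gradients, blend-shape-style coefficients clamped to [0, 1], and an `adjust` callback that stands in for the full skinning computation of S501; all names are ours.

```python
# S501-S503 sketch: iteratively optimize the deformation coefficients so
# that the cloud produced from the standard cloud matches the target face's
# initial dense point cloud.
def fit_deformation(adjust, target, num_coeffs, lr=0.05,
                    max_iters=500, tol=1e-8, eps=1e-4):
    coeffs = [0.5] * num_coeffs      # preset initial coefficients (S501)
    def loss(c):
        cloud = adjust(c)            # S501: deform the standard cloud
        return sum((a - b) ** 2      # S502: first loss value, formula (5)
                   for p, q in zip(cloud, target) for a, b in zip(p, q))
    for _ in range(max_iters):
        base = loss(coeffs)
        if base < tol:               # first adjustment cut-off condition
            break
        for j in range(num_coeffs):  # S503: numerical gradient + clamping
            bumped = list(coeffs)
            bumped[j] += eps
            grad = (loss(bumped) - base) / eps
            coeffs[j] = min(1.0, max(0.0, coeffs[j] - lr * grad))
    return coeffs
```

With a toy `adjust` that moves a single point along x by the coefficient value, the loop recovers the offset of the target point while keeping the coefficient inside [0, 1].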

In the embodiments of the present disclosure, the deformation coefficient is determined by adjusting multiple points in the standard dense point cloud data, so the resulting deformation coefficient can represent the precise change of the target face's initial dense point cloud relative to the standard dense point cloud. Thus, when the initial virtual face image of the target face needs to be adjusted, the associated points in the dense point cloud data can be adjusted based on the deformation coefficient, improving the adjustment precision.

On the other hand, in determining the deformation coefficient, the loss value is computed only after all dense points have been adjusted, based on the adjusted dense point cloud data and the initial dense point cloud data of the target face. The optimization of the current deformation coefficient therefore fully accounts for the correlation between the deformation coefficient and the dense point cloud as a whole, improving optimization efficiency. In addition, constraining the adjustment with the preset constraint range of the deformation coefficient prevents the deformation coefficient from becoming distorted into one that cannot represent a normal target face.

For the above S103, when the deformation coefficient is adjusted in response to an adjustment operation on the initial virtual face image to obtain the target deformation coefficient, as shown in FIG. 8, the following steps S601 to S602 may be included:

S601: In response to the adjustment operation on the initial virtual face image, determine a target adjustment position on the initial virtual face image and an adjustment amplitude for the target adjustment position;

S602: Adjust the deformation coefficient associated with the target adjustment position according to the adjustment amplitude to obtain the target deformation coefficient.

Exemplarily, when adjusting the initial virtual face image, considering that the image contains many adjustable positions, these positions may be grouped into categories before being presented to the user, for example by facial region: a chin region, an eyebrow region, an eye region, and so on. Correspondingly, adjustment operation buttons for the chin region, eyebrow region, eye region, etc. may be displayed, and the user can select a target adjustment region via the button for each region. Alternatively, the user may be shown, each time, adjustment buttons for a set number of adjustment positions, together with an indication button for switching to other adjustment positions. For example, the left diagram of FIG. 9 shows an adjustment interface with six adjustment positions, specifically amplitude bars for nose-wing position, nose-bridge height, nose-tip size, nose-tip orientation, mouth size, and the upper and lower parts of the mouth. The user can drag an amplitude bar to adjust the corresponding position, or, after selecting a position, use the adjustment buttons above it, such as a "minus one" button and a "plus one" button. The lower right corner of the adjustment interface also shows an arrow button for switching adjustment positions; the user can trigger this button to switch to the six adjustment positions shown in the right diagram of FIG. 9.

Specifically, for each adjustment position, the adjustment amplitude may be determined from the amplitude bar corresponding to that position. When the user adjusts the amplitude bar of one of the positions, that position is taken as the target adjustment position, and the adjustment amplitude for it is determined from the change data of the amplitude bar. Then, according to this amplitude and a preset association between each adjustment position and the deformation coefficients, the deformation coefficient associated with the target adjustment position is adjusted to obtain the target deformation coefficient.
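
As a concrete sketch of S601 to S602, the mapping from an amplitude-bar change to the associated deformation coefficients might look as follows. The position names, coefficient indices, and the [0, 1] clamp are illustrative assumptions, not taken from the disclosure.

```python
# Illustrative mapping from target adjustment positions to the indices of
# their associated deformation coefficients (names and indices are assumed).
ADJUSTABLE_POSITIONS = {
    "nose_bridge_height": [3],
    "mouth_size": [7, 8],
}

def apply_adjustment(coeffs, position, amplitude_delta):
    """S601-S602 sketch: take the changed amplitude-bar value for the target
    adjustment position and adjust only the associated deformation
    coefficients, clamped to the assumed [0, 1] constraint range."""
    target_coeffs = list(coeffs)
    for idx in ADJUSTABLE_POSITIONS[position]:
        target_coeffs[idx] = min(1.0, max(0.0, target_coeffs[idx] + amplitude_delta))
    return target_coeffs
```

Only the coefficients associated with the target adjustment position change; all others are left untouched, which is what makes the resulting vector a target deformation coefficient in the sense used above.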

In the embodiments of the present disclosure, the target deformation coefficient can be determined from the adjustment operation, so that the target virtual face image can later be determined based on it; in this way the deformation coefficient can be adjusted according to individual user needs.

For the above S104, when generating the target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data, as shown in FIG. 10, the following steps S801 to S802 may be included:

S801: Adjust the standard dense point cloud data based on the target deformation coefficient to obtain target dense point cloud data;

S802: Generate the target virtual face image based on the target dense point cloud data.

Exemplarily, the target deformation coefficient may include deformation coefficients that changed in association with the target adjustment position, as well as unchanged deformation coefficients that were not adjusted. Since the deformation coefficient is determined from the target face's initial dense point cloud data relative to the standard dense point cloud data, when adjusting the initial virtual face image based on the target deformation coefficient, the target dense point cloud data corresponding to the target face can be obtained from the target deformation coefficient and the standard dense point cloud data, and the target virtual face image is then generated based on the target dense point cloud data.
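
A minimal sketch of S801, assuming the deformation is a linear combination of per-point offsets scaled by the target deformation coefficients; the disclosed formulas (3) and (4), which also involve bone coefficients, are not reproduced here.

```python
import numpy as np

def generate_target_point_cloud(std_points, deltas, target_coeffs):
    """S801 sketch: adjust the standard dense point cloud (M, 3) with the
    target deformation coefficients, assuming each coefficient scales one
    per-point offset field of shape (M, 3)."""
    return std_points + np.tensordot(np.asarray(target_coeffs), deltas, axes=1)
```

Unchanged coefficients simply reproduce their original contribution, so only the points associated with the adjusted coefficients move relative to the initial virtual face.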

Exemplarily, as shown in FIG. 9 above, if the user clicks to adjust the nose-bridge height upward, the nose bridge of the target virtual face image becomes higher than in the initial virtual face image.

In the embodiments of the present disclosure, once the target deformation coefficient is determined, the standard dense point cloud data can be adjusted directly according to it to determine the target dense point cloud data, so that the target virtual face image corresponding to the target face can be obtained quickly from the adjusted target dense point cloud data.

Specifically, generating the target virtual face image based on the target dense point cloud data includes the following steps S8021 to S8022:

S8021: Determine a virtual face model corresponding to the target dense point cloud data;

S8022: Generate the target virtual face image based on preselected face attribute features and the virtual face model.

Exemplarily, the virtual face model may be a three-dimensional face model or a two-dimensional face model, depending on the specific application scenario, which is not limited here.

Exemplarily, the face attribute features may include skin color, hairstyle, and other features, and may be determined by the user's selection; for example, the user may set the skin color to a fair tone and the hairstyle to brown curly hair.

After the target dense point cloud data is obtained, a target virtual face model can be generated based on it. The target virtual face model may capture the shape and expression features of the target face; combined with the face attribute features, a target virtual face image that meets the user's individual needs can then be generated.

In the embodiments of the present disclosure, when the initial virtual face image is adjusted, it can also be personalized with the face attribute features selected by the user, so that the target virtual face image better fits the user's actual needs.

The processing of a face image is described below with a specific embodiment, including the following steps S901 to S904:

S901: For an input target face, read the target face's initial dense point cloud data (the coordinate values of M points in the dense point cloud) with a computer, then obtain the standard dense point cloud data corresponding to the standard virtual face image and preset initial deformation coefficients (including an initial bone deformation coefficient and an initial blend-shape deformation coefficient);

S902: Adjust the standard dense point cloud data according to the initial bone deformation coefficient and the initial blend-shape deformation coefficient to obtain adjusted dense point cloud data (the adjusted coordinate values of the M points in the dense point cloud); the adjustment may specifically be performed via the above formula (3) or formula (4);

S903: Compute the difference value between the target face's initial dense point cloud data and the adjusted dense point cloud data, and adjust the initial bone deformation coefficient and the initial blend-shape deformation coefficient using this difference value together with the constraint term on the initial blend-shape deformation coefficient;

S904: Replace the initial bone deformation coefficient with the adjusted bone deformation coefficient and the initial blend-shape deformation coefficient with the adjusted blend-shape deformation coefficient, and return to step S902 to continue adjusting the bone coefficient and blend-shape deformation coefficient, until the difference value between the target face's initial dense point cloud data and the adjusted dense point cloud data is less than the first preset threshold, or the number of iterations exceeds the preset number.

Those skilled in the art will understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.

Based on the same technical concept, embodiments of the present disclosure also provide a processing apparatus corresponding to the face image processing method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to the above processing method, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.

Referring to FIG. 11, an embodiment of the present disclosure provides a face image processing apparatus 1000, which includes: an acquisition module 1001, configured to acquire initial dense point cloud data of a target face and generate an initial virtual face image of the target face based on the initial dense point cloud data; a determination module 1002, configured to determine a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; an adjustment module 1003, configured to adjust the deformation coefficient in response to an adjustment operation on the initial virtual face image to obtain a target deformation coefficient; and a generation module 1004, configured to generate a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.

In a possible implementation, the deformation coefficient includes at least one of: at least one bone coefficient and at least one blend-shape deformation coefficient; each bone coefficient is used to adjust the initial pose of a bone formed by a first dense point cloud associated with that bone coefficient, and each blend-shape deformation coefficient is used to adjust the initial position corresponding to a second dense point cloud associated with that blend-shape deformation coefficient.

In a possible implementation, when the determination module 1002 determines the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data, it is configured to: adjust the standard dense point cloud data based on a current deformation coefficient to obtain adjusted dense point cloud data, the initial current deformation coefficient being preset; determine a first loss value based on the adjusted dense point cloud data and the initial dense point cloud data; adjust the current deformation coefficient based on the first loss value and the preset constraint range of the deformation coefficient; and, based on the adjusted current deformation coefficient, return to the step of adjusting the standard dense point cloud data until the adjustment operation on the current deformation coefficient satisfies the first adjustment cut-off condition, at which point the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data is obtained from the current deformation coefficient.

In a possible implementation, when the adjustment module 1003 adjusts the deformation coefficient in response to the adjustment operation on the initial virtual face image to obtain the target deformation coefficient, it is configured to: in response to the adjustment operation on the initial virtual face image, determine a target adjustment position on the initial virtual face image and an adjustment amplitude for the target adjustment position; and adjust the deformation coefficient associated with the target adjustment position according to the adjustment amplitude to obtain the target deformation coefficient.

In a possible implementation, when the generation module 1004 generates the target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data, it is configured to: adjust the standard dense point cloud data based on the target deformation coefficient to obtain target dense point cloud data; and generate the target virtual face image based on the target dense point cloud data.

In a possible implementation, when the generation module 1004 generates the target virtual face image based on the target dense point cloud data, it is configured to: determine a virtual face model corresponding to the target dense point cloud data; and generate the target virtual face image based on preselected face attribute features and the virtual face model.

In a possible implementation, when the acquisition module 1001 acquires the initial dense point cloud data of the target face and displays the initial virtual face image of the target face based on the initial dense point cloud data, it is configured to: acquire a first face image corresponding to the target face, and dense point cloud data respectively corresponding to multiple second face images of a preset style; determine dense point cloud data of the target face under the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style; and generate and display the initial virtual face image of the target face under the preset style based on the dense point cloud data of the target face under the preset style.

In a possible implementation, when the acquisition module 1001 determines the dense point cloud data of the target face under the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style, it is configured to: extract face parameter values of the first face image and face parameter values respectively corresponding to the multiple second face images of the preset style, where the face parameter values include parameter values characterizing face shape and parameter values characterizing facial expression; and determine the dense point cloud data of the target face under the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style.

In a possible implementation, when the acquisition module 1001 determines the dense point cloud data of the target face under the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style, it is configured to: determine linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style; and determine the dense point cloud data of the target face under the preset style according to the dense point cloud data respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients.

In a possible implementation, when the acquisition module 1001 determines the linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style, it is configured to: obtain current linear fitting coefficients, the initial current linear fitting coefficients being preset; predict current face parameter values of the first face image based on the current linear fitting coefficients and the face parameter values respectively corresponding to the multiple second face images of the preset style; determine a second loss value based on the predicted current face parameter values and the face parameter values of the first face image; adjust the current linear fitting coefficients based on the second loss value and a constraint range corresponding to the preset linear fitting coefficients; and, based on the adjusted current linear fitting coefficients, return to the step of predicting the current face parameter values until the adjustment operation on the current linear fitting coefficients satisfies a second adjustment cut-off condition, at which point the linear fitting coefficients between the first face image and the multiple second face images of the preset style are obtained from the current linear fitting coefficients.

In a possible implementation, the dense point cloud data includes coordinate values of each point in a dense point cloud; when the acquisition module determines the dense point cloud data of the target face under the preset style according to the dense point cloud data respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients, it is configured to: determine coordinate values of corresponding points in average dense point cloud data based on the coordinate values of each point in the dense point clouds respectively corresponding to the multiple second face images of the preset style; determine coordinate difference values respectively corresponding to the multiple second face images of the preset style based on the coordinate values of each point in their dense point clouds and the coordinate values of the corresponding points in the average dense point cloud data; determine a coordinate difference value corresponding to the first face image based on the coordinate difference values respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients; and determine the dense point cloud data of the target face under the preset style based on the coordinate difference value corresponding to the first face image and the coordinate values of the corresponding points in the average dense point cloud data.
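
The reconstruction described in this implementation (average cloud, per-image coordinate differences, linear mix, add back the average) can be sketched as follows; the array shapes and names are illustrative assumptions.

```python
import numpy as np

def stylized_point_cloud(second_clouds, fit_coeffs):
    """Sketch of this implementation: second_clouds is a (K, M, 3) stack of
    dense point clouds for K second face images of the preset style;
    fit_coeffs are the K linear fitting coefficients."""
    mean_cloud = second_clouds.mean(axis=0)     # average dense point cloud
    diffs = second_clouds - mean_cloud          # per-image coordinate differences
    # Mix the differences with the fitting coefficients, then add back the average.
    target_diff = np.tensordot(np.asarray(fit_coeffs), diffs, axes=1)
    return mean_cloud + target_diff
```

With a one-hot coefficient vector this reproduces one of the second face images' clouds exactly, which is a quick sanity check on the fitting.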

In a possible implementation, the face parameter values are extracted by a pre-trained neural network, and the neural network is trained on sample images pre-annotated with face parameter values.

In a possible implementation, the processing apparatus further includes a training module 1005, configured to pre-train the neural network in the following manner: obtain a sample image set containing multiple sample images and annotated face parameter values corresponding to each sample image; input the multiple sample images into the neural network to obtain predicted face parameter values corresponding to each sample image; and adjust network parameter values of the neural network based on the predicted face parameter values and the annotated face parameter values corresponding to each sample image, to obtain a trained neural network.

For descriptions of the processing flow of each module in the apparatus and the interaction flow between the modules, reference may be made to the relevant descriptions in the above method embodiments, which are not detailed here.

Corresponding to the face image processing method in FIG. 1, an embodiment of the present disclosure further provides an electronic device that causes the processor 111 to execute the following instructions: acquire initial dense point cloud data of a target face, and generate an initial virtual face image of the target face based on the initial dense point cloud data; determine a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; adjust the deformation coefficient in response to an adjustment operation on the initial virtual face image to obtain a target deformation coefficient; and generate a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.

An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the face image processing method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.

An embodiment of the present disclosure further provides a computer program product carrying program code; the instructions included in the program code can be used to execute the steps of the face image processing method described in the above method embodiments. For details, refer to the above method embodiments, which are not repeated here.

The above computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).

Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems and apparatuses described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and other divisions are possible in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist separately, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage media include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the scope of protection of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the technical field may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of variations, or make equivalent substitutions for some of the technical features. Such modifications, variations or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the scope of protection of the present disclosure. Therefore, the scope of protection of the present disclosure shall be determined by the scope of the claims.

Reference signs list:
S101: Obtain the original dense point cloud data of the target face, and generate an initial virtual face image of the target face based on the original dense point cloud data
S102: Determine the deformation coefficients of the initial dense point cloud data relative to the standard dense point cloud data corresponding to the standard virtual face image
S103: In response to an adjustment operation on the initial virtual face image, adjust the deformation coefficients to obtain target deformation coefficients
S104: Generate a target virtual face image corresponding to the target face based on the target deformation coefficients and the standard dense point cloud data
S201: Obtain a first face image corresponding to the target face, and dense point cloud data respectively corresponding to multiple second face images of a preset style
S202: Determine the dense point cloud data of the target face under the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style
S203: Generate and display an initial virtual face image of the target face under the preset style based on the dense point cloud data of the target face under the preset style
S301: Extract face parameter values of the first face image and face parameter values respectively corresponding to the multiple second face images of the preset style, where the face parameter values include parameter values characterizing face shape and parameter values characterizing facial expression
S302: Determine the dense point cloud data of the target face under the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style
S401: Obtain a sample image set containing multiple sample images and the annotated face parameter values corresponding to each sample image
S402: Input the multiple sample images into a neural network to obtain the predicted face parameter values corresponding to each sample image
S403: Adjust the network parameter values of the neural network based on the predicted face parameter values and annotated face parameter values corresponding to each sample image, to obtain a trained neural network
S3021: Determine the linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style
S3022: Determine the dense point cloud data of the target face under the preset style according to the dense point cloud data respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients
S501: Adjust the standard dense point cloud data based on the current deformation coefficients to obtain the current dense point cloud data, the initial current deformation coefficients being preset
S502: Determine a first loss value based on the adjusted dense point cloud data and the initial dense point cloud data of the target face
S503: Adjust the current deformation coefficients based on the first loss value and a preset constraint range of the deformation coefficients; with the adjusted current deformation coefficients, return to the step of adjusting the standard dense point cloud data until the adjustment of the current deformation coefficients meets a first adjustment cut-off condition, obtaining the deformation coefficients of the initial dense point cloud data relative to the standard dense point cloud data
S601: In response to an adjustment operation on the initial virtual face image, determine a target adjustment position on the initial virtual face image and an adjustment amplitude for the target adjustment position
S602: Adjust the deformation coefficients associated with the target adjustment position according to the adjustment amplitude to obtain the target deformation coefficients
S801: Adjust the standard dense point cloud data based on the target deformation coefficients to obtain target dense point cloud data
S802: Generate the target virtual face image based on the target dense point cloud data
1000: Face image processing apparatus
1001: Obtaining module
1002: Determining module
1003: Adjusting module
1004: Generating module
1005: Training module
1100: Electronic device
111: Processor
112: Storage
1121: Memory
1122: External storage
113: Bus

Fig. 1 shows a flowchart of a face image processing method provided by an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a three-dimensional face model represented by dense point cloud data provided by an embodiment of the present disclosure.
Fig. 3 shows a flowchart of a method for generating an initial virtual face image provided by an embodiment of the present disclosure.
Fig. 4 shows a flowchart of a method for determining the dense point cloud data of a target face under a preset style provided by an embodiment of the present disclosure.
Fig. 5 shows a flowchart of a method for training a neural network provided by an embodiment of the present disclosure.
Fig. 6 shows a flowchart of a specific method for determining the dense point cloud data of a target face under a preset style provided by an embodiment of the present disclosure.
Fig. 7 shows a flowchart of a method for determining deformation coefficients provided by an embodiment of the present disclosure.
Fig. 8 shows a flowchart of a method for adjusting deformation coefficients provided by an embodiment of the present disclosure.
Fig. 9 shows a schematic diagram of an adjustment interface for a virtual face image provided by an embodiment of the present disclosure.
Fig. 10 shows a flowchart of a method for generating a target virtual face image of a target face provided by an embodiment of the present disclosure.
Fig. 11 shows a schematic structural diagram of a face image processing apparatus provided by an embodiment of the present disclosure.
Fig. 12 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.

S101: Obtain the original dense point cloud data of the target face, and generate an initial virtual face image of the target face based on the original dense point cloud data

S102: Determine the deformation coefficients of the initial dense point cloud data relative to the standard dense point cloud data corresponding to the standard virtual face image

S103: In response to an adjustment operation on the initial virtual face image, adjust the deformation coefficients to obtain target deformation coefficients

S104: Generate a target virtual face image corresponding to the target face based on the target deformation coefficients and the standard dense point cloud data
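The S101–S104 flow above amounts to fitting deformation coefficients that map a standard dense point cloud onto the target face's point cloud, then re-applying the (possibly user-adjusted) coefficients to regenerate the face. A minimal NumPy sketch of that loop follows, assuming a linear bone/blendshape displacement basis and a simple gradient-descent fit; all function and variable names are hypothetical illustrations, not taken from the patent:

```python
import numpy as np

def fit_deformation_coefficients(standard_pts, target_pts, basis, n_iters=200, lr=0.1):
    """S102/S501-S503: fit coefficients c so that standard_pts + basis·c
    approximates target_pts, clipping c to a preset constraint range.

    standard_pts: (N, 3) standard dense point cloud
    target_pts:   (N, 3) initial dense point cloud of the target face
    basis:        (K, N, 3) per-coefficient displacement fields
    """
    K = basis.shape[0]
    c = np.zeros(K)                      # initial current coefficients (preset)
    B = basis.reshape(K, -1).T           # (3N, K) flattened displacement basis
    t = (target_pts - standard_pts).ravel()
    for _ in range(n_iters):
        r = B @ c - t                    # residual; first loss is 0.5 * ||r||^2
        c -= lr * (B.T @ r) / len(r)     # adjust coefficients from the loss
        c = np.clip(c, -1.0, 1.0)        # preset constraint range
    return c

def apply_coefficients(standard_pts, basis, c):
    """S104: deform the standard point cloud with the fitted/adjusted coefficients."""
    return standard_pts + np.tensordot(c, basis, axes=1)
```

With an orthogonal basis the fit recovers the displacement of each associated point cloud region independently, which is what makes per-position user adjustment (S103) possible by editing individual entries of `c`.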

Claims (15)

1. A method for processing a face image, comprising: obtaining initial dense point cloud data of a target face, and generating an initial virtual face image of the target face based on the initial dense point cloud data; determining deformation coefficients of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; in response to an adjustment operation on the initial virtual face image, adjusting the deformation coefficients to obtain target deformation coefficients; and generating, based on the target deformation coefficients and the standard dense point cloud data, a target virtual face image corresponding to the target face; wherein the deformation coefficients comprise at least one of at least one bone coefficient and at least one blendshape coefficient; each bone coefficient is used to adjust an initial pose of a bone formed by a first dense point cloud associated with that bone coefficient, and each blendshape coefficient is used to adjust an initial position corresponding to a second dense point cloud associated with that blendshape coefficient.
2. The processing method according to claim 1, wherein determining the deformation coefficients of the initial dense point cloud data relative to the standard dense point cloud data comprises: adjusting the standard dense point cloud data based on current deformation coefficients to obtain adjusted dense point cloud data, the initial current deformation coefficients being preset; determining a first loss value based on the adjusted dense point cloud data and the initial dense point cloud data; adjusting the current deformation coefficients based on the first loss value and a preset constraint range of the deformation coefficients; and returning, based on the adjusted current deformation coefficients, to the step of adjusting the standard dense point cloud data until the adjustment of the current deformation coefficients meets a first adjustment cut-off condition, and obtaining, based on the current deformation coefficients, the deformation coefficients of the initial dense point cloud data relative to the standard dense point cloud data.
3. The processing method according to claim 1, wherein adjusting the deformation coefficients in response to the adjustment operation on the initial virtual face image to obtain the target deformation coefficients comprises: in response to the adjustment operation on the initial virtual face image, determining a target adjustment position on the initial virtual face image and an adjustment amplitude for the target adjustment position; and adjusting the deformation coefficients associated with the target adjustment position according to the adjustment amplitude to obtain the target deformation coefficients.

4. The processing method according to claim 1, wherein generating the target virtual face image corresponding to the target face based on the target deformation coefficients and the standard dense point cloud data comprises: adjusting the standard dense point cloud data based on the target deformation coefficients to obtain target dense point cloud data; and generating the target virtual face image based on the target dense point cloud data.

5. The processing method according to claim 4, wherein generating the target virtual face image based on the target dense point cloud data comprises: determining a virtual face model corresponding to the target dense point cloud data; and generating the target virtual face image based on preselected face attribute features and the virtual face model.
6. The processing method according to claim 1, wherein obtaining the initial dense point cloud data of the target face and generating the initial virtual face image of the target face based on the initial dense point cloud data comprises: obtaining a first face image corresponding to the target face, and dense point cloud data respectively corresponding to multiple second face images of a preset style; determining initial dense point cloud data of the target face under the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style; and generating the initial virtual face image of the target face under the preset style based on the initial dense point cloud data of the target face under the preset style.
7. The processing method according to claim 6, wherein determining the initial dense point cloud data of the target face under the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style comprises: extracting face parameter values of the first face image and face parameter values respectively corresponding to the multiple second face images of the preset style, the face parameter values comprising parameter values characterizing face shape and parameter values characterizing facial expression; and determining the initial dense point cloud data of the target face under the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style.
8. The processing method according to claim 7, wherein determining the initial dense point cloud data of the target face under the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style comprises: determining linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style; and determining the initial dense point cloud data of the target face under the preset style according to the dense point cloud data respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients.
9. The processing method according to claim 8, wherein determining the linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style comprises: obtaining current linear fitting coefficients, the initial current linear fitting coefficients being preset; predicting current face parameter values of the first face image based on the current linear fitting coefficients and the face parameter values respectively corresponding to the multiple second face images of the preset style; determining a second loss value based on the predicted current face parameter values and the face parameter values of the first face image; adjusting the current linear fitting coefficients based on the second loss value and a preset constraint range corresponding to the linear fitting coefficients; and returning, based on the adjusted current linear fitting coefficients, to the step of predicting the current face parameter values until the adjustment of the current linear fitting coefficients meets a second adjustment cut-off condition, and obtaining, based on the current linear fitting coefficients, the linear fitting coefficients between the first face image and the multiple second face images of the preset style.
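The fit described in claims 8 and 9 expresses the first face's parameter vector as a linear combination of the preset-style faces' parameter vectors. The claims solve this iteratively against a second loss value with a constraint range; the sketch below instead uses an unconstrained ordinary least-squares solve, which captures the same idea in closed form (function and array names are illustrative assumptions, not from the patent):

```python
import numpy as np

def fit_style_coefficients(target_params, style_params):
    """Fit w minimising ||style_params.T @ w - target_params||^2, i.e. express
    the first face's parameters as a linear blend of the preset-style faces'
    parameters (claim 8; claim 9 reaches a similar w iteratively, with a
    constraint range on the coefficients).

    target_params: (D,) face parameter vector of the first face image
    style_params:  (M, D) face parameter vectors of the M second face images
    """
    w, *_ = np.linalg.lstsq(style_params.T, target_params, rcond=None)
    return w
```

The same `w` is then reused in claim 10 to blend the style faces' dense point clouds, which is why the fit is done in the compact parameter space rather than on the point clouds directly.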
10. The processing method according to claim 8, wherein the dense point cloud data comprises coordinate values of the points in a dense point cloud, and determining the initial dense point cloud data of the target face under the preset style according to the dense point cloud data respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients comprises: determining coordinate values of corresponding points in average dense point cloud data based on the coordinate values of the points in the dense point cloud data respectively corresponding to the multiple second face images of the preset style; determining coordinate difference values respectively corresponding to the multiple second face images of the preset style based on the coordinate values of the points in their respective dense point cloud data and the coordinate values of the corresponding points in the average dense point cloud data; determining a coordinate difference value corresponding to the first face image based on the coordinate difference values respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients; and determining the initial dense point cloud data of the target face under the preset style based on the coordinate difference value corresponding to the first face image and the coordinate values of the corresponding points in the average dense point cloud data.
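Claim 10's reconstruction — average the style clouds, blend the per-style offsets with the fitted coefficients, and add the blended offset back onto the average — can be sketched directly (array and function names are illustrative assumptions):

```python
import numpy as np

def blend_dense_point_cloud(style_clouds, w):
    """Reconstruct the target face's dense point cloud as in claim 10.

    style_clouds: (M, N, 3) dense point clouds of the M preset-style faces
    w:            (M,) linear fitting coefficients from claims 8/9
    """
    mean_cloud = style_clouds.mean(axis=0)             # average dense point cloud
    offsets = style_clouds - mean_cloud                # per-style coordinate differences
    target_offset = np.tensordot(w, offsets, axes=1)   # difference for the first face
    return mean_cloud + target_offset                  # target dense point cloud
```

Blending offsets from the mean, rather than the raw coordinates, keeps the result anchored to the average face even when the coefficients do not sum exactly to one.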
11. The processing method according to claim 7, wherein the face parameter values are extracted by a pre-trained neural network, the neural network being trained on sample images pre-annotated with face parameter values.

12. The processing method according to claim 11, wherein the neural network is pre-trained by: obtaining a sample image set containing multiple sample images and annotated face parameter values corresponding to each sample image; inputting the multiple sample images into a neural network to obtain predicted face parameter values corresponding to each sample image; and adjusting network parameter values of the neural network based on the predicted face parameter values and the annotated face parameter values corresponding to each sample image, to obtain a trained neural network.
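The pre-training loop of claims 11–12 (steps S401–S403) is ordinary supervised regression: predict parameter values, compare against the annotations, adjust the network parameters. A deliberately tiny linear model stands in for the patent's neural network in the sketch below; all names are hypothetical:

```python
import numpy as np

def train_param_regressor(images, labels, n_iters=2000, lr=0.1):
    """Fit a linear map from flattened sample images to annotated face
    parameter values by gradient descent on mean squared error.

    images: (B, P) flattened sample images (the sample image set, S401)
    labels: (B, D) annotated face parameter values
    """
    W = np.zeros((images.shape[1], labels.shape[1]))
    for _ in range(n_iters):
        pred = images @ W                               # S402: predicted parameter values
        grad = images.T @ (pred - labels) / len(images)
        W -= lr * grad                                  # S403: adjust network parameters
    return W
```

In practice the patent's extractor would be a deep network trained with the same predict/compare/adjust loop; the linear version only illustrates the claimed training procedure.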
13. An apparatus for processing a face image, comprising: an obtaining module, configured to obtain initial dense point cloud data of a target face and generate an initial virtual face image of the target face based on the initial dense point cloud data; a determining module, configured to determine deformation coefficients of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; an adjusting module, configured to adjust the deformation coefficients in response to an adjustment operation on the initial virtual face image, to obtain target deformation coefficients; and a generating module, configured to generate a target virtual face image corresponding to the target face based on the target deformation coefficients and the standard dense point cloud data; wherein the deformation coefficients comprise at least one of at least one bone coefficient and at least one blendshape coefficient; each bone coefficient is used to adjust an initial pose of a bone formed by a first dense point cloud associated with that bone coefficient, and each blendshape coefficient is used to adjust an initial position corresponding to a second dense point cloud associated with that blendshape coefficient.
14. An electronic device, comprising a processor, a storage and a bus, the storage storing machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the storage through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the processing method according to any one of claims 1 to 12.

15. A computer-readable storage medium, on which a computer program is stored, the computer program, when run by a processor, performing the steps of the processing method according to any one of claims 1 to 12.
TW110135050A 2020-11-25 2021-09-22 Method and apparatus for processing face image, electronic device and storage medium TWI780919B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011339586.6 2020-11-25
CN202011339586.6A CN112419144B (en) 2020-11-25 2020-11-25 Face image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
TW202221638A TW202221638A (en) 2022-06-01
TWI780919B true TWI780919B (en) 2022-10-11

Family

ID=74843582

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110135050A TWI780919B (en) 2020-11-25 2021-09-22 Method and apparatus for processing face image, electronic device and storage medium

Country Status (3)

Country Link
CN (1) CN112419144B (en)
TW (1) TWI780919B (en)
WO (1) WO2022111001A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419144B (en) * 2020-11-25 2024-05-24 上海商汤智能科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN113409437B (en) * 2021-06-23 2023-08-08 北京字节跳动网络技术有限公司 Virtual character face pinching method and device, electronic equipment and storage medium
CN113808249B (en) * 2021-08-04 2022-11-25 北京百度网讯科技有限公司 Image processing method, device, equipment and computer storage medium
CN115953821B (en) * 2023-02-28 2023-06-30 北京红棉小冰科技有限公司 Virtual face image generation method and device and electronic equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104851123A (en) * 2014-02-13 2015-08-19 北京师范大学 Three-dimensional human face change simulation method
US20200058137A1 (en) * 2015-06-24 2020-02-20 Sergi PUJADES Skinned Multi-Person Linear Model
TW202032503A (en) * 2019-02-26 2020-09-01 大陸商騰訊科技(深圳)有限公司 Method, device, computer equipment, and storage medium for generating 3d face model
CN111710035A (en) * 2020-07-16 2020-09-25 腾讯科技(深圳)有限公司 Face reconstruction method and device, computer equipment and storage medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US9552668B2 (en) * 2012-12-12 2017-01-24 Microsoft Technology Licensing, Llc Generation of a three-dimensional representation of a user
CN104504410A (en) * 2015-01-07 2015-04-08 深圳市唯特视科技有限公司 Three-dimensional face recognition device and method based on three-dimensional point cloud
CN108876893A (en) * 2017-12-14 2018-11-23 北京旷视科技有限公司 Method, apparatus, system and the computer storage medium of three-dimensional facial reconstruction
CN108629294A (en) * 2018-04-17 2018-10-09 华南理工大学 Human body based on deformation pattern and face net template approximating method
CN110163054B (en) * 2018-08-03 2022-09-27 腾讯科技(深圳)有限公司 Method and device for generating human face three-dimensional image
CN109376698B (en) * 2018-11-29 2022-02-01 北京市商汤科技开发有限公司 Face modeling method and device, electronic equipment, storage medium and product
CN112419144B (en) * 2020-11-25 2024-05-24 上海商汤智能科技有限公司 Face image processing method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN112419144B (en) 2024-05-24
CN112419144A (en) 2021-02-26
TW202221638A (en) 2022-06-01
WO2022111001A1 (en) 2022-06-02


Legal Events

Date Code Title Description
GD4A Issue of patent certificate for granted invention patent